neelakolkar2001@gmail.com

Human-Centered Innovation

Blending AI, Machine Learning, and Human-Centered Design to create intuitive, transformative user experiences

About

I’ve always been fascinated by how technology can feel more like an extension of ourselves. "How can we make devices not just work for us, but understand us?" This question has driven my passion for Human-Computer Interaction (HCI), where I combine my background in Electronics and Telecommunications with a deep interest in creating intuitive, human-centered experiences.

Each project I’ve undertaken has been a step toward creating technology that intuitively understands and adapts to the user. Whether it's AI-driven moderation tools, gesture-based interaction systems, AR/VR interfaces, or AI models designed for real-world applications, every project blends creativity, usability, and technical expertise to enhance how users engage with the digital world.

The work showcased here reflects my commitment to creating empathetic, inclusive, and innovative solutions that not only meet user needs but anticipate them, all while prioritizing user-centered design and accessibility. I invite you to explore these projects and see how I’m pushing the boundaries of what technology can achieve—creating seamless, safe, and intuitive experiences for the user, with a particular focus on augmented and virtual reality to shape the future of human interaction with technology.

ROLE

HCI Developer, Researcher

TIMELINE

4+ Years

LOCATION

NA

5+

Real-world problems solved with AI

300+

Hours of HCI Research

My Vision in AI, ML, AR/VR, and HCI

My passion for Artificial Intelligence (AI), Machine Learning (ML), and Human-Computer Interaction (HCI) is driven by a deep desire to create innovative, user-centric solutions that address real-world challenges. With a solid foundation in Electronics and Telecommunications (EnTC), my goal is to bridge the gap between cutting-edge technologies and human-centered design, focusing on areas like accessibility, healthcare, and immersive technologies such as AR/VR.

I am particularly excited about how AI and ML can revolutionize user experiences by enabling systems to learn and adapt to individual behaviors. In the realm of HCI, I see AI and UX converging as a powerful force to create dynamic, intelligent environments that intuitively respond to human needs. As technologies like AR/VR continue to evolve, the interaction between the digital and physical worlds will become increasingly seamless, unlocking exciting possibilities for immersive and intuitive user experiences. My vision is to push these boundaries further, using the principles of AI, ML, and HCI to design systems that are not only functional but also deeply engaging, inclusive, and accessible.

My Goals

Exploring AI-Driven UX in Immersive Technology:
One of my primary goals is to dive deeper into how AI and ML can elevate the user experience in AR/VR environments. I believe that by applying these technologies, we can create more personalized, adaptive, and intuitive experiences. Systems will not only respond to user input but anticipate needs, offering a truly seamless interaction between the physical and virtual worlds.

Enhancing Accessibility through AI and HCI:
Creating inclusive technologies is at the heart of my work. I am particularly focused on how AI can be used to design tools that make digital interactions more accessible to individuals with disabilities. My aim is to develop assistive technologies that not only adapt to users' needs but empower them, enabling easier, more intuitive interactions with technology.

Driving Cross-Disciplinary Research:
I strive to contribute to research at the intersection of AI, ML, HCI, and immersive technologies. By prioritizing human-centered design, my goal is to develop technologies that are not only advanced but also user-friendly and impactful. I want to solve real-world problems through cross-disciplinary collaboration, ensuring that my solutions resonate with users and improve everyday life.

Fostering Innovation in Smart Systems:
Another area that excites me is the development of smart systems that intelligently respond to users and their environments. Whether it’s through assistive mobility devices, smart homes, or adaptive interfaces, I aim to contribute to the creation of systems that make technology more intuitive, responsive, and user-first.

Certifications & My AI/ML Journey

To further deepen my expertise and stay at the cutting edge of technology, I have completed several certifications, including:

  • Deep Learning Specialization – deeplearning.ai by Andrew Ng

  • TensorFlow Developer – deeplearning.ai by Andrew Ng

  • AWS Fundamentals Specialization – Amazon AWS

  • Game Playing AI with Swift for TensorFlow – IBM

  • Google IT Automation with Python – Google

These certifications have not only expanded my technical expertise but have also reinforced my commitment to creating intelligent, user-centered systems that adapt to and anticipate user needs. I am excited to continue my journey of combining AI, AR/VR, and HCI principles to create meaningful, personalized, and accessible experiences that will help shape the future of human-centered technology in the digital age.

Past related works

Neuro-Mobile: Brain-Computer Interfaces for Assistive Mobility (Electronics, BCI & AI)

Undergraduate Thesis

Developed as my undergraduate mega-thesis project, published under university research initiatives, and explored for patent potential, Neuro-Mobile advances the intersection of Brain-Computer Interfaces (BCI) and Human-Computer Interaction (HCI) to create transformative assistive technology solutions.

Using the MindWave 2.0 EEG headset, the system enables thought-controlled mobility by translating real-time brainwave signals into motion. It also integrates IFTTT and Google Assistant for seamless hands-free control, enhancing autonomy for individuals with physical disabilities.
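To make the control flow concrete, here is a minimal Python sketch of the core idea: reading NeuroSky's eSense attention values from the ThinkGear Connector's local JSON socket and thresholding them into motion commands. The port, the handshake format, the 60-point threshold, and the send_motion_command() stub are illustrative assumptions for this sketch, not the thesis implementation.

```python
import json
import socket

def send_motion_command(cmd: str) -> None:
    # Hypothetical stand-in for the wheelchair motor driver used in the project.
    print(f"motion: {cmd}")

HOST, PORT = "127.0.0.1", 13854   # default ThinkGear Connector socket (assumed)
ATTENTION_THRESHOLD = 60          # assumed trigger level on the 0-100 eSense scale

with socket.create_connection((HOST, PORT)) as sock:
    # Ask the connector for parsed JSON packets instead of raw EEG samples.
    sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())
    buffer = b""
    while True:
        buffer += sock.recv(4096)
        # Packets arrive as carriage-return-delimited JSON lines.
        *packets, buffer = buffer.split(b"\r")
        for raw in packets:
            try:
                data = json.loads(raw)
            except json.JSONDecodeError:
                continue  # skip partial or non-JSON lines
            attention = data.get("eSense", {}).get("attention")
            if attention is not None:
                # Simple thresholding: sustained focus drives the chair, relaxing stops it.
                send_motion_command("forward" if attention >= ATTENTION_THRESHOLD else "stop")
```

In practice a production controller would smooth the attention signal over a window before acting on it; single-packet thresholding like this is jittery.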

Research Focus & HCI Applications

  • BCI in HCI: Explored neuroscientific and cognitive models for optimizing brain-to-machine interactions, aligning with usability, cognitive workload, and adaptive interaction principles.

  • Human-Centered Assistive Design: Applied UX research, accessibility heuristics, and real-time feedback loops to refine interface intuitiveness and user adaptation.

  • Emerging Technology for Real-World Impact: Leveraged EEG signal processing, smart integrations, and AI-driven user interactions to enhance accessibility and independence.

Key Insights & Outcomes

  • Achieved 95% accuracy in EEG signal interpretation, improving reliability for assistive mobility.

  • Live prototype demonstration to 150+ attendees, validating real-world feasibility.

  • Thesis published under university research projects, contributing to HCI, AI-driven accessibility, and neurotechnology innovation.

  • Patent exploration initiated, ensuring research contributes to scalable, real-world applications.

Use Cases

  • Assistive Mobility Devices:
    The core application of Neuro-Mobile was enabling individuals with mobility impairments to control their wheelchair purely through thought. This BCI-based control system empowered users to move forward and backward with ease, demonstrating how technology can dramatically improve the lives of individuals with disabilities. The live demonstration of the prototype underscored its potential to enhance autonomy and reduce dependency on traditional mobility aids, showcasing real-world utility.

  • Real-World Navigation Assistance:
    Neuro-Mobile also demonstrates significant potential in real-world navigation assistance for people with mobility impairments. The system can be adapted to control other assistive devices, such as exoskeletons or walking aids, providing users with greater autonomy in outdoor or complex environments. With precise brainwave signals controlling movement, users can navigate their surroundings with greater ease and independence, helping them interact with the world more fluidly and confidently.

  • Rehabilitation & Therapy:
    Beyond mobility, Neuro-Mobile offers significant potential for rehabilitation. The system could engage patients in brain-controlled exercises, where users would control their movements, tracking real-time progress. This has applications in motor skill rehabilitation and mental engagement, making it an invaluable tool for physical therapy. The project lays the groundwork for adaptive, interactive therapies, which can be tailored to an individual’s needs and progress, improving motor control and cognitive function over time.

  • Personalized Feedback & Adaptation:
    Another potential use case lies in personalized neurofeedback. By adapting the control system based on individual brain activity, the project could allow for increasingly nuanced control, such as adjusting speed, steering accuracy, or enabling dynamic obstacle avoidance. This represents a leap toward more intuitive, contextual user experiences.

Future Scope

  • Enhanced Precision & Control: Adaptive BCI-driven navigation, obstacle avoidance, and multi-speed motion.

  • Expanding Multi-Device Integration: Enabling users to control smart home appliances, communication tools, and rehabilitation systems through BCI-driven interfaces.

  • AI-Powered Personalization: Machine learning-based adaptive neurofeedback, refining usability based on individual brain patterns.

Neuro-Mobile demonstrates the power of BCI-driven HCI solutions, showcasing how AI, accessibility, and user-centered design can shape the future of assistive mobility. The live demonstration, coupled with the thesis publication, highlights the project’s academic value and its potential for real-world impact. The patent exploration further amplifies the long-term potential, ensuring the research can evolve into a scalable solution that benefits users globally.

Pose Estimation with MediaPipe (Machine Learning & Python)

In this project, I explored the potential of Google’s MediaPipe framework for real-time human pose tracking, using machine learning to detect key body and hand landmarks. My research focused on how pose estimation can improve user interfaces by enabling more intuitive, gesture-based interactions—aligning with the goal of creating technology that is more human-centered and accessible.
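For readers who want to see what such a tracking loop looks like, here is a minimal sketch using MediaPipe's Python Pose solution with an OpenCV webcam feed. The confidence thresholds and window handling are illustrative defaults, not the exact project configuration.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Overlay the detected skeleton on the camera frame.
            mp_drawing.draw_landmarks(
                frame, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
        cv2.imshow("Pose tracking", frame)
        if cv2.waitKey(5) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```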

Research Focus

  1. Human-Centered Design
    A core aspect of my research was how pose estimation could empower users by enabling interactions based on natural human movement. I specifically explored how this aligns with principles of human-centered design, ensuring that technology remains adaptive and accessible, especially for users with physical limitations or disabilities.

  2. Real-Time Feedback
    I investigated how pairing pose estimation with AI could provide immediate, personalized feedback; a joint-angle sketch illustrating this idea follows this list. This could be transformative in areas like fitness and rehabilitation, where instant guidance on movement accuracy could prevent injury and improve outcomes, offering a more tailored experience.

  3. Cross-Platform Integration
    One challenge I focused on was optimizing the pose estimation system for seamless performance across various devices, including mobile, desktop, and AR/VR platforms. Ensuring that the system could work fluidly across these different platforms was critical to maximizing its usability and impact.
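As referenced above, one way real-time form feedback can be derived from tracked landmarks is by computing joint angles between body keypoints. The sketch below is a hedged illustration: the landmark indices follow MediaPipe's pose topology, but the 160° extension threshold and the feedback labels are assumptions for demonstration.

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at point b (degrees) formed by the segments b->a and b->c."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    ba, bc = a - b, c - b
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

def elbow_feedback(landmarks, threshold_deg: float = 160.0) -> str:
    """Classify left-elbow extension from MediaPipe pose landmarks.

    `landmarks` is results.pose_landmarks.landmark; indices 11, 13, 15 are
    the left shoulder, elbow, and wrist in MediaPipe's pose topology.
    The 160-degree threshold is an assumed cutoff for "fully extended".
    """
    pts = [(landmarks[i].x, landmarks[i].y) for i in (11, 13, 15)]
    return "extended" if joint_angle(*pts) > threshold_deg else "bent"
```

The same angle primitive generalizes to knees, hips, or shoulders, which is what makes landmark-based form feedback attractive for fitness and rehabilitation scenarios.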

Applications & Use Cases

  1. Fitness & Health
    In fitness, my project offers a system that tracks exercise form and provides real-time corrective feedback to prevent injury, making workouts more effective and safe for users.

  2. AR/VR
    I explored the potential of using pose estimation to create more immersive and intuitive AR/VR environments. Gesture controls could replace traditional input methods, offering a more natural, engaging experience.

  3. Healthcare
    In healthcare, particularly physical therapy, the system can assist in tracking rehabilitation exercises. By ensuring that patients perform movements correctly, it helps improve outcomes in a way that’s scalable, even for remote monitoring.

Future Scope

  1. Multi-Person Tracking
    I’m looking into expanding the system to track multiple users simultaneously. This would open up exciting possibilities for multiplayer gaming or group workouts, enhancing social and collaborative experiences.

  2. Enhanced Accuracy
    I plan to further improve the system’s tracking precision, especially in challenging conditions like low light or when users are wearing varying types of clothing. Improving robustness is key for real-world usability.

  3. Predictive AI
    One future direction is integrating AI to predict a user’s next movement, providing proactive feedback even before the gesture is completed. This would make the system even more intuitive and responsive to users' needs.

This project has greatly deepened my understanding of how human-centered design and machine learning can be combined to create more intuitive, accessible, and personalized user experiences. The potential applications in fitness, healthcare, and AR/VR excite me, and I look forward to continuing my research in these areas to push the boundaries of what gesture-based interactions can achieve.

Mask Detection System using TensorFlow (AI & Python)

In this project, I built and trained a machine learning model using TensorFlow to detect face mask usage and assess the level of protection. Initially developed during the COVID-19 pandemic, this system played a key role in ensuring compliance with mask mandates, particularly in public spaces, by helping businesses and public venues enforce health and safety protocols.

Research Focus

  1. AI for Public Health
    I explored how AI can be leveraged for public health safety, particularly in monitoring mask usage in real-time in environments such as airports, public transport, and healthcare settings. This research addressed safety measures during the pandemic and supported ongoing public health efforts.

  2. Model Optimization
    Optimizing the model for both accuracy and speed was a priority, particularly for deployment on mobile devices. I applied transfer learning techniques to enhance performance, enabling the system to quickly and accurately detect masks in various real-world scenarios; a transfer-learning sketch follows this list.

  3. Ethical Considerations
    Given the public nature of the model’s potential use, I ensured that it respects privacy. The system was designed to collect anonymized data and prioritize user consent, crucial for ethical operation in public spaces.
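For context on the transfer-learning approach mentioned above, here is a minimal TensorFlow sketch of one common recipe: a frozen MobileNetV2 backbone with a small binary classification head. The directory layout, backbone choice, and hyperparameters are illustrative assumptions rather than the exact training setup used in the project.

```python
import tensorflow as tf

# Assumed folder layout: data/train/{mask,no_mask}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained ImageNet features frozen

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # mask vs. no-mask
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Freezing the backbone keeps training fast and data-efficient, which matters when the labeled mask/no-mask dataset is small, as it typically was during the pandemic.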

Applications & Use Cases

  1. Public Health Monitoring
    During the COVID-19 pandemic, this model proved invaluable in monitoring mask usage at the entrances of local shops, public spaces, and essential services. The system helped enforce mask mandates by providing real-time feedback, ensuring compliance and reducing the risk of virus transmission.

  2. Retail & Public Spaces
    Local shops, many of which were already equipped with CCTV systems, benefited greatly from this AI solution. The model integrated seamlessly with existing camera infrastructure, allowing businesses to monitor mask usage and ensure customer safety during the health crisis; a per-frame inference sketch in this spirit follows this list.

  3. Education & Workplaces
    The model also proved useful in schools and workplaces, where mask compliance was essential for the safety of students and employees. It provided an efficient way to monitor compliance, helping maintain a safer environment during the pandemic.
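Here is the per-frame inference sketch referenced above, pairing an OpenCV Haar-cascade face detector with a trained classifier on a live camera feed. The model path, score threshold, and class ordering are assumptions for illustration, and the model is assumed to include its own rescaling layer as in the training sketch earlier.

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("mask_detector.keras")  # assumed model path
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # or an RTSP URL for an existing CCTV feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
        score = model.predict(face[np.newaxis].astype("float32"), verbose=0)[0][0]
        label = "mask" if score < 0.5 else "no mask"  # class order is an assumption
        color = (0, 255, 0) if label == "mask" else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    cv2.imshow("Mask monitor", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```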

Future Scope

  1. Multi-Class Detection
    I plan to enhance the system by enabling it to detect other types of protective gear, such as gloves and face shields, providing a more comprehensive solution for health and safety monitoring.

  2. Integration with Surveillance Systems
    An exciting future direction is integrating the mask detection system with existing CCTV and surveillance systems. This integration would allow for seamless real-time alerts and data-sharing with health authorities, further enhancing public health efforts.

The Mask Detection System using TensorFlow provided a practical, AI-powered solution to public health challenges during COVID-19, helping local businesses and public spaces maintain safety protocols. The ability to integrate with existing CCTV infrastructure made it a highly accessible tool, offering a valuable, adaptable solution during a time of crisis.

Discord Moderation Bot using Perspective API (NLP & Python)

In this project, I developed a Discord bot powered by Google's Perspective API that detects toxic or insulting language in real time. Built to help manage online communities, the bot improves group dynamics and fosters positive, respectful interactions. One of the key reasons I’m proud of this project is that it’s still actively used in the Chelsea FC Official Discord server, where it plays an essential role in maintaining a healthy and supportive space for fans and members.
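A minimal sketch of how such a bot can be wired together with discord.py and the Perspective API's comments:analyze endpoint is below. The API-key placeholder, the 0.85 toxicity threshold, and the delete-and-warn response are illustrative choices, not necessarily the deployed configuration.

```python
import aiohttp
import discord

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_PERSPECTIVE_KEY"   # placeholder
TOXICITY_THRESHOLD = 0.85          # assumed moderation threshold

async def toxicity_score(text: str) -> float:
    # Perspective scores text attributes on a 0-1 scale; we request TOXICITY.
    body = {"comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}}}
    async with aiohttp.ClientSession() as session:
        async with session.post(PERSPECTIVE_URL,
                                params={"key": API_KEY}, json=body) as resp:
            data = await resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

intents = discord.Intents.default()
intents.message_content = True     # needed to read message text in discord.py 2.x
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return
    if await toxicity_score(message.content) >= TOXICITY_THRESHOLD:
        # Requires the bot to have the Manage Messages permission.
        await message.delete()
        await message.channel.send(
            f"{message.author.mention}, please keep the conversation respectful.")

client.run("YOUR_BOT_TOKEN")
```

A production version would reuse one HTTP session, rate-limit API calls, and escalate repeat offenders to moderators rather than acting on a single score.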

Research Focus

  1. Natural Language Processing (NLP) in Moderation
    As part of my research, I explored how NLP can be used to filter out harmful language effectively in online communities. The goal was to create an intuitive and automated system that can detect inappropriate content in real-time, enabling communities to thrive without the need for constant manual intervention. This aligns with my interest in developing user-centric systems that prioritize safety and inclusivity.

  2. Bias in AI Models
    I recognized the importance of fairness in AI systems, especially when dealing with diverse linguistic contexts. One of the key challenges was ensuring the bot could detect harmful language without unfairly flagging harmless conversations. Through this research, I explored ways to reduce bias in language detection, ensuring that the bot serves all users equitably, regardless of their background or communication style.

  3. User Behavior Analysis
    By analyzing how different types of toxic language affect group dynamics, I was able to design the bot to intervene before conflicts escalated. My research focused on balancing effective moderation with the need to preserve healthy, open discussions—an area that ties into my broader interest in improving user experiences and minimizing friction in digital environments.

Applications & Use Cases

  1. Chelsea FC Official Discord
    As an active member of the Chelsea FC Official Discord server, I am deeply invested in ensuring that the space remains welcoming and positive for all fans. The bot I developed automatically flags harmful language and sends warnings to users, helping moderators maintain the server’s positive atmosphere. It’s incredibly rewarding to see the bot continue to serve this community, making interactions safer and more enjoyable for everyone.

  2. Gaming & Social Platforms
    This bot can also be deployed in gaming and social environments, where large communities are often challenged by toxic behavior. By proactively detecting inappropriate language, the bot helps prevent problems before they escalate, creating a smoother and more engaging experience for users. My interest in how digital tools can improve online interactions fuels my desire to create smarter, more responsive moderation systems.

  3. Customer Support & Communities
    Beyond gaming, I see immense potential for this bot in customer service chatbots. By detecting negative sentiment early, it can improve user experience, preventing frustrations from building up and maintaining a positive tone in interactions. It’s fascinating to think about how this technology could reshape customer support systems to be more empathetic and user-focused.

Future Directions

  1. Enhancing Sentiment Analysis
    Going forward, I want to improve the bot’s ability to understand more subtle or context-dependent toxic language. By integrating deeper sentiment analysis techniques, the bot could offer more accurate moderation and respond to a wider variety of scenarios, aligning with my goal to continuously enhance the user experience.

  2. Real-Time Adaptation
    Language evolves, and so should AI systems. I’m working on enabling the bot to adapt in real-time to new slang and evolving language trends, ensuring it remains effective in an ever-changing digital landscape. This flexibility is critical to ensuring long-term usability in dynamic online communities.

  3. Expanding to Other Platforms
    The potential to extend this bot’s functionality across multiple platforms, such as Slack, Teams, or other social media networks, excites me. By providing a scalable solution for community management, I hope to contribute to broader efforts in creating more inclusive and safe online spaces.

Being deeply involved in the Chelsea FC Official Discord community, I’ve seen firsthand how effective this bot has been in improving online interactions. This project not only aligns with my passion for creating safer, more supportive digital environments but also reinforces my commitment to researching and developing AI-driven systems that enhance user experience. The bot’s ongoing impact in the Chelsea FC community is a testament to the success of this approach and the potential for further research and development in the field.

Summary

With a foundation in Electronics and Telecommunications and certifications in AI and ML, I am focused on integrating AI, Machine Learning (ML), user experience design, and Human-Computer Interaction (HCI) to create intelligent, user-centered solutions. My work is driven by a passion for solving real-world challenges, particularly in accessibility, healthcare, and immersive environments such as AR/VR. Take a look at my research on EA Sports FC25 and Apple Vision Pro, where I dive into how these cutting-edge technologies are shaping the future.

I am committed to exploring how emerging technologies can enhance user experiences by creating adaptive, inclusive systems that respond intelligently to human needs. Through continuous learning and hands-on experience, I aim to bridge the gap between research and practical application in HCI, developing systems that are both intuitive and impactful.

My goal is to further my research in how HCI principles can be applied to design solutions that improve lives. I am focused on creating systems that blend AI in UX, ML, and HCI to develop personalized experiences in both digital and physical spaces, making them more accessible, engaging, and intuitive.

As I continue my journey, I look forward to contributing to the advancement of HCI research, focusing on how these technologies can be leveraged to create user-centric designs that improve accessibility and engagement in real-world applications.

Human-Centered Innovation

Blending AI, Machine Learning, and Human-Centered Design to create intuitive, transformative user experiences

About

I’ve always been fascinated by how technology can feel more like an extension of ourselves. "How can we make devices not just work for us, but understand us?" This question has driven my passion for Human-Computer Interaction (HCI), where I combine my background in Electronics and Telecommunications with a deep interest in creating intuitive, human-centered experiences.

Each project I’ve undertaken has been a step toward creating technology that intuitively understands and adapts to the user. Whether it's AI-driven moderation tools, gesture-based interaction systems, AR/VR interfaces, or AI models designed for real-world applications, every project blends creativity, usability, and technical expertise to enhance how users engage with the digital world.

The work showcased here reflects my commitment to creating empathetic, inclusive, and innovative solutions that not only meet user needs but anticipate them, all while prioritizing user-centered design and accessibility. I invite you to explore these projects and see how I’m pushing the boundaries of what technology can achieve—creating seamless, safe, and intuitive experiences for the user, with a particular focus on augmented and virtual reality to shape the future of human interaction with technology.

ROLE

HCI Developer, Researcher

TIMELINE

4+ Years

LOCATION

NA

5+

Real-world problems solved with AI

300+

Hours of HCI Research

My Vision in AI, ML, AR/VR, and HCI

My passion for Artificial Intelligence (AI), Machine Learning (ML), and Human-Computer Interaction (HCI) is driven by a deep desire to create innovative, user-centric solutions that address real-world challenges. With a solid foundation in Electronics and Telecommunications (EnTC), my goal is to bridge the gap between cutting-edge technologies and human-centered design, focusing on areas like accessibility, healthcare, and immersive technologies such as AR/VR.

I am particularly excited about how AI and ML can revolutionize user experiences by enabling systems to learn and adapt to individual behaviors. In the realm of HCI, I see AI and UX converging as a powerful force to create dynamic, intelligent environments that intuitively respond to human needs. As technologies like AR/VR continue to evolve, the interaction between the digital and physical worlds will become increasingly seamless, unlocking exciting possibilities for immersive and intuitive user experiences. My vision is to push these boundaries further, using the principles of AI, ML, and HCI to design systems that are not only functional but also deeply engaging, inclusive, and accessible.

My Goals

Exploring AI-Driven UX in Immersive Technology:
One of my primary goals is to dive deeper into how AI and ML can elevate the user experience in AR/VR environments. I believe that by applying these technologies, we can create more personalized, adaptive, and intuitive experiences. Systems will not only respond to user input but anticipate needs, offering a truly seamless interaction between the physical and virtual worlds.

Enhancing Accessibility through AI and HCI:
Creating inclusive technologies is at the heart of my work. I am particularly focused on how AI can be used to design tools that make digital interactions more accessible to individuals with disabilities. My aim is to develop assistive technologies that not only adapt to users' needs but empower them, enabling easier, more intuitive interactions with technology.

Driving Cross-Disciplinary Research:
I strive to contribute to research at the intersection of AI, ML, HCI, and immersive technologies. By prioritizing human-centered design, my goal is to develop technologies that are not only advanced but also user-friendly and impactful. I want to solve real-world problems through cross-disciplinary collaboration, ensuring that my solutions resonate with users and improve everyday life.

Fostering Innovation in Smart Systems:
Another area that excites me is the development of smart systems that intelligently respond to users and their environments. Whether it’s through assistive mobility devices, smart homes, or adaptive interfaces, I aim to contribute to the creation of systems that make technology more intuitive, responsive, and user-first.

Certifications & My AI/ML Journey

To further deepen my expertise and stay at the cutting edge of technology, I have completed several certifications, including:

  • Deep Learning Specialization – deeplearning.ai by Andrew Ng

  • TensorFlow Developer – deeplearning.ai by Andrew Ng

  • AWS Fundamentals Specialization – Amazon AWS

  • Game Playing AI with Swift for TensorFlow – IBM

  • Google IT Automation with Python – Google

These certifications have not only expanded my technical expertise but have also reinforced my commitment to creating intelligent, user-centered systems that adapt to and anticipate user needs. I am excited to continue my journey of combining AI, AR/VR, and HCI principles to create meaningful, personalized, and accessible experiences that will help shape the future of human-centered technology in the digital age

Past related works

Pose Estimation with MediaPipe (Machine Learning & Python)

In this project, I explored the potential of Google’s MediaPipe framework for real-time human pose tracking, using machine learning to detect key body and hand landmarks. My research focused on how pose estimation can improve user interfaces by enabling more intuitive, gesture-based interactions—aligning with the goal of creating technology that is more human-centered and accessible.

Research Focus

  1. Human-Centered Design
    A core aspect of my research was how pose estimation could empower users by enabling interactions based on natural human movement. I specifically explored how this aligns with principles of human-centered design, ensuring that technology remains adaptive and accessible, especially for users with physical limitations or disabilities.

  2. Real-Time Feedback
    I investigated how pairing pose estimation with AI could provide immediate, personalized feedback. This could be transformative in areas like fitness and rehabilitation, where instant guidance on movement accuracy could prevent injury and improve outcomes, offering a more tailored experience.

  3. Cross-Platform Integration
    One challenge I focused on was optimizing the pose estimation system for seamless performance across various devices, including mobile, desktop, and AR/VR platforms. Ensuring that the system could work fluidly across these different platforms was critical to maximizing its usability and impact.

Applications & Use Cases

  1. Fitness & Health
    In fitness, my project offers a system that tracks exercise form and provides real-time corrective feedback to prevent injury, making workouts more effective and safe for users.

  2. AR/VR
    I explored the potential of using pose estimation to create more immersive and intuitive AR/VR environments. Gesture controls could replace traditional input methods, offering a more natural, engaging experience.

  3. Healthcare
    In healthcare, particularly physical therapy, the system can assist in tracking rehabilitation exercises. By ensuring that patients perform movements correctly, it helps improve outcomes in a way that’s scalable, even for remote monitoring.

Future Scope

  1. Multi-Person Tracking
    I’m looking into expanding the system to track multiple users simultaneously. This would open up exciting possibilities for multiplayer gaming or group workouts, enhancing social and collaborative experiences.

  2. Enhanced Accuracy
    I plan to further improve the system’s tracking precision, especially in challenging conditions like low light or when users are wearing varying types of clothing. Improving robustness is key for real-world usability.

  3. Predictive AI
    One future direction is integrating AI to predict a user’s next movement, providing proactive feedback even before the gesture is completed. This would make the system even more intuitive and responsive to users' needs.

This project has greatly deepened my understanding of how human-centered design and machine learning can be combined to create more intuitive, accessible, and personalized user experiences. The potential applications in fitness, healthcare, and AR/VR excite me, and I look forward to continuing my research in these areas to push the boundaries of what gesture-based interactions can achieve.

Mask Detection System using TensorFlow (AI & Python)

In this project, I built and trained a machine learning model using TensorFlow to detect face mask usage and assess the level of protection. Initially developed during the COVID-19 pandemic, this system played a key role in ensuring compliance with mask mandates, particularly in public spaces, by helping businesses and public venues enforce health and safety protocols.

Research Focus

  1. AI for Public Health
    I explored how AI can be leveraged for public health safety, particularly in monitoring mask usage in real-time in environments such as airports, public transport, and healthcare settings. This research addressed safety measures during the pandemic and supported ongoing public health efforts.

  2. Model Optimization
    Optimizing the model for both accuracy and speed was a priority, particularly for deployment on mobile devices. I applied advanced transfer learning techniques to enhance performance, enabling the system to quickly and accurately detect masks in various real-world scenarios.

  3. Ethical Considerations
    Given the public nature of the model’s potential use, I ensured that it respects privacy. The system was designed to collect anonymized data and prioritize user consent, crucial for ethical operation in public spaces.

Applications & Use Cases

  1. Public Health Monitoring
    During the COVID-19 pandemic, this model proved invaluable in monitoring mask usage at the entrances of local shops, public spaces, and essential services. The system helped enforce mask mandates by providing real-time feedback, ensuring compliance and reducing the risk of virus transmission.

  2. Retail & Public Spaces
    Local shops, many of which were already equipped with CCTV systems, benefited greatly from this AI solution. The model integrated seamlessly with existing infrastructure, allowing businesses to monitor mask usage and ensure customer safety during the health crisis.

  3. Education & Workplaces
    The model also proved useful in schools and workplaces, where mask compliance was essential for the safety of students and employees. It provided an efficient way to monitor compliance, helping maintain a safer environment during the pandemic.

Future Scope

  1. Multi-Class Detection
    I plan to enhance the system by enabling it to detect other types of protective gear, such as gloves and face shields, providing a more comprehensive solution for health and safety monitoring.

  2. Integration with Surveillance Systems
    An exciting future direction is integrating the mask detection system with existing CCTV and surveillance systems. This integration would allow for seamless real-time alerts and data-sharing with health authorities, further enhancing public health efforts.

The Mask Detection System using TensorFlow provided a practical, AI-powered solution to public health challenges during COVID-19, helping local businesses and public spaces maintain safety protocols. The ability to integrate with existing CCTV infrastructure made it a highly accessible tool, offering a valuable, adaptable solution during a time of crisis.

Discord Moderation Bot using Perspective API (NLP & Python)

This project involves developing a Discord bot powered by Google's Perspective API, designed to detect toxic or insulting language in real-time. Initially built to manage online communities, I implemented this bot to improve group dynamics and foster positive, respectful interactions. One of the key reasons I’m proud of this project is that it’s still actively used in the Chelsea FC Official Discord server, where it plays an essential role in maintaining a healthy and supportive space for fans and members.

Research Focus

  1. Natural Language Processing (NLP) in Moderation
    As part of my research, I explored how NLP can be used to filter out harmful language effectively in online communities. The goal was to create an intuitive and automated system that can detect inappropriate content in real-time, enabling communities to thrive without the need for constant manual intervention. This aligns with my interest in developing user-centric systems that prioritize safety and inclusivity.

  2. Bias in AI Models
    I recognized the importance of fairness in AI systems, especially when dealing with diverse linguistic contexts. One of the key challenges was ensuring the bot could detect harmful language without unfairly flagging harmless conversations. Through this research, I explored ways to reduce bias in language detection, ensuring that the bot serves all users equitably, regardless of their background or communication style.

  3. User Behavior Analysis
    By analyzing how different types of toxic language affect group dynamics, I was able to design the bot to intervene before conflicts escalated. My research focused on balancing effective moderation with the need to preserve healthy, open discussions—an area that ties into my broader interest in improving user experiences and minimizing friction in digital environments.

Applications & Use Cases

  1. Chelsea FC Official Discord
    As an active member of the Chelsea FC Official Discord server, I am deeply invested in ensuring that the space remains welcoming and positive for all fans. The bot I developed automatically flags harmful language and sends warnings to users, helping moderators maintain the server’s positive atmosphere. It’s incredibly rewarding to see the bot continue to serve this community, making interactions safer and more enjoyable for everyone.

  2. Gaming & Social Platforms
    This bot can also be deployed in gaming and social environments, where large communities are often challenged by toxic behavior. By proactively detecting inappropriate language, the bot helps prevent problems before they escalate, creating a smoother and more engaging experience for users. My interest in how digital tools can improve online interactions fuels my desire to create smarter, more responsive moderation systems.

  3. Customer Support & Communities
    Beyond gaming, I see immense potential for this bot in customer service chatbots. By detecting negative sentiment early, it can improve user experience, preventing frustrations from building up and maintaining a positive tone in interactions. It’s fascinating to think about how this technology could reshape customer support systems to be more empathetic and user-focused.

Future Directions

  1. Enhancing Sentiment Analysis
    Going forward, I want to improve the bot’s ability to understand more subtle or context-dependent toxic language. By integrating deeper sentiment analysis techniques, the bot could offer more accurate moderation and respond to a wider variety of scenarios, aligning with my goal to continuously enhance the user experience.

  2. Real-Time Adaptation
    Language evolves, and so should AI systems. I’m working on enabling the bot to adapt in real-time to new slang and evolving language trends, ensuring it remains effective in an ever-changing digital landscape. This flexibility is critical to ensuring long-term usability in dynamic online communities.

  3. Expanding to Other Platforms
    The potential to extend this bot’s functionality across multiple platforms, such as Slack, Teams, or other social media networks, excites me. By providing a scalable solution for community management, I hope to contribute to broader efforts in creating more inclusive and safe online spaces.

Being deeply involved in the Chelsea FC Official Discord community, I’ve seen firsthand how effective this bot has been in improving online interactions. This project not only aligns with my passion for creating safer, more supportive digital environments but also reinforces my commitment to researching and developing AI-driven systems that enhance user experience. The bot’s ongoing impact in the Chelsea FC community is a testament to the success of this approach and the potential for further research and development in the field.

Neuro-Mobile: Brain-Computer Interfaces for Assistive Mobility (Electronics, BCI & AI)

Undergraduate Thesis

Developed as my undergraduate mega-thesis project, published under university research initiatives, and explored for patent potential, Neuro-Mobile advances the intersection of Brain-Computer Interfaces (BCI) and Human-Computer Interaction (HCI) to create transformative assistive technology solutions.

Using the MindWave 2.0 EEG headset, this system enables thought-controlled mobility, translating real-time brainwave signals into motion. It integrates IFTTT and Google Assistant, allowing seamless hands-free control, enhancing autonomy for individuals with physical disabilities.

Research Focus & HCI Applications

  • BCI in HCI: Explored neuroscientific and cognitive models for optimizing brain-to-machine interactions, aligning with usability, cognitive workload, and adaptive interaction principles.

  • Human-Centered Assistive Design: Applied UX research, accessibility heuristics, and real-time feedback loops to refine interface intuitiveness and user adaptation.

  • Emerging Technology for Real-World Impact: Leveraged EEG signal processing, smart integrations, and AI-driven user interactions to enhance accessibility and independence.

Key Insights & Outcomes

  • Achieved 95% accuracy in EEG signal interpretation, improving reliability for assistive mobility.

  • Live prototype demonstration to 150+ attendees, validating real-world feasibility.

  • Thesis published under university research projects, contributing to HCI, AI-driven accessibility, and neurotechnology innovation.

  • Patent exploration initiated, ensuring research contributes to scalable, real-world applications.

Use Cases

  • Assistive Mobility Devices:
    The core application of Neuro-Mobile was enabling individuals with mobility impairments to control their wheelchair purely through thought. This BCI-based control system empowered users to move forward and backward with ease, demonstrating how technology can dramatically improve the lives of individuals with disabilities. The live demonstration of the prototype underscored its potential to enhance autonomy and reduce dependency on traditional mobility aids, showcasing real-world utility.

  • Real-World Navigation Assistance:
    Neuro-Mobile also demonstrates significant potential in real-world navigation assistance for people with mobility impairments. The system can be adapted to control other assistive devices, such as exoskeletons or walking aids, providing users with greater autonomy in outdoor or complex environments. With precise brainwave signals controlling movement, users can navigate their surroundings with greater ease and independence, helping them interact with the world more fluidly and confidently.

  • Rehabilitation & Therapy:
    Beyond mobility, Neuro-Mobile offers significant potential for rehabilitation. The system could engage patients in brain-controlled exercises, where users would control their movements, tracking real-time progress. This has applications in motor skill rehabilitation and mental engagement, making it an invaluable tool for physical therapy. The project lays the groundwork for adaptive, interactive therapies, which can be tailored to an individual’s needs and progress, improving motor control and cognitive function over time.

  • Personalized Feedback & Adaptation:
    Another potential use case lies in personalized neurofeedback. By adapting the control system based on individual brain activity, the project could allow for increasingly nuanced control, such as adjusting speed, steering accuracy, or enabling dynamic obstacle avoidance. This represents a leap toward more intuitive, contextual user experiences.

Future Scope

  • Enhanced Precision & Control: Adaptive BCI-driven navigation, obstacle avoidance, and multi-speed motion.

  • Expanding Multi-Device Integration: Enabling users to control smart home appliances, communication tools, and rehabilitation systems through BCI-driven interfaces.

  • AI-Powered Personalization: Machine learning-based adaptive neurofeedback, refining usability based on individual brain patterns.

Neuro-Mobile demonstrates the power of BCI-driven HCI solutions, showcasing how AI, accessibility, and user-centered design can shape the future of assistive mobility. The live demonstration, coupled with the thesis publication, highlights the project’s academic value and its potential for real-world impact. The patent exploration further amplifies the long-term potential, ensuring the research can evolve into a scalable solution that benefits users globally.

Summary

With a foundation and certifications, I am focused on integrating AI, Machine Learning (ML), User Driven Experience, and Human-Computer Interaction (HCI) to create intelligent, user-centered solutions. My work is driven by a passion for solving real-world challenges, particularly in accessibility, healthcare, and immersive environments such as AR/VR. Look at my research on EA Sports FC25 and Apple VisionPro; where I dive in on how these cutting-edge technologies are shaping the future.

I am committed to exploring how emerging technologies can enhance user experiences by creating adaptive, inclusive systems that respond intelligently to human needs. Through continuous learning and hands-on experience, I aim to bridge the gap between research and practical application in HCI, developing systems that are both intuitive and impactful.

My goal is to further my research in how HCI principles can be applied to design solutions that improve lives. I am focused on creating systems that blend AI in UX, ML, and HCI to develop personalized experiences in both digital and physical spaces, making them more accessible, engaging, and intuitive.

As I continue my journey, I look forward to contributing to the advancement of HCI research, focusing on how these technologies can be leveraged to create user-centric designs that improve accessibility and engagement in real-world applications.

More work this way

neelakolkar2001@gmail.com

Email copied!

neelakolkar2001@gmail.com

Email copied!

Human-Centered Innovation

Blending AI, Machine Learning, and Human-Centered Design to create intuitive, transformative user experiences

About

I’ve always been fascinated by how technology can feel more like an extension of ourselves. "How can we make devices not just work for us, but understand us?" This question has driven my passion for Human-Computer Interaction (HCI), where I combine my background in Electronics and Telecommunications with a deep interest in creating intuitive, human-centered experiences.

Each project I’ve undertaken has been a step toward creating technology that intuitively understands and adapts to the user. Whether it's AI-driven moderation tools, gesture-based interaction systems, AR/VR interfaces, or AI models designed for real-world applications, every project blends creativity, usability, and technical expertise to enhance how users engage with the digital world.

The work showcased here reflects my commitment to creating empathetic, inclusive, and innovative solutions that not only meet user needs but anticipate them, all while prioritizing user-centered design and accessibility. I invite you to explore these projects and see how I’m pushing the boundaries of what technology can achieve—creating seamless, safe, and intuitive experiences for the user, with a particular focus on augmented and virtual reality to shape the future of human interaction with technology.

ROLE

HCI Developer, Researcher

TIMELINE

4+ Years

LOCATION

NA

5+

Real-world problems solved with AI

300+

Hours of HCI Research

My Vision in AI, ML, AR/VR, and HCI

My passion for Artificial Intelligence (AI), Machine Learning (ML), and Human-Computer Interaction (HCI) is driven by a deep desire to create innovative, user-centric solutions that address real-world challenges. With a solid foundation in Electronics and Telecommunications (EnTC), my goal is to bridge the gap between cutting-edge technologies and human-centered design, focusing on areas like accessibility, healthcare, and immersive technologies such as AR/VR.

I am particularly excited about how AI and ML can revolutionize user experiences by enabling systems to learn and adapt to individual behaviors. In the realm of HCI, I see AI and UX converging as a powerful force to create dynamic, intelligent environments that intuitively respond to human needs. As technologies like AR/VR continue to evolve, the interaction between the digital and physical worlds will become increasingly seamless, unlocking exciting possibilities for immersive and intuitive user experiences. My vision is to push these boundaries further, using the principles of AI, ML, and HCI to design systems that are not only functional but also deeply engaging, inclusive, and accessible.

My Goals

Exploring AI-Driven UX in Immersive Technology:
One of my primary goals is to dive deeper into how AI and ML can elevate the user experience in AR/VR environments. I believe that by applying these technologies, we can create more personalized, adaptive, and intuitive experiences. Systems will not only respond to user input but anticipate needs, offering a truly seamless interaction between the physical and virtual worlds.

Enhancing Accessibility through AI and HCI:
Creating inclusive technologies is at the heart of my work. I am particularly focused on how AI can be used to design tools that make digital interactions more accessible to individuals with disabilities. My aim is to develop assistive technologies that not only adapt to users' needs but empower them, enabling easier, more intuitive interactions with technology.

Driving Cross-Disciplinary Research:
I strive to contribute to research at the intersection of AI, ML, HCI, and immersive technologies. By prioritizing human-centered design, my goal is to develop technologies that are not only advanced but also user-friendly and impactful. I want to solve real-world problems through cross-disciplinary collaboration, ensuring that my solutions resonate with users and improve everyday life.

Fostering Innovation in Smart Systems:
Another area that excites me is the development of smart systems that intelligently respond to users and their environments. Whether it’s through assistive mobility devices, smart homes, or adaptive interfaces, I aim to contribute to the creation of systems that make technology more intuitive, responsive, and user-first.

Certifications & My AI/ML Journey

To further deepen my expertise and stay at the cutting edge of technology, I have completed several certifications, including:

  • Deep Learning Specialization – deeplearning.ai by Andrew Ng

  • TensorFlow Developer – deeplearning.ai by Andrew Ng

  • AWS Fundamentals Specialization – Amazon AWS

  • Game Playing AI with Swift for TensorFlow – IBM

  • Google IT Automation with Python – Google

These certifications have not only expanded my technical expertise but have also reinforced my commitment to creating intelligent, user-centered systems that adapt to and anticipate user needs. I am excited to continue my journey of combining AI, AR/VR, and HCI principles to create meaningful, personalized, and accessible experiences that will help shape the future of human-centered technology in the digital age

Past related works

Pose Estimation with MediaPipe (Machine Learning & Python)

In this project, I explored the potential of Google’s MediaPipe framework for real-time human pose tracking, using machine learning to detect key body and hand landmarks. My research focused on how pose estimation can improve user interfaces by enabling more intuitive, gesture-based interactions—aligning with the goal of creating technology that is more human-centered and accessible.

Research Focus

  1. Human-Centered Design
    A core aspect of my research was how pose estimation could empower users by enabling interactions based on natural human movement. I specifically explored how this aligns with principles of human-centered design, ensuring that technology remains adaptive and accessible, especially for users with physical limitations or disabilities.

  2. Real-Time Feedback
    I investigated how pairing pose estimation with AI could provide immediate, personalized feedback. This could be transformative in areas like fitness and rehabilitation, where instant guidance on movement accuracy could prevent injury and improve outcomes, offering a more tailored experience.

  3. Cross-Platform Integration
    One challenge I focused on was optimizing the pose estimation system for seamless performance across various devices, including mobile, desktop, and AR/VR platforms. Ensuring that the system could work fluidly across these different platforms was critical to maximizing its usability and impact.

Applications & Use Cases

  1. Fitness & Health
    In fitness, my project offers a system that tracks exercise form and provides real-time corrective feedback to prevent injury, making workouts more effective and safe for users.

  2. AR/VR
    I explored the potential of using pose estimation to create more immersive and intuitive AR/VR environments. Gesture controls could replace traditional input methods, offering a more natural, engaging experience.

  3. Healthcare
    In healthcare, particularly physical therapy, the system can assist in tracking rehabilitation exercises. By ensuring that patients perform movements correctly, it helps improve outcomes in a way that’s scalable, even for remote monitoring.

Future Scope

  1. Multi-Person Tracking
    I’m looking into expanding the system to track multiple users simultaneously. This would open up exciting possibilities for multiplayer gaming or group workouts, enhancing social and collaborative experiences.

  2. Enhanced Accuracy
    I plan to further improve the system’s tracking precision, especially in challenging conditions like low light or when users are wearing varying types of clothing. Improving robustness is key for real-world usability.

  3. Predictive AI
    One future direction is integrating AI to predict a user’s next movement, providing proactive feedback even before the gesture is completed. This would make the system even more intuitive and responsive to users' needs.

This project has greatly deepened my understanding of how human-centered design and machine learning can be combined to create more intuitive, accessible, and personalized user experiences. The potential applications in fitness, healthcare, and AR/VR excite me, and I look forward to continuing my research in these areas to push the boundaries of what gesture-based interactions can achieve.

Mask Detection System using TensorFlow (AI & Python)

In this project, I built and trained a machine learning model using TensorFlow to detect face mask usage and assess the level of protection. Initially developed during the COVID-19 pandemic, this system played a key role in ensuring compliance with mask mandates, particularly in public spaces, by helping businesses and public venues enforce health and safety protocols.

Research Focus

  1. AI for Public Health
    I explored how AI can be leveraged for public health safety, particularly in monitoring mask usage in real-time in environments such as airports, public transport, and healthcare settings. This research addressed safety measures during the pandemic and supported ongoing public health efforts.

  2. Model Optimization
    Optimizing the model for both accuracy and speed was a priority, particularly for deployment on mobile devices. I applied advanced transfer learning techniques to enhance performance, enabling the system to quickly and accurately detect masks in various real-world scenarios.

  3. Ethical Considerations
    Given the public nature of the model’s potential use, I ensured that it respects privacy. The system was designed to collect anonymized data and prioritize user consent, crucial for ethical operation in public spaces.

Applications & Use Cases

  1. Public Health Monitoring
    During the COVID-19 pandemic, this model proved invaluable in monitoring mask usage at the entrances of local shops, public spaces, and essential services. The system helped enforce mask mandates by providing real-time feedback, ensuring compliance and reducing the risk of virus transmission.

  2. Retail & Public Spaces
    Local shops, many of which were already equipped with CCTV systems, benefited greatly from this AI solution. The model integrated seamlessly with existing infrastructure, allowing businesses to monitor mask usage and ensure customer safety during the health crisis.

  3. Education & Workplaces
    The model also proved useful in schools and workplaces, where mask compliance was essential for the safety of students and employees. It provided an efficient way to monitor compliance, helping maintain a safer environment during the pandemic.

Future Scope

  1. Multi-Class Detection
    I plan to enhance the system by enabling it to detect other types of protective gear, such as gloves and face shields, providing a more comprehensive solution for health and safety monitoring.

  2. Integration with Surveillance Systems
    An exciting future direction is integrating the mask detection system with existing CCTV and surveillance systems. This integration would allow for seamless real-time alerts and data-sharing with health authorities, further enhancing public health efforts.

The Mask Detection System using TensorFlow provided a practical, AI-powered solution to public health challenges during COVID-19, helping local businesses and public spaces maintain safety protocols. The ability to integrate with existing CCTV infrastructure made it a highly accessible tool, offering a valuable, adaptable solution during a time of crisis.

Discord Moderation Bot using Perspective API (NLP & Python)

In this project, I developed a Discord bot powered by Google's Perspective API that detects toxic or insulting language in real time. I initially built it to manage online communities, improving group dynamics and fostering positive, respectful interactions. One of the reasons I’m proud of this project is that it is still actively used in the Chelsea FC Official Discord server, where it plays an essential role in maintaining a healthy, supportive space for fans and members.
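
Below is a minimal sketch of that moderation loop, assuming discord.py 2.x and the Perspective API’s commentanalyzer endpoint. The toxicity threshold, environment-variable names, and warning message are illustrative, and a production bot would use an async HTTP client (e.g. aiohttp) so the API call does not block Discord’s event loop.

  import os
  import discord
  import requests

  PERSPECTIVE_URL = (
      "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze")
  API_KEY = os.environ["PERSPECTIVE_API_KEY"]  # hypothetical variable name
  TOXICITY_THRESHOLD = 0.85                    # illustrative; tune per community

  def toxicity_score(text: str) -> float:
      """Return Perspective's TOXICITY summary score (0.0-1.0) for text."""
      body = {
          "comment": {"text": text},
          "languages": ["en"],
          "requestedAttributes": {"TOXICITY": {}},
      }
      resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY},
                           json=body, timeout=10)
      resp.raise_for_status()
      return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

  intents = discord.Intents.default()
  intents.message_content = True  # required to read message text in discord.py 2.x
  client = discord.Client(intents=intents)

  @client.event
  async def on_message(message: discord.Message):
      if message.author.bot or not message.content:
          return
      if toxicity_score(message.content) >= TOXICITY_THRESHOLD:
          await message.delete()
          await message.channel.send(
              f"{message.author.mention}, please keep the conversation respectful.")

  client.run(os.environ["DISCORD_BOT_TOKEN"])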

Research Focus

  1. Natural Language Processing (NLP) in Moderation
    As part of my research, I explored how NLP can be used to filter out harmful language effectively in online communities. The goal was to create an intuitive and automated system that can detect inappropriate content in real-time, enabling communities to thrive without the need for constant manual intervention. This aligns with my interest in developing user-centric systems that prioritize safety and inclusivity.

  2. Bias in AI Models
    I recognized the importance of fairness in AI systems, especially when dealing with diverse linguistic contexts. One of the key challenges was ensuring the bot could detect harmful language without unfairly flagging harmless conversations. Through this research, I explored ways to reduce bias in language detection, ensuring that the bot serves all users equitably, regardless of their background or communication style.

  3. User Behavior Analysis
    By analyzing how different types of toxic language affect group dynamics, I was able to design the bot to intervene before conflicts escalated. My research focused on balancing effective moderation with the need to preserve healthy, open discussions—an area that ties into my broader interest in improving user experiences and minimizing friction in digital environments.

Applications & Use Cases

  1. Chelsea FC Official Discord
    As an active member of the Chelsea FC Official Discord server, I am deeply invested in ensuring that the space remains welcoming and positive for all fans. The bot I developed automatically flags harmful language and sends warnings to users, helping moderators maintain the server’s positive atmosphere. It’s incredibly rewarding to see the bot continue to serve this community, making interactions safer and more enjoyable for everyone.

  2. Gaming & Social Platforms
    This bot can also be deployed in gaming and social environments, where large communities are often challenged by toxic behavior. By proactively detecting inappropriate language, the bot helps prevent problems before they escalate, creating a smoother and more engaging experience for users. My interest in how digital tools can improve online interactions fuels my desire to create smarter, more responsive moderation systems.

  3. Customer Support & Communities
    Beyond gaming, I see immense potential for this approach in customer support channels. By detecting negative sentiment early, it can improve the user experience, preventing frustration from building up and keeping interactions on a positive tone. It’s fascinating to think about how this technology could reshape customer support systems to be more empathetic and user-focused.

Future Directions

  1. Enhancing Sentiment Analysis
    Going forward, I want to improve the bot’s ability to understand more subtle or context-dependent toxic language. By integrating deeper sentiment analysis techniques, the bot could offer more accurate moderation and respond to a wider variety of scenarios, aligning with my goal to continuously enhance the user experience.

  2. Real-Time Adaptation
    Language evolves, and so should AI systems. I’m working on enabling the bot to adapt in real-time to new slang and evolving language trends, ensuring it remains effective in an ever-changing digital landscape. This flexibility is critical to ensuring long-term usability in dynamic online communities.

  3. Expanding to Other Platforms
    The potential to extend this bot’s functionality across multiple platforms, such as Slack, Teams, or other social media networks, excites me. By providing a scalable solution for community management, I hope to contribute to broader efforts in creating more inclusive and safe online spaces.

Being deeply involved in the Chelsea FC Official Discord community, I’ve seen firsthand how effective this bot has been in improving online interactions. This project not only aligns with my passion for creating safer, more supportive digital environments but also reinforces my commitment to researching and developing AI-driven systems that enhance user experience. The bot’s ongoing impact in the Chelsea FC community is a testament to the success of this approach and the potential for further research and development in the field.

Neuro-Mobile: Brain-Computer Interfaces for Assistive Mobility (Electronics, BCI & AI)

Undergraduate Thesis

Developed as my undergraduate mega-thesis project, published under university research initiatives, and explored for patent potential, Neuro-Mobile advances the intersection of Brain-Computer Interfaces (BCI) and Human-Computer Interaction (HCI) to create transformative assistive technology solutions.

Using the MindWave 2.0 EEG headset, the system enables thought-controlled mobility, translating real-time brainwave signals into motion. It integrates with IFTTT and Google Assistant to allow seamless hands-free control, enhancing autonomy for individuals with physical disabilities.
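
Below is a simplified sketch of that control loop, assuming NeuroSky’s ThinkGear Connector (which streams parsed JSON over a local TCP socket) and an IFTTT Webhooks trigger as the bridge to the motor controller. The attention threshold, event name, key, and record delimiter are illustrative assumptions rather than the thesis’s actual parameters.

  import json
  import socket
  import requests

  THINKGEAR_HOST, THINKGEAR_PORT = "127.0.0.1", 13854
  ATTENTION_THRESHOLD = 70  # illustrative; calibrated per user in practice
  IFTTT_URL = "https://maker.ifttt.com/trigger/move_forward/with/key/YOUR_KEY"

  sock = socket.create_connection((THINKGEAR_HOST, THINKGEAR_PORT))
  # Ask the connector for parsed JSON rather than raw EEG samples.
  sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())

  buffer = b""
  while True:
      buffer += sock.recv(4096)
      while b"\r" in buffer:  # records are assumed to be \r-delimited
          line, buffer = buffer.split(b"\r", 1)
          try:
              packet = json.loads(line)
          except json.JSONDecodeError:
              continue
          attention = packet.get("eSense", {}).get("attention")
          if attention is not None and attention >= ATTENTION_THRESHOLD:
              # Fire the webhook that IFTTT routes to the motor controller.
              requests.post(IFTTT_URL, timeout=5)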

Research Focus & HCI Applications

  • BCI in HCI: Explored neuroscientific and cognitive models for optimizing brain-to-machine interactions, aligning with usability, cognitive workload, and adaptive interaction principles.

  • Human-Centered Assistive Design: Applied UX research, accessibility heuristics, and real-time feedback loops to refine interface intuitiveness and user adaptation.

  • Emerging Technology for Real-World Impact: Leveraged EEG signal processing, smart integrations, and AI-driven user interactions to enhance accessibility and independence.

Key Insights & Outcomes

  • Achieved 95% accuracy in EEG signal interpretation, improving reliability for assistive mobility.

  • Live prototype demonstration to 150+ attendees, validating real-world feasibility.

  • Thesis published under university research projects, contributing to HCI, AI-driven accessibility, and neurotechnology innovation.

  • Patent exploration initiated, ensuring research contributes to scalable, real-world applications.

Use Cases

  • Assistive Mobility Devices:
    The core application of Neuro-Mobile was enabling individuals with mobility impairments to control their wheelchair purely through thought. This BCI-based control system empowered users to move forward and backward with ease, demonstrating how technology can dramatically improve the lives of individuals with disabilities. The live demonstration of the prototype underscored its potential to enhance autonomy and reduce dependency on traditional mobility aids, showcasing real-world utility.

  • Real-World Navigation Assistance:
    Neuro-Mobile also demonstrates significant potential in real-world navigation assistance for people with mobility impairments. The system can be adapted to control other assistive devices, such as exoskeletons or walking aids, providing users with greater autonomy in outdoor or complex environments. With precise brainwave signals controlling movement, users can navigate their surroundings with greater ease and independence, helping them interact with the world more fluidly and confidently.

  • Rehabilitation & Therapy:
    Beyond mobility, Neuro-Mobile offers significant potential for rehabilitation. The system could engage patients in brain-controlled exercises, where users would control their movements, tracking real-time progress. This has applications in motor skill rehabilitation and mental engagement, making it an invaluable tool for physical therapy. The project lays the groundwork for adaptive, interactive therapies, which can be tailored to an individual’s needs and progress, improving motor control and cognitive function over time.

  • Personalized Feedback & Adaptation:
    Another potential use case lies in personalized neurofeedback. By adapting the control system based on individual brain activity, the project could allow for increasingly nuanced control, such as adjusting speed, steering accuracy, or enabling dynamic obstacle avoidance. This represents a leap toward more intuitive, contextual user experiences.

Future Scope

  • Enhanced Precision & Control: Adaptive BCI-driven navigation, obstacle avoidance, and multi-speed motion.

  • Expanding Multi-Device Integration: Enabling users to control smart home appliances, communication tools, and rehabilitation systems through BCI-driven interfaces.

  • AI-Powered Personalization: Machine learning-based adaptive neurofeedback, refining usability based on individual brain patterns.

Neuro-Mobile demonstrates the power of BCI-driven HCI solutions, showcasing how AI, accessibility, and user-centered design can shape the future of assistive mobility. The live demonstration, coupled with the thesis publication, highlights the project’s academic value and its potential for real-world impact. The patent exploration further amplifies the long-term potential, ensuring the research can evolve into a scalable solution that benefits users globally.

Summary

With a solid foundation in Electronics and Telecommunications and professional certifications, I am focused on integrating AI, Machine Learning (ML), user-driven experience design, and Human-Computer Interaction (HCI) to create intelligent, user-centered solutions. My work is driven by a passion for solving real-world challenges, particularly in accessibility, healthcare, and immersive environments such as AR/VR. See my research on EA Sports FC25 and Apple VisionPro, where I explore how these cutting-edge technologies are shaping the future.

I am committed to exploring how emerging technologies can enhance user experiences by creating adaptive, inclusive systems that respond intelligently to human needs. Through continuous learning and hands-on experience, I aim to bridge the gap between research and practical application in HCI, developing systems that are both intuitive and impactful.

My goal is to further my research in how HCI principles can be applied to design solutions that improve lives. I am focused on creating systems that blend AI in UX, ML, and HCI to develop personalized experiences in both digital and physical spaces, making them more accessible, engaging, and intuitive.

As I continue my journey, I look forward to contributing to the advancement of HCI research, focusing on how these technologies can be leveraged to create user-centric designs that improve accessibility and engagement in real-world applications.

neelakolkar2001@gmail.com