Funding: This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number: IMSIU-RP23008).
Abstract: This paper presents an innovative approach to enhancing the querying capability of ChatGPT, a conversational artificial intelligence model, by incorporating voice-based interaction and a convolutional neural network (CNN)-based impaired-vision detection model. The proposed system aims to improve user experience and accessibility by allowing users to interact with ChatGPT using voice commands. Additionally, a CNN-based model is employed to detect impairments in the user's vision, enabling the system to adapt its responses and provide appropriate assistance. This research directly addresses the challenges of user experience and inclusivity in artificial intelligence (AI) and reflects our commitment to making ChatGPT more accessible and valuable to a broader audience. The integration of voice-based interaction and impaired-vision detection represents a novel approach to conversational AI. Beyond its novelty, it has the potential to meaningfully improve the lives of users, particularly those with visual impairments. The modular system design ensures adaptability and scalability, which are critical for the practical implementation of these advancements. Crucially, the solution places the user at its core: customizing responses for users with visual impairments demonstrates AI's potential not only to understand but also to accommodate individual needs and preferences.
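A minimal sketch of the pipeline the abstract describes follows. It is illustrative only: the library choices (speech_recognition for voice capture, TensorFlow/Keras for the CNN, the openai client for querying ChatGPT, pyttsx3 for spoken replies), the CNN layer configuration, and all function names are assumptions made for the sketch, not the paper's actual implementation.

    # Hypothetical sketch of the described system: voice query -> ChatGPT,
    # with a CNN deciding whether to deliver the reply as speech.
    import speech_recognition as sr          # voice capture / speech-to-text
    import pyttsx3                           # text-to-speech for spoken replies
    from openai import OpenAI                # ChatGPT querying
    from tensorflow.keras import layers, models

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def build_impairment_cnn(input_shape=(128, 128, 3)):
        """Small binary CNN ('impaired' vs. 'not impaired'); layers are illustrative."""
        model = models.Sequential([
            layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),  # probability of impairment
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    def listen_for_query():
        """Capture one voice command from the microphone and transcribe it."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            audio = recognizer.listen(source)
        return recognizer.recognize_google(audio)

    def ask_chatgpt(prompt):
        """Send the transcribed query to ChatGPT and return the reply text."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def respond(reply_text, vision_impaired):
        """Adapt the output channel: speak aloud if the user is vision-impaired."""
        if vision_impaired:
            engine = pyttsx3.init()
            engine.say(reply_text)
            engine.runAndWait()
        else:
            print(reply_text)

In this sketch the impairment classifier only switches the output channel (spoken versus printed replies); the modular design described in the abstract would allow richer adaptations to be substituted at that step.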