Ashwani Attri, Department of Computer Science and Engineering (Data Science), ACE Engineering College, Telangana, India.
World Journal of Advanced Research and Reviews, 2025, 26(02), 1037-1044
Article DOI: 10.30574/wjarr.2025.26.2.1685
Received on 27 March 2025; revised on 03 May 2025; accepted on 06 May 2025
With the increasing ubiquity of digital imagery, there is a growing need for intelligent systems that can understand visual content and express that understanding in human-like language. This paper presents a comprehensive AI-based pipeline that generates captions from images, expands those captions into vivid stories, and finally delivers the stories in a human voice. The proposed system integrates multiple components: a Convolutional Neural Network (VGG16) for extracting visual features, an LSTM-based sequence model for caption generation, GPT-2 for creative story generation, and Google Text-to-Speech (gTTS) for voice synthesis. The result is a multimodal AI framework capable of transforming static images into rich spoken narratives, with applications in assistive technologies, interactive storytelling, content automation, and education. The proposed model is trained and evaluated on the Flickr8k dataset, demonstrating a viable path toward automated visual storytelling.
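The four-stage pipeline described in the abstract (VGG16 feature extraction → LSTM captioning → GPT-2 story generation → gTTS narration) can be sketched as a simple orchestration of components. In this minimal sketch the real models are replaced by lightweight stubs so the control flow is visible; function names such as `extract_features` and `image_to_narration` are illustrative assumptions, not taken from the paper, and a real implementation would substitute Keras (VGG16 + LSTM), Hugging Face `transformers` (GPT-2), and the `gTTS` package.

```python
def extract_features(image):
    """Stand-in for a VGG16 forward pass.

    A real version would run the image through VGG16 and return the
    activations of a fully connected layer (4096-dimensional for fc2).
    """
    return [sum(image) % 1.0] * 4096

def generate_caption(features):
    """Stand-in for the LSTM decoder that maps visual features to a caption."""
    return "a dog runs across a grassy field"

def generate_story(caption):
    """Stand-in for GPT-2 conditioned on the caption as a prompt."""
    return f"Once upon a time, {caption}, chasing the wind toward the horizon."

def synthesize_speech(text):
    """Stand-in for gTTS; a real call would resemble gTTS(text=text).save('out.mp3')."""
    return f"<audio: {len(text)} chars>"

def image_to_narration(image):
    """Chain the four stages: image -> features -> caption -> story -> speech."""
    features = extract_features(image)
    caption = generate_caption(features)
    story = generate_story(caption)
    return caption, story, synthesize_speech(story)

caption, story, audio = image_to_narration([0.1, 0.2, 0.3])
print(caption)
print(story)
```

The key design point the abstract implies is that each stage consumes only the previous stage's output (features, caption text, story text), so the components can be trained and swapped independently.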
Image Captioning; CNN-LSTM; VGG16; GPT-2; Text-to-Speech (gTTS); Image-to-Story Generation; Natural Language Processing (NLP)
Ashwani Attri, Priyanka Gudeboyena, Vaishnavi Chigurla, Soumika Moluguri and Nithin Kasoju. Multimodal AI framework for image captioning, story generation and natural speech narration. World Journal of Advanced Research and Reviews, 2025, 26(2), 1037-1044. Article DOI: https://doi.org/10.30574/wjarr.2025.26.2.1685