Advancements in Text Summarization Using LSTM and Transformer-Based Models: A Comparative Review


M. Rukhsah

Abstract

Text summarization has become a vital part of natural language processing (NLP), condensing long documents into short texts that retain their essential content. This survey traces the development of text summarization algorithms, with particular focus on Long Short-Term Memory (LSTM) and Transformer-based approaches. LSTM models initially advanced neural text summarization by modeling sequential dependencies through encoder-decoder architectures, often augmented with attention mechanisms and pointer-generator networks. However, their difficulty in capturing long-range dependencies and their inability to exploit parallel computation motivated Transformer-based models, which introduce self-attention and improve summarization performance. More recently, systems such as BERTSUM, PEGASUS, BART, and T5 have set new benchmarks in both extractive and abstractive summarization. This paper critically compares these architectures in terms of model design, training complexity, evaluation metrics, and areas of application. It also highlights widely used datasets and ongoing trends, including the integration of pretrained language models, adaptation to new domains, and emerging evaluation challenges. Open concerns such as factual consistency, summarization of low-resource texts, and scalability guide the directions proposed for future research.
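As a brief illustration of the Transformer-based approach surveyed above, the sketch below runs abstractive summarization with a pretrained BART checkpoint through the Hugging Face transformers pipeline. The library, the facebook/bart-large-cnn checkpoint, and the generation parameters are illustrative assumptions and are not prescribed by the paper.

```python
# Minimal sketch: abstractive summarization with a pretrained Transformer.
# Assumes the Hugging Face `transformers` library and the publicly available
# `facebook/bart-large-cnn` checkpoint; neither is specified by this paper.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Long Short-Term Memory networks first advanced neural summarization "
    "through encoder-decoder architectures with attention and pointer-generator "
    "mechanisms, but their sequential nature limits long-range modeling and "
    "parallel computation. Transformer models based on self-attention now "
    "dominate the task."
)

# Output-length bounds are illustrative; do_sample=False gives deterministic decoding.
result = summarizer(document, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```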


How to Cite
M. Rukhsah. (2025). Advancements in Text Summarization Using LSTM and Transformer-Based Models: A Comparative Review. IIRJET, 11(2). https://doi.org/10.32595/iirjet.org/v11i2.2025.233