Text Summarization using Deep Learning: A Study on Automatic Summarization
DOI: https://doi.org/10.62019/abbdm.v4i4.263

Keywords: Automatic text summarization, natural language processing, deep learning.

Abstract
Automatic text summarization has recently become popular in natural language processing because of its ability to condense an overwhelming quantity of information into a concise summary. This research analyzes extractive and abstractive approaches to automatic text summarization using deep learning models. The work focuses on the performance of models such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Transformer-based models including BERT and GPT. To evaluate these models, the study employs objective metrics such as ROUGE and BLEU, in addition to subjective human evaluation of coherence, relevance, and fluency. Results show that the Transformer-based models, BERT and GPT, outperform the extractive models across all criteria, producing summaries with high fluency and contextual relevance. However, challenges remain in improving higher-order n-gram recall and preserving the summary's relevance to the source text. The study concludes that deep learning-based summarization methods demonstrate high potential but require further research to improve the quality of the output summaries. The work offers an understanding of the strengths and limitations of existing models and provides a foundation for future development in automatic summarization.
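To illustrate the kind of objective metric the abstract mentions, the following is a minimal sketch of ROUGE-1 recall, the unigram-overlap variant of ROUGE: the fraction of reference-summary unigrams that also appear in the candidate summary, with counts clipped as in the standard definition. The tokenization (lowercased whitespace split) and the example sentences are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def rouge_1_recall(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams covered by the
    candidate, with each token's count clipped to its candidate count."""
    # Illustrative tokenization: lowercase + whitespace split.
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: a token matches at most as often as it occurs
    # in the candidate.
    overlap = sum(min(n, cand_counts[tok]) for tok, n in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

# Hypothetical example pair (not from the paper's dataset):
reference = "the model produces fluent and relevant summaries"
candidate = "the model generates fluent summaries"
print(round(rouge_1_recall(reference, candidate), 2))  # 4 of 7 reference unigrams matched
```

In practice, published ROUGE scores come from standard toolkits (e.g. the original ROUGE script or its reimplementations), which also report precision, F1, and higher-order variants such as ROUGE-2 and ROUGE-L.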
License
Copyright (c) 2024 Anwar Ali Sanjrani, Muhammad Saqib, Saira Rehman, Muhammad Saeed Ahmad

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.