The Evolution of Natural Language Processing: A Decade in Reflection (2013–2023)

  When I began my PhD journey in 2013, the field of Natural Language Processing (NLP) was still dominated by traditional machine learning techniques, feature engineering, and statistical models. Over the past decade, I have witnessed one of the fastest and most profound transformations in the history of computer science: a shift from hand-crafted features to deep learning, and eventually to large language models (LLMs) that now power generative AI systems. This article reflects on that trajectory across three key phases: 2013–2015 (the statistical learning era), 2016–2018 (the deep learning revolution), and 2019 to the present (the transformer and LLM era).

  In 2013, the state-of-the-art NLP methods were feature-based statistical models. The dominant techniques were logistic regression, SVMs, and CRFs (Conditional Random Fields), applied to tasks like POS tagging, NER, and text classification. Feature engineering (n-grams, TF-IDF, syntactic features) was central…
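To make the feature-engineering workflow of that era concrete, here is a minimal sketch of computing TF-IDF features by hand, as one might have done before feeding them to a logistic regression or SVM classifier. The tokenization (whitespace splitting) and the idf variant (log(N / df) without smoothing) are illustrative assumptions, not a specific system from the period.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute sparse TF-IDF vectors for a list of tokenized documents.

    Uses raw term frequency and the textbook inverse document frequency
    idf(t) = log(N / df(t)); real pipelines often add smoothing.
    """
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))             # count each term once per document
    vectors = []
    for doc in docs:
        tf = Counter(doc)               # raw term counts in this document
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]
vecs = tfidf(docs)
# A term appearing in every document gets weight 0; rare terms score higher.
```

Each resulting dictionary is a sparse feature vector; in practice these were often combined with n-gram and syntactic features into one large feature space for the linear classifier.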