Language modeling studies probability distributions over strings of text. It
is one of the most fundamental tasks in natural language processing (NLP) and
has been widely used in text generation, speech recognition, machine
translation, and other applications. Conventional language models (CLMs) aim to predict the
probability of linguistic sequences in a causal manner. In contrast,
pre-trained language models (PLMs) cover a broader class of models and can be
used both for causal sequential modeling and for fine-tuning on downstream applications.
PLMs have their own training paradigms (usually self-supervised) and serve as
foundation models in modern NLP systems. This overview paper provides an
introduction to both CLMs and PLMs from five aspects, namely linguistic units,
structures, training methods, evaluation methods, and applications.
Furthermore, we discuss the relationship between CLMs and PLMs and shed light
on the future directions of language modeling in the pre-trained era.