This item's files can only be accessed by the Administrator group.
Permalink
https://hdl.handle.net/2142/117572
Description
Title
Empower learning-to-rank with language models
Author(s)
Zhang, Yunan
Issue Date
2022-12-07
Director of Research (if dissertation) or Advisor (if thesis)
Zhai, Chengxiang
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
ranking
information retrieval
data mining
Abstract
Pre-trained large language models bring revolutionary changes to solving NLP problems. This thesis tackles how to leverage pre-trained language models for information retrieval tasks. On the one hand, searching and ranking is one of the most well-grounded machine learning scenarios. On the other hand, we find that progress in NLU can be transferred to search problems, given its foundation in document understanding. This thesis consists of four parts; each part investigates how we can build practical applications based on the recent success of neural language models. The first part discusses how we design a multilingual query understanding system using pre-trained language models tailored for this task. In the second part, we discuss how we build a vision-language multimodal transformer for fine-grained classification and retrieval tasks. In the third part, we propose a novel transformer model to mitigate the distribution shift between training and serving of ranking systems. In the fourth part, we present a way to mitigate confounding effects in two-tower models.