Posts by Collection

portfolio

publications

Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator

Published in The Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS), 2025

In this work, we propose a novel unsupervised calibration method that mitigates the over-confidence problem introduced into LLMs by post-training techniques. Motivated by the observation that pre-trained LMs (PLMs) are inherently well calibrated, we leverage the outputs of PLMs on unlabeled data for post-hoc calibration.
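The paper's exact procedure is not reproduced here, but the core idea of using a PLM's outputs on unlabeled data as an unsupervised calibration signal can be sketched with temperature scaling. This is an illustrative assumption, not the paper's actual method: the function names and the squared-error matching objective below are hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(post_logits, plm_probs, T_grid=np.linspace(0.1, 5.0, 200)):
    """Unsupervised post-hoc calibration sketch (hypothetical objective):
    pick a temperature T for the post-trained model so that its max-prob
    confidences on unlabeled inputs match the PLM's confidences, which we
    treat as well calibrated. No labels are used anywhere."""
    plm_conf = plm_probs.max(axis=-1)  # PLM confidence per example
    def gap(T):
        conf = softmax(post_logits, T).max(axis=-1)
        return np.mean((conf - plm_conf) ** 2)
    # Simple grid search over candidate temperatures
    return min(T_grid, key=gap)
```

For example, if post-training had simply doubled the pre-trained logits (making the model over-confident), the fitted temperature would come out near 2, recovering the PLM's calibration.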

Download Paper

talks

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.