Title

Enhancing Conversational LLMs through Temporal Contextual Similarity and Formalized Message Retrieval

Abstract

Conversational large language models (LLMs) offer powerful generative capabilities, yet they remain vulnerable to informal language, hallucinations, and limited contextual awareness. This thesis introduces a multi-component enhancement framework that improves LLM performance through input standardization, retrieval-augmented generation, and temporal contextual modeling. Informal user messages, which often contain spelling errors, slang, or homophones, are first transformed into standardized inputs by a transformer model fine-tuned on noisy-to-clean message pairs, improving downstream performance on core NLP tasks. To strengthen factual grounding and reduce hallucinations in responses, a retrieval-based module gathers semantically relevant past responses, which are then paraphrased and adapted to the current dialogue by language models conditioned on user context and recent history. This is further reinforced by a temporal contextual similarity model that represents user behavior as evolving topic sequences, enabling more pragmatic and coherent response retrieval. Standardization is evaluated through improvements on NLP tasks such as classification and sentiment analysis, while the other components are assessed through response quality and human evaluation. Together, these contributions advance conversational agents toward more robust, context-sensitive, and human-aligned interaction in real-world messaging environments.
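
The abstract describes the retrieval step only at a high level. As an illustrative sketch, and not the method defended in the thesis, the following Python fragment ranks past responses by cosine similarity of their embeddings. The toy_embed encoder and retrieve_similar helper are hypothetical names introduced here; a real pipeline would replace the hashed bag-of-words stand-in with a learned sentence embedding model.

import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for a sentence encoder: a hashed bag-of-words vector.
    # A learned embedding model would be used in an actual system.
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def retrieve_similar(query: str, past_responses: list[str], k: int = 3) -> list[str]:
    # Rank past responses by similarity to the query; vectors are
    # unit-normalized, so the dot product equals cosine similarity.
    q = toy_embed(query)
    ranked = sorted(past_responses,
                    key=lambda r: float(np.dot(q, toy_embed(r))),
                    reverse=True)
    return ranked[:k]

history = [
    "The meeting was moved to Thursday afternoon.",
    "I think the restaurant opens at noon.",
    "Thursday works for me, see you then.",
]
print(retrieve_similar("when is the meeting on thursday", history, k=2))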

Supervisor(s)

Emre Kulah

Date and Location

September 3, 2025, 13:30

Category

PhD Thesis