Spring 2026: Secure and Fault-Tolerant Large Language Models for Embedded Systems

Affiliations: College of Engineering and Computer Science
Team Leader:
Chanhee Lee, PhD
chanhee.lee@ucf.edu
Postdoctoral Scholar
Faculty Mentor:
Jongouk Choi, PhD
Team Size:
12
Open Spots: 2
Team Member Qualifications:
Required qualifications: C/C++ and Python programming, basic knowledge of and hands-on experience with running open-source LLMs, and basic familiarity with embedded systems. Preferred qualifications: experience with LLM training, knowledge of system security for volatile/non-volatile memory, knowledge of fault tolerance, and Java and Swift programming skills.
Description:
In this project, we analyze the effects of various security attacks and soft errors on large language models (LLMs) for embedded systems. Open-source LLMs will be run on real embedded boards or on the gem5 simulator. We also consider commercial smartphones, such as Samsung Galaxy devices for Android and the iPhone for iOS. Based on this analysis, we will develop novel solutions that ensure the security and fault tolerance of LLMs on the target embedded systems. We will also search for new attack vectors that cause LLMs to produce incorrect results, focusing on the embedded-systems domain. Security attacks can include existing attacks, e.g., data corruption such as bit-flips in volatile/non-volatile memory, as well as new attack vectors; a sketch of this fault model follows below. Each project member will focus on one of three tracks: research paper contribution, C++-based LLM core engine development, or application development in Java, Swift, or Python.
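As a concrete illustration of the bit-flip fault model mentioned above, here is a minimal Python sketch of single-bit-flip injection into model weights. It is a sketch under stated assumptions, not the project's actual methodology: it assumes PyTorch is installed, targets float32 weights, and picks the weight and bit position uniformly at random; the function names are illustrative placeholders.

import random
import struct

import torch


def flip_bit(value: float, bit: int) -> float:
    # Reinterpret the float32 as a 32-bit unsigned int, XOR one bit,
    # and reinterpret the result back as a float32.
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped


def inject_bit_flip(weights: torch.Tensor) -> None:
    # Corrupt one randomly chosen weight in place, emulating a soft error
    # or a bit-flip attack on the memory holding model parameters.
    flat = weights.view(-1)
    idx = random.randrange(flat.numel())
    bit = random.randrange(32)
    flat[idx] = flip_bit(flat[idx].item(), bit)


# Toy usage: a small random matrix stands in for an LLM weight tensor.
w = torch.randn(4, 4)
inject_bit_flip(w)

Depending on which bit is hit (sign, exponent, or mantissa), the corrupted weight may change negligibly or blow up toward NaN/inf, which is one reason exponent-bit flips tend to be especially damaging to LLM outputs.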