Affiliations: | College of Engineering and Computer Science |
Team Leader: | |
Faculty Mentor: | Jongouk Choi, PhD |
Team Size: | 4 |
Open Spots: | 4 |
Team Member Qualifications: | Applicants should have experience with Python programming, the ability to run open-source large language models (LLMs), and a basic understanding of embedded systems. Preferred qualifications include experience with LLM model training, system security related to volatile and non-volatile memory, fault-tolerant systems, and C/C++ programming. |
Description: | In this project, we analyze the effects of various security attacks and soft errors on large language models (LLMs) for embedded systems. Open-source LLMs will be run on real embedded system boards or on the gem5 simulator. We will then devise novel solutions to ensure the security and fault tolerance of LLMs on the target embedded systems. We will also search for new attack vectors that cause LLMs to produce incorrect results, specific to the embedded-systems domain. Security attacks may include existing attacks, e.g., data corruption such as bit flips in volatile/non-volatile memory, as well as new attack vectors. |
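
To illustrate the kind of data-corruption attack the description mentions, the sketch below flips a single bit in the IEEE-754 float32 representation of a model weight, a minimal stand-in for a bit-flip fault in volatile or non-volatile memory. This is a hypothetical illustration, not the project's methodology; `flip_bit` and the toy weight value are assumptions for demonstration only.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (bit 0 = LSB) of a weight stored as IEEE-754 float32."""
    as_int = int.from_bytes(struct.pack("<f", value), "little")
    as_int ^= 1 << bit          # inject the single-bit fault
    return struct.unpack("<f", as_int.to_bytes(4, "little"))[0]

# A toy stand-in for one LLM parameter held in memory.
weight = 0.5
for bit in (0, 23, 30, 31):     # mantissa LSB, exponent LSB, exponent MSB, sign
    print(f"bit {bit:2d} flipped: {weight} -> {flip_bit(weight, bit)}")
```

The point the example makes is that the damage depends heavily on which bit is hit: a mantissa-LSB flip perturbs the weight negligibly, while an exponent or sign flip can change it by orders of magnitude, which is why bit-flip faults can silently degrade or corrupt LLM outputs.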