I am an associate professor in the School of Computing (SoC) at KAIST and a faculty member of the Computer Architecture and Systems Laboratory (CASYS). I am co-affiliated with the School of Electrical Engineering, the Graduate School of AI Semiconductor, the Graduate School of System Architecture (provisional), the Graduate School of AI, and the Department of Semiconductor System Engineering (SSE).
I am very much looking forward to working with talented students on exciting projects.
If you are interested in joining my research group, please contact me via email with a brief introduction of yourself (attaching your CV and transcripts if available).
BS in Computer Science and Engineering, 2010
Interference-Aware DNN Serving on Heterogeneous Processors in Edge Systems
Yeonjae Kim, Igjae Kim, Kwanghoon Choi, Jeongseob Ahn, Jongse Park, Jaehyuk Huh
ICCD, 2024 [Paper]
LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale
Jaehong Cho, Minsu Kim, Hyunmin Choi, Guseul Heo, Jongse Park
IISWC, 2024 [Paper|Talk|Code]
Best Paper Award & Distinguished Artifact Award
Accelerating String-key Learned Index Structures via Memoization-based Incremental Training
Minsu Kim, Jinwoo Hwang, Guseul Heo, Seiyeon Cho, Divya Mahajan, Jongse Park
VLDB, 2024 [Paper|Talk|Code]
DaCapo: Accelerating Continuous Learning in Autonomous Systems for Video Analytics
Yoonsung Kim, Changhun Oh, Jinwoo Hwang, Wonung Kim, Seongryong Oh, Yubin Lee, Hardik Sharma, Amir Yazdanbakhsh, Jongse Park
ISCA, 2024 [Paper|Talk|Code]
Distinguished Artifact Award
NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing
Guseul Heo, Sangyeop Lee, Jaehong Cho, Hyunmin Choi, Sanghyeon Lee, Hyungkyu Ham, Gwangsun Kim, Divya Mahajan, Jongse Park
ASPLOS, 2024 [Paper|Talk|Code]
Tandem Processor: Grappling with Emerging Operators in Neural Networks
Soroush Ghodrati, Sean Kinzer, Hanyang Xu, Rohan Mahapatra, Yoonsung Kim, Byung Hoon Ahn, Dong Kai Wang, Lavanya Karthikeyan, Amir Yazdanbakhsh, Jongse Park, Nam Sung Kim, Hadi Esmaeilzadeh
ASPLOS, 2024 [Paper]
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks
Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, Hadi Esmaeilzadeh
ISCA, 2023 [Retrospective]
Selected for Inclusion in ISCA 25-year Retrospective 1996-2020
General-Purpose Code Acceleration with Limited-Precision Analog Computation
Renée St. Amant, Amir Yazdanbakhsh, Jongse Park, Bradley Thwaites, Hadi Esmaeilzadeh, Arjang Hassibi, Luis Ceze, and Doug Burger
ISCA, 2023 [Retrospective]
Selected for Inclusion in ISCA 25-year Retrospective 1996-2020
Tunable Memory Protection for Secure Neural Processing Units
Sunho Lee, Seonjin Na, Jungwoo Kim, Jongse Park, and Jaehyuk Huh
ICCD, 2022 [Paper]
Supporting Dynamic Translation Granularity for Hybrid Memory Systems
Bokyeong Kim, Soojin Hwang, Sanghoon Cha, Chang Hyun Park, Jongse Park, and Jaehyuk Huh
ICCD, 2022 [Paper]
CoVA: Exploiting Compressed-Domain Analysis to Accelerate Video Analytics
Jinwoo Hwang, Minsu Kim, Daeun Kim, Seungho Nam, Yoonsung Kim, Dohee Kim, Hardik Sharma, Jongse Park
USENIX ATC, 2022 [Paper|Talk]
Serving Heterogeneous Machine Learning Models on Multi-GPU Servers with Spatio-Temporal Sharing
Seungbeom Choi, Sunho Lee, Yeonjae Kim, Jongse Park, Youngjin Kwon, and Jaehyuk Huh
USENIX ATC, 2022 [Paper|Talk]
TNPU: Supporting Trusted Execution with Tree-less Integrity Protection for Neural Processing Unit
Sunho Lee, Jungwoo Kim, Seonjin Na, Jongse Park, and Jaehyuk Huh
HPCA, 2022 [Paper|Talk]
Common Counters: Compressed Encryption Counters for Secure GPU Memory
Seonjin Na, Sunho Lee, Yeonjae Kim, Jongse Park, and Jaehyuk Huh
HPCA, 2021 [Paper|Talk]
Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic
Soroush Ghodrati, Hardik Sharma, Sean Kinzer, Amir Yazdanbakhsh, Jongse Park, Nam Sung Kim, Doug Burger, and Hadi Esmaeilzadeh
PACT, 2020 [Paper|Talk]
A Network-Centric Hardware/Algorithm Co-Design to Accelerate Distributed Training of Deep Neural Networks
Youjie Li, Jongse Park, Mohammad Alian, Yifan Yuan, Zheng Qu, Peitian Pan, Ren Wang, Alexander Gerhard Schwing, Hadi Esmaeilzadeh, and Nam Sung Kim
MICRO, 2018 [Paper|Talk]
From Tensors to FPGAs: Accelerating Deep Learning
Hardik Sharma, Jongse Park, Balavinayagam Samynathan, Behnam Robatmili, Shahrzad Mirkhani, and Hadi Esmaeilzadeh
HotChips, 2018 [Paper|Poster|Demo1|Demo2]
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks
Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, Hadi Esmaeilzadeh
ISCA, 2018 [Paper|Talk]
Scale-Out Acceleration for Machine Learning
Jongse Park, Hardik Sharma, Divya Mahajan, Joon Kyung Kim, Preston Olds, and Hadi Esmaeilzadeh
MICRO, 2017 [Paper|Talk]
AxGames: Towards Crowdsourcing Quality Target Determination in Approximate Computing
Jongse Park, Divya Mahajan, Bradley Thwaites, Emmanuel Amaro, and Hadi Esmaeilzadeh
ASPLOS, 2016 [Paper|Talk]
From High-Level Deep Neural Models to FPGAs
Hardik Sharma, Jongse Park, Divya Mahajan, Emmanuel Amaro, Joon Kyung Kim, Chenkai Shao, Asit Mishra, and Hadi Esmaeilzadeh
MICRO, 2016 [Paper|Talk]
Towards Statistical Guarantees in Controlling Quality Tradeoffs in Approximate Acceleration
Divya Mahajan, Amir Yazdanbakhsh, Jongse Park, Bradley Thwaites, and Hadi Esmaeilzadeh
ISCA, 2016 [Paper|Talk]
Tabla: A Unified Template-based Framework for Accelerating Statistical Machine Learning
Divya Mahajan, Jongse Park, Emmanuel Amaro, Hardik Sharma, Amir Yazdanbakhsh, Joon Kyung Kim, and Hadi Esmaeilzadeh
HPCA, 2016 [Paper|Talk]
Distinguished Paper Award
FlexJava: Language Support for Safe and Modular Approximate Programming
Jongse Park, Hadi Esmaeilzadeh, Xin Zhang, Mayur Naik, William Harris
FSE, 2015 [Paper|Talk|Artifact]
Neural Acceleration for GPU Throughput Processors
Amir Yazdanbakhsh, Jongse Park, Hardik Sharma, Pejman Lotfi-Kamran, and Hadi Esmaeilzadeh
MICRO, 2015 [Paper|Talk]
Axilog: Language Support for Approximate Hardware Design
Amir Yazdanbakhsh, Divya Mahajan, Bradley Thwaites, Jongse Park, Anandhavel Nagendrakumar, Sindhuja Sethuraman, Kartik Ramkrishnan, Nishanthi Ravindran, Rudra Jariwala, Abbas Rahimi, Hadi Esmaeilzadeh, and Kia Bazargan
DATE, 2015 [Paper|Talk]
General-Purpose Code Acceleration with Limited-Precision Analog Computation
Renée St. Amant, Amir Yazdanbakhsh, Jongse Park, Bradley Thwaites, Hadi Esmaeilzadeh, Arjang Hassibi, Luis Ceze, and Doug Burger
ISCA, 2014 [Paper|Talk]
Honorable Mention in IEEE Micro Top Picks
Rollback-Free Value Prediction with Approximate Loads (Short paper)
Bradley Thwaites, Gennady Pekhimenko, Amir Yazdanbakhsh, Jongse Park, Girish Mururu, Hadi Esmaeilzadeh, Onur Mutlu, and Todd Mowry
PACT, 2014 [Paper]
Isolated Mini-domain for Trusted Cloud Computing
Jaewon Choi, Jongse Park, Jinho Seol, and Seungryoul Maeng
CCGrid, 2013 [Paper]
Locality-aware Dynamic VM Reconfiguration on MapReduce Clouds
Jongse Park, Daewoo Lee, Bokyeong Kim, Jaehyuk Huh, and Seungryoul Maeng
HPDC, 2012 [Paper|Talk]
ONNXim: A Fast, Cycle-level Multi-core NPU Simulator
Hyungkyu Ham*, Wonhyuk Yang*, Yunseon Shin, Okkyun Woo, Guseul Heo, Sangyeop Lee, Jongse Park, Gwangsun Kim
IEEE Computer Architecture Letters (CAL), 2024 [Paper|Code]
LPU: A Latency-optimized and Highly Scalable Processor for Large Language Model Inference
Seungjae Moon, Jung-Hoon Kim, Junsoo Kim, Seongmin Hong, Junseo Cha, Minsu Kim, Sukbin Lim, Gyubin Choi, Dongjin Seo, Jongho Kim, Hunjong Lee, Hyunjun Park, Ryeowook Ko, Soongyu Choi, Jongse Park, Jinwon Lee, Joo-Young Kim
IEEE Micro, special issue on Contemporary Industry Products, 2024 [Paper]
Cerberus: Triple Mode Acceleration of Sparse Matrix and Vector Multiplication
Soojin Hwang, Daehyeon Baek, Jongse Park, Jaehyuk Huh
ACM Transactions on Architecture and Code Optimization (TACO), 2024 [Paper]
Hardware Hardened Sandbox Enclaves for Trusted Serverless Computing
Joongun Park, Seunghyo Kang, Sanghyeon Lee, Taehoon Kim, Jongse Park, Youngjin Kwon, and Jaehyuk Huh
ACM Transactions on Architecture and Code Optimization (TACO), 2023 [Paper]
FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support
Seock-Hwan Noh, Jahyun Koo, Seunghyun Lee, Jongse Park, and Jaeha Kung
IEEE Transactions on Computers (TC), 2023 [Paper]
HAMMER: Hardware-friendly Approximate Computing for Self-attention with Mean-redistribution and Linearization
Seonho Lee, Ranggi Hwang, Jongse Park, and Minsoo Rhu
IEEE Computer Architecture Letters (CAL), 2023 [Paper]
Yin-Yang: Programming Abstraction for Cross-Domain Multi-Acceleration
Joon Kyung Kim, Byung Hoon Ahn, Sean Kinzer, Soroush Ghodrati, Rohan Mahapatra, Brahmendra Yatham, Dohee Kim, Parisa Sarikhani, Babak Mahmoudi, Divya Mahajan, Jongse Park, Hadi Esmaeilzadeh
IEEE Micro, special issue on Compiling for Accelerators, 2022 [Paper]
SLO-aware Inference Scheduler for Heterogeneous Processors in Edge Platforms
Wonik Seo, Sanghoon Cha, Yeonjae Kim, Jaehyuk Huh, and Jongse Park
ACM Transactions on Architecture and Code Optimization (TACO), 2021 [Paper]
Axilog: Abstractions for Approximate Hardware Design and Reuse
Divya Mahajan, Kartik Ramkrishnan, Rudra Jariwala, Amir Yazdanbakhsh, Jongse Park, Bradley Thwaites, Anandhavel Nagendrakumar, Abbas Rahimi, Hadi Esmaeilzadeh, and Kia Bazargan
IEEE Micro, special issue on Alternative Computing Designs and Technologies, 2015 [Paper]
LLMServingSim: A Simulation Infrastructure for LLM Inference Serving Systems
Jaehong Cho, Minsu Kim, Hyunmin Choi, Jongse Park
ISCA Workshop on ML for Computer Architecture and Systems (MLArchSys), 2024 [Paper|Talk]
LVS: A Learned Video Storage for Fast and Efficient Video Understanding
Yunghee Lee, Jongse Park
CVPR Workshop on Efficient Deep Learning for Computer Vision (ECV), 2024 [Paper|Talk]