International Workshop 2022

Theme: System-level core technologies for AI realization

The use of deep neural network (DNN) models is driving successful results in many AI applications. To improve the performance of AI, research on the DNN models themselves has been active, but work on the computing systems that can execute DNN models efficiently remains relatively scarce. Applying advanced DNN models to resource-constrained systems such as IoT devices is still challenging because of the huge number of multiply–accumulate (MAC) operations and the memory requirements of DNN models. To address these challenges, various approaches have been proposed to make deep learning lightweight and optimized for resource-constrained devices. In this workshop, we will review and discuss system-level optimization techniques for DNN computing, such as ReRAM for in-memory computing and memory-footprint reduction through recursive quantization. We will also cover advanced topics such as hyper-parameter optimization and the Vision Transformer. Additionally, we will introduce an open-source platform based on Kubernetes that accelerates the AI development process and thus reduces time-to-market. Finally, we will present a big new concept related to AI, named Data-Centric AI. The traditional approaches to improving an AI model's performance are to collect more data or to fine-tune the model, which is an expensive activity on an unsustainable trajectory. In these circumstances, the focus must shift from big data to small, good-quality data, which is what Data-Centric AI is all about. We will introduce the fundamentals and promising use cases of this new discipline to the participants.

Date: 27 October 2022, 10:00 am - 6:00 pm, Korea Standard Time (KST)
Venue: Virtual event

Program Committee Chairs:
Prof. Myungsun Kim (Hansung University, Korea)
Prof. Seong Oun Hwang (Gachon University, Korea)

Program Committee:
Prof. Hyung Jin Chang (University of Birmingham, United Kingdom)
Prof. Byung Chul Ko (Keimyung University, Korea)
Dr. Wai Kong Lee (Gachon University, Korea)
Prof. Hyunsik Ahn (Tongmyong University, Korea)
Prof. Byung Seo Kim (Hongik University, Korea)
Prof. Minho Jo (Korea University, Korea)
Prof. Joohyung Lee (Gachon University, Korea)

Co-host:
Institute of Electronics and Information Engineers (SIG on AI Application, SIG on Security and AI)
IEEE Seoul Section Sensors Council Chapter

Sponsor:
Gachon University BK21 FAST Artificial Intelligence Convergence Center
Hongik University BK21 Research Team for Super-Distributed Autonomous Computing Service Technologies
Korea University BK21 IoT Data Science
Incheon National University On-site Customization Practical Problem Research Group
Seoul National University Autonomous Robot Intelligence Lab
Gachon University Intelligent Mobile Edge Computing Systems Lab
IEEE Student Branch at Gachon University
IEEE Sensors Council Student Branch Chapter at Gachon University

Program Schedule

27 October 2022

Time (KST)    | Program                                                                                          | Speaker
10:00 - 10:50 | Vision Transformers: A New Computer Vision Paradigm                                              | Prof. Byung Chul Ko
11:00 - 11:50 | Electronic Design Automation for a Next Generation Intelligent Semiconductor Device and Circuit | Dr. Seong Yeop Jung
12:00 - 13:00 | Lunch                                                                                            | -
13:00 - 14:00 | Introduction of Sponsors’ Research and Technology                                                | -
14:00 - 14:50 | Hyper-parameters Optimization of Deep Neural Networks                                            | Dr. Shin Kyu Kim
15:00 - 15:50 | Open-source AI/ML Platform based on Kubernetes                                                   | Dr. Yong Seok Park
16:00 - 16:50 | Scalable Precision via Recursive Quantization                                                    | Dr. Ji Hoon Oh
17:00 - 17:50 | Data-Centric Artificial Intelligence: A New Engineering Discipline                               | Dr. Abdul Majeed

Synopses
Vision Transformers: A New Computer Vision Paradigm
Speaker: Prof. Byung Chul Ko (Keimyung University)
Transformer structures, which have recently led to successful results in natural language processing (NLP), have also been applied to computer vision. Just as NLP divides a sentence into words and lets the transformer learn the associations between them, a vision transformer (ViT) learns the associations between image patches. ViT structures have consequently been shown to outperform convolutional neural network (CNN) structures on many vision tasks. This lecture introduces the basic algorithm of ViT, which is emerging as a new paradigm in computer vision. In addition, I will explain in detail how the ViT differs from the transformer used in NLP and how it is applied to images.
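As a rough illustration of the patch-based idea described above, the following PyTorch sketch shows how an image can be cut into fixed-size patches and turned into a token sequence for a standard transformer encoder. The image size, patch size, and embedding dimension below are illustrative assumptions, not details taken from the talk.

# Minimal ViT-style patch embedding sketch (PyTorch); sizes are illustrative.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches and project each patch to an embedding."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to slicing patches and applying a linear layer.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):                            # x: (B, 3, 224, 224)
        x = self.proj(x).flatten(2).transpose(1, 2)  # (B, 196, 768) patch tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos_embed  # prepend [CLS], add positions

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768]) -> fed to a standard Transformer encoder

The resulting token sequence is then processed by the same self-attention blocks used in NLP, which is where the association between patches is learned.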

Electronic Design Automation for a Next Generation Intelligent Semiconductor Device and Circuit
Speaker: Dr. Seong Yeop Jung (Advanced Institute of Convergence Technology)
The advent of artificial intelligence calls for more energy-efficient and higher-performance computing hardware. Yet modern computers employ the von Neumann architecture, in which computation and storage are physically separated, so wasteful power consumption and a fundamental time delay arise from transferring data between the two remote components. In-memory computing aims to break this bottleneck by conceiving systems that compute within the memory. Resistive switching memory (ReRAM) has been proposed as an area- and energy-efficient device for in-memory computing because of its two-terminal structure, resistive switching properties, and ability to process data directly in the memory. In this talk, I will first introduce the use of a crossbar array (CBA) of ReRAM devices as neural network hardware. Then, I will examine the state-of-the-art electronic design automation (EDA) environment for design technology that co-optimizes ReRAM devices and their crossbar arrays. To this end, we will explore technology computer-aided design (TCAD) tools for describing the microscopic physical mechanisms involved in the resistive switching of ReRAM. In addition, I will discuss compact modeling of ReRAM devices, its applications in CBA design, and the challenges of accurate and fast SPICE simulation. I hope this seminar will be helpful for those who wonder how next-generation intelligent semiconductor devices and circuits are developed.
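To make the in-memory computing idea concrete, the short NumPy sketch below models how a ReRAM crossbar performs an analog matrix-vector multiply: weights are mapped to cell conductances, inputs are applied as row voltages, and each column current sums the products by Ohm's and Kirchhoff's laws. The array size, conductance range, and differential-pair weight mapping are assumptions made for illustration, not parameters from the talk.

# Idealized model of a ReRAM crossbar multiply-accumulate; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.standard_normal((4, 8))        # DNN layer weights (4 inputs -> 8 outputs)
g_max, g_min = 100e-6, 1e-6                  # assumed cell conductance range (siemens)

# Map signed weights onto a differential pair of conductances (G_pos - G_neg).
w_norm = weights / np.abs(weights).max()
g_pos = g_min + (g_max - g_min) * np.clip(w_norm, 0, None)
g_neg = g_min + (g_max - g_min) * np.clip(-w_norm, 0, None)

v_in = rng.uniform(0, 0.2, size=4)           # input activations encoded as row voltages

# Ohm's law per cell and Kirchhoff's current law per column:
# each column current is the dot product of the row voltages with that column's conductances.
i_out = v_in @ (g_pos - g_neg)               # analog MAC result, one current per column
print(i_out)

A real design must also account for wire resistance, device variation, and ADC/DAC overhead, which is exactly where the TCAD, compact modeling, and SPICE simulation topics of this talk come in.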
Hyper-parameters Optimization of Deep Neural Networks
Speaker: Dr. Shin Kyu Kim (Intel Korea)
As deep neural networks become more complex, finding hyper-parameters that train them well becomes very challenging: as the network grows, both the number of hyper-parameters that must be set and the resources required for each training run increase. To solve this problem, hyper-parameter optimization (HPO) techniques were introduced, and many researchers and companies have developed a variety of them. In this talk, I will explain the HPO techniques studied so far and the problems that remain to be solved.
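As a baseline for what HPO methods automate, here is a minimal random-search sketch in Python. The search space, trial budget, and the placeholder train_and_validate objective are all hypothetical and would be replaced by real (and expensive) training runs.

# Minimal random-search HPO sketch; search space and objective are illustrative.
import random

search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),
    "batch_size":    lambda: random.choice([32, 64, 128, 256]),
    "dropout":       lambda: random.uniform(0.0, 0.5),
}

def train_and_validate(cfg):
    """Placeholder: train a DNN with `cfg` and return validation accuracy."""
    # In practice this launches a full training run, which is the costly part HPO must budget.
    return random.random()

best_cfg, best_score = None, float("-inf")
for trial in range(20):                       # fixed budget of 20 trials
    cfg = {name: sample() for name, sample in search_space.items()}
    score = train_and_validate(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score

print("best hyper-parameters:", best_cfg, "score:", best_score)

More advanced HPO techniques (e.g., Bayesian optimization or early-stopping schemes) aim to reach a good configuration with far fewer full training runs than such naive search.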
Open-source AI/ML Platform based on Kubernetes
Speaker: Dr. Yong Seok Park (Red Hat)
AI/ML applications have a long and tedious lifecycle that involves many steps, such as data collection, pre-processing, model training, validation, deployment, optimization, upgrade, and termination. Containers and Kubernetes have become key to accelerating this lifecycle because they give data engineers and scientists the agility, flexibility, portability, and scalability it demands. In this talk, I will discuss AI/ML platforms that take advantage of the latest open-source container technologies together with integrated DevOps and accelerator technologies.
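For a flavor of how one lifecycle step, a containerized training run, can be handed to Kubernetes, the sketch below uses the official kubernetes Python client to submit a batch Job. The image name, namespace, command, and GPU request are hypothetical placeholders; the platforms discussed in the talk may expose this through higher-level tooling instead.

# Sketch: submit a containerized training run as a Kubernetes Job (hypothetical image/namespace).
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="dnn-training-run"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="trainer",
                    image="registry.example.com/dnn-trainer:latest",   # placeholder image
                    command=["python", "train.py", "--epochs", "30"],
                    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
                )],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="ml-team", body=job)  # placeholder namespace

Because the training step is just a container, the same image can move unchanged between a laptop, an on-premises cluster, and the cloud, which is the portability argument made above.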
Scalable Precision via Recursive Quantization
Speaker: Dr. Ji Hoon Oh (Neubla)
Uniform quantization at low precision often causes severe performance degradation. Mixed-precision quantization addresses this problem but requires dedicated hardware and an instruction set that support multiple bit-widths, which is less efficient than single-precision hardware. We propose a novel learning-based recursive quantization framework that uses a single precision and successively compensates for the quantization error of the weights.
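The general residual idea behind recursive quantization can be sketched as follows: each pass re-quantizes, at the same low bit-width, the error left by the previous passes, so effective precision scales with the number of passes while the hardware only ever sees one bit-width. The NumPy example below illustrates only that general principle; the bit-width, pass count, and simple uniform quantizer are assumptions and do not reproduce the speaker's learning-based framework.

# Recursive (residual) quantization sketch: precision scales with the number of passes.
import numpy as np

def quantize_uniform(x, n_bits=4):
    """Symmetric uniform quantizer returning the dequantized approximation."""
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1) + 1e-12
    q = np.clip(np.round(x / scale), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return q * scale

def recursive_quantize(w, n_bits=4, passes=3):
    """Approximate w as a sum of `passes` low-precision terms."""
    approx, residual = np.zeros_like(w), w.copy()
    for _ in range(passes):
        step = quantize_uniform(residual, n_bits)  # quantize what is still missing
        approx += step
        residual -= step                           # error left for the next pass
    return approx

w = np.random.default_rng(0).standard_normal(1000)
for p in (1, 2, 3):
    err = np.mean((w - recursive_quantize(w, passes=p)) ** 2)
    print(f"passes={p}  mse={err:.2e}")  # reconstruction error shrinks as passes are added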
Data-Centric Artificial Intelligence: A New Engineering Discipline
Speaker: Dr. Abdul Majeed (Gachon University)
The advent of AI has transformed the IT industry, with a significant impact on nations and societies across the globe. In this talk, I will introduce a big new concept in AI, named Data-Centric AI. Specifically, I will present the fundamentals, use cases, and industry experts' views on Data-Centric AI, a promising research area with more potential than conventional Model-Centric AI. I will discuss the advantages of this new engineering discipline, which can pave the way toward rectifying unsustainable research trajectories in the AI domain. Lastly, some enabling technologies targeting Data-Centric AI will be discussed.

Registration:
The registration site is https://www.theieie.org/events/?tab=4&part=03&c_id=801
Registration must be completed no later than 20 October 2022.
Registration includes electronic presentation materials but not lunch.
Webex sign-in details will be sent to registered attendees at least a day before the workshop opens.
For further enquiries, please contact Prof. Seong Oun Hwang (sohwang at gachon dot ac dot kr, https://ai-security.github.io/index_e.htm).
Registration fees:
Students: 150,000 KRW
Professionals: 300,000 KRW