Banknote Recognizer: From Theory to Application

Long-yin YUNG, Ming-hua XIA, Yik-chung WU

Abstract


In recent years, Deep Convolutional Neural Networks (CNNs) have demonstrated robust performance and reached state-of-the-art results in many image-processing tasks, such as object detection and image classification, as well as in some natural language processing tasks. However, most studies focus on model architecture design, especially on standard datasets such as MNIST or ImageNet, and only a few implement these advances in real-life applications [1]. In this study, we introduce a well-designed image classification model and demonstrate its deployment in a mobile environment. For data preparation, we evaluate different data collection methods and examine different approaches to building a dataset. We further investigate the advantages and limitations of the mobile neural network model. Results show that performance remains consistent when the model is transferred from a desktop environment to a mobile environment. In addition, a real-life product was built on this theory and investigation in cooperation with a local blind society and a software development company, forming the first real-time AI application for the visually impaired with high mobility and a built-in neural network model, called "Hong Kong Banknote Recognizer".
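To make the desktop-to-mobile transfer described above concrete, the sketch below shows one common route: converting a trained image-classification CNN to a compact on-device model and re-running inference to check that predictions stay consistent. This is a minimal, hypothetical example; the paper does not name its framework, class count, or architecture, so the Keras/TensorFlow Lite pipeline, the seven-class output, and the layer sizes here are all assumptions for illustration only.

```python
# Hypothetical sketch of a desktop-to-mobile model transfer (not the paper's exact pipeline).
import numpy as np
import tensorflow as tf

NUM_CLASSES = 7  # assumed: e.g. HK$10/20/50/100/500/1000 plus a "no banknote" class

# A small image-classification CNN standing in for the paper's model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
# ... model.fit(...) on the banknote dataset would happen here ...

# Convert the trained model to TensorFlow Lite for on-device (mobile) inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization to shrink the model
tflite_model = converter.convert()
with open("banknote_classifier.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity check: run an input through the converted model with the TFLite interpreter,
# mirroring the kind of consistency check the paper reports after the transfer.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print("class probabilities:", interpreter.get_tensor(out["index"]))
```

In practice, the `.tflite` file would be bundled into the mobile app and executed with the platform's TensorFlow Lite runtime; quantization trades a small amount of accuracy for a much smaller and faster on-device model.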

Keywords


Deep learning, Image classification, Mobile application


DOI
10.12783/dtcse/cnai2018/24180
