Machine Vision Inspection Systems, Machine Learning-Based Approaches


Machine Vision Inspection Systems (MVIS) is a multidisciplinary research field that emphasizes image processing, machine vision, and pattern recognition for industrial applications. Inspection techniques are generally used in the destructive and non-destructive evaluation industries. Research on machine vision inspection has gained popularity in recent years because manual inspection can fail and produce false assessments when a large number of items must be examined during the inspection process.
This second volume covers machine learning-based approaches in MVIS applications, which can be applied to a wide variety of problems, particularly non-destructive testing (NDT), presence/absence detection, defect/fault detection (weld, textile, tiles, wood, etc.), automated vision-based test and measurement, pattern matching, optical character recognition and verification (OCR/OCV), natural language processing, and medical diagnosis. This edited book addresses recent methodologies, concepts, and research plans, giving readers deeper insight for pursuing research on machine vision using machine learning-based approaches.



2 Capsule Networks for Character Recognition in Low Resource Languages

C. Abeysinghe, I. Perera and D.A. Meedeniya*

Department of Computer Science and Engineering, University of Moratuwa, Moratuwa, Sri Lanka

Abstract

Most existing techniques in handwritten character recognition are not well suited to low resource languages, due to the lack of labelled data and the large datasets required for image classification with deep neural networks. In contrast to recent advances in deep learning-based image classification, human cognition can quickly identify and differentiate characters without much training. As a solution to the character recognition problem in low resource languages, this chapter proposes a model that replicates the human ability to learn from small datasets. The proposed solution is a Siamese neural network that combines capsules and convolutional units to obtain a thorough understanding of the image. The presented model takes two images as inputs, processes them, extracts features through the capsule network, and outputs the probability that they are similar. This study shows that the capsule-based Siamese network can learn abstract knowledge about different characters that extends to previously unseen characters. The proposed model is trained on the Omniglot dataset and achieves up to 94% accuracy on previously unseen alphabets. Further, the model is tested on the Sinhala alphabet and the MNIST (Modified National Institute of Standards and Technology) dataset, both of which are new to the trained model.

Keywords: Character recognition, capsule networks, deep learning, one-shot learning, Sinhala dataset

2.1 Introduction

The ability to learn visual concepts from a small number of examples is a distinctive trait of human cognition. For instance, even a child can correctly distinguish between a bicycle and a car after being shown a single example of each. Taking this one step further, if we show them a plane and a ship, which they have never seen before, they can correctly recognize that these are two different vehicle types. One could argue that this ability is an application of previous experience and domain knowledge to new situations. How can we reproduce the same ability in machines? In this chapter, we propose a method that transfers previously learned knowledge about characters to differentiate between new character images.

There are many applications of image classification with few training samples [1–3]. Being able to classify images without previous training on those classes is particularly valuable in tasks such as character recognition, signature verification, and robot vision. This paradigm, in which only one sample is used to learn and make predictions, is known as one-shot learning [4]. Especially for low resource languages, currently available deep learning techniques fail due to the lack of large labelled datasets. A model that can perform one-shot learning, using a single image per class as the training sample for classification, could have a massive impact on optical character recognition [5].
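To make the one-shot setting concrete, the following sketch shows how an N-way one-shot classification decision can be made once a pairwise similarity function is available (for example, the output of a trained Siamese network). The function name `one_shot_classify` and the `similarity` callback are illustrative assumptions, not code from the chapter.

```python
from typing import Callable, List, Tuple


def one_shot_classify(
    query,                                  # image of an unseen character
    support: List[Tuple[str, object]],      # one (label, image) pair per class
    similarity: Callable[[object, object], float],
) -> str:
    # Assign the query the label of the single support image it is most similar to.
    best_label, best_score = None, float("-inf")
    for label, image in support:
        score = similarity(query, image)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```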

This chapter uses the Omniglot dataset [6] to train such a one-shot learning model. Omniglot, named after the online encyclopedia of writing systems and languages, is a dataset of handwritten characters that is widely used in tasks requiring a small number of samples from many classes. In this research, we extend the dataset with a set of characters from the Sinhala language, which has around 17 million native speakers and is used mainly in Sri Lanka. Because of the lack of available resources for the language, applying recent deep learning-based Optical Character Recognition (OCR) methods is challenging. With the trained model introduced in this chapter, significant character recognition accuracy was achieved for Sinhala using a small dataset.
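As a rough illustration of the kind of model described above, the following PyTorch sketch pairs a shared encoder with an L1-distance head that outputs the probability that two character images show the same symbol. The encoder uses only a simplified primary-capsule stage with the squash non-linearity; it does not reproduce the authors' full capsule architecture or routing, and the layer sizes, the 105 × 105 input resolution, and all class and function names are assumptions made for this sketch.

```python
import torch
import torch.nn as nn


def squash(s, dim=-1, eps=1e-8):
    # Capsule non-linearity: shrinks short vectors towards 0 and long vectors
    # towards unit length while preserving their direction.
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


class Encoder(nn.Module):
    # Shared feature extractor: two conv layers, a simplified primary-capsule
    # stage, and a fully connected embedding.
    def __init__(self, caps_dim=8, embedding_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.caps_dim = caps_dim
        self.fc = nn.LazyLinear(embedding_dim)  # input size inferred on first call

    def forward(self, x):                        # x: (batch, 1, 105, 105) grayscale images
        h = self.conv(x)                         # (batch, 64, 27, 27)
        caps = h.view(h.size(0), -1, self.caps_dim)  # group channels into capsule vectors
        caps = squash(caps)
        return torch.sigmoid(self.fc(caps.flatten(1)))


class SiameseNet(nn.Module):
    # Two branches share one encoder; an L1-distance head scores similarity.
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.head = nn.Linear(128, 1)

    def forward(self, x1, x2):
        f1, f2 = self.encoder(x1), self.encoder(x2)
        return torch.sigmoid(self.head(torch.abs(f1 - f2)))  # P(same character)


# Training pairs would be sampled from Omniglot: label 1 for two drawings of the
# same character, 0 otherwise, optimised with binary cross-entropy loss.
```

Trained this way, the network serves directly as the `similarity` function in the one-shot classification sketch shown earlier.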

