{"id":2747,"date":"2018-08-27T13:39:24","date_gmt":"2018-08-27T13:39:24","guid":{"rendered":"https:\/\/ermlab.com\/?p=2747"},"modified":"2018-09-12T20:48:05","modified_gmt":"2018-09-12T20:48:05","slug":"cifar-10-classification-using-keras-tutorial","status":"publish","type":"post","link":"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/","title":{"rendered":"CIFAR-10 classification using Keras Tutorial"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">The CIFAR-10 dataset consists of 60000 32&#215;32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Recognizing photos from the <\/span><a href=\"https:\/\/www.cs.toronto.edu\/~kriz\/cifar.html\"><span style=\"font-weight: 400;\">CIFAR-10 collection<\/span><\/a><span style=\"font-weight: 400;\"> is one of the most common problems in today&#8217;s world of machine learning. I\u2019m going to show you &#8211; step by step &#8211; how to build multi-layer artificial neural networks that recognize images from the CIFAR-10 set with an accuracy of about 80%, and how to visualize the results.<\/span><\/p>\n<p><!--more--><\/p>\n<h1><span style=\"font-weight: 400;\">Building 4 and 6-layer Convolutional Neural Networks<\/span><\/h1>\n<p><span style=\"font-weight: 400;\">To build our CNN (Convolutional Neural Network) we will use <a href=\"https:\/\/keras.io\/\"><strong>Keras<\/strong><\/a> and introduce a few newer deep learning techniques, such as the <strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Rectifier_(neural_networks)\">ReLU<\/a><\/strong> activation function and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Dropout_(neural_networks)\"><strong>dropout<\/strong><\/a> regularization.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Keras is an open source neural network Python library which can run on top of other machine learning libraries like TensorFlow, CNTK or 
Theano. It allows for easy and fast prototyping, and supports convolutional networks, recurrent networks and combinations of the two. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the beginning, we will briefly cover Keras, deep learning and the CIFAR-10 collection. Then, step by step, we will build 4 and 6-layer neural networks along with their visualization, resulting in a classification accuracy of about 80% with graphical interpretation.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Finally, we will see the results and compare the two networks in terms of accuracy and training speed for each epoch.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">The CIFAR-10 DATASET<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">The dataset is divided into five training batches and one test batch, each with 10000 images. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The test batch contains exactly 1000 randomly-selected images from each class. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Between them, the training batches contain exactly 5000 images from each class.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You can download it from <\/span><a href=\"https:\/\/www.cs.toronto.edu\/~kriz\/cifar.html\"><span style=\"font-weight: 400;\">here<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<h1><span style=\"font-weight: 400;\">Convolutional Neural Networks &#8211; The Code<\/span><\/h1>\n<p><span style=\"font-weight: 400;\">First of all, we will be defining all of the classes and functions we will need:<\/span><\/p>\n<pre class=\"\"># Import all modules\r\nimport time\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense\r\nfrom keras.layers import Dropout\r\nfrom keras.layers import Flatten\r\nfrom keras.constraints import maxnorm\r\nfrom keras.optimizers import SGD\r\nfrom keras.layers import Activation\r\nfrom keras.layers.convolutional import Conv2D\r\nfrom keras.layers.convolutional import MaxPooling2D\r\nfrom keras.layers.normalization import BatchNormalization\r\nfrom keras.utils import np_utils\r\nfrom keras_sequential_ascii import sequential_model_to_ascii_printout\r\nfrom keras import backend as K\r\nif K.backend()=='tensorflow':\r\n    # use channels-first (Theano-style) image ordering\r\n    K.set_image_dim_ordering(\"th\")\r\n\r\n# Import Tensorflow with multiprocessing\r\nimport tensorflow as tf\r\nimport multiprocessing as mp\r\n\r\n# Loading the CIFAR-10 datasets\r\nfrom keras.datasets import cifar10<\/pre>\n<p><span style=\"font-weight: 400;\">As a good practice suggests, we need to declare our variables:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><i><span style=\"font-weight: 400;\">batch_size<\/span><\/i><span style=\"font-weight: 400;\"> &#8211; the number of training examples in one forward\/backward pass. 
The higher the batch size, the more memory space you&#8217;ll need<\/span><\/li>\n<li style=\"font-weight: 400;\"><i><span style=\"font-weight: 400;\">num_classes<\/span><\/i><span style=\"font-weight: 400;\"> &#8211; number of CIFAR-10 dataset classes<\/span><\/li>\n<li style=\"font-weight: 400;\"><i><span style=\"font-weight: 400;\">one<\/span><\/i><span style=\"font-weight: 400;\"> epoch &#8211; <\/span><i><span style=\"font-weight: 400;\">one<\/span><\/i><span style=\"font-weight: 400;\"> forward pass and one backward pass of <\/span><i><span style=\"font-weight: 400;\">all<\/span><\/i><span style=\"font-weight: 400;\"> the training examples<\/span><\/li>\n<\/ul>\n<pre class=\"\"># Declare variables\r\n\r\nbatch_size = 32\r\n# 32 examples in a mini-batch, smaller batch size means more updates in one epoch\r\n\r\nnum_classes = 10 # number of CIFAR-10 classes\r\nepochs = 100 # repeat 100 times<\/pre>\n<p><span style=\"font-weight: 400;\">Next, we can load the CIFAR-10 data set.<\/span><\/p>\n<pre class=\"\">(x_train, y_train), (x_test, y_test) = cifar10.load_data() \r\n# x_train - training data(images), y_train - labels(digits)<\/pre>\n<p>Print a figure with one random image from each of the 10 CIFAR-10 classes.<\/p>\n<pre class=\"\"># Print figure with 10 random images, one from each class\r\n\r\n# Human-readable class names, in CIFAR-10 label order\r\nclass_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',\r\n               'dog', 'frog', 'horse', 'ship', 'truck']\r\n\r\nfig = plt.figure(figsize=(8,3))\r\nfor i in range(num_classes):\r\n    ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])\r\n    idx = np.where(y_train[:]==i)[0]\r\n    features_idx = x_train[idx,::]\r\n    img_num = np.random.randint(features_idx.shape[0])\r\n    im = np.transpose(features_idx[img_num,::],(1,2,0))\r\n    ax.set_title(class_names[i])\r\n    plt.imshow(im)\r\nplt.show()<\/pre>\n<p><span style=\"font-weight: 400;\">Running the code creates a 2&#215;5 plot of images, showing an example from each class.<\/span><br \/>\n<img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/cifar.png\" \/><\/p>\n<p><span style=\"font-weight: 400;\">The pixel values are in the range of 0 to 255 for each of the 
red, green and blue channels.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It&#8217;s good practice to work with normalized data.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Because the input values are well understood, we can easily normalize them to the range 0 to 1 by dividing each value by the maximum value, which is 255.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Note that the data is loaded as integers, so we must cast it to floating point values in order to perform the division.<\/span><\/p>\n<pre class=\"\"># Convert and pre-processing\r\n\r\ny_train = np_utils.to_categorical(y_train, num_classes)\r\ny_test = np_utils.to_categorical(y_test, num_classes)\r\nx_train = x_train.astype('float32')\r\nx_test = x_test.astype('float32')\r\nx_train \/= 255\r\nx_test \/= 255<\/pre>\n<p><span style=\"font-weight: 400;\">The output labels are loaded as integers from 0 to 9 and are one-hot encoded into a binary vector for each class.<\/span><\/p>\n<p>&nbsp;<\/p>\n<h4><span style=\"font-weight: 400;\">Let&#8217;s start by defining a simple CNN model.<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">We will use a model with four convolutional layers followed by max pooling and a flattening out of the network to <a href=\"https:\/\/leonardoaraujosantos.gitbooks.io\/artificial-inteligence\/content\/fc_layer.html\"><strong>fully connected layers<\/strong><\/a> to make predictions:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Convolutional input layer, 32 feature maps with a size of 3&#215;3, a rectifier activation function<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Convolutional layer, 32 feature maps with a size of 3&#215;3, a rectifier activation function<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Max Pool layer with size 2&#215;2<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Dropout set to 25%<\/span><\/li>\n<li 
style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Convolutional layer, 64 feature maps with a size of 3&#215;3, a rectifier activation function<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Convolutional layer, 64 feature maps with a size of 3&#215;3, a rectifier activation function<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Max Pool layer with size 2&#215;2<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Dropout set to 25%<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Flatten layer<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Fully connected layer with 512 units and a rectifier activation function<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Dropout set to 50%<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Fully connected output layer with 10 units and a softmax activation function<\/span><\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">A <a href=\"http:\/\/wiki.fast.ai\/index.php\/Log_Loss\"><strong>logarithmic loss function<\/strong><\/a> is used with the stochastic <a href=\"https:\/\/en.wikipedia.org\/wiki\/Stochastic_gradient_descent\"><strong>gradient descent (SGD)<\/strong><\/a> optimization algorithm, configured with a large <a href=\"https:\/\/www.quora.com\/What-does-momentum-mean-in-neural-networks\"><strong>momentum<\/strong><\/a> and weight <a href=\"https:\/\/stats.stackexchange.com\/questions\/273189\/what-is-the-weight-decay-loss\/273190\"><strong>decay<\/strong><\/a>, starting with a <a href=\"https:\/\/www.quora.com\/What-is-the-learning-rate-in-neural-networks\"><strong>learning rate<\/strong><\/a> of 0.1.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Then we can fit this model with 100 epochs and a batch size of 32.<\/span><\/p>\n<pre class=\"\">def 
base_model():\r\n\r\n    model = Sequential()\r\n    model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))\r\n    model.add(Activation('relu'))\r\n    model.add(Conv2D(32, (3, 3)))\r\n    model.add(Activation('relu'))\r\n    model.add(MaxPooling2D(pool_size=(2, 2)))\r\n    model.add(Dropout(0.25))\r\n\r\n    model.add(Conv2D(64, (3, 3), padding='same'))\r\n    model.add(Activation('relu'))\r\n    model.add(Conv2D(64, (3, 3)))\r\n    model.add(Activation('relu'))\r\n    model.add(MaxPooling2D(pool_size=(2, 2)))\r\n    model.add(Dropout(0.25))\r\n\r\n    model.add(Flatten())\r\n    model.add(Dense(512))\r\n    model.add(Activation('relu'))\r\n    model.add(Dropout(0.5))\r\n    model.add(Dense(num_classes))\r\n    model.add(Activation('softmax'))\r\n\r\n    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)\r\n\r\n    # Compile model\r\n    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])\r\n    return model\r\n\r\ncnn_n = base_model()\r\ncnn_n.summary()\r\n\r\n# Fit model\r\ncnn = cnn_n.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)<\/pre>\n<h4>The second variant, a 6-layer model:<\/h4>\n<ol>\n<li>Convolutional input layer, 32 feature maps with a size of 3&#215;3, a rectifier activation function<\/li>\n<li>Dropout set to 20%<\/li>\n<li>Convolutional layer, 32 feature maps with a size of 3&#215;3, a rectifier activation function<\/li>\n<li>Max Pool layer with size 2&#215;2<\/li>\n<li>Convolutional layer, 64 feature maps with a size of 3&#215;3, a rectifier activation function<\/li>\n<li>Dropout set to 20%<\/li>\n<li>Convolutional layer, 64 feature maps with a size of 3&#215;3, a rectifier activation function<\/li>\n<li>Max Pool layer with size 2&#215;2<\/li>\n<li>Convolutional layer, 128 feature maps with a size of 3&#215;3, a rectifier activation function<\/li>\n<li>Dropout set to 
20%<\/li>\n<li>Convolutional layer, 128 feature maps with a size of 3&#215;3, a rectifier activation function<\/li>\n<li>Max Pool layer with size 2&#215;2<\/li>\n<li>Flatten layer<\/li>\n<li>Dropout set to 20%<\/li>\n<li>Fully connected layer with 1024 units, a rectifier activation function and a weight constraint of max norm set to 3<\/li>\n<li>Dropout set to 20%<\/li>\n<li>Fully connected output layer with 10 units and a softmax activation function<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<pre class=\"\">def base_model():\r\n    model = Sequential()\r\n\r\n    model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=x_train.shape[1:]))\r\n    model.add(Dropout(0.2))\r\n\r\n    model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))\r\n    model.add(MaxPooling2D(pool_size=(2, 2)))\r\n\r\n    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))\r\n    model.add(Dropout(0.2))\r\n\r\n    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))\r\n    model.add(MaxPooling2D(pool_size=(2, 2)))\r\n\r\n    model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))\r\n    model.add(Dropout(0.2))\r\n\r\n    model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))\r\n    model.add(MaxPooling2D(pool_size=(2, 2)))\r\n\r\n    model.add(Flatten())\r\n    model.add(Dropout(0.2))\r\n    model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))\r\n    model.add(Dropout(0.2))\r\n    model.add(Dense(num_classes, activation='softmax'))\r\n\r\n    # Compile model with the same SGD settings as the 4-layer variant\r\n    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)\r\n    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])\r\n    return model<\/pre>\n<p>In this section, we can visualize the model structure. 
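As a sanity check on the 6-layer architecture, the feature-map sizes can be traced by hand: each "same"-padded 3x3 convolution keeps the spatial size, while each 2x2 max pooling halves it, so the Flatten layer receives 4 x 4 x 128 = 2048 values before the Dense(1024) layer. A minimal sketch in plain Python (the helper function is illustrative, not part of the tutorial code):

```python
# Trace feature-map shapes through the 6-layer model: three blocks of two
# 'same'-padded 3x3 convs (spatial size unchanged) followed by 2x2 pooling
# (spatial size halved). Assumes a 32x32 input; names are illustrative.
def trace_feature_maps(input_size=32, block_filters=(32, 64, 128)):
    size = input_size
    shapes = []
    for filters in block_filters:
        # shape entering the pooling layer of this block
        shapes.append((filters, size, size))
        size //= 2  # 2x2 max pooling halves width and height
    flatten_units = block_filters[-1] * size * size
    return shapes, flatten_units

shapes, flat = trace_feature_maps()
print(shapes)  # [(32, 32, 32), (64, 16, 16), (128, 8, 8)]
print(flat)    # 2048
```

The same arithmetic explains why the 4-layer model (whose second conv in each block is unpadded) ends up with a different Flatten size.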
For this problem, we can use a <a href=\"https:\/\/github.com\/stared\/keras-sequential-ascii\">library for Keras for investigating architectures and parameters of sequential models<\/a> by <a href=\"http:\/\/p.migdal.pl\/?utm_source=PLONBlog&amp;utm_campaign=LiveLongAndProsper\">Piotr Migda\u0142<\/a>.<\/p>\n<pre class=\"\"># Visualizing model structure\r\n\r\nsequential_model_to_ascii_printout(cnn_n)<\/pre>\n<h4>First variant for 4-layer:<\/h4>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/4.jpg\" \/><\/p>\n<h4>Second variant for 6-layer:<\/h4>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/6.jpg\" \/><\/p>\n<p>After the training process, we can see loss and accuracy on plots using the code below:<\/p>\n<pre class=\"\"># Plots for training and testing process: loss and accuracy\r\n\r\nplt.figure(0)\r\nplt.plot(cnn.history['acc'],'r')\r\nplt.plot(cnn.history['val_acc'],'g')\r\nplt.xticks(np.arange(0, 101, 2.0))\r\nplt.rcParams['figure.figsize'] = (8, 6)\r\nplt.xlabel(\"Num of Epochs\")\r\nplt.ylabel(\"Accuracy\")\r\nplt.title(\"Training Accuracy vs Validation Accuracy\")\r\nplt.legend(['train','validation'])\r\n\r\nplt.figure(1)\r\nplt.plot(cnn.history['loss'],'r')\r\nplt.plot(cnn.history['val_loss'],'g')\r\nplt.xticks(np.arange(0, 101, 2.0))\r\nplt.rcParams['figure.figsize'] = (8, 6)\r\nplt.xlabel(\"Num of Epochs\")\r\nplt.ylabel(\"Loss\")\r\nplt.title(\"Training Loss vs Validation Loss\")\r\nplt.legend(['train','validation'])\r\n\r\nplt.show()<\/pre>\n<h4>4-layer:<\/h4>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/6a.png\" \/><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/6b.png\" \/><\/p>\n<h4>6-layer:<\/h4>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/4a.png\" \/><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/4b.png\" \/><\/p>\n<pre class=\"\">scores = cnn_n.evaluate(x_test, y_test, 
verbose=0)\r\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))<\/pre>\n<p>Running this example prints the final classification accuracy on the test dataset.<\/p>\n<p>After that, we can print a confusion matrix for our example with a graphical interpretation.<\/p>\n<p><strong>Confusion matrix<\/strong> &#8211; also known as an <em>error matrix<\/em> &#8211; is a specific table layout that allows visualization of the performance of an algorithm, typically a <a title=\"Supervised learning\" href=\"https:\/\/en.wikipedia.org\/wiki\/Supervised_learning\">supervised learning<\/a> one (in <a title=\"Unsupervised learning\" href=\"https:\/\/en.wikipedia.org\/wiki\/Unsupervised_learning\">unsupervised learning<\/a> it is usually called a <em>matching matrix<\/em>). Each row of the matrix represents the instances in a predicted class while each column represents the instances in an actual class (or vice versa).<\/p>\n<pre class=\"\"># Confusion matrix result\r\n\r\nfrom sklearn.metrics import classification_report, confusion_matrix\r\nY_pred = cnn_n.predict(x_test, verbose=2)\r\ny_pred = np.argmax(Y_pred, axis=1)\r\n\r\ncm = confusion_matrix(np.argmax(y_test, axis=1), y_pred)\r\nfor ix in range(10):\r\n    print(ix, cm[ix].sum())  # number of test images per true class\r\nprint(cm)\r\n\r\n# Visualizing the confusion matrix\r\nimport seaborn as sn\r\nimport pandas as pd\r\n\r\ndf_cm = pd.DataFrame(cm, range(10), range(10))\r\nplt.figure(figsize=(10, 7))\r\nsn.set(font_scale=1.4)  # label size\r\nsn.heatmap(df_cm, annot=True, annot_kws={\"size\": 12})  # annotation font size\r\nplt.show()<\/pre>\n<p><strong>4-layer confusion matrix and visualization:<\/strong><\/p>\n<pre><code>[[599   5  74  98  55   14  12   9 117  17]\r\n [ 16 738  12  65   9   26   7   6  40  81]\r\n [ 31   0 523 168 136   86  33  14   9   0]\r\n [ 10   1  31 652  90 175  19  15   5   
2]\r\n [  6   0  34 132 717   55  16  31   9   0]\r\n [  5   1  17 233  53  661  10  15   4   1]\r\n [  2   1  39 157 105   48 637   3   7   1]\r\n [  6   0  14  97 103   96   5 637   5   1]\r\n [ 41   7  28  84  19   18   6   4 783  10]\r\n [ 25  28   8  77  29   27   5  19  59 723]]<\/code><\/pre>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/cm4.png\" \/><\/p>\n<p><strong>6-layer confusion matrix and visualization:<\/strong><\/p>\n<pre><code>[[736  11  54  45  30  14  15   9  61  25]\r\n [ 10 839   6  38   3  13   7   5  22  57]\r\n [ 47   2 566  96 145  65  51  17   7   4]\r\n [ 23   6  56 570  97 140  57  29  12  10]\r\n [ 16   2  52  80 700  55  25  64   3   3]\r\n [ 10   1  64 211  59 582  24  39   6   4]\r\n [  4   3  42 114 121  40 650  13   5   8]\r\n [ 14   1  40  57  69  68  11 723   3  14]\r\n [ 93  32  26  37  16  15   6   2 752  21]\r\n [ 34  83   8  42  12  21   6  21  25 748]]<\/code><\/pre>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/cm6.png\" \/><\/p>\n<p>&nbsp;<\/p>\n<h3>Comparison of accuracy [%] between 4-layer and 6-layer CNN<\/h3>\n<p>As we can see in the chart below, the best accuracy for the 4-layer CNN is reached between epochs 20 and 50. 
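The matrices above can also be read numerically: with sklearn's convention each row corresponds to a true class (here every row sums to the 1000 test images of that class), the diagonal counts correct predictions, so each diagonal entry divided by its row sum gives the per-class accuracy, and the trace divided by the total gives the overall accuracy. A small sketch with a made-up 3-class matrix (values are illustrative only):

```python
# Per-class and overall accuracy from a confusion matrix whose rows are
# true classes (the convention used by sklearn.metrics.confusion_matrix).
def per_class_accuracy(cm):
    # diagonal entry over row total, one value per class
    return [row[i] / sum(row) for i, row in enumerate(cm)]

# Illustrative 3-class matrix, not taken from the tutorial's results
cm = [[8, 1, 1],
      [2, 7, 1],
      [0, 1, 9]]

print(per_class_accuracy(cm))  # [0.8, 0.7, 0.9]

# Overall accuracy: trace over total number of samples
overall = sum(cm[i][i] for i in range(len(cm))) / sum(map(sum, cm))
print(overall)  # 0.8
```

Applied to the 10x10 matrices printed above, this shows at a glance which classes (e.g. cats vs. dogs) the networks confuse most.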
For the 6-layer CNN, it is reached between epochs 10 and 20.<\/p>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/acc.png\" \/><\/p>\n<h3>Comparison of training time between 4-layer and 6-layer CNN<\/h3>\n<p>As we can see in the chart below, the training time is considerably longer for the 6-layer network.<\/p>\n<p><img src=\"https:\/\/blog.plon.io\/wp-content\/uploads\/2017\/08\/lp.png\" \/><b>Summary<\/b><\/p>\n<p><span style=\"font-weight: 400;\">After working through this tutorial you learned:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">What the Keras library is and how to use it<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">What deep learning is<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">How to use ready-made datasets<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">What convolutional neural networks (CNNs) are<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">How to build a convolutional neural network step by step<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">How the results of the two models differ<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Basics of machine learning<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Introduction to artificial intelligence (AI)<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">What a confusion matrix is and how to visualize it<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">If you have any questions about the project or this post, please ask in the comments.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You can <\/span><a href=\"https:\/\/github.com\/Ermlab\/cifar-keras-tutorial\"><span style=\"font-weight: 400;\">download it from GitHub<\/span><\/a><span 
style=\"font-weight: 400;\">.<\/span><\/p>\n<h2><b>Resources<\/b><\/h2>\n<ol>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/keras.io\/\"><span style=\"font-weight: 400;\">Official Keras Documentation<\/span><\/a><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/en.wikipedia.org\/wiki\/Keras\"><span style=\"font-weight: 400;\">About Keras on Wikipedia<\/span><\/a><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/en.wikipedia.org\/wiki\/Deep_learning\"><span style=\"font-weight: 400;\">About Deep Learning on Wikipedia<\/span><\/a><\/li>\n<li style=\"font-weight: 400;\"><a href=\"http:\/\/machinelearningmastery.com\/object-recognition-convolutional-neural-networks-keras-deep-learning-library\/\"><span style=\"font-weight: 400;\">Tutorial by Dr. Jason Brownlee<\/span><\/a><\/li>\n<li style=\"font-weight: 400;\"><a href=\"http:\/\/parneetk.github.io\/blog\/cnn-cifar10\/\"><span style=\"font-weight: 400;\">Tutorial by Parneet Kaur<\/span><\/a><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/www.bonaccorso.eu\/2016\/08\/06\/cifar-10-image-classification-with-keras-convnet\/\"><span style=\"font-weight: 400;\">Tutorial by Giuseppe Bonaccorso<\/span><\/a><\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>The CIFAR-10 dataset consists of 60000 32&#215;32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. Recognizing photos from the cifar-10 collection is one of the most common problems in the today&#8217;s world of machine learning. 
I\u2019m going to show you &#8211; step by step [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":3027,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[85],"tags":[118,87],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v15.9.1 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>CIFAR-10 classification using Keras Tutorial - Ermlab Software<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"CIFAR-10 classification using Keras Tutorial - Ermlab Software\" \/>\n<meta property=\"og:description\" content=\"The CIFAR-10 dataset consists of 60000 32&#215;32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. Recognizing photos from the cifar-10 collection is one of the most common problems in the today&#8217;s world of machine learning. 
I\u2019m going to show you &#8211; step by step [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/\" \/>\n<meta property=\"og:site_name\" content=\"Ermlab Software\" \/>\n<meta property=\"article:published_time\" content=\"2018-08-27T13:39:24+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2018-09-12T20:48:05+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/07\/asymmetry-chlorophyll-color-1268129-1024x683.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"683\" \/>\n<meta name=\"twitter:card\" content=\"summary\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\">\n\t<meta name=\"twitter:data1\" content=\"9 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/ermlab.com\/#website\",\"url\":\"https:\/\/ermlab.com\/\",\"name\":\"Ermlab Software\",\"description\":\"Data science, aplikacje web i mobilne. 
Projektujemy aplikacje na zam\\u00f3wienie.\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/ermlab.com\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/07\/asymmetry-chlorophyll-color-1268129.jpg\",\"width\":5760,\"height\":3840},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/#webpage\",\"url\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/\",\"name\":\"CIFAR-10 classification using Keras Tutorial - Ermlab Software\",\"isPartOf\":{\"@id\":\"https:\/\/ermlab.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/#primaryimage\"},\"datePublished\":\"2018-08-27T13:39:24+00:00\",\"dateModified\":\"2018-09-12T20:48:05+00:00\",\"author\":{\"@id\":\"https:\/\/ermlab.com\/#\/schema\/person\/cd6459e58479af9087fff64e1a66baaf\"},\"breadcrumb\":{\"@id\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ermlab.com\/en\/\",\"url\":\"https:\/\/ermlab.com\/en\/\",\"name\":\"Strona 
g\\u0142\\u00f3wna\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/\",\"url\":\"https:\/\/ermlab.com\/en\/blog\/nlp\/cifar-10-classification-using-keras-tutorial\/\",\"name\":\"CIFAR-10 classification using Keras Tutorial\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/ermlab.com\/#\/schema\/person\/cd6459e58479af9087fff64e1a66baaf\",\"name\":\"Szymon P\\u0142otka\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/ermlab.com\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/b5a81c7942fac551a03899e6b1ee5f2a?s=96&r=g\",\"caption\":\"Szymon P\\u0142otka\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts\/2747"}],"collection":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/comments?post=2747"}],"version-history":[{"count":4,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts\/2747\/revisions"}],"predecessor-version":[{"id":3029,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts\/2747\/revisions\/3029"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/media\/3027"}],"wp:attachment":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/media?parent=2747"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/categories?post=2747"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/tags?post=2747"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}