{"id":3076,"date":"2018-11-07T11:24:30","date_gmt":"2018-11-07T11:24:30","guid":{"rendered":"https:\/\/ermlab.com\/?p=3076"},"modified":"2018-11-07T11:24:30","modified_gmt":"2018-11-07T11:24:30","slug":"breast-cancer-classification-using-scikit-learn-and-keras","status":"publish","type":"post","link":"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/","title":{"rendered":"Breast cancer classification using scikit-learn and Keras"},"content":{"rendered":"<p>The post on the blog will be devoted to the breast cancer classification, implemented using machine learning techniques and neural networks.<\/p>\n<h2>Introduction to Breast Cancer<\/h2>\n<p>The goal of the project is a medical data analysis using artificial intelligence methods such as machine learning and deep learning for classifying cancers (malignant or benign). Breast cancer is the most common cancer occurring among women, and\u00a0this is also the main reason\u00a0for dying from cancer in the world. The most effective way to reduce numbers of death is early detection.<br \/>\nEvery 19 seconds, cancer in women is diagnosed somewhere in the world,\u00a0and every 74 seconds someone dies from breast cancer.<\/p>\n<p>Machine learning allows to precision and fast classification of breast cancer based on numerical data (in our case) and images without leaving home e.g. for a surgical biopsy.<\/p>\n<h2>Data used for the project<\/h2>\n<p>For the project, I used a breast cancer <a href=\"https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/breast-cancer-wisconsin\/\">dataset<\/a> from Wisconsin\u00a0University. The dataset contains 569 samples and 30 features computed from digital images. 
Each sample describes the tumor parameters of one patient.<\/p>\n<p><strong>Feature information:<\/strong><\/p>\n<ol>\n<li>ID<\/li>\n<li>diagnosis<\/li>\n<li>radius<\/li>\n<li>texture<\/li>\n<li>perimeter<\/li>\n<li>area<\/li>\n<li>smoothness<\/li>\n<li>compactness<\/li>\n<li>concavity<\/li>\n<li>concave points<\/li>\n<li>symmetry<\/li>\n<li>fractal dimension<\/li>\n<\/ol>\n<p>For each of the ten measurements (items 3&#8211;12), the mean, standard error and worst value are recorded, which gives the 30 features in total.<\/p>\n<h2>Python packages<\/h2>\n<p>I work daily with Python 3.6+ using a few packages to simplify everyday tasks in data science.<\/p>\n<p>Below are the most important ones.<\/p>\n<ul>\n<li>scikit-learn is a library for machine learning algorithms<\/li>\n<li>Keras is a library for deep learning algorithms<\/li>\n<li>Pandas is used for data processing<\/li>\n<li>Seaborn is used for data visualization<\/li>\n<\/ul>\n<p>All requirements are listed in the Ermlab repository in a requirements.txt file.<\/p>\n<h2>Data processing<\/h2>\n<p>First of all, we need to import our data using the Pandas module.<\/p>\n<pre class=\"lang:python decode:true \">import pandas as pd\r\n\r\n# Load data\r\ndata = pd.read_csv('Data\/data.csv', delimiter=',', header=0)<\/pre>\n<p>Before any feature selection, feature extraction or classification, we start with basic data analysis. Let&#8217;s look at the data.<\/p>\n<pre class=\"lang:python decode:true\"># The head() method shows the first 5 rows of data\r\nprint(data.head())<\/pre>\n<pre class=\"lang:default decode:true\">         id diagnosis     ...       fractal_dimension_worst  Unnamed: 32\r\n0    842302         M     ...                       0.11890          NaN\r\n1    842517         M     ...                       0.08902          NaN\r\n2  84300903         M     ...                       0.08758          NaN\r\n3  84348301         M     ...                       0.17300          NaN\r\n4  84358402         M     ...                       
0.07678          NaN<\/pre>\n<p>Now, we need to drop the unused columns: id (not used for classification), Unnamed: 32 (contains only NaN values) and diagnosis (this is our label). The next step is to convert the strings (M, B) to integers (0, 1) using map(), and to define our features and labels.<\/p>\n<pre class=\"lang:python decode:true\"># Drop unused columns\r\ncolumns = ['Unnamed: 32', 'id', 'diagnosis']\r\n\r\n# Convert strings -&gt; integers\r\nd = {'M': 0, 'B': 1}\r\n\r\n# Define features and labels\r\ny = data['diagnosis'].map(d)\r\nX = data.drop(columns, axis=1)<\/pre>\n<p>Our first plot shows the number of malignant and benign samples.<\/p>\n<pre class=\"lang:python decode:true\"># Plot number of M - malignant and B - benign cancer\r\n\r\nax = sns.countplot(y, label=\"Count\", palette=\"muted\")\r\nB, M = y.value_counts()\r\nplt.savefig('count.png')\r\nprint('Number of benign cancer: ', B)\r\nprint('Number of malignant cancer: ', M)<\/pre>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/09\/count.png\" \/><\/p>\n<p style=\"text-align: center;\">Picture 1. Count of Benign and Malignant cancer<\/p>\n<p>We have 357 benign and 212 malignant samples of cancer.<\/p>\n<p>Let&#8217;s split our data into training and test sets and normalize them.<\/p>\n<pre class=\"lang:python decode:true\"># Split dataset into training (80%) and test (20%) set\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\r\n\r\n# Normalize data (the test set uses training-set statistics to avoid data leakage)\r\nX_train_N = (X_train-X_train.mean())\/(X_train.max()-X_train.min())\r\nX_test_N = (X_test-X_train.mean())\/(X_train.max()-X_train.min())<\/pre>\n<h2>Dimensionality Reduction<\/h2>\n<p>Principal Component Analysis (PCA) is by far the most popular dimensionality reduction algorithm.<\/p>\n<p>Another very useful piece of information is the <a href=\"https:\/\/ro-che.info\/articles\/2017-12-11-pca-explained-variance\"><strong>Explained Variance Ratio<\/strong><\/a> of each principal component. 
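In scikit-learn this ratio is exposed as the explained_variance_ratio_ attribute of a fitted PCA object. A minimal sketch, using the copy of the dataset bundled with scikit-learn (note that the exact component counts depend on which scaling is applied):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_breast_cancer(return_X_y=True)

# Without standardization, large-scale features (e.g. area) dominate,
# so a single component captures almost all of the variance
pca_raw = PCA().fit(X)
print(round(pca_raw.explained_variance_ratio_[0], 3))

# With standardization the variance spreads over more components;
# count how many are needed to reach 95% of the total variance
pca_std = PCA().fit(StandardScaler().fit_transform(X))
n_95 = np.argmax(np.cumsum(pca_std.explained_variance_ratio_) >= 0.95) + 1
print(n_95)
```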
It indicates the proportion of the dataset&#8217;s variance that lies along each principal component.<\/p>\n<p>&nbsp;<\/p>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/09\/pcavariancewithoutstd.png\" \/><\/p>\n<p style=\"text-align: center;\">Picture 2. Variance ratio of PCA without Std<\/p>\n<p>As you can see in Picture 2, without data standardization a single principal component already captures almost all of the variance. To learn more, let&#8217;s standardize the data, as presented\u00a0in Picture 3.<\/p>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/09\/pcavariancewithstd.png\" \/><\/p>\n<p style=\"text-align: center;\">Picture 3. Variance ratio of PCA with Std<\/p>\n<p>As you can see in Picture 3, with data standardization six components are necessary to reach 95% of the variance.<\/p>\n<h2>Classification<\/h2>\n<p>In this section, we compare the classification results of several popular classifiers and neural networks with different architectures.<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Support_vector_machine\"><strong>Support Vector Machines (SVM)<\/strong><\/a><\/p>\n<pre class=\"lang:python decode:true\">svc = svm.SVC(kernel='linear', C=1)\r\n\r\n# Pipeline (pca is the PCA transformer created earlier)\r\nmodel = Pipeline([\r\n    ('reduce_dim', pca),\r\n    ('svc', svc)\r\n])\r\n\r\n# Fit\r\nmodel.fit(X_train_N, y_train)\r\nsvm_score = cross_val_score(model, X, y, cv=10, scoring='accuracy')<\/pre>\n<p>SVM accuracy = 98.83%<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/K-nearest_neighbors_algorithm\"><strong>K-Nearest Neighbours (K-NN)<\/strong><\/a><\/p>\n<pre class=\"lang:python decode:true \">def KnearestNeighbors():\r\n    \"\"\"\r\n    Compute cross-validated accuracy using the k-NN algorithm for k = 1..4\r\n    \"\"\"\r\n    for i in range(1, 5):\r\n        knn = KNeighborsClassifier(n_neighbors=i)\r\n        knnp = Pipeline([\r\n            ('reduce_dim', pca),\r\n            ('knn', knn)\r\n        ])\r\n        k_score = cross_val_score(knnp, X, y, cv=10, 
scoring=\"accuracy\")<\/pre>\n<p>K-NN accuracy: 96,74%<\/p>\n<p><strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Decision_tree\">Decision Tree<\/a><\/strong><\/p>\n<pre class=\"lang:python decode:true \">trees = tree.DecisionTreeClassifier()\r\ntreeclf = trees.fit(X_train_N, y_train)\r\ntreep = Pipeline([\r\n    ('reduce_dim', pca),\r\n    ('trees', trees)\r\n    ])\r\nscore_trees = cross_val_score(treep, X, y, cv=10)<\/pre>\n<p>simple visualization of Decision Tree:<\/p>\n<pre class=\"lang:python decode:true \">feature_names = X.columns.values\r\n\r\ndef plot_decision_tree1(a,b):\r\n    \"\"\"\r\n    Function for plot decision tree\r\n    :param a: decision tree classifier\r\n    :param b: feature names\r\n    :return: graph\r\n    \"\"\"\r\n    dot_data = tree.export_graphviz(a, out_file='Plots\/tree.dot',\r\n                             feature_names=b,\r\n                             class_names=['Malignant','Benign'],\r\n                             filled=False, rounded=True,\r\n                             special_characters=False)\r\n    graph = graphviz.Source(dot_data)\r\n    return graph<\/pre>\n<p><img src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/09\/graphviz.png\" \/><\/p>\n<p style=\"text-align: center;\">Picture 4. 
Visualization of Decision Tree<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>Decision Tree accuracy: 96.24%<\/p>\n<p><strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Random_forest\">Random Forest<\/a><\/strong><\/p>\n<pre class=\"lang:python decode:true\">rf = RandomForestClassifier()\r\nrfp = Pipeline([\r\n    ('reduce_dim', pca),\r\n    ('rf', rf)\r\n])\r\nscore_rf = cross_val_score(rfp, X, y, cv=10)<\/pre>\n<p>Random Forest accuracy = 95.9%<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Naive_Bayes_classifier\"><strong>Naive Bayes Classifier<\/strong><\/a><\/p>\n<pre class=\"lang:python decode:true\">gnb = GaussianNB()\r\ngnbclf = gnb.fit(X_train_N, y_train)\r\ngnbp = Pipeline([\r\n    ('reduce_dim', pca),\r\n    ('gnb', gnb)\r\n])\r\n# Cross-validate the whole pipeline, not the bare classifier\r\ngnb_score = cross_val_score(gnbp, X, y, cv=10, scoring='accuracy')<\/pre>\n<p>Naive Bayes Classifier accuracy = 95.38%<\/p>\n<p><strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_neural_network\">Neural Networks<\/a><\/strong><\/p>\n<pre class=\"lang:python decode:true \">###### Neural Networks ######\r\n\r\nscaler = StandardScaler()\r\n\r\nnum_epoch = 10\r\n\r\n# 1-layer NN\r\ndef l1neuralNetwork():\r\n    model = Sequential()\r\n    model.add(Dense(input_dim=30, units=2))\r\n    model.add(Activation('softmax'))\r\n    model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])\r\n    #model.summary()\r\n\r\n    model.fit(scaler.fit_transform(X_train_N), y_train, epochs=num_epoch,\r\n              shuffle=True)\r\n    y_pred = model.predict_classes(scaler.transform(X_test_N.values))\r\n\r\n# 3-layer NN\r\ndef l3neuralNetwork():\r\n    model = Sequential()\r\n    model.add(Dense(input_dim=30, units=30))\r\n    model.add(Dense(input_dim=30, units=30))\r\n    model.add(Dense(input_dim=30, units=2))\r\n    model.add(Activation('softmax'))\r\n    model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])\r\n    #model.summary()\r\n    
model.fit(scaler.fit_transform(X_train_N), y_train, epochs=num_epoch,\r\n              shuffle=True)\r\n    y_pred = model.predict_classes(scaler.transform(X_test_N.values))\r\n\r\n# 5-layer NN\r\ndef l5neuralNetwork():\r\n    model = Sequential()\r\n    model.add(Dense(input_dim=30, units=30))\r\n    model.add(Dense(input_dim=30, units=30))\r\n    model.add(Dense(input_dim=30, units=30))\r\n    model.add(Dense(input_dim=30, units=30))\r\n    model.add(Dense(input_dim=30, units=2))\r\n    model.add(Activation('softmax'))\r\n    model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])\r\n    #model.summary()\r\n    model.fit(scaler.fit_transform(X_train_N), y_train, epochs=num_epoch,\r\n              shuffle=True)\r\n    y_pred = model.predict_classes(scaler.transform(X_test_N.values))<\/pre>\n<p>Accuracy for the 1, 3 and 5-layer Neural Networks: 97.07%, 96.73% and 97.66%<\/p>\n<p>As we can see, in this comparison of classifiers the best accuracy is achieved with the SVM algorithm.<\/p>\n<p>The worst is achieved with the Naive Bayes Classifier.<\/p>\n<h2>Classification metrics<\/h2>\n<p>Our classification metrics are computed for the classifier with the best accuracy (the SVM algorithm).<\/p>\n<h4>Confusion Matrix<\/h4>\n<p>The Confusion Matrix is a performance measurement for a machine learning classification problem whose output can be two or more classes.<\/p>\n<p>It&#8217;s useful for measuring Precision, Recall, F1 score, accuracy and AUC.<\/p>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/10\/cf.png\" \/><\/p>\n<p>TP (True Positive) &#8211; you predicted positive and it is true,<\/p>\n<p>FP (False Positive) &#8211; you predicted positive and it is false,<\/p>\n<p>FN (False Negative) &#8211; you predicted negative and it is false,<\/p>\n<p>TN (True Negative) &#8211; you predicted negative and it is true.<\/p>\n<pre class=\"lang:python decode:true\">y_pred = model.predict(X_test_N)\r\ncm = confusion_matrix(y_test, y_pred)\r\ndf_cm 
= pd.DataFrame(cm, range(2),\r\n                  range(2))\r\nplt.figure(figsize=(10,7))\r\nsns.set(font_scale=1.4)  # for label size\r\ncm_plot = sns.heatmap(df_cm, annot=True, fmt='n', annot_kws={\"size\": 12})<\/pre>\n<p>&nbsp;<\/p>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/09\/confusionmatrix.png\" \/><\/p>\n<p style=\"text-align: center;\">Picture 5. Visualization of Confusion Matrix<\/p>\n<h4>Precision, Recall &amp; F1 Score<\/h4>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/10\/precision.png\" \/><\/p>\n<p>Precision: out of all the samples predicted as positive, how many are actually positive.<\/p>\n<p><img loading=\"lazy\" class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/10\/recall.png\" width=\"198\" height=\"66\" \/><\/p>\n<p>Recall: out of all the actually positive samples, how many we predicted correctly.<\/p>\n<p>&nbsp;<\/p>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/10\/f1.png\" \/><\/p>\n<p>The F1 score is the harmonic mean of the precision and recall.<\/p>\n<pre class=\"lang:python decode:true\">print(\"Precision score: {}\".format(round(precision_score(y_test, y_pred), 3)))\r\nprint(\"Recall score: {}\".format(round(recall_score(y_test, y_pred), 3)))\r\nprint(\"F1 score: {}\".format(round(f1_score(y_test, y_pred, average='weighted'), 3)))<\/pre>\n<h4>ROC Curve<\/h4>\n<p>The ROC Curve (Receiver Operating Characteristic)\u00a0is a performance measurement for a classification problem at various threshold settings. 
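Both the ROC curve and the area under it (AUC) can be computed with scikit-learn's roc_curve and auc functions. A self-contained sketch, assuming the bundled copy of the dataset and a linear SVM similar to the one used above:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Linear SVM; decision_function provides the scores needed for the ROC curve
model = make_pipeline(StandardScaler(), SVC(kernel='linear', C=1))
y_score = model.fit(X_train, y_train).decision_function(X_test)

fpr, tpr, thresholds = roc_curve(y_test, y_score)
print(round(auc(fpr, tpr), 3))  # area under the ROC curve
```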
It tells how well the model is capable of distinguishing between classes.<\/p>\n<pre class=\"lang:python decode:true\">y_score = model.fit(X_train_N, y_train).decision_function(X_test_N)\r\n\r\nfpr, tpr, thresholds = roc_curve(y_test, y_score)\r\n\r\n\r\nfig, ax = plt.subplots(1, figsize=(12, 6))\r\nplt.plot(fpr, tpr, color='blue', label='ROC curve for SVM')\r\nplt.plot([0, 1], [0, 1], 'k--')\r\nplt.xlabel('False Positive Rate (1 - specificity)')\r\nplt.ylabel('True Positive Rate (sensitivity)')\r\nplt.title('ROC Curve for Breast Cancer Classifier')\r\nplt.legend(loc=\"lower right\")<\/pre>\n<p>&nbsp;<\/p>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/09\/roccurve.png\" \/><\/p>\n<p style=\"text-align: center;\">Picture 6. ROC Curve<\/p>\n<h4>Correlation Map<\/h4>\n<pre class=\"lang:python decode:true \">f, ax = plt.subplots(figsize=(14,14))\r\ncorr_plot = sns.heatmap(X.corr(), annot=False, linewidths=.5, fmt='.1f', ax=ax)<\/pre>\n<p>&nbsp;<\/p>\n<p><img class=\"aligncenter\" src=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/09\/corrmap.png\" \/><\/p>\n<p style=\"text-align: center;\">Picture 7. Visualization of Correlation Map for all features<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post is devoted to breast cancer classification, implemented using machine learning techniques and neural networks. Introduction to Breast Cancer The goal of the project is a medical data analysis using artificial intelligence methods such as machine learning and deep learning for classifying cancers (malignant or benign). 
Breast cancer is [&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":3661,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[113,127],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v15.9.1 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Breast cancer classification using scikit-learn and Keras - Ermlab Software<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Breast cancer classification using scikit-learn and Keras - Ermlab Software\" \/>\n<meta property=\"og:description\" content=\"The post on the blog will be devoted to the breast cancer classification, implemented using machine learning techniques and neural networks. Introduction to Breast Cancer The goal of the project is a medical data analysis using artificial intelligence methods such as machine learning and deep learning for classifying cancers (malignant or benign). 
Breast cancer is [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/\" \/>\n<meta property=\"og:site_name\" content=\"Ermlab Software\" \/>\n<meta property=\"article:published_time\" content=\"2018-11-07T11:24:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/10\/agenda-analysis-business-990818.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2500\" \/>\n\t<meta property=\"og:image:height\" content=\"1522\" \/>\n<meta name=\"twitter:card\" content=\"summary\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\">\n\t<meta name=\"twitter:data1\" content=\"7 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/ermlab.com\/#website\",\"url\":\"https:\/\/ermlab.com\/\",\"name\":\"Ermlab Software\",\"description\":\"Data science, aplikacje web i mobilne. 
Projektujemy aplikacje na zam\\u00f3wienie.\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/ermlab.com\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/ermlab.com\/wp-content\/uploads\/2018\/10\/agenda-analysis-business-990818.jpg\",\"width\":2500,\"height\":1522},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/#webpage\",\"url\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/\",\"name\":\"Breast cancer classification using scikit-learn and Keras - Ermlab Software\",\"isPartOf\":{\"@id\":\"https:\/\/ermlab.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/#primaryimage\"},\"datePublished\":\"2018-11-07T11:24:30+00:00\",\"dateModified\":\"2018-11-07T11:24:30+00:00\",\"author\":{\"@id\":\"https:\/\/ermlab.com\/#\/schema\/person\/cd6459e58479af9087fff64e1a66baaf\"},\"breadcrumb\":{\"@id\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ermlab.com\/en\/\",\"url\":\"https:\/\/ermlab.com\/en\/\",\"name\":\
"Strona g\\u0142\\u00f3wna\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/\",\"url\":\"https:\/\/ermlab.com\/en\/blog\/data-science\/breast-cancer-classification-using-scikit-learn-and-keras\/\",\"name\":\"Breast cancer classification using scikit-learn and Keras\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/ermlab.com\/#\/schema\/person\/cd6459e58479af9087fff64e1a66baaf\",\"name\":\"Szymon P\\u0142otka\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/ermlab.com\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/b5a81c7942fac551a03899e6b1ee5f2a?s=96&r=g\",\"caption\":\"Szymon P\\u0142otka\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts\/3076"}],"collection":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/comments?post=3076"}],"version-history":[{"count":8,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts\/3076\/revisions"}],"predecessor-version":[{"id":3664,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/posts\/3076\/revisions\/3664"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/media\/3661"}],"wp:attachment":[{"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/media?parent=3076"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/categories?post=3076"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ermlab.com\/en\/wp-json\/wp\/v2\/tags?post=3076"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}
]}}