<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Depends on the definition</title><link>https://www.depends-on-the-definition.com/</link><description>Recent content on Depends on the definition</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>© depends-on-the-definition 2017-2022</copyright><lastBuildDate>Sun, 19 Feb 2023 00:00:00 +0000</lastBuildDate><atom:link href="https://www.depends-on-the-definition.com/index.xml" rel="self" type="application/rss+xml"/><item><title>Causal graphs and the back-door criterion - A practical test on deconfounding</title><link>https://www.depends-on-the-definition.com/causal-graphs-and-deconfounding/</link><pubDate>Sun, 19 Feb 2023 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/causal-graphs-and-deconfounding/</guid><description>I recently read up on causal inference and, since I didn&amp;rsquo;t really have a use case for it right now, I played around with some data and some causal graphs. In this article, I looked at some causal graphs from Chapter 4 of the &amp;ldquo;Book of Why&amp;rdquo; by Judea Pearl and Dana Mackenzie and created simulated data based on them.</description></item><item><title>How to calculate shapley values from scratch</title><link>https://www.depends-on-the-definition.com/shapley-values-from-scratch/</link><pubDate>Tue, 19 Jul 2022 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/shapley-values-from-scratch/</guid><description>The Shapley value is a popular and useful method to explain machine learning models. The Shapley value of a feature is the average contribution of a feature value to the prediction. 
In this article, I&amp;rsquo;ll show you how to compute Shapley values from scratch.</description></item><item><title>How to add new tokens to huggingface transformers vocabulary</title><link>https://www.depends-on-the-definition.com/how-to-add-new-tokens-to-huggingface-transformers/</link><pubDate>Thu, 12 May 2022 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/how-to-add-new-tokens-to-huggingface-transformers/</guid><description>In this short article, you&amp;rsquo;ll learn how to add new tokens to the vocabulary of a huggingface transformer model.</description></item><item><title>How to test error messages with pytest</title><link>https://www.depends-on-the-definition.com/test-error-messages-with-pytest/</link><pubDate>Wed, 20 Apr 2022 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/test-error-messages-with-pytest/</guid><description>In this short article, you will learn how and when to test the error message of an exception with pytest.</description></item><item><title>Learning unsupervised embeddings for textual similarity with transformers</title><link>https://www.depends-on-the-definition.com/unsupervised-text-embeddings-with-transformers/</link><pubDate>Mon, 24 May 2021 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/unsupervised-text-embeddings-with-transformers/</guid><description>In this article, we look at SimCSE, a simple contrastive sentence embedding framework, which can be used to produce superior sentence embeddings from either unlabeled or labeled data. 
The idea behind the unsupervised SimCSE is to simply predict the input sentence itself, with only dropout used as noise.</description></item><item><title>The missing guide on data preparation for language modeling</title><link>https://www.depends-on-the-definition.com/missing-guide-on-data-preparation-for-language-modeling/</link><pubDate>Fri, 25 Sep 2020 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/missing-guide-on-data-preparation-for-language-modeling/</guid><description>Language models have gained popularity in NLP in recent years. Sometimes you might have enough data and want to train a language model like BERT or RoBERTa from scratch. While there are many tutorials about tokenization and on how to train the model, there is not much information about how to load the data into the model. This guide aims to close this gap.</description></item><item><title>Data augmentation with transformer models for named entity recognition</title><link>https://www.depends-on-the-definition.com/data-augmentation-with-transformers/</link><pubDate>Sun, 23 Aug 2020 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/data-augmentation-with-transformers/</guid><description>In this article we sample from pre-trained transformers to augment small, labeled text datasets for named entity recognition.</description></item><item><title>How to approach almost any real-world NLP problem</title><link>https://www.depends-on-the-definition.com/how-to-approach-nlp/</link><pubDate>Tue, 07 Jul 2020 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/how-to-approach-nlp/</guid><description>This time, I&amp;rsquo;m going to talk about how to approach general NLP problems. 
But we&amp;rsquo;re not going to look at the standard tips which are tossed around on the internet, for example on platforms like kaggle.</description></item><item><title>Data validation for NLP applications with topic models</title><link>https://www.depends-on-the-definition.com/data-validation-with-topic-models/</link><pubDate>Wed, 03 Jun 2020 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/data-validation-with-topic-models/</guid><description>In a recent article, we saw how to implement a basic validation pipeline for text data. Once a machine learning model has been deployed, its behavior must be monitored. The predictive performance is expected to degrade over time as the environment changes.</description></item><item><title>Latent Dirichlet allocation from scratch</title><link>https://www.depends-on-the-definition.com/lda-from-scratch/</link><pubDate>Wed, 20 May 2020 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/lda-from-scratch/</guid><description>Today, I&amp;rsquo;m going to talk about topic models in NLP. Specifically, we will see how the Latent Dirichlet Allocation model works and we will implement it from scratch in numpy.
What is a topic model? Assume we are given a large collection of documents.</description></item><item><title>Data validation for NLP machine learning applications</title><link>https://www.depends-on-the-definition.com/data-validation-for-nlp/</link><pubDate>Thu, 30 Jan 2020 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/data-validation-for-nlp/</guid><description>An important part of machine learning applications is making sure that there is no data degeneration while a model is in production. Sometimes downstream data processing changes and machine learning models are very prone to silent failure due to this.</description></item><item><title>Find label issues with confident learning for NLP</title><link>https://www.depends-on-the-definition.com/confident-learning-for-nlp/</link><pubDate>Tue, 21 Jan 2020 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/confident-learning-for-nlp/</guid><description>In every machine learning project, the training data is the most valuable part of your system. In many real-world machine learning projects the largest gains in performance come from improving training data quality. Training data is often hard to acquire and since the data can be large, quality can be hard to check.</description></item><item><title>How explainable AI fails and what to do about it</title><link>https://www.depends-on-the-definition.com/how-explainable-ai-fails-and-what-to-do-about-it/</link><pubDate>Sat, 28 Dec 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/how-explainable-ai-fails-and-what-to-do-about-it/</guid><description>This article relies heavily on &amp;quot;Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead&amp;quot; by Cynthia Rudin and on some of my personal experiences. 
I will mainly focus on technical issues and leave out most of the governance and ethics related issues that derive from these.</description></item><item><title>How the LIME algorithm fails</title><link>https://www.depends-on-the-definition.com/how-the-lime-algorithm-fails/</link><pubDate>Tue, 10 Dec 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/how-the-lime-algorithm-fails/</guid><description>You may know the LIME algorithm from some of my earlier blog posts. It can be quite useful to “debug” data sets and understand machine learning models better. But LIME is fooled very easily.</description></item><item><title>Cluster discovery in german recipes</title><link>https://www.depends-on-the-definition.com/cluster-discovery/</link><pubDate>Sun, 24 Nov 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/cluster-discovery/</guid><description>If you are dealing with a large collection of documents, you will often find yourself in the situation where you are looking for some structure and understanding what is contained in the documents. Here I&amp;rsquo;ll show you a convenient method for discovering and understanding clusters of text documents.</description></item><item><title>That was PyCon DE &amp; PyData Berlin 2019</title><link>https://www.depends-on-the-definition.com/pycon-pydata-berlin-2019/</link><pubDate>Tue, 15 Oct 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/pycon-pydata-berlin-2019/</guid><description>Last weekend, PyCon DE and PyData Berlin joined forces in Berlin for a great conference event that I was lucky to attend. 
The speaker line-up was great and often it was hard to choose which talk or tutorial to attend.</description></item><item><title>Model uncertainty in deep learning with Monte Carlo dropout in keras</title><link>https://www.depends-on-the-definition.com/model-uncertainty-in-deep-learning-with-monte-carlo-dropout/</link><pubDate>Mon, 05 Aug 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/model-uncertainty-in-deep-learning-with-monte-carlo-dropout/</guid><description>Deep learning models have shown amazing performance in a lot of fields such as autonomous driving, manufacturing, and medicine, to name a few. However, these are fields in which representing model uncertainty is of crucial importance.</description></item><item><title>Interpretable named entity recognition with keras and LIME</title><link>https://www.depends-on-the-definition.com/interpretable-named-entity-recognition/</link><pubDate>Sat, 08 Jun 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/interpretable-named-entity-recognition/</guid><description>In the previous posts, we saw how to build strong and versatile named entity recognition systems and how to properly evaluate them. But often you want to understand your model beyond the metrics. So in this tutorial I will show you how you can build an explainable and interpretable NER system with keras and the LIME algorithm.</description></item><item><title>Introduction to entity embeddings with neural networks</title><link>https://www.depends-on-the-definition.com/introduction-to-embeddings-with-neural-networks/</link><pubDate>Sun, 14 Apr 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/introduction-to-embeddings-with-neural-networks/</guid><description>Since a lot of people recently asked me how neural networks learn the embeddings for categorical variables, for example words, I&amp;rsquo;m going to write about it today. 
You all might have heard about methods like word2vec for creating dense vector representations of words in an unsupervised way.</description></item><item><title>Introduction to n-gram language models</title><link>https://www.depends-on-the-definition.com/introduction-n-gram-language-models/</link><pubDate>Sun, 07 Apr 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/introduction-n-gram-language-models/</guid><description>You might have heard that neural language models power a lot of the recent advances in natural language processing, namely large models like BERT and GPT-2. But there is a fairly old approach to language modeling that is quite successful in a way.</description></item><item><title>Learn to identify ingredients with neural networks</title><link>https://www.depends-on-the-definition.com/identify-ingredients-with-neural-networks/</link><pubDate>Fri, 08 Mar 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/identify-ingredients-with-neural-networks/</guid><description>Today we want to build a model that can identify ingredients in cooking recipes. I use the &amp;ldquo;German Recipes Dataset&amp;rdquo;, which I recently published on kaggle. 
We have more than 12000 German recipes and their ingredient lists.</description></item><item><title>How to use magnitude with keras</title><link>https://www.depends-on-the-definition.com/how-to-magnitude-with-keras/</link><pubDate>Tue, 12 Feb 2019 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/how-to-magnitude-with-keras/</guid><description>This time we have a look into the magnitude library, a feature-packed Python package and vector storage file format developed by Plasticity for utilizing vector embeddings in machine learning models in a fast, efficient, and simple manner.</description></item><item><title>Text analysis with named entity recognition</title><link>https://www.depends-on-the-definition.com/text-analysis-with-named-entities/</link><pubDate>Wed, 26 Dec 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/text-analysis-with-named-entities/</guid><description>This is the second post of my series about understanding text datasets. If you read my blog regularly, you probably noticed quite a few posts about named entity recognition. In those posts, we focused on finding the named entities and explored different techniques to do this.</description></item><item><title>Named entity recognition with Bert</title><link>https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/</link><pubDate>Mon, 10 Dec 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/</guid><description>In 2018 we saw the rise of pretraining and finetuning in natural language processing. Large neural networks have been trained on general tasks like language modeling and then fine-tuned for classification tasks. 
One of the latest milestones in this development is the release of BERT.</description></item><item><title>Understanding text data with topic models</title><link>https://www.depends-on-the-definition.com/understanding-text-data-with-topic-models/</link><pubDate>Sat, 24 Nov 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/understanding-text-data-with-topic-models/</guid><description>This is the first post of my series about understanding text datasets. A lot of the current NLP progress is made in predictive performance. But in practice, you often want and need to know what is going on in your dataset.</description></item><item><title>LSTM with attention for relation classification</title><link>https://www.depends-on-the-definition.com/attention-lstm-relation-classification/</link><pubDate>Wed, 19 Sep 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/attention-lstm-relation-classification/</guid><description>Once named entities have been identified in a text, we then want to extract the relations that exist between them. As indicated earlier, we will typically be looking for relations between specified types of named entities.</description></item><item><title>Evaluate sequence models in python</title><link>https://www.depends-on-the-definition.com/evaluate-sequence-models/</link><pubDate>Fri, 24 Aug 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/evaluate-sequence-models/</guid><description>An important part of every machine learning project is the proper evaluation of the performance of the system. In this post we will talk about the evaluation of token-based sequence models. 
This is especially tricky because:</description></item><item><title>Image segmentation with test time augmentation with keras</title><link>https://www.depends-on-the-definition.com/test-time-augmentation-keras/</link><pubDate>Thu, 09 Aug 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/test-time-augmentation-keras/</guid><description>In the last post, I introduced the U-Net model for segmenting salt depots in seismic images. This time, we will see how to improve the model by data augmentation and especially test time augmentation (TTA). You will learn how to use data augmentation with segmentation masks and what test time augmentation is and how to use it in keras.</description></item><item><title>U-Net for segmenting seismic images with keras</title><link>https://www.depends-on-the-definition.com/unet-keras-segmenting-images/</link><pubDate>Thu, 26 Jul 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/unet-keras-segmenting-images/</guid><description>Today I&amp;rsquo;m going to write about a kaggle competition I started working on recently. In the TGS Salt Identification Challenge, you are asked to segment salt deposits beneath the Earth&amp;rsquo;s surface. So we are given a set of seismic images that are $101 \times 101$ pixels each and each pixel is classified as either salt or sediment.</description></item><item><title>State-of-the-art named entity recognition with residual LSTM and ELMo</title><link>https://www.depends-on-the-definition.com/named-entity-recognition-with-residual-lstm-and-elmo/</link><pubDate>Sun, 01 Jul 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/named-entity-recognition-with-residual-lstm-and-elmo/</guid><description>This is the sixth post in my series about named entity recognition. This time I&amp;rsquo;m going to show you some cutting edge stuff. We will use a residual LSTM network together with ELMo embeddings, developed at Allen NLP. 
You will learn how to wrap a tensorflow hub pre-trained model to work with keras. The resulting model will give you state-of-the-art performance on the named entity recognition task.</description></item><item><title>Debugging black-box text classifiers with LIME</title><link>https://www.depends-on-the-definition.com/debugging-black-box-text-classifiers-with-lime/</link><pubDate>Sat, 02 Jun 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/debugging-black-box-text-classifiers-with-lime/</guid><description>Often in text classification, we use so-called black-box classifiers. By black-box classifiers I mean a classification system where the internal workings are completely hidden from you. A famous example is deep neural nets, in text classification often recurrent or convolutional neural nets.</description></item><item><title>Explain neural networks with keras and eli5</title><link>https://www.depends-on-the-definition.com/keras-and-eli5/</link><pubDate>Sat, 02 Jun 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/keras-and-eli5/</guid><description>In this post, I’m going to show you how you can use a neural network from keras with the LIME algorithm implemented in the eli5 TextExplainer class. For this we will write a scikit-learn compatible wrapper for a keras bidirectional LSTM model. The wrapper will also handle the tokenization and the storage of the vocabulary.</description></item><item><title>PyData Amsterdam 2018</title><link>https://www.depends-on-the-definition.com/pydata-amsterdam-2018/</link><pubDate>Tue, 08 May 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/pydata-amsterdam-2018/</guid><description>Last weekend I participated in the PyData Amsterdam 2018 conference in, you guessed it, Amsterdam. 
It was a great conference; I met a lot of great people and had a very good time in Amsterdam.</description></item><item><title>Enhancing LSTMs with character embeddings for Named entity recognition</title><link>https://www.depends-on-the-definition.com/lstm-with-char-embeddings-for-ner/</link><pubDate>Sun, 15 Apr 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/lstm-with-char-embeddings-for-ner/</guid><description>This is the fifth post in my series about named entity recognition. If you haven&amp;rsquo;t seen the last four, have a look now. The last time we used a CRF-LSTM to model the sequence structure of our sentences.</description></item><item><title>Guide to word vectors with gensim and keras</title><link>https://www.depends-on-the-definition.com/guide-to-word-vectors-with-gensim-and-keras/</link><pubDate>Fri, 16 Mar 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/guide-to-word-vectors-with-gensim-and-keras/</guid><description>Today, I&amp;rsquo;ll tell you what word vectors are, how you create them in python, and finally how you can use them with neural networks in keras. For a long time, NLP methods have used a vector space model to represent words.</description></item><item><title>How to build a smart product: Transfer Learning for Dog Breed Identification with keras</title><link>https://www.depends-on-the-definition.com/transfer-learning-for-dog-breed-identification/</link><pubDate>Sat, 03 Feb 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/transfer-learning-for-dog-breed-identification/</guid><description>This time I will show you how to build a simple &amp;ldquo;AI&amp;rdquo; product with transfer learning. We will build a &amp;ldquo;dog breed identification chat bot&amp;rdquo;. 
In this first post, I will show how to build a good model using keras, augmentation, pre-trained models for transfer learning and fine-tuning.</description></item><item><title>Detecting Network Attacks with Isolation Forests</title><link>https://www.depends-on-the-definition.com/detecting-network-attacks-with-isolation-forests/</link><pubDate>Sat, 27 Jan 2018 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/detecting-network-attacks-with-isolation-forests/</guid><description>In this post, I will show you how to use the isolation forest algorithm to detect attacks on computer networks in python.
The term isolation means separating an instance from the rest of the instances. Since anomalies are ‘few and different’, they are more susceptible to isolation.</description></item><item><title>A strong and simple baseline to classify toxic comments on wikipedia with keras</title><link>https://www.depends-on-the-definition.com/classify-toxic-comments-on-wikipedia/</link><pubDate>Sat, 23 Dec 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/classify-toxic-comments-on-wikipedia/</guid><description>This time we&amp;rsquo;re going to discuss a current machine learning competition on kaggle. In this competition, you’re challenged to build a multi-headed model that’s capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate.</description></item><item><title>Sequence tagging with LSTM-CRFs</title><link>https://www.depends-on-the-definition.com/sequence-tagging-lstm-crf/</link><pubDate>Mon, 27 Nov 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/sequence-tagging-lstm-crf/</guid><description>This is the fourth post in my series about named entity recognition. If you haven&amp;rsquo;t seen the last three, have a look now. The last time we used a recurrent neural network to model the sequence structure of our sentences.</description></item><item><title>Guide to sequence tagging with neural networks</title><link>https://www.depends-on-the-definition.com/guide-sequence-tagging-neural-networks-python/</link><pubDate>Sun, 22 Oct 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/guide-sequence-tagging-neural-networks-python/</guid><description>This is the third post in my series about named entity recognition. If you haven&amp;rsquo;t seen the last two, have a look now. 
The last time we used a conditional random field to model the sequence structure of our sentences.</description></item><item><title>Named entity recognition with conditional random fields in python</title><link>https://www.depends-on-the-definition.com/named-entity-recognition-conditional-random-fields-python/</link><pubDate>Sun, 10 Sep 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/named-entity-recognition-conditional-random-fields-python/</guid><description>This is the second post in my series about named entity recognition. If you haven&amp;rsquo;t seen the first one, have a look now. Last time we started by memorizing entities for words and then used a simple classification model to improve the results a bit.</description></item><item><title>Introduction to named entity recognition in python</title><link>https://www.depends-on-the-definition.com/introduction-named-entity-recognition-python/</link><pubDate>Sat, 26 Aug 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/introduction-named-entity-recognition-python/</guid><description>In this post, I will introduce you to something called Named Entity Recognition (NER). NER is a part of natural language processing (NLP) and information retrieval (IR). The task in NER is to find the entity type of words.</description></item><item><title>Classifying genres of movies by looking at the poster - A neural approach</title><link>https://www.depends-on-the-definition.com/classifying-genres-of-movies-by-looking-at-the-poster-a-neural-approach/</link><pubDate>Sat, 12 Aug 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/classifying-genres-of-movies-by-looking-at-the-poster-a-neural-approach/</guid><description>In this article, we will apply the concept of multi-label multi-class classification with neural networks from the last post to classify movie posters by genre. First we import the usual suspects in python.
import numpy as np import pandas as pd import glob import scipy.</description></item><item><title>Guide to multi-class multi-label classification with neural networks in python</title><link>https://www.depends-on-the-definition.com/guide-to-multi-label-classification-with-neural-networks/</link><pubDate>Fri, 11 Aug 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/guide-to-multi-label-classification-with-neural-networks/</guid><description>Often in machine learning tasks, you have multiple possible labels for one sample that are not mutually exclusive. This is called a multi-class, multi-label classification problem. Obvious suspects are image classification and text classification, where a document can have multiple topics.</description></item><item><title>Efficient AWS usage for deep learning</title><link>https://www.depends-on-the-definition.com/efficient-aws-for-deep-learning/</link><pubDate>Tue, 01 Aug 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/efficient-aws-for-deep-learning/</guid><description>When running experiments with deep neural nets you want to use appropriate hardware. Most of the time I work on a thinkpad laptop with no GPU. This makes experimenting painfully slow. A convenient way is to use an AWS instance, for example the p2.</description></item><item><title>Getting started with Multivariate Adaptive Regression Splines</title><link>https://www.depends-on-the-definition.com/getting-started-with-multivariate-adaptive-regression-spline/</link><pubDate>Sun, 30 Jul 2017 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/getting-started-with-multivariate-adaptive-regression-spline/</guid><description>In this post we will introduce the multivariate adaptive regression splines model (MARS) using python. This is a regression model that can be seen as a non-parametric extension of the standard linear model.
The multivariate adaptive regression splines model MARS builds a model of the form</description></item><item><title>About</title><link>https://www.depends-on-the-definition.com/about/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/about/</guid><description>the author Hi, my name is Tobias. I’m a trained mathematician who now works as a data scientist and machine learning engineer. In 2018 I achieved the status of a kaggle master and still spend quite some time there.</description></item><item><title>Legal details</title><link>https://www.depends-on-the-definition.com/imprint/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/imprint/</guid><description>Contact E-Mail: info@depends-on-the-definition.com
Accountability for content The contents of our pages have been created with the utmost care. However, we cannot guarantee the contents&amp;rsquo; accuracy, completeness or topicality. According to statutory provisions, we are furthermore responsible for our own content on these web pages.</description></item><item><title>Privacy</title><link>https://www.depends-on-the-definition.com/privacy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/privacy/</guid><description>It is Depends-on-the-definition&amp;rsquo;s policy to respect your privacy regarding any information we may collect while operating our website. This Privacy Policy applies to https://depends-on-the-definition.com (hereinafter, &amp;ldquo;us&amp;rdquo;, &amp;ldquo;we&amp;rdquo;, or &amp;ldquo;https://depends-on-the-definition.com&amp;rdquo;). We respect your privacy and are committed to protecting personally identifiable information you may provide us through the Website.</description></item><item><title>Work with me</title><link>https://www.depends-on-the-definition.com/work-with-me/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://www.depends-on-the-definition.com/work-with-me/</guid><description>I&amp;rsquo;m a freelancer with a passion for supporting companies in solving their business challenges by leveraging their data and applying advanced machine learning.
Services offered Hands-on teaching/consulting - Learn how to solve your problems with code; the client chooses what is covered/taught.</description></item></channel></rss>