When to Avoid Deep Learning

Matt Przybyla
Nov 12, 2021


Introduction


This article is intended for data scientists who may be considering deep learning algorithms and want to know more about the drawbacks of implementing these types of models in their work. Deep learning algorithms have many benefits, are powerful, and can be fun to show off, but there are times when you should avoid them. I will discuss those times below, so keep on reading if you would like a deeper dive into deep learning.

When You Want to Easily Explain



Photo by Malte Helmhold on Unsplash [2].

Because other algorithms have been around longer, they have extensive documentation, including examples and functions that make interpretability easier, and the way they work is inherently easier to explain. Deep learning can be intimidating to data scientists for this reason as well: it is a turn-off to use a deep learning algorithm when you are unsure how to explain it to a stakeholder.

Here are 3 examples of when you would have trouble explaining deep learning:
  • When you want to describe the top features of your model — the features become hidden inputs, so you will not know what caused a certain prediction to happen, and if you need to prove to stakeholders or customers why a certain output was achieved, the model acts more like a black box
  • When you want to tune your hyperparameters like learning rate and batch size
  • When you want to explain how the algorithm works itself — for example, if you were to present the algorithm itself to stakeholders, they might get lost, because even a simplified approach is still difficult to understand

Here are 3 examples of how you could handle those same situations with non-deep learning algorithms:
  • When you want to explain your top features, you can easily use SHAP libraries; with CatBoost, say, once your model is fitted, you can call feat = model.get_feature_importance() and then use summary_plot() to rank the features by feature name, giving you a clean plot to present to stakeholders (and yourself, for that matter); a minimal sketch follows this list


Example of ranked SHAP output from a non-deep learning model [3].
  • As a solution, some other algorithms make it easy to tune your hyperparameters with a randomized grid search or a more structured, set grid search method. There are even some algorithms that tune themselves, so you do not have to worry about complicated tuning
  • Explaining how other algorithms work can be a lot easier. With decision trees, for example, you can show a simple yes-or-no, 0/1 chart of the features that lead to a prediction: yes it is raining, plus yes it is winter, yields yes it is going to be cold (see the toy sketch further below)
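Here is a minimal sketch of that SHAP workflow, assuming a pandas DataFrame X of features and a target y already exist (CatBoost can return SHAP values through get_feature_importance with type="ShapValues"):

```python
# Minimal sketch: ranked SHAP summary plot from a fitted CatBoost model.
# Assumes X (a pandas DataFrame of features) and y (the target) exist.
import shap
from catboost import CatBoostRegressor, Pool

model = CatBoostRegressor(verbose=0)
model.fit(X, y)

# CatBoost can compute SHAP values directly; the last column holds the
# expected (base) value, so it is dropped before plotting.
shap_values = model.get_feature_importance(Pool(X, y), type="ShapValues")
shap.summary_plot(shap_values[:, :-1], X)
```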

Overall, deep learning algorithms are useful and powerful, so there is definitely a time and place for them, but there are other algorithms you can use instead, as we will discuss below.
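Before moving on, here is a toy sketch of the decision-tree example above; the data is invented purely for illustration:

```python
# Toy illustration: a decision tree whose rules can be printed and read
# directly. Features are yes/no flags; the data is made up.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[1, 1], [1, 0], [0, 1], [0, 0]]  # [is_raining, is_winter]
y = [1, 0, 1, 0]                      # 1 = cold, 0 = not cold

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=["is_raining", "is_winter"]))
```

The printed rules read like the yes/no chart described above, something a stakeholder can follow line by line.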

When You Can Use Other Algorithms



Photo by Luca Bravo on Unsplash [4].

To be frank, there are a few go-to algorithms that can give you a great model with great results rather quickly. Some of these include Linear Regression, Decision Trees, Random Forest, XGBoost, and CatBoost. These are simpler alternatives.

Here are examples of why you would want to use a non-deep learning algorithm, because you have so many other, simpler, non-deep learning options:
  • They can be easier and faster to set up. For example, deep learning can require you to add sequential, dense layers to your model and compile it, which is more complex and takes longer than simply creating a regressor or classifier and fitting it with non-deep learning algorithms (see the sketch after this list)
  • I personally run into more errors with this more complex deep learning code, and the documentation on how to fix them can be confusing or outdated and no longer applicable; with an algorithm like Random Forest instead, there is much more documentation on errors, and it is easier to understand
  • Training a deep learning algorithm may not always be complicated, but when predicting from an endpoint, it can be confusing how to feed in values to predict on, whereas with some models you can simply pass the values as an encoded list of ordered values
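As a rough illustration of that setup difference, here is a side-by-side sketch, assuming X_train and y_train already exist (the layer sizes and epochs are arbitrary):

```python
# Deep learning setup: define layers, compile, then fit (Keras shown here).
from tensorflow import keras

net = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
net.compile(optimizer="adam", loss="mse")
net.fit(X_train, y_train, epochs=10, batch_size=32)

# Non-deep-learning alternative: one object, one fit call.
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor().fit(X_train, y_train)
```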

I would say that you can of course try out deep learning algorithms, but before you do, it might be best to start with a simpler solution. It can depend on things like how often you will train and make predictions, or whether it is a one-off task. There are some other reasons why you would not want to use a deep learning algorithm, like when you have a small dataset and a small budget, as we will discuss below.

When You Have a Small Dataset and Budget



Photo by Hello I’m Nik on Unsplash [5].

Oftentimes, you may be working as a data scientist at a smaller company, or perhaps at a startup. In these cases, you would not have much data, and you might not have a big budget. You would, therefore, try to avoid the use of deep learning algorithms. If you have a small dataset of just a few thousand rows and a few features, you could simply run an alternative model locally instead, rather than spending a lot of money serving a model frequently.

Here is when you should second-guess using a deep learning algorithm based on costs and data availability:
  • Small data availability is usually the case for a lot of companies (but not always), and deep learning performs better with large amounts of data
  • You might be performing a one-off task, where the model predicts only one time and you can run it locally for free (not all models will be running in production frequently), like a simple Decision Tree Classifier; it might not be worth investing time in a deep learning model
  • Your company is interested in data science applications but wants to keep the budget small; rather than performing costly executions with a deep learning model, you can use a tree-based model with early stopping rounds to prevent overfitting, shorten training time, and ultimately reduce costs (see the sketch after this list)
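Here is a minimal sketch of that early-stopping setup, using CatBoost as one example and assuming train and validation splits already exist:

```python
# Minimal sketch: tree-based model with early stopping to cap training
# time and cost. Assumes X_train, y_train, X_val, y_val already exist.
from catboost import CatBoostRegressor

model = CatBoostRegressor(iterations=1000, verbose=0)
model.fit(
    X_train, y_train,
    eval_set=(X_val, y_val),
    early_stopping_rounds=50,  # stop once the validation score stops improving
)
```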

There have been times when I brought up deep learning and it was shot down for a variety of reasons, usually the ones above. But I do not want to dissuade you from using deep learning completely, as it is something you will likely use at some point in your career, perhaps frequently or even primarily, depending on the circumstances and where you are working.

Summary


Overall, before you dive deep into deep learning, realize that there are times when you should avoid using it for a variety of reasons. There are, of course, more reasons for avoiding it, but there are also reasons for using it. It is ultimately up to you to weigh the pros and cons of deep learning yourself.

Here are three times/reasons when you should not use deep learning:
* When You Want to Easily Explain
* When You Can Use Other Algorithms
* When You Have a Small Dataset and Budget

I hope you found my article both interesting and useful. Please feel free to comment down below if you agree or disagree with these reasons for avoiding deep learning. Why or why not? What other reasons do you think you should avoid using deep learning as a data scientist? These points can certainly be clarified even further, but I hope I was able to shed some light on deep learning. Thank you for reading!

I am not affiliated with any of these companies.

Please feel free to check out my profile, Matt Przybyla, and my other articles, and reach out to me on LinkedIn if you have any questions or comments.



References


[1] Photo by Nadine Shaabana on Unsplash, (2018)

[2] Photo by Malte Helmhold on Unsplash, (2021)

[3] M. Przybyla, Example of ranked SHAP output from a non-deep learning model, (2021)

[4] Photo by Luca Bravo on Unsplash, (2016)

[5] Photo by Hello I’m Nik on Unsplash, (2021)
