
Top 10 Coding Mistakes Made by Data Scientists

Tags: Data Science, Data Scientist, Mistakes, Programming

Here is a list of 10 common mistakes frequently seen by a senior data scientist who is ranked in the top 1% on Stack Overflow for Python coding and who works with a lot of (junior) data scientists.


By Norman Niemer, Chief Data Scientist


A data scientist is a "person who is better at statistics than any software engineer and better at software
engineering than any statistician". Many data scientists have a statistics background and little experience with
software engineering. I'm a senior data scientist ranked in the top 1% on Stack Overflow for Python coding, and I work with a lot of (junior) data scientists. Here is my list of 10 common mistakes I frequently see.

1. Don't share data referenced in code

Data science needs code AND data. So for someone else to be able to reproduce your results, they need to
have access to the data. Seems basic but a lot of people forget to share the data with their code.

import pandas as pd
df = pd.read_csv('file-i-dont-have.csv')  # fails
do_stuff(df)

Solution: Use d6tpipe to share data files with your code, upload to S3/web/Google Drive etc., or save to a
database so the recipient can retrieve the files (but don't add them to git, see below).
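For instance, a minimal sketch of the upload approach, assuming the file has already been put somewhere the recipient can reach (the URL below is hypothetical):

import pandas as pd

# hypothetical shared location; replace with your S3 bucket / shared drive link
DATA_URL = 'https://example.com/shared/file-i-dont-have.csv'

df = pd.read_csv(DATA_URL)  # anyone running the code fetches the same file
do_stuff(df)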

2. Hardcode inaccessible paths

Similar to mistake 1, if you hardcode paths others don't have access to, they can't run your code and have to
look in lots of places to manually change paths. Booo!


import pandas as pd
df = pd.read_csv('/path/i-dont/have/data.csv') # fails
do_stuff(df)

# or
import os
os.chdir('c:\\Users\\yourname\\desktop\\python') # fails

Solution: Use relative paths, global path config variables or d6tpipe to make your data easily accessible.
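A minimal sketch of the relative-path option, assuming the script lives inside the project and the data sits in a data/ subfolder (folder names are illustrative):

from pathlib import Path
import pandas as pd

# resolve paths relative to this file, not to someone's home directory
PROJECT_DIR = Path(__file__).resolve().parent
df = pd.read_csv(PROJECT_DIR / 'data' / 'data.csv')
do_stuff(df)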

3. Mix data with code

Since data science code needs data, why not dump it in the same directory? And while you are at it, save
images, reports and other junk there too. Yikes, what a mess!

├── data.csv
├── ingest.py
├── other-data.csv
├── output.png
├── report.html
└── run.py

Solution: Organize your directory into categories, like data, reports, code etc. See Cookiecutter Data Science
or d6tflow project templates (see #5) and use tools mentioned in #1 to store and share data.
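For example, the same project split into folders (names are illustrative; Cookiecutter Data Science uses a similar layout):

├── data
│   ├── data.csv
│   └── other-data.csv
├── reports
│   ├── output.png
│   └── report.html
└── src
    ├── ingest.py
    └── run.py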

4. Git commit data with source code

Most people now version control their code (if you don't that's another mistake!! See git). In an attempt to
share data, it might be tempting to add data files to version control. That's ok for very small files but git is not
optimized for data, especially large files.

git add data.csv

Solution: Use tools mentioned in #1 to store and share data. If you really want to version control data,
see d6tpipe, DVC and Git Large File Storage.
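For example, you can keep data out of regular commits with a .gitignore entry and track files that really must be versioned with Git LFS instead (the patterns below are illustrative):

# .gitignore
data/
*.csv
*.pkl

# if data must be versioned, let Git LFS handle the large files
git lfs install
git lfs track "*.csv"
git add .gitattributes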


5. Write functions instead of DAGs

Enough about data, let's talk about the actual code! Since one of the first things you learn when you learn to
code is functions, data science code is mostly organized as a series of functions that are run linearly. That
causes several problems, see 4 Reasons Why Your Machine Learning Code is Probably Bad.

import pandas as pd
import sklearn.svm

def process_data(data, parameter):
    data = do_stuff(data)
    data.to_pickle('data.pkl')

data = pd.read_csv('data.csv')
process_data(data)
df_train = pd.read_pickle('data.pkl')
model = sklearn.svm.SVC()
model.fit(df_train.iloc[:,:-1], df_train['y'])

Solution: Instead of linearly chaining functions, data science code is better written as a set of tasks with
dependencies between them. Use d6tflow or airflow.
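As a rough sketch of the task-based style, here is the pipeline from above rewritten with d6tflow's luigi-style tasks (exact class and method names may differ between d6tflow versions):

import d6tflow
import pandas as pd

class TaskGetData(d6tflow.tasks.TaskPqPandas):  # output saved as parquet
    def run(self):
        data = pd.read_csv('data.csv')
        self.save(data)

class TaskProcess(d6tflow.tasks.TaskPqPandas):
    def requires(self):
        return TaskGetData()  # explicit dependency instead of implicit run order
    def run(self):
        data = self.input().load()
        data = do_stuff(data)
        self.save(data)

d6tflow.run(TaskProcess())  # runs TaskGetData first, then TaskProcess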

6. Write for loops

Like functions, for loops are the first thing you learn when you learn to code. Easy to understand, but they are
slow and excessively wordy, typically indicating you are unaware of vectorized alternatives.

import math

x = range(10)
avg = sum(x)/len(x); std = math.sqrt(sum((i-avg)**2 for i in x)/len(x))
zscore = [(i-avg)/std for i in x]
# should be: scipy.stats.zscore(x)

# or
groupavg = []
for i in df['g'].unique():
    dfg = df[df['g']==i]
    groupavg.append(dfg.mean())
# should be: df.groupby('g').mean()

Solution: Numpy, scipy and pandas have vectorized functions for most things that you think might require for
loops.
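A quick self-contained version of the vectorized alternatives named in the comments above:

import numpy as np
import pandas as pd
import scipy.stats

x = np.arange(10)
zscore = scipy.stats.zscore(x)  # replaces the manual mean/std loop

df = pd.DataFrame({'g': ['a', 'a', 'b'], 'value': [1.0, 2.0, 3.0]})
groupavg = df.groupby('g').mean()  # replaces the loop over unique groups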

7. Don't write unit tests


As data, parameters or user input change, your code might break, sometimes without you noticing. That can
lead to bad output and if someone makes decisions based on your output, bad data will lead to bad decisions!

Solution: Use assert statements to check data quality. pandas has equality tests, d6tstack has checks for
data ingestion and d6tjoin has checks for data joins. Example data checks:

assert df['id'].unique().shape[0] == len(ids)  # have data for all ids?
assert (df.isna().mean() < 0.9).all()  # catch columns that are mostly missing
assert df.groupby(['g','date']).size().max() == 1  # no duplicate values per group/date?
assert d6tjoin.utils.PreJoin([df1,df2],['id','date']).is_all_matched()  # all ids matched?
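The pandas equality tests mentioned above also fit naturally into unit tests, for example with pytest (the tests below assume a version of process_data that returns a DataFrame; names are illustrative):

import pandas as pd
import pandas.testing as pdt

def test_process_data_keeps_all_ids():
    df_in = pd.DataFrame({'id': [1, 2], 'value': [0.1, 0.2]})
    df_out = process_data(df_in, parameter=1)
    assert set(df_out['id']) == set(df_in['id'])  # no ids silently dropped

def test_process_data_is_deterministic():
    df_in = pd.DataFrame({'id': [1, 2], 'value': [0.1, 0.2]})
    pdt.assert_frame_equal(process_data(df_in, 1), process_data(df_in, 1))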

8. Don't document code

I get it, you're in a hurry to produce some analysis. You hack things together to get results to your client or
boss. Then a week later they come back and say "can you change xyz" or "can you update this please". You
look at your code and can't remember why you did what you did. And now imagine someone else has to run
it.

def some_complicated_function(data):
    data = data[data['column']!='wrong']
    data = data.groupby('date').apply(lambda x: complicated_stuff(x))
    data = data[data['value']<0.9]
    return data

Solution: Take the extra time, even if it's after you've delivered the analysis, to document what you did. You
will thank yourself, and others will thank you even more! You'll look like a pro!
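For example, the function above with a docstring and comments (the explanations are illustrative, since only the author knows the real reasons behind each step):

def some_complicated_function(data):
    """Clean raw records and aggregate them by date.

    Drops rows flagged as 'wrong', applies complicated_stuff per date,
    and removes values above the 0.9 outlier threshold.
    """
    data = data[data['column'] != 'wrong']  # remove known-bad records
    data = data.groupby('date').apply(lambda x: complicated_stuff(x))
    data = data[data['value'] < 0.9]  # drop outliers above agreed threshold
    return data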

9. Save data as csv or pickle

Back to data, it's DATA science after all. Just like functions and for loops, CSVs and pickle files are commonly
used but they are actually not very good. CSVs don't include a schema, so everyone has to parse numbers and
dates again. Pickles solve that but only work in Python and are not compressed. Neither is a good format for
storing large datasets.

import pandas as pd

def process_data(data, parameter):
    data = do_stuff(data)
    data.to_pickle('data.pkl')

data = pd.read_csv('data.csv')
process_data(data)
df_train = pd.read_pickle('data.pkl')

Solution: Use parquet or other binary data formats with data schemas, ideally ones that compress
data. d6tflow automatically saves the data output of tasks as parquet so you don't have to deal with it.
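A minimal sketch using parquet instead of CSV or pickle (writing parquet requires an engine such as pyarrow or fastparquet to be installed):

import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2019-01-01']), 'value': [1.0]})
df.to_parquet('data.parquet')          # schema and dtypes are stored with the data
df2 = pd.read_parquet('data.parquet')  # dates come back as dates, not strings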

10. Use jupyter notebooks

Let's conclude with a controversial one: jupyter notebooks are as common as CSVs. A lot of people use them.
That doesn't make them good. Jupyter notebooks promote a lot of the bad software engineering habits mentioned
above, notably:

1. You are tempted to dump all files into one directory
2. You write code that runs top to bottom instead of as a DAG
3. You don't modularize your code
4. They are difficult to debug
5. Code and output get mixed in one file
6. They don't version control well

It feels easy to get started but scales poorly.

Solution: Use PyCharm and/or Spyder.

Bio: Norman Niemer is the Chief Data Scientist at a large asset manager where he delivers data-driven
investment insights. He holds an MS in Financial Engineering from Columbia University and a BS in Banking
and Finance from Cass Business School (London).

Original. Reposted with permission.

Related:

4 Reasons Why Your Machine Learning Code is Probably Bad
The Machine Learning Project Checklist
Data Science Project Flow for Startups



