Data Structuring & Cleaning with Mike



What are the steps to clean your data in a data analysis project? This article combines a few sources of information, which is why I called it “with Mike”. I have another post called EDA Cleaning with Pandas, where I break EDA down into six main practices: discovering, structuring, cleaning, joining, validating and presenting.

Before you clean your data, you’ll do a few things first. You want to know the purpose of your project and who the stakeholders are. This article assumes you are using Python as your programming language; you might be using Jupyter Notebook as your programming environment. If you can get hold of a data dictionary, do so. It will be very helpful.

The links in the list below all point to code examples in Python, usually code that uses the pandas library. A combined sketch of the first several steps follows the list.

  1. Import the data (reading files)
  2. Initial Exploratory Data Analysis (EDA)
  3. Drop any Columns we Don’t Need
  4. Rename Columns as necessary (reorder if needed)
  5. Check the Data Types of the columns
  6. Check the numerical data ranges (describe) of the columns
  7. Uniqueness constraints (are there any duplicates?)
  8. Check Outliers (statistics and boxplots)
  9. Remove Bad Characters in Text Columns (strip leading and trailing spaces, remove non-alphanumeric characters)
  10. Explore the Dependent Variable
  11. Are the categorical columns consistent? (correct categories, correct spelling)
  12. Text length is within limits – consider empty strings and nulls
  13. Text data has consistent formatting (phone numbers, postal codes, etc.)
  14. Numeric Unit Uniformity (numbers are in same units – money, temperature etc.)
  15. Datetime Uniformity (mm-dd-yyyy or dd-mm-yyyy or yyyy-mm-dd)
  16. Crossfield Validation (check calculations in calculated columns) and/or create calculated columns
  17. Missing Data
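
To make the first several steps concrete, here is a minimal sketch in pandas. The file name sales.csv and every column name in it are hypothetical placeholders, not from this article; substitute your own.

```python
import pandas as pd
import matplotlib.pyplot as plt

# 1. Import the data (file name is a hypothetical placeholder)
df = pd.read_csv("sales.csv")

# 2. Initial EDA
print(df.shape)   # number of rows and columns
print(df.head())  # first five rows
df.info()         # column names, dtypes, non-null counts

# 3. Drop any columns we don't need (hypothetical names)
df = df.drop(columns=["internal_id", "notes"])

# 4. Rename columns as necessary, then reorder
df = df.rename(columns={"cust nm": "customer_name"})
df = df[["order_date", "customer_name", "amount"]]

# 5. Check the data types of the columns
print(df.dtypes)

# 6. Check the numerical data ranges
print(df.describe())

# 7. Uniqueness constraints: count duplicate rows
print(df.duplicated().sum())

# 8. Check outliers with statistics and a boxplot
print(df["amount"].quantile([0.25, 0.50, 0.75]))
df["amount"].plot(kind="box")
plt.show()
```

Running df.info() and df.describe() together gives a quick picture of types, null counts and value ranges before you make any cleaning decisions.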

From a Columns Perspective

Let’s reorganize the above list in a different way. Looking at data cleaning from a dataset (rows and columns) perspective may be a more systematic approach. Consider a single dataset, “table” or Excel worksheet. Most of the items above work with columns, but a few work with rows.

  1. Drop any Columns we don’t need
  2. Rename Columns as necessary
  3. Reorder the columns as needed
  4. Check the Data Types of the columns
  5. Check the numerical data ranges (describe) of the columns
  6. Check Outliers (statistics and boxplots)
  7. Remove Bad Characters in Text Columns (strip leading and trailing spaces, remove non-alphanumeric characters)
  8. Are the categorical columns consistent? (correct categories, correct spelling)
  9. Text length is within limits – consider empty strings and nulls
  10. Text data has consistent formatting (phone numbers, postal codes, etc.)
  11. Numeric Unit Uniformity (numbers are in same units – money, temperature etc.)
  12. Datetime Uniformity (mm-dd-yyyy or dd-mm-yyyy or yyyy-mm-dd)
  13. Crossfield Validation (check calculations in calculated columns) and/or create calculated columns
  14. Missing Data

Remove unnecessary columns from the dataset. Locate spelling errors in words and categories, and use string manipulation to fix them where possible. Locate outliers. The sketch below illustrates a few of these column-level fixes.
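
Here is a hedged sketch of several of the later checks; the step numbers in the comments refer to the numbered lists above, and the file and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical file and column names, continuing the earlier sketch
df = pd.read_csv("sales.csv")

# 9. Remove bad characters in text columns: strip leading/trailing
#    spaces, then drop non-alphanumeric characters (spaces kept)
df["customer_name"] = (
    df["customer_name"]
    .str.strip()
    .str.replace(r"[^A-Za-z0-9 ]", "", regex=True)
)

# 11. Categorical consistency: inspect the categories, then fix
print(df["region"].value_counts())
df["region"] = df["region"].str.lower().replace({"n.east": "northeast"})

# 15. Datetime uniformity: parse to one datetime dtype;
#     unparseable values become NaT for later inspection
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# 16. Crossfield validation: recompute a calculated column and
#     flag rows where the stored value disagrees
bad = df[(df["total"] - df["quantity"] * df["unit_price"]).abs() > 0.01]
print(len(bad), "rows fail the crossfield check")

# 17. Missing data: count per column, then fill or drop
print(df.isna().sum())
df["region"] = df["region"].fillna("unknown")
```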

From a Rows Perspective

With rows, we find missing data in one or more columns of one or more rows, and we check for duplicate rows. A short sketch follows the list.

  1. Uniqueness constraints (are there any duplicate rows?)
  2. Are there any missing rows? Do we have all the data?
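
Here is a short sketch of both row-level checks, assuming a hypothetical daily transactions file where we expect one record per day.

```python
import pandas as pd

# Hypothetical file of daily transactions
df = pd.read_csv("transactions.csv", parse_dates=["order_date"])

# 1. Uniqueness constraints: find and drop duplicate rows
print(df.duplicated().sum(), "duplicate rows")
df = df.drop_duplicates()

# 2. Missing rows: if we expect one record per day, compare the
#    dates we actually have against a complete daily date range
expected = pd.date_range(df["order_date"].min(),
                         df["order_date"].max(), freq="D")
missing = expected.difference(df["order_date"])
print("Missing dates:", list(missing))
```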

Data wrangling. What does this mean? Among other things, it includes sorting and re-ordering rows.
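
As a quick illustration, here is a minimal sorting sketch in pandas on an invented two-row DataFrame.

```python
import pandas as pd

# Invented frame standing in for the dataset above
df = pd.DataFrame({"order_date": pd.to_datetime(["2024-01-02", "2024-01-01"]),
                   "amount": [50.0, 75.0]})

# Sort rows by date, newest first
df = df.sort_values(by="order_date", ascending=False)

# Re-number the row index after sorting
df = df.reset_index(drop=True)
print(df)
```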

From a Multiple-Dataset (multiple table) Perspective

Here we’ll look at multiple ‘tables’, ‘datasets’ or ‘worksheets’. We can split or combine them. There are two general ways to combine two datasets. We can join them based on a key column; those familiar with SQL will recognize this as joining. We can also concatenate (stack) datasets one on top of the other, as long as the columns have the same data types; SQL users will recognize this as UNION and UNION ALL. One use case for concatenating is transactional data stored in monthly files that we need to combine into one dataset. A sketch of both approaches follows.
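
Here is a minimal sketch of both combining techniques; the DataFrames and key names are invented for illustration.

```python
import pandas as pd

# Joining on a key column, like a SQL JOIN (invented data)
orders = pd.DataFrame({"customer_id": [1, 2], "amount": [50.0, 75.0]})
customers = pd.DataFrame({"customer_id": [1, 2], "name": ["Ann", "Bo"]})
joined = orders.merge(customers, on="customer_id", how="left")

# Concatenating (stacking) monthly files, like SQL UNION ALL
jan = pd.DataFrame({"order_date": ["2024-01-05"], "amount": [20.0]})
feb = pd.DataFrame({"order_date": ["2024-02-09"], "amount": [35.0]})
combined = pd.concat([jan, feb], ignore_index=True)

# Dropping duplicate rows afterward approximates SQL UNION
unioned = combined.drop_duplicates()
print(joined, combined, unioned, sep="\n\n")
```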

