What is pandas?

Pandas is an open-source Python library that lets you perform data manipulation, analysis, and cleaning. It is built on top of NumPy, and it is one of the most important libraries for data science.

According to Wikipedia, the name pandas is "derived from the term 'panel data', an econometrics term for data sets that include observations over multiple time periods for the same individuals."

Why Pandas?

The following are the advantages of pandas for a data scientist:

  • Easy handling of missing data.
  • Efficient tools for slicing and data wrangling.
  • Convenient methods to merge, concatenate, or reshape data.
  • A powerful built-in time series toolkit.

How to install Pandas?

To install pandas, go to the command line/terminal and type "pip install pandas", or, if you have Anaconda installed on your system, type "conda install pandas". Once the installation is complete, go to your IDE (e.g. Jupyter) and simply import it by typing "import pandas as pd".

In the next chapter we will learn about the pandas Series.


The first main data type we will learn about for pandas is the Series data type.

A Series is a one-dimensional data structure. It is very similar to a NumPy array (in fact, it is built on top of the NumPy array object). What differentiates a Series from a NumPy array is that a Series can have axis labels, meaning it can be indexed by a label instead of just a number location. It also doesn't need to hold numeric data; it can hold any arbitrary Python object.

For example, a Series might hold the values: 10, 23, 56, 17, 52, 61, 73, 90, 26, 72.

Important points to remember about a pandas Series:

  • Homogeneous data
  • Size-immutable
  • Values of data are mutable

Let's import pandas and explore the Series object with the help of Python.
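Here is a minimal sketch of creating Series objects, illustrating the labeled-index behaviour described above (the values and labels are made up for illustration):

```python
import pandas as pd

# A Series from a plain Python list; pandas assigns a default integer index
s = pd.Series([10, 23, 56, 17, 52])

# The same data with custom axis labels; values can be looked up by label
labeled = pd.Series([10, 23, 56, 17, 52], index=['a', 'b', 'c', 'd', 'e'])
print(labeled['c'])  # 56

# A Series is not limited to numbers; it can hold arbitrary Python objects
mixed = pd.Series(['text', 3.14, len])
print(mixed.dtype)  # object
```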


A DataFrame is a standard way to store data, with the data aligned in a tabular fashion in rows and columns.

DataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a bunch of Series objects put together to share the same index. For example, we might create a DataFrame holding students' data, with one row per student.

A pandas DataFrame can be created using the following constructor:

pandas.DataFrame(data, index, columns, dtype, copy)

  • data – takes various forms such as ndarray, Series, map, list, dict, constants, and another DataFrame.
  • index – the row labels to be used for the resulting frame. Optional; defaults to np.arange(n) if no index is passed.
  • columns – the column labels. Optional; defaults to np.arange(n) if no column labels are passed.
  • dtype – the data type of each column.
  • copy – copies the input data if set to True; the default is False.
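The constructor parameters above can be sketched with a small example (the student names and marks are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical student marks used for illustration
data = np.array([[85, 90], [78, 88], [92, 95]])

# index supplies the row labels, columns the column labels,
# and dtype coerces every column to float
df = pd.DataFrame(data,
                  index=['Asha', 'Ravi', 'Meena'],
                  columns=['maths', 'science'],
                  dtype=float)
print(df)
```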

Creating a DataFrame:

A pandas DataFrame can be created from various inputs such as a list, dict, Series, NumPy ndarray, or another DataFrame.

Let's explore the DataFrame with Python in a Jupyter notebook.

Pandas – Data Input and Output

To do data analysis successfully, a data analyst should know how to read and write different file formats such as CSV, XLS, HTML, JSON, etc.

Pandas has reader and writer functions: the reader functions allow you to read different data formats, while the writer functions enable you to save data in a particular format.

Common reader/writer pairs include pd.read_csv / DataFrame.to_csv, pd.read_excel / DataFrame.to_excel, pd.read_json / DataFrame.to_json, and pd.read_html / DataFrame.to_html.

The following notebook is the reference code for input and output; pandas can read a variety of file types using its pd.read_* methods. Let's take a look at the most common data types:
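A minimal round-trip through CSV looks like this (the file name and data are hypothetical):

```python
import pandas as pd

# Hypothetical data and file name used for illustration
df = pd.DataFrame({'city': ['Pune', 'Delhi'], 'temp': [31, 35]})

# Writer: save the DataFrame as CSV (index=False omits the row labels)
df.to_csv('weather.csv', index=False)

# Reader: load it back into a new DataFrame
loaded = pd.read_csv('weather.csv')
print(loaded)

# Other reader/writer pairs follow the same pattern, e.g.
# pd.read_excel / DataFrame.to_excel and pd.read_json / DataFrame.to_json
```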

Pandas – Missing Data

In real-world scenarios, missing data is a big problem in data analysis. In machine learning and data mining, accuracy gets compromised by the poor data quality caused by missing values.

Missing data is represented as NA (Not Available) or NaN (Not a Number) values in pandas.

Why is data missing?

Suppose you have surveyed different people, asking for their name, address, phone number, and income, but some users don't want to share their address or income; this is one way values end up missing from a dataset.

Finding missing values

To check for missing values in a pandas DataFrame we use the functions isnull() and notnull(). Both functions check whether a value is NaN or not. These functions can also be used on a pandas Series to find null values.
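A small sketch of both functions, using hypothetical survey data with one missing income:

```python
import numpy as np
import pandas as pd

# Hypothetical survey data; Ravi's income is missing
df = pd.DataFrame({'name': ['Asha', 'Ravi', 'Meena'],
                   'income': [52000.0, np.nan, 61000.0]})

# isnull() returns True where a value is NaN
print(df['income'].isnull())   # False, True, False

# notnull() is the complement
print(df['income'].notnull())  # True, False, True

# A common idiom: count the missing values in each column
print(df.isnull().sum())
```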

Cleaning / Filling Missing values:

The following are ways to treat missing values.

  1. Filling missing values using fillna() or replace():

To fill null values in a dataset we use fillna() and replace(). To do this, we can call the fillna() function on a DataFrame column, passing the column's mean() or median() as the argument.

#Impute with mean on column_1
df['column_1'] = df['column_1'].fillna(df['column_1'].mean())

#Impute with median on column_1
df['column_1'] = df['column_1'].fillna(df['column_1'].median())

Besides mean and median, imputing missing data with 0 can also be a good idea in some cases.

#Impute with value 0 on column_1
df['column_1'] = df['column_1'].fillna(0)

  2. Dropping missing values using dropna():

This is not always a good way to handle missing values. If your data has a large number of missing values, you may not want to use this method, because you might lose important information along with them.

In order to drop null values from a DataFrame, we use the dropna() function. This function drops rows/columns of the dataset with null values in different ways.

#Drop rows with null values
df = df.dropna(axis=0)

#Drop rows where column_1 is null
df = df.dropna(subset=['column_1'])

The axis parameter determines the dimension that the function will act on.
axis=0 removes all rows that contain null values.
axis=1 removes all columns that contain null values.

Let's understand the concept with Python.
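The fillna() and dropna() behaviour described above can be sketched end to end on a tiny hypothetical table:

```python
import numpy as np
import pandas as pd

# Hypothetical data: both columns contain nulls
df = pd.DataFrame({'column_1': [10.0, np.nan, 30.0, np.nan],
                   'column_2': [1.0, 2.0, np.nan, 4.0]})

# Fill the nulls in column_1 with the column mean ((10 + 30) / 2 = 20)
filled = df.copy()
filled['column_1'] = filled['column_1'].fillna(filled['column_1'].mean())

# axis=0 drops every row that contains at least one null
rows_dropped = df.dropna(axis=0)

# axis=1 drops every column that contains at least one null
cols_dropped = df.dropna(axis=1)

print(rows_dropped.shape)  # (1, 2) -- only the first row has no nulls
print(cols_dropped.shape)  # (4, 0) -- both columns contain nulls
```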