### Seaborn-Categorical Data Plots

Now let’s discuss using seaborn to plot categorical data! There are a few main plot types for this:

- factorplot
- boxplot
- violinplot
- stripplot
- swarmplot
- barplot
- countplot

Let’s go through examples of each!
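As a quick taste before the detailed examples, here is a minimal sketch of two of these plot types on a small made-up dataset (the `day`/`bill` columns below are hypothetical, standing in for something like seaborn's built-in "tips" data). Note that in seaborn 0.9 and later, `factorplot` was renamed `catplot`.

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; works without a display
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Small hypothetical dataset of restaurant bills per day
df = pd.DataFrame({
    "day":  ["Thu", "Thu", "Fri", "Fri", "Sat", "Sat", "Sun", "Sun"],
    "bill": [12.5, 20.0, 15.0, 22.5, 30.0, 18.0, 25.0, 27.5],
})

fig, (ax1, ax2) = plt.subplots(1, 2)
sns.boxplot(x="day", y="bill", data=df, ax=ax1)  # distribution per category
sns.countplot(x="day", data=df, ax=ax2)          # observation count per category
```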

**Python** is very easy to learn and use. For many people, myself included, Python is an easy language to fall in love with. Since its first appearance in 1991, Python's popularity has grown steadily. Among interpreted languages, Python is distinguished by its large and active scientific computing community, and its adoption for scientific computing in both industry applications and academic research has increased significantly since the early 2000s.

For data analysis, exploratory work, and data visualization, Python compares favorably with many domain-specific open source and commercial programming languages and tools, such as R, MATLAB, SAS, and Stata. In recent years, Python's improved library support (primarily pandas) has made it a strong alternative for data manipulation tasks. Combined with Python's strength in general-purpose programming, this makes it an excellent choice as a single language for building data-centric applications.

So, in short, we should choose Python for data analysis for the following reasons:

- It is a very simple language to understand.
- It is open source.
- It has strong built-in data science libraries.
- Beyond its long-standing use in web development, Python's adoption is only going to grow as AI/ML projects become more mainstream with global businesses.

Python is currently one of the most in-demand languages in the industry.

To successfully create and run code, we will need an environment set up with both general-purpose Python and the special packages required for data science.

In this tutorial we will use Python 3, because Python 2 is no longer supported after 2020 and Python 3 has been around since 2008. So if you are new to Python, it is definitely worth learning Python 3 rather than the old Python 2.

Anaconda is a package manager, an environment manager, a Python/R data science distribution, and a collection of over 1,500 open source packages. Anaconda is free and easy to install, and it offers free community support too.

To download Anaconda, go to https://www.anaconda.com/distribution/

Over 250 packages are automatically installed with Anaconda. You can also install other packages using the pip install command.

If you need an installation guide, you can find one on the Anaconda website: https://docs.anaconda.com/anaconda/install/

From the Start menu, click the Anaconda Navigator desktop app.

- On Navigator’s Home tab, in the Applications panel on the right, scroll to the Jupyter Notebook tile and click the Install button to install Jupyter Notebook.
- Launch Jupyter Notebook by clicking its Launch button. This will open a new browser window (or a new tab) showing the Notebook Dashboard.

- At the top of the right-hand side, there is a drop-down menu labeled “New”. Create a new Notebook with the Python version you installed.
- Rename your Notebook. Either click on the current name and edit it or find rename under File in the top menu bar. You can name it to whatever you’d like, but for this example we’ll use MyFirstAnacondaNotebook.
- In the first line of the Notebook, type or copy/paste print("Hello Anaconda").
- Save your Notebook by either clicking the save and checkpoint icon or select File – Save and Checkpoint in the top menu.
- Select the cell and press Ctrl+Enter or Shift+Enter to run it.
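For reference, the first cell described above contains nothing more than this:

```python
# Contents of the first notebook cell
greeting = "Hello Anaconda"
print(greeting)
```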

NumPy is the most basic and a powerful package for working with data in Python. It stands for ‘Numerical Python’. It is a library consisting of multidimensional array objects and a collection of routines for processing arrays. It contains tools and techniques that can be used to solve mathematical models of problems in science and engineering on a computer.

If you are going to work on data analysis or machine learning projects, then you should have a solid understanding of NumPy, because other packages for data analysis (like pandas) are built on top of NumPy, and the scikit-learn package used to build machine learning applications works heavily with NumPy as well.

A NumPy array is, at its core, a block of memory plus the metadata needed to interpret it: a memory address (data pointer), a data type, a shape, and strides.

- The data pointer indicates the memory address of the first byte in the array.
- The data type or dtype pointer describes the kind of elements that are contained within the array.
- The shape indicates the number of elements along each dimension of the array.
- The strides are the numbers of bytes that should be skipped in memory to go to the next element along each dimension. If your strides are (10, 1), you need to move one byte to get to the next column and 10 bytes to get to the next row.

So, in short, an array contains information about the raw data, how to locate an element, and how to interpret it.
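These four pieces of metadata are all visible as attributes of the array object. A minimal sketch, using a 4×10 array of 1-byte integers so that the strides come out to the (10, 1) example described above:

```python
import numpy as np

# Each row holds 10 one-byte elements, so moving to the next row skips
# 10 bytes and moving to the next column skips 1 byte: strides == (10, 1).
a = np.zeros((4, 10), dtype=np.int8)

print(a.data)     # memory buffer (wraps the data pointer)
print(a.dtype)    # int8 -- how to interpret each element
print(a.shape)    # (4, 10)
print(a.strides)  # (10, 1)
```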

Using NumPy, a developer can perform the following operations:

- Mathematical and logical operations on arrays.
- Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation.
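Both kinds of operations fit in a few lines. A small sketch, solving a hypothetical 2×2 linear system and drawing some random integers:

```python
import numpy as np

# Linear algebra: solve the system  2x + y = 5,  x + 3y = 10
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)  # -> [1. 3.]
print(x)

# Random number generation with a seeded generator (reproducible)
rng = np.random.default_rng(seed=42)
print(rng.integers(0, 10, size=5))  # five random ints in [0, 10)
```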

It is highly recommended that you install Python using the Anaconda distribution, to make sure all underlying dependencies (such as linear algebra libraries) sync up via a conda install. If you have Anaconda, install NumPy by going to your terminal or command prompt and typing:

conda install numpy

or

pip install numpy

If you do not have Anaconda and cannot install it, please refer to the following URL: http://www.datasciencelovers.com/python-for-data-science/python-environment-setup/

NumPy has many built-in functions and capabilities. We won’t cover them all but instead we will focus on some of the most important aspects of NumPy such as vectors, arrays, matrices, and number generation. Let’s start by discussing arrays.

NumPy arrays are the main way we will use NumPy throughout the course. NumPy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column).
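The vector/matrix distinction is just a matter of dimensionality, which you can check via `.shape`. A quick sketch:

```python
import numpy as np

vector = np.array([1, 2, 3])        # 1-d array: a vector
matrix = np.array([[1, 2, 3],
                   [4, 5, 6]])      # 2-d array: a matrix
row = np.array([[1, 2, 3]])         # still 2-d: a matrix with a single row

print(vector.shape)  # (3,)
print(matrix.shape)  # (2, 3)
print(row.shape)     # (1, 3)
```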

To know more about NumPy functions, check the official documentation: https://docs.scipy.org/doc/numpy/user/quickstart.html

Let’s begin by exploring how to create NumPy arrays. Please go through the Jupyter notebook code; I have explained it with comments, which should help you understand the important functions of NumPy.

Indexing and slicing are important operations to be familiar with when working with NumPy arrays. You can use them when you would like to work with a subset of an array. This tutorial will take you through indexing and slicing on multi-dimensional arrays.
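Before the notebook, here is a minimal sketch of the basic indexing and slicing patterns on a small 2-d array:

```python
import numpy as np

arr = np.arange(1, 13).reshape(3, 4)  # 3x4 array holding 1..12

print(arr[0, 2])      # single element: row 0, column 2 -> 3
print(arr[1])         # whole second row -> [5 6 7 8]
print(arr[:, 1])      # whole second column -> [ 2  6 10]
print(arr[0:2, 1:3])  # 2x2 sub-array from the top-left region
```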

Please refer to the following .ipynb file for the NumPy implementation in Python.

In this chapter we are going to see the various operations we can perform on NumPy arrays, such as addition, subtraction, multiplication, and division of two matrices.

Please go through the .ipynb below; it will give you a better idea of how to perform these operations in Python.
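As a preview, arithmetic operators on NumPy arrays work element-wise, while `@` performs true matrix multiplication:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a + b)  # element-wise addition
print(a - b)  # element-wise subtraction
print(a * b)  # element-wise multiplication
print(a / b)  # element-wise division
print(a @ b)  # matrix multiplication -> [[19 22] [43 50]]
```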

Pandas is an open source Python library which allows you to perform data manipulation, analysis, and cleaning. It is built on top of NumPy, and it is one of the most important libraries for data science.

According to Wikipedia “Pandas is derived from the term “panel data”, an econometrics term for data sets that include observations over multiple time periods for the same individuals.”

Following are the advantages of pandas for data scientists:

- It easily handles missing data.
- It provides an efficient way to slice and wrangle data.
- It is helpful for merging, concatenating, or reshaping data.
- It includes a powerful time series tool to work with.

To install pandas, go to the command line/terminal and type “**pip install pandas**”, or if you have Anaconda installed on the system, just type “**conda install pandas**”. Once the installation is complete, go to your IDE (Jupyter) and simply import it by typing **“import pandas as pd”.**

In the next chapter we will learn about the pandas Series.

The first main data type we will learn about for pandas is the Series data type.

A Series is a one-dimensional data structure. It is very similar to a NumPy array (in fact it is built on top of the NumPy array object). What differentiates a Series from a NumPy array is that a Series can have axis labels, meaning it can be indexed by a label instead of just a number location. It also doesn’t need to hold numeric data; it can hold any arbitrary Python object.

| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 10 | 23 | 56 | 17 | 52 | 61 | 73 | 90 | 26 | 72 |

So the important points to remember for a pandas Series are:

- Homogeneous data
- Size Immutable
- Values of Data Mutable

Let’s import Pandas and explore the Series object with the help of python.
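A minimal sketch, building a Series from the ten values shown above but giving them string labels instead of the default 0–9 positions:

```python
import pandas as pd

values = [10, 23, 56, 17, 52, 61, 73, 90, 26, 72]
labels = list("abcdefghij")  # hypothetical labels for illustration
s = pd.Series(values, index=labels)

print(s["c"])     # index by label -> 56
print(s[s > 50])  # boolean filtering works too
```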

A DataFrame is a standard way to store data, with the data aligned in a tabular fashion in rows and columns.

DataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a bunch of Series objects put together so that they share the same index. For example, we might create a DataFrame of students’ data, with one row per student and one column per attribute.

A pandas DataFrame can be created using the following constructor:

pandas.DataFrame( data, index, columns, dtype, copy)

- **data** – takes various forms such as ndarray, Series, map, list, dict, constants, and also another DataFrame.
- **index** – the row labels to use for the resulting frame. Optional; defaults to np.arange(n) if no index is passed.
- **columns** – the column labels. Optional; defaults to np.arange(n) if no columns are passed.
- **dtype** – the data type of each column.
- **copy** – whether to copy the input data; defaults to False.

A pandas DataFrame can be created from various inputs, such as a list, dict, Series, NumPy ndarray, or another DataFrame.

Let’s explore DataFrame with python in jupyter notebook.
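As a first sketch, here is the constructor in action on hypothetical student data (the names and marks below are made up for illustration):

```python
import pandas as pd

# Hypothetical students' data, built from a dict of columns
data = {
    "name":  ["Asha", "Ben", "Chitra"],
    "marks": [88, 74, 91],
}
df = pd.DataFrame(data, index=["s1", "s2", "s3"])

print(df)
print(df["marks"].mean())  # each column behaves like a Series
```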