Banking Credit Card Spend Prediction and Identification of Spend Drivers

Business Problem:

A global bank would like to understand what factors drive credit card spend. The bank wants to use these insights to set credit limits. To address this problem, the bank surveyed 5,000 customers and collected the resulting data.

The objective of this case study is to understand what is driving total spend (primary card + secondary card) and, given those factors, to predict the credit limit for new applicants.

Data Availability:

  • Data for the case study are available in xlsx format.
  • The data have been provided for 5,000 customers.
  • A detailed data dictionary has been provided to aid understanding of the data.
  • The data are encoded in numerical format to reduce the file size; however, some of the variables are categorical. You can find the details in the data dictionary.

Let’s develop a machine learning model for further analysis.
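As a starting point, a minimal regression sketch is shown below. The file name credit_card_survey.xlsx and the spend columns cardspent and card2spent are placeholders, not confirmed names; substitute the actual file and column names from the data dictionary.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Placeholder file and column names; take the real ones from the data dictionary
df = pd.read_excel("credit_card_survey.xlsx")

# Target: total spend on the primary and secondary cards (placeholder columns)
y = df["cardspent"] + df["card2spent"]
X = df.drop(columns=["cardspent", "card2spent"]).select_dtypes("number")
X = X.fillna(X.median())

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print("R-squared on hold-out data:", r2_score(y_test, model.predict(X_test)))

# The coefficients suggest which factors drive spend
drivers = pd.Series(model.coef_, index=X.columns)
print(drivers.sort_values(key=abs, ascending=False).head(10))

A linear model is used here only because its coefficients are easy to read as spend drivers; any other regressor could be substituted once the drivers are understood.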

Store Sales Prediction – Forecasting

Business Context:

The objective is to predict store sales using historical markdown data. One challenge of modelling retail data is the need to make decisions based on limited history: if Christmas comes but once a year, so does the chance to see how strategic decisions impacted the bottom line.

Business Problem:

The company has provided historical sales data for 45 Walmart stores located in different regions. Each store contains a number of departments, and you are tasked with predicting the department-wide sales for each store.

In addition, Walmart runs several promotional markdown events throughout the year. These markdowns precede prominent holidays, the four largest of which are the Super Bowl, Labour Day, Thanksgiving, and Christmas. The weeks including these holidays are weighted five times higher in the evaluation than non-holiday weeks. Part of the challenge presented by this competition is modelling the effects of markdowns on these holiday weeks in the absence of complete/ideal historical data.
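Because holiday weeks are weighted five times higher than non-holiday weeks, a weighted mean absolute error is the natural evaluation metric. A minimal sketch of that metric is shown below.

import numpy as np

def weighted_mae(y_true, y_pred, is_holiday):
    # Holiday weeks carry weight 5, all other weeks weight 1
    weights = np.where(is_holiday, 5, 1)
    return np.sum(weights * np.abs(y_true - y_pred)) / np.sum(weights)

# Tiny illustration with made-up numbers
print(weighted_mae(np.array([100.0, 200.0]),
                   np.array([110.0, 190.0]),
                   np.array([True, False])))   # -> 10.0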

Data Availability:

stores.csv: This file contains anonymized information about the 45 stores, indicating the type and size of store.

train.csv: This is the historical training data, which covers 2010-02-05 to 2012-11-01. Within this file you will find the following fields:

  • Store – the store number
  • Dept – the department number
  • Date – the week
  • Weekly_Sales – sales for the given department in the given store
  • IsHoliday – whether the week is a special holiday week

test.csv: This file is identical to train.csv, except we have withheld the weekly sales. You must predict the sales for each triplet of store, department, and date in this file.

features.csv: This file contains additional data related to the store, department, and regional activity for the given dates. It contains the following fields:

  • Store – the store number
  • Date – the week
  • Temperature – average temperature in the region
  • Fuel_Price – cost of fuel in the region
  • MarkDown1-5 – anonymized data related to promotional markdowns that Walmart is running. MarkDown data is only available after Nov 2011, and is not available for all stores all the time. Any missing value is marked with an NA.
  • CPI – the consumer price index
  • Unemployment – the unemployment rate
  • IsHoliday – whether the week is a special holiday week

Let’s develop a machine learning model for further analysis.
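As a starting point, the sketch below assembles the files described above and fits a simple baseline model. The choice of a random forest and the simple date features are assumptions for illustration, not a prescribed approach.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

train = pd.read_csv("train.csv", parse_dates=["Date"])
stores = pd.read_csv("stores.csv")
features = pd.read_csv("features.csv", parse_dates=["Date"])

# Join store metadata and regional features onto the weekly sales records
df = train.merge(stores, on="Store", how="left")
df = df.merge(features, on=["Store", "Date", "IsHoliday"], how="left")

# Simple date features; markdowns are missing before Nov 2011, so fill them with 0
df["Week"] = df["Date"].dt.isocalendar().week.astype(int)
df["Year"] = df["Date"].dt.year
markdown_cols = [f"MarkDown{i}" for i in range(1, 6)]
df[markdown_cols] = df[markdown_cols].fillna(0)

feature_cols = (["Store", "Dept", "Size", "Temperature", "Fuel_Price",
                 "CPI", "Unemployment", "IsHoliday", "Week", "Year"] + markdown_cols)
X = df[feature_cols].fillna(0)
y = df["Weekly_Sales"]

model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=42)
model.fit(X, y)

The same feature assembly would be applied to test.csv before predicting, and the weighted MAE above would be used to compare candidate models.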

Credit Card Segmentation

Data Available:

  • CC GENERAL.csv

Business Context:

A bank wants to develop a customer segmentation to define its marketing strategy. The sample dataset summarizes the usage behaviour of about 9,000 active credit card holders during the last 6 months. The file is at customer level and contains 18 behavioural variables.

Business Requirements:

Advanced data preparation: Build an enriched customer profile by deriving “intelligent” KPIs such as the following (sketches of the KPI derivation and the clustering step follow the data dictionary below):

  • Monthly average purchase and cash advance amounts
  • Purchases by type (one-off, instalments)
  • Average amount per purchase and per cash-advance transaction
  • Limit usage (balance to credit limit ratio)
  • Payments to minimum payments ratio

In addition, the requirements include:

  • Advanced reporting: Use the derived KPIs to gain insight into the customer profiles.
  • Identification of the relationships/affinities between services.
  • Clustering: Apply a data reduction technique such as factor analysis for variable reduction, followed by a clustering algorithm, to reveal the behavioural segments of credit card holders.
  • Identify the characteristics of each cluster using detailed profiling.
  • Provide strategic insights and implementation strategies for the identified cluster characteristics.

Data Dictionary:

  • CUST_ID: Credit card holder ID
  • BALANCE: Monthly average balance (based on daily balance averages)
  • BALANCE_FREQUENCY: Ratio of last 12 months with balance
  • PURCHASES: Total purchase amount spent during last 12 months
  • ONEOFF_PURCHASES: Total amount of one-off purchases
  • INSTALLMENTS_PURCHASES: Total amount of installment purchases
  • CASH_ADVANCE: Total cash-advance amount
  • PURCHASES_FREQUENCY: Frequency of purchases (percentage of months with at least one purchase)
  • ONEOFF_PURCHASES_FREQUENCY: Frequency of one-off purchases
  • PURCHASES_INSTALLMENTS_FREQUENCY: Frequency of installment purchases
  • CASH_ADVANCE_FREQUENCY: Cash-advance frequency
  • AVERAGE_PURCHASE_TRX: Average amount per purchase transaction
  • CASH_ADVANCE_TRX: Number of cash-advance transactions
  • PURCHASES_TRX: Number of purchase transactions
  • CREDIT_LIMIT: Credit limit
  • PAYMENTS: Total payments (due amount paid by the customer to decrease their statement balance) in the period
  • MINIMUM_PAYMENTS: Total minimum payments due in the period.
  • PRC_FULL_PAYMENT: Percentage of months with full payment of the due statement balance
  • TENURE: Number of months as a customer
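Using the columns listed above, the “intelligent” KPIs can be derived with a few pandas operations. A minimal sketch, assuming the data are read from CC GENERAL.csv with the column names exactly as in the data dictionary, is shown below.

import pandas as pd

cc = pd.read_csv("CC GENERAL.csv")

# Monthly averages over the customer's tenure
cc["MONTHLY_AVG_PURCHASES"] = cc["PURCHASES"] / cc["TENURE"]
cc["MONTHLY_AVG_CASH_ADVANCE"] = cc["CASH_ADVANCE"] / cc["TENURE"]

# Limit usage: balance to credit limit ratio
cc["LIMIT_USAGE"] = cc["BALANCE"] / cc["CREDIT_LIMIT"]

# Payments to minimum payments ratio
cc["PAY_TO_MINPAY_RATIO"] = cc["PAYMENTS"] / cc["MINIMUM_PAYMENTS"]

# Purchase type: one-off only, installments only, both, or none
cc["PURCHASE_TYPE"] = "none"
cc.loc[(cc["ONEOFF_PURCHASES"] > 0) & (cc["INSTALLMENTS_PURCHASES"] == 0), "PURCHASE_TYPE"] = "one_off"
cc.loc[(cc["ONEOFF_PURCHASES"] == 0) & (cc["INSTALLMENTS_PURCHASES"] > 0), "PURCHASE_TYPE"] = "installments"
cc.loc[(cc["ONEOFF_PURCHASES"] > 0) & (cc["INSTALLMENTS_PURCHASES"] > 0), "PURCHASE_TYPE"] = "both"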

Let’s develop a machine learning model for further analysis.
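Building on the derived KPI table from the sketch above, the data reduction and clustering step could be sketched as follows. The number of factors and the number of clusters are assumptions to be validated, for example with elbow or silhouette analysis.

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Numeric features only; drop the ID and the derived label, clean up missing/infinite values
features = (cc.drop(columns=["CUST_ID", "PURCHASE_TYPE"])
              .replace([float("inf"), float("-inf")], 0)
              .fillna(0))
scaled = StandardScaler().fit_transform(features)

# Factor analysis for variable reduction (number of factors is an assumption)
factors = FactorAnalysis(n_components=6, random_state=42).fit_transform(scaled)

# K-means on the reduced factors (k=4 is an assumption; validate with elbow/silhouette)
cc["CLUSTER"] = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(factors)

# Profile the clusters on a few of the derived KPIs
print(cc.groupby("CLUSTER")[["MONTHLY_AVG_PURCHASES", "LIMIT_USAGE", "PAY_TO_MINPAY_RATIO"]].mean())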

Network Intrusion Detection

In this case study we need to predict anomalies and attacks in the network.

Business Problem:

The task is to build a network intrusion detection system to detect anomalies and attacks in the network.

There are two problems:

  1. Binomial classification: the activity is either normal or an attack.
  2. Multinomial classification: the activity is normal, DoS, Probe, R2L or U2R.

Data Availability:

This is the KDDCUP’99 data set, which is widely used as one of the few publicly available data sets for network-based anomaly detection systems.

For more about the data, you can visit http://www.unb.ca/cic/datasets/nsl.html

BASIC FEATURES OF EACH NETWORK CONNECTION VECTOR

  1. Duration: Length of time duration of the connection.
  2. Protocol_type: Protocol used in the connection.
  3. Service: Destination network service used.
  4. Flag: Status of the connection – Normal or Error.
  5. Src_bytes: Number of data bytes transferred from source to destination in a single connection.
  6. Dst_bytes: Number of data bytes transferred from destination to source in a single connection.
  7. Land: 1 if source and destination IP addresses and port numbers are equal; 0 otherwise.
  8. Wrong_fragment: Total number of wrong fragments in this connection.
  9. Urgent: Number of urgent packets in this connection. Urgent packets are packets with the urgent bit activated.
  10. Hot: Number of “hot” indicators in the content, such as entering a system directory, creating programs and executing programs.
  11. Num_failed_logins: Count of failed login attempts.
  12. Logged_in: Login status; 1 if successfully logged in, 0 otherwise.
  13. Num_compromised: Number of “compromised” conditions.
  14. Root_shell: 1 if a root shell is obtained; 0 otherwise.
  15. Su_attempted: 1 if the “su root” command was attempted or used; 0 otherwise.
  16. Num_root: Number of “root” accesses, or number of operations performed as root in the connection.
  17. Num_file_creations: Number of file creation operations in the connection.
  18. Num_shells: Number of shell prompts.
  19. Num_access_files: Number of operations on access control files.
  20. Num_outbound_cmds: Number of outbound commands in an ftp session.
  21. Is_hot_login: 1 if the login belongs to the “hot” list, i.e. root or admin; 0 otherwise.
  22. Is_guest_login: 1 if the login is a “guest” login; 0 otherwise.
  23. Count: Number of connections to the same destination host as the current connection in the past two seconds.
  24. Srv_count: Number of connections to the same service (port number) as the current connection in the past two seconds.
  25. Serror_rate: The percentage of connections that have activated the flag (4) s0, s1, s2 or s3, among the connections aggregated in Count (23).
  26. Srv_serror_rate: The percentage of connections that have activated the flag (4) s0, s1, s2 or s3, among the connections aggregated in Srv_count (24).
  27. Rerror_rate: The percentage of connections that have activated the flag (4) REJ, among the connections aggregated in Count (23).
  28. Srv_rerror_rate: The percentage of connections that have activated the flag (4) REJ, among the connections aggregated in Srv_count (24).
  29. Same_srv_rate: The percentage of connections that were to the same service, among the connections aggregated in Count (23).
  30. Diff_srv_rate: The percentage of connections that were to different services, among the connections aggregated in Count (23).
  31. Srv_diff_host_rate: The percentage of connections that were to different destination machines, among the connections aggregated in Srv_count (24).
  32. Dst_host_count: Number of connections having the same destination host IP address.
  33. Dst_host_srv_count: Number of connections having the same port number.
  34. Dst_host_same_srv_rate: The percentage of connections that were to the same service, among the connections aggregated in Dst_host_count (32).
  35. Dst_host_diff_srv_rate: The percentage of connections that were to different services, among the connections aggregated in Dst_host_count (32).
  36. Dst_host_same_src_port_rate: The percentage of connections that were to the same source port, among the connections aggregated in Dst_host_srv_count (33).
  37. Dst_host_srv_diff_host_rate: The percentage of connections that were to different destination machines, among the connections aggregated in Dst_host_srv_count (33).
  38. Dst_host_serror_rate: The percentage of connections that have activated the flag (4) s0, s1, s2 or s3, among the connections aggregated in Dst_host_count (32).
  39. Dst_host_srv_serror_rate: The percentage of connections that have activated the flag (4) s0, s1, s2 or s3, among the connections aggregated in Dst_host_srv_count (33).
  40. Dst_host_rerror_rate: The percentage of connections that have activated the flag (4) REJ, among the connections aggregated in Dst_host_count (32).
  41. Dst_host_srv_rerror_rate: The percentage of connections that have activated the flag (4) REJ, among the connections aggregated in Dst_host_srv_count (33).

Attack Class: each attack in the data set falls into one of the four categories introduced above (DoS, Probe, R2L or U2R).

Let’s develop a machine learning model for further analysis.
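A minimal sketch of the binomial classifier (normal vs attack) is shown below. The file name network_data.csv, the label column name, and the lower-cased feature names are assumptions about how the data export is organised; adapt them to the actual files.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Hypothetical file and column names: the 41 features listed above plus an attack label column
df = pd.read_csv("network_data.csv")

# Binomial target: normal vs attack
y = (df["label"] != "normal").astype(int)
X = pd.get_dummies(df.drop(columns=["label"]),
                   columns=["protocol_type", "service", "flag"])  # encode the categorical features

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["normal", "attack"]))

For the multinomial problem, the attack-category label (normal, DoS, Probe, R2L, U2R) would be used as the target instead of the binary indicator.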

Online Job Posting Analysis

Business Context:

The project seeks to understand the overall demand for labour in the Armenian online job market, using the 19,000 job postings posted on Career Center, an Armenian human resource portal, from 2004 to 2015. Through text mining on these data, we will be able to understand the nature of the ever-changing job market, as well as the overall demand for labour in the Armenian economy. The data were originally scraped from a Yahoo! mailing group.

Business Objectives:

Our main business objective is to understand the dynamics of the Armenian labour market, using the online job portal posts as a proxy. A secondary objective is to implement advanced text analytics as a proof of concept, creating additional features such as an enhanced search function that can add value for the users of the job portal.

So, as a data scientist, you need to answer the following business questions.

Job Nature and Company Profiles:

What types of jobs are in demand in Armenia? How is the nature of these jobs changing over time?

Desired Characteristics and Skill-Sets:

What are the desired characteristics and skill sets of candidates, based on the job description dataset? How are these desired characteristics changing over time?

IT Job Classification:

Build a classifier that can tell us, from the job description and company description, whether a job is an IT job or not, so that this column can be automatically populated for new job postings. After doing so, identify the important factors that drive this classification.

Similarity of Jobs:

Given a job title, find the top 5 jobs of a similar nature, based on the job post.

What should our text mining goals be?

For the IT job classification question, you should aim to create supervised learning classification models that can accurately classify, from the job text data, whether a posting is an IT job.
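A minimal sketch of such a supervised classifier is shown below. The file name job_posts.csv, the combined text column jobpost and the binary label column IT are assumptions about the dataset layout; adjust them to the actual export.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

jobs = pd.read_csv("job_posts.csv")   # hypothetical file name for the job postings export

X_train, X_test, y_train, y_test = train_test_split(
    jobs["jobpost"], jobs["IT"], test_size=0.2, random_state=42, stratify=jobs["IT"])

clf = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=20000),
    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

The largest positive and negative coefficients of the logistic regression then indicate which terms drive the IT / non-IT decision.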

For the Job Nature and Company Profiles question, unsupervised learning techniques such as topic modelling, along with techniques such as term frequency counting, will be applied to the data, including time-period-segmented datasets. A qualitative assessment of the results will help us understand the job postings.
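As a sketch of the topic modelling step, scikit-learn's LDA implementation can be applied to the same job post text (the jobs DataFrame from the previous sketch; the number of topics is an assumption to be tuned).

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Term counts over the job post text (jobs DataFrame assumed as in the previous sketch)
vectorizer = CountVectorizer(stop_words="english", max_features=10000)
counts = vectorizer.fit_transform(jobs["jobpost"])

lda = LatentDirichletAllocation(n_components=10, random_state=42)
lda.fit(counts)

# Show the top words per topic as a first qualitative look at the job mix
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-10:][::-1]]
    print(f"Topic {k}: {', '.join(top_words)}")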

To understand the desired characteristics and skill sets demanded by employers in the job ads, unsupervised learning methods such as K-means clustering will be used after appropriate dimension reduction.

For the job query (similarity) business question, we propose exploring latent semantic models and matrix similarity methods for information retrieval; the results will be assessed qualitatively. To return the top 5 most similar job postings, the job text is vectorised using models such as word2vec and doc2vec; cosine similarity scores are then computed, ranked and returned as the answer, which is evaluated individually for relevance.
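For the similarity question, a simple TF-IDF plus cosine similarity sketch is shown below; word2vec or doc2vec variants follow the same pattern with a different vectoriser. The jobs DataFrame and its Title column are assumptions carried over from the sketches above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform(jobs["jobpost"])

def top_similar_jobs(index, n=5):
    # Cosine similarity between the chosen posting and every other posting
    scores = cosine_similarity(vectors[index], vectors).ravel()
    best = scores.argsort()[::-1][1:n + 1]   # skip the posting itself
    return jobs.iloc[best][["Title"]].assign(similarity=scores[best])

print(top_similar_jobs(0))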

Data Understanding:

The data were obtained from a Kaggle competition. Each row represents a job post. The dataset representation is tabular, but many of the columns are textual/unstructured in nature. Most notably, the columns Job Description, Job Requirement, Required Qual, ApplicationP and AboutC are textual. The column jobpost is an amalgamation of these various textual columns.

A sample job posting is also provided (attached with the data set).

Let’s develop a machine learning model for further analysis.

Bank Review and Complaints Analysis

Business Problem

Central banks collect information about customer satisfaction with the services provided by different banks, as well as information about complaints.

  • Bank users give ratings and write reviews about services on central bank websites. These reviews and ratings help banks evaluate the services they provide and take the actions necessary to improve customer service. While ratings are useful for conveying the overall experience, they do not convey the context that led a reviewer to that experience.
  • If we look only at the rating, it is difficult to guess why the user rated the service 4 stars. However, after reading the review, it is not difficult to identify that the review talks about good “service” and “expectations”.

So the business requirement is to analyze customer reviews and predict customer satisfaction from them. The work should include the following tasks:

  • Data processing
  • Key positive words/negative words (most frequent words)
  • Classification of reviews into positive, negative and neutral
  • Identify key themes of problems (using clustering, topic models)
  • Predicting star ratings using reviews
  • Perform intent analysis

Datasets:

BankReviews.xlsx.

The data is a detailed dump of customer reviews/complaints (~500) about different services at different banks.

Data Dictionary:

  • Date (Day the review was posted)
  • Stars (1–5 rating for the business)
  • Text (Review text)
  • Bank name

Let’s develop a machine learning model for further analysis.
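A minimal sketch of the star-rating prediction task is shown below, assuming the file BankReviews.xlsx with the Stars and Text columns from the data dictionary above (the actual column names may differ slightly).

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

reviews = pd.read_excel("BankReviews.xlsx")

X_train, X_test, y_train, y_test = train_test_split(
    reviews["Text"], reviews["Stars"], test_size=0.2, random_state=42)

model = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

The most frequent terms in positive and negative reviews, and topic models over the same text, would then cover the key-word and theme-identification tasks listed above.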

Why Python for data Analysis?

Python is very easy to learn and implement. For many people, myself included, the Python language is easy to fall in love with. Since its first appearance in 1991, Python's popularity has been increasing day by day. Among interpreted languages, Python is distinguished by its large and active scientific computing community. Adoption of Python for scientific computing in both industry applications and academic research has increased significantly since the early 2000s.

For data analysis, exploratory analysis and data visualization, Python has the upper hand compared with many other domain-specific open source and commercial programming languages and tools, such as R, MATLAB, SAS, Stata and others. In recent years, Python's improved library support (primarily pandas) has made it a strong alternative for data manipulation tasks. Combined with Python's strength in general-purpose programming, this makes it an excellent choice as a single language for building data-centric applications.

In short, we should choose Python for data analysis for the following reasons:

  • It is a very simple language to understand.
  • It is open source.
  • It has strong built-in data science libraries.
  • Apart from the long-existing demand in web development projects, the use of Python is only going to grow as AI/ML projects become more mainstream and popular with global businesses.

As the charts below show, Python is among the most sought-after languages in the industry.

Popularity over the years

Trend over the last year

IEEE Spectrum 2017 Survey

Python-Environment Setup

To successfully create and run the code, we will require an environment set up with both general-purpose Python and the special packages required for data science.

In this tutorial we will discuss Python 3, because Python 2 will not be supported after 2020 and Python 3 has been around since 2008. So if you are new to Python, it is definitely worth learning the new Python 3 rather than the old Python 2.

Anaconda Installation:

Anaconda is a package manager, an environment manager, a Python/R data science distribution, and a collection of more than 1,500 open source packages. Anaconda is free and easy to install, and it offers free community support too.

To download Anaconda, go to https://www.anaconda.com/distribution/

More than 250 packages are automatically installed with Anaconda. You can also install additional packages using the pip install command.

If you need an installation guide, you can find one on the Anaconda website: https://docs.anaconda.com/anaconda/install/

Open Navigator on Windows:

From the Start menu, click the Anaconda Navigator desktop app.

Anaconda Navigation

Run Python in a Jupyter Notebook:

  • On Navigator’s Home tab, in the Applications panel on the right, scroll to the Jupyter Notebook tile and click the Install button to install Jupyter Notebook.
  • Launch Jupyter Notebook by clicking its Launch button. This will open a new browser window (or a new tab) showing the Notebook dashboard.
  • At the top of the right-hand side there is a drop-down menu labelled “New”. Create a new Notebook with the Python version you installed.
  • Rename your Notebook: either click on the current name and edit it, or find Rename under File in the top menu bar. You can name it whatever you’d like, but for this example we’ll use MyFirstAnacondaNotebook.
  • In the first line of the Notebook, type or copy/paste print("Hello Anaconda") (shown below).
  • Save your Notebook by either clicking the save-and-checkpoint icon or selecting File – Save and Checkpoint in the top menu.
  • Select the cell and press Ctrl+Enter or Shift+Enter to run it.
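Once the notebook is open, the first cell is simply:

print("Hello Anaconda")   # prints: Hello Anaconda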

Python Variables

Variables are nothing but reserved memory locations used to store values. This means that when you create a variable, you reserve some space in memory. A variable is also known as an identifier and is used to hold a value.

In Python, we don’t need to specify the type of a variable, because Python is a dynamically typed language and is smart enough to infer the variable's type.

How to define Variable Names

A variable can have a short name (like x and y) or a more descriptive name (age, carname, total_volume). Rules for Python variables:

  • A variable name must start with a letter or the underscore character
  • A variable name cannot start with a number
  • A variable name can only contain alpha-numeric characters and underscores (A-Z, a-z, 0-9, and _)
  • Variable names are case-sensitive (age, Age and AGE are three different variables)

Declaring Variable and Assigning Values

Python allows us to create a variable whenever it is needed; it does not require us to declare the variable before using it in the application. We don’t need to declare variables explicitly in Python: when we assign a value to a variable, that variable is declared automatically.

The equals (=) operator is used to assign a value to a variable.

counter = 100      # An integer assignment
miles = 1000.0     # A floating point number
name = "John"      # A string

Here, 100, 1000.0 and “John” are the values assigned to the counter, miles and name variables, respectively.
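Because Python infers the type from the assigned value, you can confirm the type of each variable with the built-in type() function:

print(type(counter))   # <class 'int'>
print(type(miles))     # <class 'float'>
print(type(name))      # <class 'str'>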

Multiple Assignments

Python allows us to assign a value to multiple variables in a single statement, which is known as multiple assignment. We can apply multiple assignment in two ways: either by assigning a single value to multiple variables or by assigning multiple values to multiple variables. Let’s look at the examples below.

X = Y = Z = 50
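Here the single value 50 is assigned to all three variables X, Y and Z. The second form, assigning multiple values to multiple variables in one statement, looks like this:

a, b, c = 5, 10.5, "John"   # a gets 5, b gets 10.5, c gets "John"
print(a, b, c)              # 5 10.5 John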

Let’s explore the above concepts in a Jupyter notebook.