R Programming Series: Exploratory Data Analysis

In the last article, we explored “Data Wrangling and Visualization in the R Programming Language“. Here, we will learn about Exploratory Data Analysis (EDA) in R.

A typical data science use case usually starts from a core business-analytics or machine-learning problem, and Exploratory Data Analysis is an inevitable step along that path. The figure below demonstrates the life cycle of a basic data science use case. It begins with the initialization of a problem statement using one or more standard frameworks, then shifts to data gathering, at which point it reaches EDA. The majority of a project's effort and time is spent in this phase. Once the process of understanding the data is complete, a project may take a different path based on the requirements and the scope of the use case.

The most important step is to assimilate all the observed patterns into meaningful insights. In many scenarios the objective is to develop a particular predictive model; there, the next step would be to create a machine learning model and then deploy it into a production system or product.

From a layperson’s point of view, we can define EDA as the skill of understanding data from scratch. More formally, EDA is the process of analyzing and exploring datasets to summarize their characteristics, properties, and latent relationships using statistical, visual, and analytical techniques, or a combination of them.

Breaking this down further, another dimension to consider is the type of feature: numeric or categorical. The feature type shapes each of the types of analysis listed below:

  1. Univariate
  2. Bivariate

These types of EDA depend on the type of feature, and each calls for different visual techniques. For univariate analysis of a numeric variable, we might draw a histogram or a boxplot, whereas for a categorical variable we might use a frequency bar chart. In this blog, we will focus on the steps needed to implement exploratory data analysis using R.

Bank Dataset: We will focus on the attributes needed to understand the bank dataset. This dataset is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based entirely on phone calls; often, more than one contact with the same client was required in order to assess whether the product (a bank term deposit) would be subscribed (‘yes’) or not (‘no’).

Step 1: Let us understand the packages that need to be installed to carry out the required exploratory data analysis.

Step 2: Include the libraries in the workspace to implement the model.
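The article does not list the exact packages, so as an assumption, a minimal setup for the steps that follow might look like this (ggplot2 for plotting, cowplot for arranging plots in a grid):

```r
# Assumed packages for this walkthrough: ggplot2 (plots) and cowplot (grids).
# install.packages(c("ggplot2", "cowplot"))  # run once if not yet installed
library(ggplot2)
library(cowplot)
```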

Step 3: Convert the required dataset into a data frame to start the exploratory data analysis with R.
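As a sketch, loading the data could look like the snippet below. The file name bank-additional-full.csv and the tiny fallback frame are assumptions on my part; the UCI version of this dataset is semicolon-separated.

```r
path <- "bank-additional-full.csv"  # assumed location of the bank data
if (file.exists(path)) {
  bank <- read.csv(path, sep = ";", stringsAsFactors = TRUE)
} else {
  # Tiny stand-in frame so the snippet runs even without the file.
  bank <- data.frame(
    age = c(30, 45, 52),
    job = factor(c("admin.", "technician", "services")),
    y   = factor(c("no", "yes", "no"))
  )
}
str(bank)  # column names, types, and a preview of values
dim(bank)  # number of rows and columns
```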

As you can see from the dataset, we have 20 independent variables, such as age, job, and education, and one outcome/dependent variable, y. The outcome variable records whether the campaign call made to the client resulted in a successful deposit sign-up (yes or no). To understand the overall dataset, we need to study each variable in it. Let’s first hop on to univariate analysis.

Step 4: We start with univariate analysis, the study of a single feature (a single variable) through which we get an overall view of how the data is organized. For numeric features, such as the columns age, duration, and nr.employed, we look at summary statistics such as min, max, mean, standard deviation, and percentile distribution.

R includes an inbuilt function called summary, which prints the summary statistics: min, 25th percentile, median, mean, 75th percentile, and max. After that, we use the sd function to compute the standard deviation, and, lastly, we use the ggplot library to draw a boxplot of the data. The boxplot visualizes this information in a simple and lucid way: the box spans the 25th to the 75th percentile (the interquartile range), the line inside the box marks the median (50th percentile), and the whiskers extend to the most extreme points within 1.5 times the interquartile range. The dots drawn beyond the whiskers are outliers, as determined by the plotting function's internal rules.
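A minimal sketch of these univariate summaries, using stand-in values for the age column (with the real data this would be bank$age):

```r
library(ggplot2)

# Stand-in values for a numeric column such as bank$age.
df <- data.frame(age = c(25, 31, 35, 36, 40, 41, 45, 50, 58, 71))

summary(df$age)  # Min., 1st Qu., Median, Mean, 3rd Qu., Max.
sd(df$age)       # standard deviation

# Boxplot: box = 25th-75th percentile, line = median, dots = outliers.
p <- ggplot(df, aes(x = "", y = age)) +
  geom_boxplot()
```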

Step 5: The ggplot function defines the base layer of the visualization, followed by the geom_histogram function, whose parameters define histogram-related aspects such as the number of bins, the fill color, alpha (opacity), and more.
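For example (the bin count, colors, and stand-in duration values below are illustrative choices, not taken from the article):

```r
library(ggplot2)

# Stand-in values for a numeric column such as call duration in seconds.
df <- data.frame(duration = c(120, 90, 300, 45, 200, 150, 80, 60, 500, 30))

p <- ggplot(df, aes(x = duration)) +    # base layer
  geom_histogram(bins = 5,              # number of bins
                 fill = "steelblue",    # fill color
                 alpha = 0.7)           # opacity
```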

Step 6: Now we will visualize multiple variables of the dataset with histograms. Multiple plots can be arranged together in one grid with the help of cowplot.

We define a function, plot_grid_numeric, which accepts the dataset and the list of variables to plot. The function iterates over the provided variables using a for loop and collects the individual plots into a list called plt_matrix. With the help of the cowplot library, we then arrange the plots into a grid with two columns.
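The article names plot_grid_numeric but does not show its body, so the following is a hypothetical reconstruction based on the description above:

```r
library(ggplot2)
library(cowplot)

# Hypothetical reconstruction of plot_grid_numeric: one histogram per
# listed numeric column, arranged in a grid with two columns.
plot_grid_numeric <- function(dataset, variables, ncol = 2) {
  plt_matrix <- list()
  for (column in variables) {
    plt_matrix[[column]] <- ggplot(dataset, aes(x = .data[[column]])) +
      geom_histogram(bins = 10, fill = "steelblue", alpha = 0.7) +
      ggtitle(paste("Distribution of", column))
  }
  plot_grid(plotlist = plt_matrix, ncol = ncol)
}

# Toy data frame standing in for the numeric columns of the bank data.
df <- data.frame(age = rnorm(100, mean = 40, sd = 10),
                 duration = rexp(100, rate = 1 / 200))
g <- plot_grid_numeric(df, c("age", "duration"))
```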

Step 7: Now we move on to bivariate analysis, in which we extend the analysis to two variables. To understand the relationship between two numeric variables, we can leverage scatter plots. A scatter plot is a two-dimensional visualization of the data in which each variable is plotted along one of the axes.

With the help of the mentioned plot, we can see an increasing trend: as the employment variance rate increases, the number of employees also increases. The small number of distinct dots is due to repeated values in the nr.employed column.
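A scatter plot like the one described can be sketched as follows (the five value pairs are illustrative stand-ins, not the real bank records):

```r
library(ggplot2)

# Illustrative stand-in values for emp.var.rate and nr.employed.
df <- data.frame(
  emp.var.rate = c(-3.4, -1.8, -0.1, 1.1, 1.4),
  nr.employed  = c(5017, 5099, 5195, 5191, 5228)
)

p <- ggplot(df, aes(x = emp.var.rate, y = nr.employed)) +
  geom_point()  # each dot is one (emp.var.rate, nr.employed) pair
```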


In this article, we explored the process of EDA with the help of a practical use case and traversed the business problem. We started by understanding the overall process of executing a data science problem, which every data scientist must master (using the bank dataset as an example), and then defined our business problem using an industry-standard framework. We focused on exploring the journey of EDA with the help of univariate and bivariate analysis.

The next article will be the last in this series on the R programming language. In it, we will cover Clustering using the factoextra package.

So, see you there!

Originally published at https://blog.eduonix.com on April 2, 2020

I’m a passionate Web Developer & Data Analyst. I like to read and write about emerging technologies.