In [1]:
%matplotlib inline

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Student Performance Data Set

This is a data set from the UCI Machine Learning Repository. Given 30 features describing students in math and Portuguese classes, the goal is to predict their final grade (numeric: from 0 to 20). The data set consists of two data files, student-mat.csv (Math course) and student-por.csv (Portuguese language course).

Part 0 - Loading data file

In [2]:
std_math = pd.read_csv('./student-mat.csv', sep=';')
std_math.head()
Out[2]:
school sex age address famsize Pstatus Medu Fedu Mjob Fjob ... famrel freetime goout Dalc Walc health absences G1 G2 G3
0 GP F 18 U GT3 A 4 4 at_home teacher ... 4 3 4 1 1 3 6 5 6 6
1 GP F 17 U GT3 T 1 1 at_home other ... 5 3 3 1 1 3 4 5 5 6
2 GP F 15 U LE3 T 1 1 at_home other ... 4 3 2 2 3 3 10 7 8 10
3 GP F 15 U GT3 T 4 2 health services ... 3 2 2 1 1 5 2 15 14 15
4 GP F 16 U GT3 T 3 3 other other ... 4 3 2 1 2 5 4 6 10 10

5 rows × 33 columns

In [3]:
std_math.shape
Out[3]:
(395, 33)

There are 395 students in the math class, with 33 columns provided (the 30 features plus the three grades G1, G2 and G3).
Check for missing values.

In [4]:
std_math.isna().sum()
Out[4]:
school        0
sex           0
age           0
address       0
famsize       0
Pstatus       0
Medu          0
Fedu          0
Mjob          0
Fjob          0
reason        0
guardian      0
traveltime    0
studytime     0
failures      0
schoolsup     0
famsup        0
paid          0
activities    0
nursery       0
higher        0
internet      0
romantic      0
famrel        0
freetime      0
goout         0
Dalc          0
Walc          0
health        0
absences      0
G1            0
G2            0
G3            0
dtype: int64

List of features:

In [5]:
std_math.columns
Out[5]:
Index(['school', 'sex', 'age', 'address', 'famsize', 'Pstatus', 'Medu', 'Fedu',
       'Mjob', 'Fjob', 'reason', 'guardian', 'traveltime', 'studytime',
       'failures', 'schoolsup', 'famsup', 'paid', 'activities', 'nursery',
       'higher', 'internet', 'romantic', 'famrel', 'freetime', 'goout', 'Dalc',
       'Walc', 'health', 'absences', 'G1', 'G2', 'G3'],
      dtype='object')
In [6]:
std_math.dtypes
Out[6]:
school        object
sex           object
age            int64
address       object
famsize       object
Pstatus       object
Medu           int64
Fedu           int64
Mjob          object
Fjob          object
reason        object
guardian      object
traveltime     int64
studytime      int64
failures       int64
schoolsup     object
famsup        object
paid          object
activities    object
nursery       object
higher        object
internet      object
romantic      object
famrel         int64
freetime       int64
goout          int64
Dalc           int64
Walc           int64
health         int64
absences       int64
G1             int64
G2             int64
G3             int64
dtype: object

From the documentation:

Source:

Paulo Cortez, University of Minho, Guimarães, Portugal.

Data Set Information:

This data approaches student achievement in secondary education of two Portuguese schools. The data attributes include student grades, demographic, social and school-related features, and it was collected by using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). In [Cortez and Silva, 2008], the two datasets were modeled under binary/five-level classification and regression tasks. Important note: the target attribute G3 has a strong correlation with attributes G2 and G1. This occurs because G3 is the final year grade (issued at the 3rd period), while G1 and G2 correspond to the 1st and 2nd period grades. It is more difficult to predict G3 without G2 and G1, but such prediction is much more useful (see paper source for more details).

Attributes for both student-mat.csv (Math course) and student-por.csv (Portuguese language course) datasets:

1 school - student's school (binary: 'GP' - Gabriel Pereira or 'MS' - Mousinho da Silveira)
2 sex - student's sex (binary: 'F' - female or 'M' - male)
3 age - student's age (numeric: from 15 to 22)
4 address - student's home address type (binary: 'U' - urban or 'R' - rural)
5 famsize - family size (binary: 'LE3' - less or equal to 3 or 'GT3' - greater than 3)
6 Pstatus - parent's cohabitation status (binary: 'T' - living together or 'A' - apart)
7 Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)
8 Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)
9 Mjob - mother's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
10 Fjob - father's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
11 reason - reason to choose this school (nominal: close to 'home', school 'reputation', 'course' preference or 'other')
12 guardian - student's guardian (nominal: 'mother', 'father' or 'other')
13 traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
14 studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
15 failures - number of past class failures (numeric: n if 1<=n<3, else 4)
16 schoolsup - extra educational support (binary: yes or no)
17 famsup - family educational support (binary: yes or no)
18 paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
19 activities - extra-curricular activities (binary: yes or no)
20 nursery - attended nursery school (binary: yes or no)
21 higher - wants to take higher education (binary: yes or no)
22 internet - Internet access at home (binary: yes or no)
23 romantic - with a romantic relationship (binary: yes or no)
24 famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
25 freetime - free time after school (numeric: from 1 - very low to 5 - very high)
26 goout - going out with friends (numeric: from 1 - very low to 5 - very high)
27 Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
28 Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
29 health - current health status (numeric: from 1 - very bad to 5 - very good)
30 absences - number of school absences (numeric: from 0 to 93)

31 G1 - first period grade (numeric: from 0 to 20)
32 G2 - second period grade (numeric: from 0 to 20)
33 G3 - final grade (numeric: from 0 to 20, output target)

Convert the string (categorical) columns to integer codes.

In [7]:
typeList = []
for i in std_math.columns:
    # Factorize every non-integer (object) column in place,
    # keeping the label-to-code mappings for later reference.
    if str(std_math[i].dtypes) != 'int64':
        std_math[i], uniques = pd.factorize(std_math[i], sort=True)
        typeList.append(uniques)
In [8]:
std_math.dtypes
Out[8]:
school        int64
sex           int64
age           int64
address       int64
famsize       int64
Pstatus       int64
Medu          int64
Fedu          int64
Mjob          int64
Fjob          int64
reason        int64
guardian      int64
traveltime    int64
studytime     int64
failures      int64
schoolsup     int64
famsup        int64
paid          int64
activities    int64
nursery       int64
higher        int64
internet      int64
romantic      int64
famrel        int64
freetime      int64
goout         int64
Dalc          int64
Walc          int64
health        int64
absences      int64
G1            int64
G2            int64
G3            int64
dtype: object
In [9]:
std_math.head()
Out[9]:
school sex age address famsize Pstatus Medu Fedu Mjob Fjob ... famrel freetime goout Dalc Walc health absences G1 G2 G3
0 0 0 18 1 0 0 4 4 0 4 ... 4 3 4 1 1 3 6 5 6 6
1 0 0 17 1 0 1 1 1 0 2 ... 5 3 3 1 1 3 4 5 5 6
2 0 0 15 1 1 1 1 1 0 2 ... 4 3 2 2 3 3 10 7 8 10
3 0 0 15 1 0 1 4 2 1 3 ... 3 2 2 1 1 5 2 15 14 15
4 0 0 16 1 0 1 3 3 2 2 ... 4 3 2 1 2 5 4 6 10 10

5 rows × 33 columns

In [10]:
typeList
Out[10]:
[Index(['GP', 'MS'], dtype='object'),
 Index(['F', 'M'], dtype='object'),
 Index(['R', 'U'], dtype='object'),
 Index(['GT3', 'LE3'], dtype='object'),
 Index(['A', 'T'], dtype='object'),
 Index(['at_home', 'health', 'other', 'services', 'teacher'], dtype='object'),
 Index(['at_home', 'health', 'other', 'services', 'teacher'], dtype='object'),
 Index(['course', 'home', 'other', 'reputation'], dtype='object'),
 Index(['father', 'mother', 'other'], dtype='object'),
 Index(['no', 'yes'], dtype='object'),
 Index(['no', 'yes'], dtype='object'),
 Index(['no', 'yes'], dtype='object'),
 Index(['no', 'yes'], dtype='object'),
 Index(['no', 'yes'], dtype='object'),
 Index(['no', 'yes'], dtype='object'),
 Index(['no', 'yes'], dtype='object'),
 Index(['no', 'yes'], dtype='object')]
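To see how these mappings arise, here is a minimal toy example (not the student data) of how pd.factorize with sort=True assigns codes in alphabetical order of the labels, and how the original labels can be recovered:

```python
import pandas as pd

# Toy column (hypothetical): with sort=True, 'no' -> 0 and 'yes' -> 1
# because codes follow the alphabetical order of the unique labels.
s = pd.Series(['yes', 'no', 'yes', 'no'])
codes, uniques = pd.factorize(s, sort=True)
print(list(codes))            # [1, 0, 1, 0]
print(list(uniques))          # ['no', 'yes']
# The original labels can be recovered by indexing uniques with the codes.
print(list(uniques[codes]))   # ['yes', 'no', 'yes', 'no']
```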

Knowing that G3 correlates strongly with G1 and G2, let's start with an easy task: predicting G3 (the final grade) from G1 and G2.

In [11]:
x = std_math[['G1', 'G2']]
y = std_math['G3']

The simplest prediction of G3 is the mean of G1 and G2, since we expect most students not to perform very differently from before. (Of course we could simply pick a random integer between 0 and 20; by probability we know the accuracy would be about 5 %, i.e. 1/21.)
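That 5 % figure can be sanity-checked with a quick simulation on synthetic grades (not the student data):

```python
import numpy as np

# Guessing a uniform random integer in 0..20 matches the true grade
# with probability 1/21, roughly 4.76 %.
rng = np.random.default_rng(0)
true_grades = rng.integers(0, 21, size=100_000)
guesses = rng.integers(0, 21, size=100_000)
accuracy = np.mean(true_grades == guesses) * 100
print(f'{accuracy:.1f} %')  # close to 100/21
```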

In [12]:
x_mean = np.mean(x, axis=1)
x_mean.head()
Out[12]:
0     5.5
1     5.0
2     7.5
3    14.5
4     8.0
dtype: float64

A problem with this prediction method is that the grade must be an integer; we can handle that by rounding. (Again, this is a very rough prediction, so we don't care much whether we round up or down.)
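One subtlety worth knowing: round() on a pandas Series delegates to NumPy, which rounds halves to the nearest even integer ("banker's rounding"), so a .5 average doesn't always round up:

```python
import numpy as np

# NumPy (and hence Series.round) uses round-half-to-even,
# so ties can go either way depending on parity.
print(np.round(5.5))  # 6.0 (6 is even)
print(np.round(4.5))  # 4.0 (4 is even)
```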

In [13]:
x_mean = round(x_mean)
x_mean.head()
Out[13]:
0     6.0
1     5.0
2     8.0
3    14.0
4     8.0
dtype: float64

Let's see how well (bad) the prediction is:

In [14]:
corr = (x_mean == y).value_counts()
print(f'The accuracy of the predicted value is: {corr[1] / len(x_mean) * 100:.03f} %')
mse = np.mean((y - x_mean)**2)
print(f'The mean square error is: {mse:.03f}')
The accuracy of the predicted value is: 34.684 %
The mean square error is: 5.248

Compared to a random guess, this is a huge improvement, but can we do better with a machine learning method?
Let's plot G1 and G2 vs G3 to observe how they are related.

In [17]:
plt.scatter(std_math.G1, std_math.G3, alpha=0.5, marker='o')
plt.scatter(std_math.G2, std_math.G3, alpha=0.5, marker='s')
plt.plot(range(20), range(20), color='r')
plt.title('G1, G2 vs G3')
plt.xlabel('G1, G2')
plt.ylabel('G3')
plt.legend(['G1', 'G2', 'y = x'])
plt.xlim(-2, 20)
plt.ylim(-2, 20)
Out[17]:
(-2.0, 20.0)

It seems that G3 depends roughly linearly on both G1 and G2, i.e. we could consider using linear regression for the prediction.
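The visual impression can also be quantified with df.corr(). A sketch on synthetic, linearly related grades (not the real data, whose correlations will differ):

```python
import numpy as np
import pandas as pd

# Synthetic grades: G3 is roughly linear in G1 and G2 plus noise,
# so the Pearson correlations in df.corr() come out near 1.
rng = np.random.default_rng(42)
g1 = rng.integers(0, 21, size=400)
g2 = g1 + rng.integers(-2, 3, size=400)
g3 = (g1 + g2) / 2 + rng.normal(0, 1, size=400)
df = pd.DataFrame({'G1': g1, 'G2': g2, 'G3': g3})
print(df.corr().round(2))
```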

In [15]:
plt.hist(std_math.G1, bins=10)
Out[15]:
(array([ 2., 31., 37., 72., 51., 74., 63., 24., 30., 11.]),
 array([ 3. ,  4.6,  6.2,  7.8,  9.4, 11. , 12.6, 14.2, 15.8, 17.4, 19. ]),
 <a list of 10 Patch objects>)
In [16]:
plt.hist(std_math.G2, bins=10)
Out[16]:
(array([13.,  0., 16., 35., 82., 81., 78., 57., 18., 15.]),
 array([ 0. ,  1.9,  3.8,  5.7,  7.6,  9.5, 11.4, 13.3, 15.2, 17.1, 19. ]),
 <a list of 10 Patch objects>)

It seems that both G1 and G2 are roughly normally distributed.

Now let's try to predict G3 by linear regression.

To do this, we need to split the data into a train set and a test set. We can either do it by hand or use the train_test_split function from scikit-learn. (Here we use 1/3 of the data as the test set.)
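For reference, the "by hand" version amounts to shuffling the row indices and slicing. A minimal sketch (the function name manual_split and the toy frame are just for illustration):

```python
import numpy as np
import pandas as pd

def manual_split(X, y, test_frac=1/3, seed=2):
    """Shuffle the row positions, then slice off test_frac as the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(round(len(X) * test_frac))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X.iloc[train_idx], X.iloc[test_idx], y.iloc[train_idx], y.iloc[test_idx]

# Toy usage on a small stand-in frame:
X = pd.DataFrame({'G1': range(9), 'G2': range(9)})
y = pd.Series(range(9))
xtr, xte, ytr, yte = manual_split(X, y)
print(len(xtr), len(xte))  # 6 3
```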

In [18]:
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
For comparison, first fit a linear regression on all 32 non-target columns:

In [28]:
allData = std_math
xtrain, xtest, ytrain, ytest = train_test_split(allData.drop(columns='G3'), std_math['G3'], test_size=0.33, random_state=2)
lr = LinearRegression()
lr.fit(xtrain, ytrain)
Out[28]:
LinearRegression()
In [30]:
ypred = lr.predict(xtest)
ypred = (np.round(ypred)).astype(int)
x = np.equal(np.array(ytest), ypred)
unique, counts = np.unique(x, return_counts=True)
print(np.asarray((unique, counts / len(ypred) * 100)).T)
print(mean_squared_error(ytest, ypred))
[[ 0.         74.04580153]
 [ 1.         25.95419847]]
3.1374045801526718
Now fit a model using only G1 and G2:

In [19]:
allData = std_math[['G1', 'G2', 'G3']]
xtrain, xtest, ytrain, ytest = train_test_split(allData[['G1', 'G2']], allData['G3'], test_size=0.33, random_state=2)
lr = LinearRegression()
lr.fit(xtrain, ytrain)
Out[19]:
LinearRegression()

Now feed xtest into the trained model to see how it does. (Again, since the grade is an integer, we round the floats.)

In [20]:
ypred = lr.predict(xtest)
ypred = (np.round(ypred)).astype(int)
ypred
Out[20]:
array([13, 16, 16, 14, 12,  8,  6,  8,  7, 13, 13, 11,  7,  9, 10, 13,  9,
       13, 12,  7, 14,  7,  9, 12, 15, 16, 14,  6,  8, 15,  8, 13,  9, 13,
        4, 10, -1,  9, 11, 11, 19,  8,  6,  8,  8,  8,  6,  7, 15,  4,  6,
        9, 11, 12,  4,  8, 11,  4,  8, 10, 19, 10, 12, 15,  5, 11,  9,  7,
       16,  8,  8, 18, 12, 12, 17, 12,  5, 15, 11,  7,  5,  9, 14, 12,  7,
       11, -1, 13, 10,  9, 11, 13, 10, 10, 12, 15, 14, 10, 13,  8, 10, 12,
       12, 15, 10, -1,  3, 11, 12,  7, 10,  8, 12,  6,  9, 12,  8, 14, 12,
        8, 12, 15, 12, 12, 12,  5,  9, 13, 17,  9,  5])

There are some -1s in the prediction; we can simply replace them with 0.

In [21]:
ypred = np.where(ypred < 0, 0, ypred)
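An alternative to np.where here is np.clip, which bounds the predictions at both ends of the valid grade range and so would also catch any prediction above 20:

```python
import numpy as np

# Clip predictions into the valid grade range 0..20.
pred = np.array([-1, 3, 10, 22])
print(np.clip(pred, 0, 20))  # [ 0  3 10 20]
```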
In [22]:
x = np.equal(np.array(ytest), ypred)
unique, counts = np.unique(x, return_counts=True)
print(np.asarray((unique, counts / len(ypred) * 100)).T)
[[ 0.         60.30534351]
 [ 1.         39.69465649]]
In [23]:
mse = mean_squared_error(ytest, ypred)
print(f'The mean squared error between the prediction and the actual values is: {mse:.03f}')
The mean squared error between the prediction and the actual values is: 2.710

We can see that by using a simple machine learning method, although the accuracy did not improve much (about 5 percentage points), the mean squared error is now much smaller than with the mean method.

Relevant Papers:

P. Cortez and A. Silva. Using Data Mining to Predict Secondary School Student Performance. In A. Brito and J. Teixeira Eds., Proceedings of 5th FUture BUsiness TEChnology Conference (FUBUTEC 2008) pp. 5-12, Porto, Portugal, April, 2008, EUROSIS, ISBN 978-9077381-39-7.