%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
This is a data set from the UCI Machine Learning Repository. Given 30 features describing students in math and Portuguese classes, the goal is to predict their final grade (numeric: from 0 to 20). The data set consists of two files, student-mat.csv (Math course) and student-por.csv (Portuguese language course).
std_math = pd.read_csv('./student-mat.csv', sep=';')
std_math.head()
std_math.shape
There are 395 students in the math class, with 33 columns provided (the 30 features plus the three grade columns G1, G2 and G3).
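The Portuguese-course file mentioned above can be loaded the same way (a small sketch; the rest of this notebook only uses the math data):
std_por = pd.read_csv('./student-por.csv', sep=';')
std_por.shape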
Check for missing values.
std_math.isna().sum()
std_math.columns
std_math.dtypes
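To see which columns are non-numeric (and will need encoding before modeling), a small sketch using pandas' select_dtypes:
std_math.select_dtypes(include='object').columns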
Paulo Cortez, University of Minho, Guimarães, Portugal.
This data describes student achievement in secondary education at two Portuguese schools. The data attributes include student grades, demographic, social and school-related features, and were collected using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). In [Cortez and Silva, 2008], the two datasets were modeled under binary/five-level classification and regression tasks. Important note: the target attribute G3 has a strong correlation with attributes G2 and G1. This occurs because G3 is the final year grade (issued in the 3rd period), while G1 and G2 correspond to the 1st and 2nd period grades. It is more difficult to predict G3 without G2 and G1, but such a prediction is much more useful (see the source paper for more details).
1 school - student's school (binary: 'GP' - Gabriel Pereira or 'MS' - Mousinho da Silveira)
2 sex - student's sex (binary: 'F' - female or 'M' - male)
3 age - student's age (numeric: from 15 to 22)
4 address - student's home address type (binary: 'U' - urban or 'R' - rural)
5 famsize - family size (binary: 'LE3' - less or equal to 3 or 'GT3' - greater than 3)
6 Pstatus - parent's cohabitation status (binary: 'T' - living together or 'A' - apart)
7 Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)
8 Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)
9 Mjob - mother's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
10 Fjob - father's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
11 reason - reason to choose this school (nominal: close to 'home', school 'reputation', 'course' preference or 'other')
12 guardian - student's guardian (nominal: 'mother', 'father' or 'other')
13 traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
14 studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
15 failures - number of past class failures (numeric: n if 1<=n<3, else 4)
16 schoolsup - extra educational support (binary: yes or no)
17 famsup - family educational support (binary: yes or no)
18 paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
19 activities - extra-curricular activities (binary: yes or no)
20 nursery - attended nursery school (binary: yes or no)
21 higher - wants to take higher education (binary: yes or no)
22 internet - Internet access at home (binary: yes or no)
23 romantic - with a romantic relationship (binary: yes or no)
24 famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
25 freetime - free time after school (numeric: from 1 - very low to 5 - very high)
26 goout - going out with friends (numeric: from 1 - very low to 5 - very high)
27 Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
28 Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
29 health - current health status (numeric: from 1 - very bad to 5 - very good)
30 absences - number of school absences (numeric: from 0 to 93)
31 G1 - first period grade (numeric: from 0 to 20)
32 G2 - second period grade (numeric: from 0 to 20)
33 G3 - final grade (numeric: from 0 to 20, output target)
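As noted above, the target G3 correlates strongly with G1 and G2. A quick check on the math data (a small sketch, run after loading std_math):
std_math[['G1', 'G2', 'G3']].corr()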
Scikit-learn models need numeric inputs, so encode each non-integer (categorical) column as integer codes with pd.factorize, keeping the original categories in typeList so the codes can be mapped back later.
typeList = []
for i in std_math.columns:
    if str(std_math[i].dtypes) != 'int64':
        # factorize returns integer codes and the sorted unique categories
        std_math[i], uniques = pd.factorize(std_math[i], sort=True)
        typeList.append(uniques)
std_math.dtypes
std_math.head()
typeList
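Each entry of typeList holds the original categories for one factorized column, in the order the columns were encountered. As a small sketch, the integer codes in 'school' can be mapped back to their labels (this assumes 'school' is the first non-numeric column, as in the standard file layout):
school_categories = typeList[0]
school_categories.take(std_math['school'].to_numpy()[:5])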
x = std_math[['G1', 'G2']]
y = std_math['G3']
The simplest prediction of G3 is the mean of G1 and G2, since we expect most students won't perform very differently than before. (We could also pick a random integer between 0 and 20; by probability the accuracy would be about 5%, i.e. 1/21.)
x_mean = np.mean(x, axis=1)
x_mean.head()
A minor issue with this prediction method is that the grade is an integer; we can handle this by simply rounding. (Again, this is a very rough prediction, so we don't worry about whether to round up or down.)
x_mean = round(x_mean)
x_mean.head()
Let's see how well (or how badly) the prediction does:
corr = (x_mean == y).value_counts()
print(f'The accuracy of the predicted value is: {corr[True] / len(x_mean) * 100:.03f} %')
mse = np.mean((y - x_mean)**2)
print(f'The mean square error is: {mse:.03f}')
Compared to a random guess, this is a huge improvement, but can we do better with a machine learning method?
Let's plot G1 and G2 against G3 to observe how they are related.
plt.scatter(std_math.G1, std_math.G3, alpha=0.5, marker='o')
plt.scatter(std_math.G2, std_math.G3, alpha=0.5, marker='s')
plt.plot(range(20), range(20), color='r')
plt.title('G1, G2 vs G3')
plt.xlabel('G1, G2')
plt.ylabel('G3')
plt.legend(['G1', 'G2', 'y = x'])
plt.xlim(-2, 20)
plt.ylim(-2, 20)
It seems that G3 depends roughly linearly on both G1 and G2, i.e. we could consider using linear regression for the prediction.
plt.hist(std_math.G1, bins=10, alpha=0.6, label='G1')
plt.hist(std_math.G2, bins=10, alpha=0.6, label='G2')
plt.legend()
It seems that both G1 and G2 are roughly normally distributed.
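A quick look at the summary statistics is consistent with this (a small sketch):
std_math[['G1', 'G2']].describe()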
To do this, we need to separate the data into training and test sets. We can either do it by hand or use the train_test_split function in scikit-learn. (Here we use 1/3 of the data as the test set.) First, let's fit a linear regression on all features except the target G3.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
allData = std_math
xtrain, xtest, ytrain, ytest = train_test_split(allData.drop(columns='G3'), std_math['G3'], test_size=0.33, random_state=2)
lr = LinearRegression()
lr.fit(xtrain, ytrain)
ypred = lr.predict(xtest)
ypred = np.round(ypred).astype(int)  # grades are integers, so round the predictions
x = np.equal(np.array(ytest), ypred)
unique, counts = np.unique(x, return_counts=True)
# each row: [False/True, percentage of test samples]
print(np.asarray((unique, counts / len(ypred) * 100)).T)
print(mean_squared_error(ytest, ypred))
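The printed array lists the percentage of incorrect (False) and correct (True) predictions. The same accuracy can be expressed as a single number with scikit-learn's accuracy_score (a small sketch):
from sklearn.metrics import accuracy_score
print(f'Accuracy (all features): {accuracy_score(ytest, ypred) * 100:.03f} %')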
Now let's train a linear regression using only G1 and G2 as features and compare it with the all-features model above.
allData = std_math[['G1', 'G2', 'G3']]
xtrain, xtest, ytrain, ytest = train_test_split(allData[['G1', 'G2']], allData['G3'], test_size=0.33, random_state=2)
lr = LinearRegression()
lr.fit(xtrain, ytrain)
Now put xtest into the trained model to see how it does. (Again, since the grade is an integer, we round the predicted floats.)
ypred = lr.predict(xtest)
ypred = (np.round(ypred)).astype(int)
ypred
There are some -1s in the prediction; we can simply replace them with 0.
ypred = np.where(ypred < 0, 0, ypred)
x = np.equal(np.array(ytest), ypred)
unique, counts = np.unique(x, return_counts=True)
print(np.asarray((unique, counts / len(ypred) * 100)).T)
mse_lr = mean_squared_error(ytest, ypred)
print(f'The mean squared error between the prediction and the actual values is: {mse_lr:.03f}')
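For a direct comparison with the rounded-mean baseline, a small sketch reusing the variables computed earlier (note the baseline MSE was computed on all students, while the regression MSE is on the held-out test set):
print(f'Baseline (rounded mean of G1, G2) MSE: {mse:.03f}')
print(f'Linear regression (G1, G2 only) MSE: {mse_lr:.03f}')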
We can see that by using a simple machine learning method, although the accuracy did not improve much (about 5%), the mean squared error is now much smaller than with the mean method.
P. Cortez and A. Silva. Using Data Mining to Predict Secondary School Student Performance. In A. Brito and J. Teixeira Eds., Proceedings of 5th FUture BUsiness TEChnology Conference (FUBUTEC 2008) pp. 5-12, Porto, Portugal, April, 2008, EUROSIS, ISBN 978-9077381-39-7.