The Titanic Data Set And The Woman-Child Model – 82% Test Set Accuracy

In this tutorial, I will show you how to achieve 82% accuracy on the Titanic data set with the woman-child model.

The woman-child model rests on two rules:

  • We predict that all males die except for boys in families where most females and boys live.
  • We predict that all females live except for females in families where most females and boys die.
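In plain code, the two rules above boil down to a per-passenger decision. Below is a minimal Python sketch (the post's own code is in R; the function name and the `family_wc_survival_rate` input, i.e. the fraction of a passenger's female and boy relatives who survived, are illustrative):

```python
def predict_survival(sex, is_boy, family_wc_survival_rate):
    """Apply the woman-child model to one passenger.

    family_wc_survival_rate: fraction of the passenger's female and boy
    relatives who survived, or None if the passenger travels alone.
    Returns 1 for survived, 0 for died.
    """
    if sex == "male":
        # Rule 1: males die, except boys in families where most
        # females and boys lived.
        if is_boy and family_wc_survival_rate is not None \
                and family_wc_survival_rate > 0.5:
            return 1
        return 0
    # Rule 2: females live, except in families where most
    # females and boys died.
    if family_wc_survival_rate is not None and family_wc_survival_rate < 0.5:
        return 0
    return 1
```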

Last tutorial, we achieved an accuracy of around 79% using a random forest, logistic regression, k-nearest neighbors, and linear discriminant analysis. All of these models were decent, but none of them matched the woman-child model.

The woman-child model can be built by looking only at passengers' last names and their corresponding ticket numbers. Let's jump into our analysis.

The Titanic Data Set And The Woman-Child Model

First, we have to load in the data.

Again, we are engineering a new variable called titles, like last time.

Sometimes, ticket numbers differ from each other only in their last digits. For example, in the table below we can be fairly sure that the first three passengers travelled together: they all embarked in Southampton, all travelled in class 3, and their ticket numbers differ only in the last digits.

[Image: titanic data set tables]

So what we will do is substitute the last two digits or letters of a ticket with “XX”. Below is a little working example of how this works. We grab the last two characters of the string with str_sub from the stringr package and then substitute them with “XX” using the gsub function.
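The post does this with str_sub and gsub in R; here is a rough Python equivalent of the same masking step (`mask_ticket` is an illustrative name, not from the post):

```python
def mask_ticket(ticket):
    """Replace the last two characters of a ticket with 'XX', so that
    tickets differing only in their final digits collapse to one group."""
    if len(ticket) < 2:
        return ticket
    return ticket[:-2] + "XX"

print(mask_ticket("345764"))  # -> 3457XX
print(mask_ticket("345766"))  # -> 3457XX (same group as above)
```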

Now, we are applying this method to the entire Ticket column in the titanic data set.
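Applied to a whole column, the masking becomes a one-pass transformation. In the post this is done on the Ticket column in R; the sketch below mimics it in plain Python with made-up ticket values:

```python
tickets = ["345764", "345766", "345767", "PC 17755"]

# Mask the last two characters of every ticket so that tickets which
# differ only in their final digits fall into the same group.
ticket_groups = [t[:-2] + "XX" if len(t) >= 2 else t for t in tickets]

print(ticket_groups)  # ['3457XX', '3457XX', '3457XX', 'PC 177XX']
```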

Before continuing with our analysis, we will drop all rows for male passengers, except for children.
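This filtering step can be sketched as below. I am assuming, as in the title engineering earlier, that boys are the male passengers carrying the “Master” title; the passenger records are made up for illustration:

```python
passengers = [
    {"name": "A", "sex": "male",   "title": "Mr"},
    {"name": "B", "sex": "male",   "title": "Master"},
    {"name": "C", "sex": "female", "title": "Mrs"},
]

# Keep females and boys; here, boys are assumed to carry the "Master" title.
women_and_boys = [p for p in passengers
                  if p["sex"] == "female" or p["title"] == "Master"]

print([p["name"] for p in women_and_boys])  # ['B', 'C']
```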

[Image: titanic data set data frame]

After we have ordered the data, we have to identify families. We do that by comparing the last_name and ticket_number columns: when either the preceding or the following row has an identical ticket_number and last_name, the familyID is set to “family”; otherwise, to “no family”. Afterwards, we throw out passengers who travelled alone.
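A Python sketch of this grouping step follows. Rather than comparing neighbouring rows after sorting, as the post's R code does, this version counts each (last_name, masked ticket) key directly, which yields the same family assignment; the rows are illustrative:

```python
from collections import Counter

rows = [
    {"last_name": "Asplund", "ticket": "3477XX"},
    {"last_name": "Asplund", "ticket": "3477XX"},
    {"last_name": "Smith",   "ticket": "1234XX"},
]

# A family key combines the surname with the masked ticket number.
keys = [(r["last_name"], r["ticket"]) for r in rows]
counts = Counter(keys)

# Mark passengers sharing a key with at least one other passenger as a family.
for r, k in zip(rows, keys):
    r["familyID"] = "family" if counts[k] > 1 else "no family"

# Throw out passengers who travelled alone.
families = [r for r in rows if r["familyID"] == "family"]
```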

We end up with 81 unique families.

Visualizing The Titanic Data Set And The Woman-Child Model With ggplot

The plot makes the power of the woman-child model visible: with only two exceptions, either all family members die or all survive. Only the Allison family and the Asplund family have some members who died and some who survived.

What we will do next is label who survived and who died. For families where everyone survived, all NA values are predicted to have survived as well; for families where everyone died, all NA values are predicted to have died. For the Allison and Asplund families, we assign the survived/died labels to the NA values by majority vote: if the majority of the family survived, the NA values are substituted with survived (1); if the majority died, with died (0).
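The unanimous case is just a special case of the majority vote, so one small function covers both. Below is a Python sketch with `None` standing in for NA; note that the tie-breaking rule (ties count as survived) is my assumption, since the post's families all have a clear majority:

```python
def impute_family(survived):
    """Fill NA (None) survival labels with the family's majority vote.

    survived: list of 0/1/None labels for one family's women and boys.
    Ties are resolved as survived (an assumption; the post's families
    all have a clear majority or are unanimous).
    """
    known = [s for s in survived if s is not None]
    majority = 1 if sum(known) * 2 >= len(known) else 0
    return [majority if s is None else s for s in survived]

print(impute_family([1, 1, None]))     # [1, 1, 1]  (unanimous survivors)
print(impute_family([0, None, 0, 1]))  # [0, 0, 0, 1]  (majority died)
```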

For the families Gibson, Klasen, and Peacock, all values are missing.

[Image: titanic data set passenger class]

[Image: titanic data set peacock table]

Because the Klasen and Peacock families travelled in class 3, we predict that they died; because the Gibson family travelled in class 1, we predict that they survived.

Now, we have to merge our results back into our original Titanic data set. We could do that with a double for loop; however, this takes some time.

So, if we want to save time we can also vectorize our operation like in the code below.
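The post vectorizes this in R; an equivalent trick in Python is to replace the double for loop with a single pass over the data using a dictionary lookup keyed on (last_name, masked ticket). The family predictions and rows below are illustrative:

```python
# Family-level predictions produced above (values are made up here).
family_pred = {("Asplund", "3477XX"): 1, ("Smith", "1234XX"): 0}

test_set = [
    {"last_name": "Asplund", "ticket": "3477XX", "pred": None},
    {"last_name": "Brown",   "ticket": "9999XX", "pred": None},
]

# One pass with a dict lookup replaces the double for loop; passengers
# without a matching family keep their NA (None) prediction for now.
for row in test_set:
    row["pred"] = family_pred.get((row["last_name"], row["ticket"]))
```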

Now that our gender_model results are back in our original data frame, we predict that every male passenger whose survival is still unknown dies and every female passenger whose survival is still unknown lives.
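This final fallback can be sketched as a simple fill of the remaining NA predictions (again with `None` standing in for NA and illustrative rows):

```python
test_set = [
    {"sex": "male",   "pred": None},  # unmatched male -> dies
    {"sex": "female", "pred": None},  # unmatched female -> lives
    {"sex": "female", "pred": 0},     # already labelled by the family rule
]

# Passengers not covered by the family rule fall back to the gender rule.
for row in test_set:
    if row["pred"] is None:
        row["pred"] = 1 if row["sex"] == "female" else 0

print([row["pred"] for row in test_set])  # [0, 1, 0]
```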

The Titanic Data Set And The Woman-Child Model – Submission

After that, we submit our predictions.

[Image: titanic data set gender model submission]

Heya! Almost 82%. That is fantastic. The woman-child model is far less time-consuming than all the model building in part 1. It is easy and straightforward, but still very powerful: so powerful that it gives us around 3% better accuracy on the test set.


I hope you have enjoyed the second part of the tutorial. If you have any questions, you can write them in the comments below.