Note: this article is translated from the official Kaggle tutorial Titanic Data Science Solutions. Strictly speaking it is not a translation, since we do not follow the original word for word; it is better treated as an annotated companion. Apologies for any shortcomings; for questions, contact hzsong@outlook.com.
You are given a training set whose samples carry features such as sex, age, and whether the passenger survived. Analyze this data, build and evaluate models, and choose one model to predict, for each sample in a separate test set (which lacks the survival feature), whether Survived is 0 or 1. See Kaggle for the full problem statement.
Assumptions drawn from the problem description
a. Women had a higher survival rate.
b. Children (age below some threshold yet to be determined) had a higher survival rate.
c. Upper-class passengers (Pclass=1) had a higher survival rate.

The data-analysis workflow serves seven major goals:

Classifying: we need to understand how the different classes of samples relate to our solution goal.
Correlating: we need to understand how much each feature contributes to the target, that is, whether it correlates with the target at all, and whether the correlation is positive, negative, or something else (a quick pivot example follows this list).
Converting: depending on the model, we may need to convert a feature to a suitable type, for example converting string values to numeric ones.
Completing: we need to fill in reasonable values for features with missing entries so that modeling can proceed.
Correcting: we need to correct values that are clearly wrong in some features, for example an age greater than 400.
Creating: we can derive one or more entirely new features from existing ones as needed.
Charting: we use visualizations to reveal relationships between the data and the target. See How to select the right chart.
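For instance, the Classifying and Correlating goals can be served by a simple pivot. A minimal sketch, assuming the training file is the standard Kaggle train.csv in the working directory:

import pandas as pd

# Assumption: train.csv is the Kaggle Titanic training file
train_df = pd.read_csv('train.csv')

# Survival rate per passenger class: classify by Pclass, correlate with Survived
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)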
Categorical
Categorical: Survived, Sex. Ordinal: Pclass.
Numerical
Continuous: Age. Discrete: SibSp, Parch.
Mixed data types
Ticket is a mix of numeric and alphanumeric values. Cabin is alphanumeric.
Errors or typos
The Name feature may contain errors or typos.
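These type assignments are easy to verify directly; a minimal check, assuming train_df is loaded as above:

# Peek at raw values and dtypes to confirm which columns are categorical, numeric, or mixed
print(train_df.head())
print(train_df.dtypes)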
Non-null counts. Training data (891 rows): Cabin (204), Age (714), Embarked (889). Test data (418 rows): Cabin (91), Age (332).
Training data: seven integer/float features and five string features. Test data: six integer/float features and five string features.
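These counts come straight from pandas:

# info() reports the row count, dtypes, and non-null count of every column
train_df.info()
# Or count the missing entries explicitly
print(train_df.isnull().sum())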
Correlation between Age and Survived
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# train_df and test_df are assumed to be loaded, e.g. via pd.read_csv

g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)

Observations
a. Infants (age <= 4) had a high survival rate.
b. The oldest passengers (age = 80) survived.
c. A large share of passengers aged 15-25 did not survive.
d. Most passengers were 15-35 years old.

Conclusions
a. We should use the Age feature in our model.
b. We should band Age into groups.

Correlation among Pclass, Age, and Survived
# `height` was called `size` in older seaborn releases
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', height=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()

Observations
a. Pclass=3 carried the most passengers, but most of them did not survive.
b. Most infants in Pclass=2 and Pclass=3 survived.
c. Most passengers in Pclass=1 survived.
d. The Age distribution differs across Pclass values.

Conclusion
a. We should include the Pclass feature in model training.

Correlation among Embarked, Pclass, Sex, and Survived
grid = sns.FacetGrid(train_df, row='Embarked', height=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()

Observations
a. Female passengers had a higher survival rate than males.
b. The exception is Embarked=C, where males had a higher survival rate; this may reflect a correlation between Pclass and Embarked rather than a direct effect of Embarked on Survived.
c. For ports C and Q, males in Pclass=3 survived at a higher rate than in Pclass=2.
d. Survival rates for Pclass=3 male passengers vary with the port of embarkation.

Conclusions
a. Add the Sex feature to model training.
b. Complete the Embarked feature and add it to model training.

Correlation among Embarked, Sex, Fare, and Survived
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', height=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()

Observations
a. Passengers who paid higher fares had higher survival rates.
b. The port of embarkation correlates with survival.

Conclusion
a. Consider banding the Fare feature.

Let us recap the decisions made so far, organized by the workflow goals above.

Correlating:
We now know how individual features, and combinations of features, correlate with Survived.
Converting:
We should convert the Sex and Embarked features to ordinal values.
Completing:
We should complete the missing values of the Age feature.
We should complete the missing values of the Embarked feature.

Correcting:
We should drop the Ticket feature: it has a high duplicate ratio and little correlation with Survived.
We should drop the Cabin feature: it is highly incomplete in both the training and test sets.
We should drop the PassengerId feature: it has no relationship to Survived.
We should drop the Name feature: its values are relatively non-standard and it has no direct relationship to Survived.

Creating:
We should create a FamilySize feature from Parch and SibSp, counting the total family members aboard for each passenger.
We should extract Title from Name as a new feature.
We should create a new feature for Age bands.
We should create a new feature for Fare ranges.

Drop the Ticket and Cabin features
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]

Extract the Title information and inspect it
for dataset in combine:
    dataset['Title'] = dataset.Name.str.extract(r' ([A-Za-z]+)\.', expand=False)

pd.crosstab(train_df['Title'], train_df['Sex'])

Normalize the Title values
for dataset in combine:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess', 'Capt', 'Col',
        'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')

train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()

Observations
a. Most titles bucket cleanly into a few groups.
b. The Title feature correlates with Survived.
c. Passengers with certain titles (Mme, Lady, Sir) mostly survived.

Conclusion
a. We will use the Title feature in model training.

Convert the Title feature to ordinal values
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
    dataset['Title'] = dataset['Title'].map(title_mapping)
    dataset['Title'] = dataset['Title'].fillna(0)

train_df.head()
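As decided under Correcting above, Name and PassengerId should also be dropped. This step is missing from the text here; the Kaggle original performs it right after mapping Title:

train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]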
Create the FamilySize feature

for dataset in combine:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1

train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)

FamilySize does not correlate strongly with Survived, so we derive a new feature, IsAlone.
for dataset in combine:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1

train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()

Drop the Parch, SibSp, and FamilySize features
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()

Next we complete the missing Age values. There are three common options: (1) generate random values between the mean plus/minus one standard deviation, (2) impute the median age within each Sex and Pclass group, or (3) combine the two. We adopt the second method, because the others introduce random noise.
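One more step of the original tutorial is missing from the text: the imputation code below indexes Sex as an integer (0 or 1), so Sex must first be mapped to numeric values. The Kaggle original does this with:

for dataset in combine:
    dataset['Sex'] = dataset['Sex'].map({'female': 1, 'male': 0}).astype(int)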
guess_ages = np.zeros((2, 3))
for dataset in combine:
    # Estimate a median age for each Sex (0/1) x Pclass (1..3) cell
    for i in range(0, 2):
        for j in range(0, 3):
            guess_df = dataset[(dataset['Sex'] == i) &
                               (dataset['Pclass'] == j + 1)]['Age'].dropna()
            # Method 1 (unused): random value in [mean - std, mean + std]
            # age_mean = guess_df.mean()
            # age_std = guess_df.std()
            # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
            age_guess = guess_df.median()
            # Round the guessed age to the nearest 0.5
            guess_ages[i, j] = int(age_guess / 0.5 + 0.5) * 0.5
    # Fill the missing ages from the per-group guesses
    for i in range(0, 2):
        for j in range(0, 3):
            dataset.loc[(dataset.Age.isnull()) & (dataset.Sex == i) &
                        (dataset.Pclass == j + 1), 'Age'] = guess_ages[i, j]
    dataset['Age'] = dataset['Age'].astype(int)

train_df.head()

Create bands for the Age feature
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)

for dataset in combine:
    dataset.loc[dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[dataset['Age'] > 64, 'Age'] = 4  # the assignment was missing in the original

train_df.head()

Finally, drop the AgeBand feature
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()

Fill missing Embarked values with the most frequent port
freq_port = train_df.Embarked.dropna().mode()[0]
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)

train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)

Convert the Embarked feature to ordinal values
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].map({'S': 0, 'C': 1, 'Q': 2}).astype(int)

train_df.head()

Fill the missing Fare value in the test set with the median
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()

Create bands for the Fare feature
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)

for dataset in combine:
    dataset.loc[dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
    dataset.loc[dataset['Fare'] > 31, 'Fare'] = 3
    dataset['Fare'] = dataset['Fare'].astype(int)

train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
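Before any model can be fit, we need a feature matrix and a label vector. The text skips this step; the Kaggle original builds them as:

X_train = train_df.drop('Survived', axis=1)
Y_train = train_df['Survived']
X_test = test_df.drop('PassengerId', axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape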
We are now ready to train models and predict. Candidate models include:

Logistic Regression
KNN or k-Nearest Neighbors
Support Vector Machines
Naive Bayes classifier
Decision Tree
Random Forest
Perceptron
Artificial neural network
RVM or Relevance Vector Machine

from sklearn.linear_model import LogisticRegression, Perceptron, SGDClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)

# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)

# k-Nearest Neighbors
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)

# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)

# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)

# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)

# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)

# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)

# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
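To decide which model's predictions to submit, the original tutorial ranks the models by their training-set accuracy; the sketch below reuses the scores computed above:

models = pd.DataFrame({
    'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
              'Random Forest', 'Naive Bayes', 'Perceptron',
              'Stochastic Gradient Descent', 'Linear SVC', 'Decision Tree'],
    'Score': [acc_svc, acc_knn, acc_log, acc_random_forest, acc_gaussian,
              acc_perceptron, acc_sgd, acc_linear_svc, acc_decision_tree]
})
models.sort_values(by='Score', ascending=False)

Note that Y_pred at this point holds the random forest's predictions, since it was the last model fit, so the submission below submits the random forest's output.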
Finally, we can submit our answer.

submission = pd.DataFrame({
    "PassengerId": test_df["PassengerId"],
    "Survived": Y_pred
})
submission.to_csv('submission.csv', index=False)