library(tidyverse)
library(tidymodels)
Stat. 652 Midterm
Midterm
For the titanic data set try the following machine learning classification algorithms.
Use the training and test datasets from the titanic R package.
You should note that titanic_train has the Survived variable and titanic_test does not, so model selection must happen entirely within titanic_train: split it into a training dataset and a testing dataset (really a validation dataset), and use that validation dataset to evaluate the models you try.
I have not demonstrated the use of cross-validation; once you are comfortable running all of the models, see if you can figure out how to use cross-validation to pick the best model.
Once you have picked the best model you should do the following:
- Re-run your chosen model on the full titanic_train dataset.
- Then produce predictions for the titanic_test dataset. This is what you would submit in a .csv to kaggle in a competition.
Build classification models for the Survived variable. Pick a model scoring function and determine which model is the best. I would suggest making a confusion matrix and computing the accuracy or kappa (see the sketch after the model list below).
- Null Model
- kNN (the sample code given did not scale or normalize, if you use this model you need to do that.)
- Boosted C5.0
- Random Forest
- Logistic Regression using regularization
- Naive Bayes
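For scoring, yardstick's metric_set() can bundle accuracy and kappa so both are computed from the same set of predictions. A minimal sketch (val_preds is a hypothetical tibble of validation predictions with a truth column Survived and an estimate column .pred_class):
# val_preds is hypothetical: truth in Survived, predicted class in .pred_class.
class_metrics <- metric_set(accuracy, kap)
class_metrics(val_preds, truth = Survived, estimate = .pred_class)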
Extra Credit:
Make one plot containing all of the ROC curves for the algorithms trained.
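One way to get all of the ROC curves onto one plot: collect_predictions() keeps the class probabilities from each last_fit(), so per-model curves can be computed with roc_curve() and overlaid via autoplot(). A sketch, assuming the *_final fits produced later in this report (the null model is omitted since its ROC curve is trivial):
bind_rows(
  fit_knn_final |> collect_predictions() |> mutate(model = "kNN"),
  fit_rf_final |> collect_predictions() |> mutate(model = "Random forest"),
  fit_lr_final |> collect_predictions() |> mutate(model = "Logistic regression"),
  fit_nb_final |> collect_predictions() |> mutate(model = "Naive Bayes")
) |>
  group_by(model) |>
  roc_curve(truth = Survived, .pred_0) |>
  autoplot()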
Data
Loading
library(titanic)
data(titanic_train)
data(titanic_test)
Exploration
library(naniar)
naniar::gg_miss_var(titanic_train)
naniar::gg_miss_var(titanic_test)
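The same missingness can also be read off numerically; a small optional check with naniar's miss_var_summary(), which tabulates the missing count and percentage per variable:
naniar::miss_var_summary(titanic_train)
naniar::miss_var_summary(titanic_test)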
Prep
titanic_train <- titanic_train |>
  mutate(
    Survived = as.factor(Survived),
    Sex = as.factor(Sex),
    Embarked = as.factor(Embarked),
    Pclass = as.factor(Pclass),
    SibSp = as.double(SibSp),
    Parch = as.double(Parch)
  )

titanic_test <- titanic_test |>
  mutate(
    Sex = as.factor(Sex),
    Embarked = as.factor(Embarked),
    Pclass = as.factor(Pclass),
    SibSp = as.double(SibSp),
    Parch = as.double(Parch)
  )
# Training/validation split.
set.seed(108)
parts <- titanic_train |> initial_split(prop = 0.8, strata = Survived)
train <- parts |> training()
val <- parts |> testing()
rec <- recipe(Survived ~ ., data = train) |>
  update_role(PassengerId, Name, Ticket, Cabin, new_role = "id") |>
  step_impute_mean(all_numeric_predictors()) |>
  step_nzv(all_predictors()) |>
  step_normalize(all_numeric_predictors())

preprocess <- rec |> prep()
Models
Null model
The null model simply predicts the majority class (not survived) and has a validation accuracy of 61.5%.
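That 61.5% is just the majority-class share of the validation split, which can be checked directly (a quick sanity check against the confusion matrix below: 110 / 179 ≈ 0.615):
mean(val$Survived == "0")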
model_null <- null_model() |>
  set_engine("parsnip") |>
  set_mode("classification")

fit_null <- workflow() |>
  add_formula(Survived ~ .) |>
  add_model(model_null) |>
  last_fit(parts)
→ A | warning: Novel levels found in columns 'Name', 'Ticket', and 'Cabin' (values occurring only in the validation split). The levels have been removed, and values have been coerced to 'NA'. [Full lists of the novel levels omitted.]
There were issues with some computations A: x1
These warnings arise because add_formula() treats the high-cardinality Name, Ticket, and Cabin columns as factor predictors, so levels seen only in the validation split are novel; they are harmless for the null model, which ignores the predictors.
fit_null |> collect_metrics()
# A tibble: 2 × 4
.metric .estimator .estimate .config
<chr> <chr> <dbl> <chr>
1 accuracy binary 0.615 Preprocessor1_Model1
2 roc_auc binary 0.5 Preprocessor1_Model1
fit_null |>
  collect_predictions() |>
  conf_mat(truth = Survived, estimate = .pred_class)
Truth
Prediction 0 1
0 110 69
1 0 0
k-NN
For k-nearest neighbors, a grid search over k = 1 to 10 with 10-fold cross-validation found the optimal value to be k = 7, yielding a validation accuracy of 79.3%.
model_knn <- nearest_neighbor(neighbors = tune()) |>
  set_engine("kknn") |>
  set_mode("classification")

workflow_knn <- workflow() |>
  add_recipe(preprocess) |>
  add_model(model_knn)

fit_knn <- workflow_knn |>
  tune_grid(
    resamples = vfold_cv(train, v = 10, strata = Survived),
    grid = expand_grid(neighbors = 1:10),
    control = control_grid(save_pred = TRUE)
  )
best_knn <- fit_knn |> select_best("accuracy")
best_knn
# A tibble: 1 × 2
neighbors .config
<int> <chr>
1 7 Preprocessor1_Model07
workflow_knn_final <- workflow_knn |> finalize_workflow(best_knn)

fit_knn_final <- workflow_knn_final |> last_fit(parts)
fit_knn_final |> collect_metrics()
# A tibble: 2 × 4
.metric .estimator .estimate .config
<chr> <chr> <dbl> <chr>
1 accuracy binary 0.793 Preprocessor1_Model1
2 roc_auc binary 0.808 Preprocessor1_Model1
fit_knn_final |>
  collect_predictions() |>
  conf_mat(truth = Survived, estimate = .pred_class)
Truth
Prediction 0 1
0 98 25
1 12 44
Boosted C5.0
I wasn’t able to get C5.0 hyperparameter tuning to work with tidymodels, so the model is fit once with 10 boosting iterations; a sketch of what the tuning might look like follows the fitted model below.
model_c50 <- boost_tree(trees = 10) |>
  set_engine("C5.0") |>
  set_mode("classification")

workflow_c50 <- workflow() |>
  add_recipe(preprocess) |>
  add_model(model_c50)

fit_c50 <- workflow_c50 |> fit(train)
c50 code called exit with value 1
fit_c50
══ Workflow [trained] ══════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: boost_tree()
── Preprocessor ────────────────────────────────────────────────────────────────
3 Recipe Steps
• step_impute_mean()
• step_nzv()
• step_normalize()
── Model ───────────────────────────────────────────────────────────────────────
Call:
C5.0.default(x = x, y = y, trials = 10, control = C50::C5.0Control(minCases
= 2, sample = 0))
Classification Tree
Number of samples: 712
Number of predictors: 7
Number of boosting iterations: 10
Non-standard options: attempt to group attributes
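For reference, tuning the number of boosting iterations through tidymodels would look something like the sketch below; it is untested here, since this is exactly the tuning that failed to run:
model_c50_tune <- boost_tree(trees = tune()) |>
  set_engine("C5.0") |>
  set_mode("classification")

workflow() |>
  add_recipe(preprocess) |>
  add_model(model_c50_tune) |>
  tune_grid(
    resamples = vfold_cv(train, v = 10, strata = Survived),
    grid = expand_grid(trees = c(1, 5, 10, 25, 50))
  )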
Random forest
The random forest model has a validation accuracy of 81.6%.
model_rf <- rand_forest(mtry = tune(), trees = tune(), min_n = tune()) |>
  set_engine("randomForest") |>
  set_mode("classification")

workflow_rf <- workflow() |>
  add_recipe(preprocess) |>
  add_model(model_rf)

fit_rf <- workflow_rf |>
  tune_grid(
    resamples = vfold_cv(train, v = 10, strata = Survived),
    grid = grid_latin_hypercube(
      mtry(range = c(1, 7)),
      trees(), min_n(),
      size = 10
    ),
    control = control_grid(save_pred = TRUE)
  )
best_rf <- fit_rf |> select_best("accuracy")
best_rf
# A tibble: 1 × 4
mtry trees min_n .config
<int> <int> <int> <chr>
1 4 1156 5 Preprocessor1_Model06
workflow_rf_final <- workflow_rf |> finalize_workflow(best_rf)

fit_rf_final <- workflow_rf_final |> last_fit(parts)
fit_rf_final |> collect_metrics()
# A tibble: 2 × 4
.metric .estimator .estimate .config
<chr> <chr> <dbl> <chr>
1 accuracy binary 0.816 Preprocessor1_Model1
2 roc_auc binary 0.853 Preprocessor1_Model1
fit_rf_final |>
  collect_predictions() |>
  conf_mat(truth = Survived, estimate = .pred_class)
Truth
Prediction 0 1
0 104 27
1 6 42
Logistic regression using regularization
The regularized logistic regression model obtained a validation accuracy of 78.8%.
model_lr <- logistic_reg(penalty = tune(), mixture = tune()) |>
  set_engine("glmnet") |>
  set_mode("classification")

# Extra preprocessing since logistic regression only works with numeric or
# dummy variables.
preprocess_lr <- rec |>
  step_dummy(Pclass, Sex, Embarked) |>
  prep()
workflow_lr <- workflow() |>
  add_recipe(preprocess_lr) |>
  add_model(model_lr)

fit_lr <- workflow_lr |>
  tune_grid(
    resamples = vfold_cv(train, v = 10, strata = Survived),
    grid = grid_latin_hypercube(penalty(), mixture(), size = 10),
    control = control_grid(save_pred = TRUE)
  )
best_lr <- fit_lr |> select_best("accuracy")
best_lr
# A tibble: 1 × 3
penalty mixture .config
<dbl> <dbl> <chr>
1 0.0382 0.0125 Preprocessor1_Model01
workflow_lr_final <- workflow_lr |> finalize_workflow(best_lr)

fit_lr_final <- workflow_lr_final |> last_fit(parts)
fit_lr_final |> collect_metrics()
# A tibble: 2 × 4
.metric .estimator .estimate .config
<chr> <chr> <dbl> <chr>
1 accuracy binary 0.788 Preprocessor1_Model1
2 roc_auc binary 0.856 Preprocessor1_Model1
fit_lr_final |>
  collect_predictions() |>
  conf_mat(truth = Survived, estimate = .pred_class)
Truth
Prediction 0 1
0 101 29
1 9 40
Naive Bayes
The Naive Bayes model obtained a validation accuracy of 78.2%.
# Required for Naive Bayes implementation.
library(agua)
Attaching package: 'agua'
The following object is masked from 'package:workflowsets':
rank_results
library(discrim)
Attaching package: 'discrim'
The following object is masked from 'package:dials':
smoothness
model_nb <- naive_Bayes(smoothness = tune(), Laplace = tune()) |>
  set_engine("klaR") |>
  set_mode("classification")

workflow_nb <- workflow() |>
  add_recipe(preprocess) |>
  add_model(model_nb)

fit_nb <- workflow_nb |>
  tune_grid(
    resamples = vfold_cv(train, v = 10, strata = Survived),
    grid = grid_latin_hypercube(smoothness(), Laplace(), size = 10),
    control = control_grid(save_pred = TRUE)
  )
best_nb <- fit_nb |> select_best("accuracy")
best_nb
# A tibble: 1 × 3
smoothness Laplace .config
<dbl> <dbl> <chr>
1 0.578 1.51 Preprocessor1_Model07
workflow_nb_final <- workflow_nb |> finalize_workflow(best_nb)

fit_nb_final <- workflow_nb_final |> last_fit(parts)
fit_nb_final |> collect_metrics()
# A tibble: 2 × 4
.metric .estimator .estimate .config
<chr> <chr> <dbl> <chr>
1 accuracy binary 0.782 Preprocessor1_Model1
2 roc_auc binary 0.815 Preprocessor1_Model1
fit_nb_final |>
  collect_predictions() |>
  conf_mat(truth = Survived, estimate = .pred_class)
Truth
Prediction 0 1
0 99 28
1 11 41
Final
The random forest model had the highest validation accuracy (81.6%), so it is refit on the entire titanic_train dataset; the comparison is sketched below.
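For a side-by-side view, the validation metrics of every model can be gathered into one table (a small helper using the fits above):
bind_rows(
  fit_null |> collect_metrics() |> mutate(model = "Null"),
  fit_knn_final |> collect_metrics() |> mutate(model = "kNN"),
  fit_rf_final |> collect_metrics() |> mutate(model = "Random forest"),
  fit_lr_final |> collect_metrics() |> mutate(model = "Logistic regression"),
  fit_nb_final |> collect_metrics() |> mutate(model = "Naive Bayes")
) |>
  filter(.metric == "accuracy") |>
  arrange(desc(.estimate))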
model_final <- model_rf |> finalize_model(best_rf)

workflow_final <- workflow() |>
  add_recipe(preprocess) |>
  add_model(model_final)

fit_final <- workflow_final |> fit(titanic_train)

# The workflow applies the recipe itself, so predict() gets the raw test data;
# baking it first would preprocess the data twice.
pred_final <- fit_final |> predict(new_data = titanic_test)

titanic_test |>
  dplyr::select(PassengerId) |>
  mutate(Survived = pred_final$.pred_class)
PassengerId Survived
1 892 1
2 893 1
3 894 1
4 895 1
5 896 1
6 897 1
7 898 1
8 899 1
9 900 1
10 901 1
[... remaining rows omitted; 418 predictions in total, of which all but four are class 1.]
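To produce the .csv submission mentioned in the instructions, the same table can be written out with readr's write_csv() (a sketch; the file name is arbitrary):
titanic_test |>
  dplyr::select(PassengerId) |>
  mutate(Survived = pred_final$.pred_class) |>
  write_csv("titanic_submission.csv")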