# 02 - Statistical Learning {.unnumbered}
<style>
h3, h4 {
position: absolute;
width: 1px;
height: 1px;
padding: 0;
margin: -1px;
overflow: hidden;
clip: rect(0,0,0,0);
border: 0;
}
</style>
## Conceptual
1. **For each of parts (a) through (d), indicate whether we would generally expect the performance of a flexible statistical learning method to be better or worse than an inflexible method. Justify your answer.**
- **The sample size n is extremely large, and the number of predictors p is small.**
Better. With a large n, a flexible model can reduce bias while keeping the variance low.
- **The number of predictors p is extremely large, and the number of observations n is small.**
Worse. With a small n, a flexible model's variance would be very high.
- **The relationship between the predictors and response is highly non-linear.**
Better. A linear model would have a very high bias in this case.
- **The variance of the error terms, i.e. $\sigma^2 = \mathrm{Var}(\epsilon)$, is extremely high.**
Worse. A flexible model would overfit by trying to follow the irreducible error.
2. **Explain whether each scenario is a classification or regression problem, and indicate whether we are most interested in inference or prediction. Finally, provide n and p.**
- **We collect a set of data on the top 500 firms in the US. For each firm we record profit, number of employees, industry and the CEO salary. We are interested in understanding which factors affect CEO salary.**
Regression, inference, n = 500, p = 3 (profit, number of employees, industry); CEO salary is the response.
- **We are considering launching a new product and wish to know whether it will be a success or a failure. We collect data on 20 similar products that were previously launched. For each product we have recorded whether it was a success or failure, price charged for the product, marketing budget, competition price, and ten other variables.**
Classification, prediction, n = 20, p = 13 (price, marketing budget, competition price, and the ten other variables); success/failure is the response.
- **We are interested in predicting the % change in the USD/Euro exchange rate in relation to the weekly changes in the world stock markets. Hence we collect weekly data for all of 2012. For each week we record the % change in the USD/Euro, the % change in the US market, the % change in the British market, and the % change in the German market.**
Regression, prediction, n = 52, p = 3 (the weekly % changes in the US, British, and German markets); the % change in USD/Euro is the response.
3. **We now revisit the bias-variance decomposition.**
- **Provide a sketch of typical (squared) bias, variance, training error, test error, and Bayes (or irreducible) error curves, on a single plot, as we go from less flexible statistical learning methods towards more flexible approaches. The x-axis should represent the amount of flexibility in the method, and the y-axis should represent the values for each curve. There should be five curves. Make sure to label each one.**
<img src="img/01-sketch.png" width="400" height="350" />
- **Explain why each of the five curves has the shape displayed in part (a).**
In the example $f$ isn't linear, so the **test error** decreases as we add flexibility, until the model starts to overfit and the test error rises again. The **training error** always goes down as we increase flexibility. As we make the model more flexible, the **variance** always increases, because the fitted model is more sensitive to changes in the training data, while the **bias** always goes down, because a more flexible model makes fewer assumptions about $f$. The **Bayes error** is the irreducible error; it is a constant we cannot change, and the expected test error can never fall below it.
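These shapes can also be produced empirically. The sketch below is a minimal simulation (the choice of $f = \sin(2x)$, the noise level, and all names are illustrative): it fits polynomials of increasing degree and plots training and test MSE against flexibility, with the irreducible error as a dashed line.
```{r}
# Simulate a nonlinear f, fit polynomials of increasing degree (a proxy
# for flexibility), and track training vs. test MSE.
set.seed(1)
f <- function(x) sin(2 * x)
x_train <- runif(100, 0, 3); y_train <- f(x_train) + rnorm(100, sd = 0.3)
x_test  <- runif(100, 0, 3); y_test  <- f(x_test)  + rnorm(100, sd = 0.3)

mse <- sapply(1:10, \(d) {
  fit <- lm(y ~ poly(x, d), data = data.frame(x = x_train, y = y_train))
  c(train = mean(residuals(fit)^2),
    test  = mean((y_test - predict(fit, data.frame(x = x_test)))^2))
})

matplot(1:10, t(mse), type = "l", lty = 1, col = c("blue", "red"),
        xlab = "Flexibility (polynomial degree)", ylab = "MSE")
abline(h = 0.3^2, lty = 2)  # Bayes (irreducible) error = Var(epsilon)
legend("topleft", legend = c("Training", "Test", "Bayes error"),
       col = c("blue", "red", "black"), lty = c(1, 1, 2))
```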
4. **You will now think of some real-life applications for statistical learning.**
- **Describe three real-life applications in which classification might be useful. Describe the response, as well as the predictors. Is the goal of each application inference or prediction? Explain your answer.**
| Goal | Response | Predictors |
|:----:|:---------|:-----------|
|Inference|Customer tower churn (0 or 1)|Annual rent, tower latitude, tower longitude, tower type, number of sites within 10 km, population within 10 km, average annual salary in the city, contract rent increases, customer technology|
|Inference|Employee churn (0 or 1)|Months in company, salary, number of positions held, major, sex, total salary change, bonus, wellness expenditure, number of dependents, home location|
|Inference|Absence (0 or 1)|Salary, rain?, holiday?, number of uniforms, distance from home to workplace, months in company, neighborhood median salary, number of dependents, number of marriages, work start time, work end time, day off|
- **Describe three real-life applications in which regression might be useful. Describe the response, as well as the predictors. Is the goal of each application inference or prediction? Explain your answer.**
| Goal | Response | Predictors |
|:----:|:---------|:-----------|
|Inference|Number of likes|Words, has a video?, has a picture?, post time, hashtags used|
|Inference|Pitching speed|Age, height, weight, hours of running, number of squats, practices per week, years playing the sport|
|Inference|Food satisfaction level (0 to 10)|Country, city, height, weight, salary (US $), salt, spice level, sugar (g), meat type, cheese (g), cheese type|
- **Describe three real-life applications in which cluster analysis might be useful.**
| Goal | Predictors |
|:----:|-----------|
|Classify customers to improve advertising|Words searched, products clicked, images explored, seconds spent on each product, start time, end time, customer location, number of followers on social media|
|Classify company towers to find patterns in customers|Tower latitude, tower longitude, tower type, number of sites within 10 km, population within 10 km, average annual salary in the city, BTS?, start date, height|
|Classify football players to see which have similar results|Number of passes per game, meters run per game, position played, number of goals, number of stolen balls, total time played|
5. **What are the advantages and disadvantages of a very flexible (versus a less flexible) approach for regression or classification? Under what circumstances might a more flexible approach be preferred to a less flexible approach? When might a less flexible approach be preferred?**
- Flexible model advantages
- They have the potential to accurately fit a wider range of possible shapes for f
- Flexible model disadvantages
- They do not reduce the problem of estimating f to a small number of parameters.
- A very large number of observations is required in order to obtain an accurate estimate for f.
- They are harder to interpret
A flexible approach is preferred when we have a lot of data to train the model and the goal is accurate prediction rather than interpretability.
A less flexible approach is preferred when we don't have a lot of data to train the model, or when the main goal is inference, i.e. understanding how the predictors relate to the response.
6. **Describe the differences between a parametric and a non-parametric statistical learning approach. What are the advantages of a parametric approach to regression or classification (as opposed to a nonparametric approach)? What are its disadvantages?**
|Parametric|Non-parametric|
|:---------|:-------------|
|Make an assumption about the functional form|Don't make an assumption about the functional form, to accurately fit a wider range of possible shapes for $f$|
|Estimates a small number of parameters based on training data|Estimates a large number of parameters based on training data|
|Can be trained with few examples|Needs many examples to be trained|
|Smoothness level is fixed by the functional form|The data analyst needs to define a level of smoothness|
- Parametric model advantages
- Reduce the problem of estimating f to a small number of parameters.
- Can be trained with few examples.
- They are easy to interpret.
- Parametric model disadvantages
Often the true $f$ doesn't have the assumed shape, which adds a lot of bias to the model.
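To make the contrast concrete, here is a minimal sketch on simulated data (the names and the simulated $f$ are illustrative): a parametric `lm()` fit, which assumes a linear form, next to a non-parametric `loess()` smoother, whose smoothness level the analyst must choose.
```{r}
set.seed(1)
x <- runif(200, 0, 3)
y <- sin(2 * x) + rnorm(200, sd = 0.3)

fit_param    <- lm(y ~ x)                # assumes f is linear
fit_nonparam <- loess(y ~ x, span = 0.3) # smoothness chosen by the analyst

grid <- seq(0, 3, length.out = 200)
plot(x, y, col = "grey", pch = 16)
lines(grid, predict(fit_param, data.frame(x = grid)), col = "red", lwd = 2)
lines(grid, predict(fit_nonparam, data.frame(x = grid)), col = "blue", lwd = 2)
legend("topright", legend = c("Parametric: lm", "Non-parametric: loess"),
       col = c("red", "blue"), lwd = 2)
```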
7. **The table below provides a training data set containing six observations, three predictors, and one qualitative response variable.**
```{r}
DF_07 <-
  data.frame(X1 = c(0, 2, 0, 0, -1, 1),
             X2 = c(3, 0, 1, 1, 0, 1),
             X3 = c(0, 0, 3, 2, 1, 1),
             Y = c("Red", "Red", "Red", "Green", "Green", "Red"))
DF_07
```
**Suppose we wish to use this data set to make a prediction for Y when X1 = X2 = X3 = 0 using K-nearest neighbors.**
- **Compute the Euclidean distance between each observation and the test point, X1 = X2 = X3 = 0.**
```{r}
# Since the test point is the origin, (X - 0)^2 = X^2
DF_07 <- transform(DF_07, dist = sqrt(X1^2 + X2^2 + X3^2))
DF_07
```
- **What is our prediction with K = 1? Why?**
```{r}
DF_07[order(DF_07$dist),]
```
With K = 1 the prediction is **Green**: the single nearest neighbor is observation 5 (distance $\sqrt{2} \approx 1.41$), which is Green.
- **What is our prediction with K = 3? Why?**
With K = 3 the prediction is **Red**: the three nearest neighbors are observations 5 (Green), 6 (Red), and 2 (Red), so two of the three are Red.
- **If the Bayes decision boundary in this problem is highly nonlinear, then would we expect the best value for K to be large or small? Why?**
Since flexibility decreases as K gets bigger, for a highly nonlinear Bayes decision boundary the best value of K should be a small one.
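These answers can also be verified with `knn()` from the **class** package (assuming it is installed):
```{r}
library(class)
test_point <- data.frame(X1 = 0, X2 = 0, X3 = 0)
# Predictions for the origin with K = 1 and K = 3
knn(DF_07[, c("X1", "X2", "X3")], test_point, cl = factor(DF_07$Y), k = 1)
knn(DF_07[, c("X1", "X2", "X3")], test_point, cl = factor(DF_07$Y), k = 3)
```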
## Applied
8. **This exercise relates to the College data set, which can be found in the file College.csv on the book website. It contains a number of variables for 777 different universities and colleges in the US.**
**Before reading the data into R, it can be viewed in Excel or a text editor.**
- **Use the read.csv() function to read the data into R. Call the loaded data college. Make sure that you have the directory set to the correct location for the data. You should notice that the first column is just the name of each university. We don’t really want R to treat this as data.**
```{r}
college <-
  here::here("data/College.csv") |>
  read.csv(row.names = 1, stringsAsFactors = TRUE)
```
- **Use the summary() function to produce a numerical summary of the variables in the data set.**
```{r}
summary(college)
```
- **Use the pairs() function to produce a scatterplot matrix of the first ten columns or variables of the data.**
```{r}
pairs(college[,1:10])
```
- **Use the plot() function to produce side-by-side boxplots of Outstate versus Private.**
```{r}
plot(college$Private, college$Outstate)
```
- **Create a new qualitative variable, called Elite, by binning the Top10perc variable. We are going to divide universities into two groups based on whether or not the proportion of students coming from the top 10 % of their high school classes exceeds 50 %.**
```{r}
college$Elite <- ifelse(college$Top10perc > 50, "Yes", "No") |> as.factor()
```
- **Then Use the summary() function to see how many elite universities there are.**
```{r}
summary(college$Elite)
```
- **Now use the plot() function to produce side-by-side boxplots of Outstate versus Elite.**
```{r}
plot(college$Elite, college$Outstate)
```
- **Use the hist() function to produce some histograms with differing numbers of bins for a few of the quantitative variables. You may find the command par(mfrow = c(2, 2)) useful: it will divide the print window into four regions so that four plots can be made simultaneously. Modifying the arguments to this function will divide the screen in other ways.**
```{r}
par(mfrow = c(3, 1))
hist(college$Apps)
hist(college$Accept)
hist(college$Enroll)
```
- **Continue exploring the data, and provide a brief summary of what you discover.**
```{r}
par(mfrow = c(2, 2))
plot(college$S.F.Ratio, college$Expend)
plot(college$S.F.Ratio, college$Outstate)
plot(college$Top10perc, college$Terminal)
plot(college$Top10perc, college$Room.Board)
par(mfrow = c(1, 1))
```
As the student/faculty ratio (*S.F.Ratio*) increases, meaning each student gets fewer resources such as teaching, supervision, curriculum development, and pastoral support, institutions tend to spend less on each student and to charge out-of-state students less.
We can also see that students from the top 10 % of their high school class tend to go to universities where most of the professors hold the highest academic degree available in their field, or where room and board costs are highest.
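These visual impressions can be backed up with a quick numeric check of the corresponding correlations:
```{r}
cor(college$S.F.Ratio, college$Expend)     # negative, as the first plot suggests
cor(college$S.F.Ratio, college$Outstate)   # negative
cor(college$Top10perc, college$Terminal)   # positive
cor(college$Top10perc, college$Room.Board) # positive
```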
9. **This exercise involves the Auto data set studied in the lab. Make sure that the missing values have been removed from the data.**
- **Which of the predictors are quantitative, and which are qualitative?**
```{r}
library(ISLR2)
quantitative_vars <-
  sapply(Auto, is.numeric) |>
  (\(x) names(x)[x])()  # keep only the names of the numeric columns
qualitative_vars <- setdiff(names(Auto), quantitative_vars)
quantitative_vars
qualitative_vars
```
- **What is the range of each quantitative predictor? You can answer this using the range() function.**
```{r}
sapply(quantitative_vars, \(var) range(Auto[[var]]))
```
- **What is the mean and standard deviation of each quantitative predictor?**
```{r}
sapply(quantitative_vars, \(var) c(mean = mean(Auto[[var]]), sd = sd(Auto[[var]])))
```
- **Now remove the 10th through 85th observations. What is the range, mean, and standard deviation of each predictor in the subset of the data that remains?**
```{r}
AutoFiltered <- Auto[-c(10:85), ]
sapply(quantitative_vars,
       \(var) c(range = range(AutoFiltered[[var]]),
                mean = mean(AutoFiltered[[var]]),
                sd = sd(AutoFiltered[[var]])))
```
- **Using the full data set, investigate the predictors graphically, using scatterplots or other tools of your choice. Create some plots highlighting the relationships among the predictors. Comment on your findings.**
Cars with 4 or 5 cylinders are more efficient than others.
```{r}
plot(factor(Auto$cylinders), Auto$mpg,
     xlab = "cylinders", ylab = "mpg", col = 2)
```
Cars have improved their efficiency each year.
```{r}
plot(factor(Auto$year), Auto$mpg,
     xlab = "year", ylab = "mpg", col = "blue")
```
- **Suppose that we wish to predict gas mileage (mpg) on the basis of the other variables. Do your plots suggest that any of the other variables might be useful in predicting mpg? Justify your answer.**
```{r}
pairs(Auto[, quantitative_vars])
```
*cylinders*, *horsepower*, and *year* are good candidates, as they are clearly correlated with the *mpg* variable.
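The same impression can be checked numerically by ranking the quantitative predictors by the absolute value of their correlation with *mpg*:
```{r}
cors <- cor(Auto[, quantitative_vars])["mpg", ]
sort(abs(cors[names(cors) != "mpg"]), decreasing = TRUE)
```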
10. **This exercise involves the Boston housing data set.**
- **How many rows are in this data set? How many columns? What do the rows and columns represent?**
The data frame has 506 rows and 13 columns. Rows represent suburbs (census tracts) of Boston, and columns represent different housing-related indicators.
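This can be verified directly:
```{r}
dim(Boston)
names(Boston)
```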
- **Make some pairwise scatterplots of the predictors (columns) in this data set. Describe your findings.**
Since all the columns are numeric, let's find the variable pairs with the highest correlations.
```{r}
BostonRelations <-
  cor(Boston) |>
  (\(m) data.frame(row = rownames(m)[row(m)[upper.tri(m)]],
                   col = colnames(m)[col(m)[upper.tri(m)]],
                   cor = m[upper.tri(m)]))() |>
  (\(DF) DF[order(-abs(DF$cor)), ])()
head(BostonRelations, 3)
```
As we can see, suburbs with high accessibility to radial highways also tend to have a higher full-value property-tax rate.
```{r}
with(Boston, plot(rad, tax,
                  col = rgb(0, 0, 1, alpha = 0.1),
                  pch = 16, cex = 1.5))
```
```{r}
with(Boston, plot(dis, nox,
                  col = rgb(0, 0, 1, alpha = 0.2),
                  pch = 16, cex = 1.5))
```
```{r}
with(Boston, plot(indus, nox,
                  col = rgb(0, 0, 1, alpha = 0.1),
                  pch = 16, cex = 1.5))
```
Similarly, nitrogen-oxide concentration (*nox*) falls as distance to employment centers (*dis*) grows, and rises with the proportion of industrial land (*indus*).
- **Are any of the predictors associated with per capita crime rate? If so, explain the relationship.**
```{r}
with(BostonRelations, BostonRelations[row == "crim", ])
```
```{r}
with(Boston, plot(rad, crim,
                  col = rgb(0, 0, 1, alpha = 0.1),
                  pch = 16, cex = 1.5))
```
```{r}
with(Boston, plot(tax, crim,
                  col = rgb(0, 0, 1, alpha = 0.1),
                  pch = 16, cex = 1.5))
```
Yes: *crim* correlates most strongly with *rad* and *tax*, so tracts with greater access to radial highways and higher property-tax rates tend to have higher per-capita crime rates.
- **Do any of the census tracts of Boston appear to have particularly high crime rates? Tax rates? Pupil-teacher ratios? Comment on the range of each predictor.**
```{r}
par(mfrow = c(3, 1))
boxplot(Boston$crim, horizontal = TRUE)
hist(Boston$tax)
hist(Boston$ptratio)
par(mfrow = c(1, 1))
```
Most tracts have a per-capita crime rate near zero, but a few are extreme outliers with very high rates. The tax-rate histogram shows a large cluster of tracts at a single very high rate, while pupil-teacher ratios span a much narrower range.
- **How many of the census tracts in this data set bound the Charles river?**
```{r}
sum(Boston$chas)
```
- **What is the median pupil-teacher ratio among the towns in this data set?**
```{r}
median(Boston$ptratio)
```
- **Which census tract of Boston has lowest median value of owner-occupied homes? What are the values of the other predictors for that census tract, and how do those values compare to the overall ranges for those predictors? Comment on your findings.**
```{r}
MinOwnerOccupiedHomes <- Boston[which.min(Boston$medv), ]
MinOwnerOccupiedHomes
# Mark this tract's value on a histogram of each other variable
VarsToPlot <- setdiff(names(Boston), "medv")
for (variable in VarsToPlot) {
  hist(Boston[[variable]],
       main = paste("Histogram of", variable), xlab = variable)
  abline(v = MinOwnerOccupiedHomes[[variable]], col = "blue", lwd = 2)
}
```
This tract sits at or near the unfavorable extreme of most predictors: a very high crime rate, maximal *age* and *rad*, and high *tax*, *ptratio*, and *lstat* relative to the rest of the data.
- **In this data set, how many of the census tracts average more than seven rooms per dwelling? More than eight rooms per dwelling? Comment on the census tracts that average more than eight rooms per dwelling.**
```{r}
sum(Boston$rm > 7)
sum(Boston$rm > 8)
BostonRelations[BostonRelations$row == "rm" | BostonRelations$col == "rm",]
par(mfrow = c(1, 2))
with(Boston, plot(rm, medv,
                  col = rgb(0, 0, 1, alpha = 0.2),
                  pch = 16, cex = 1.5))
abline(v = 8, col = "red", lwd = 2)
with(Boston, plot(rm, lstat,
                  col = rgb(0, 0, 1, alpha = 0.2),
                  pch = 16, cex = 1.5))
abline(v = 8, col = "red", lwd = 2)
par(mfrow = c(1, 1))
```
Tracts that average more than eight rooms per dwelling tend to have high median home values and a low proportion of lower-status residents.