Bayes Rules! Chapter 9 Exercises

Quest into the Bayesian realm.
Author: Farhan Reynaldo Hutabarat

Published: September 25, 2022

Modified: September 25, 2022

Conceptual exercises

Exercise 9.1 (Normal regression priors)

For the Normal regression model (9.6) with Yi | β0, β1, σ ~ N(μi, σ²) where μi = β0 + β1Xi, we utilized Normal priors for β0 and β1, and an Exponential prior on σ.

  1. Why is a Normal prior a reasonable choice for β0 and β1?
  2. Why isn’t a Normal prior a reasonable choice for σ?
  3. What’s the difference between weakly informative and vague priors?

Answer

  1. Because an intercept or slope can take any real value: the predictor could be positively or negatively associated with the outcome, and a Normal prior places mass on the whole real line.
  2. The standard deviation σ in the model must be positive, so a Normal prior, which places mass on negative values, is not a reasonable choice (see the sketch after this list).
  3. A vague prior spreads its mass over values that include non-sensible ones, while a weakly informative prior concentrates on broadly sensible values. For instance, a vague prior might consider it plausible that a 1 degree Celsius increase raises ice cream sales by 100,000 units.
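As a quick illustration of point 2, a minimal sketch (the prior means and scales here are made up for illustration):

set.seed(84735)

# A Normal prior places some mass on impossible negative values of sigma,
# while an Exponential prior is restricted to positive values
normal_draws <- rnorm(10000, mean = 1000, sd = 500)
exp_draws    <- rexp(10000, rate = 1/1000)
mean(normal_draws < 0)  # roughly 0.02: some prior draws are negative
mean(exp_draws < 0)     # exactly 0: every draw is positive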

Exercise 9.2 (Identify the variable)

Identify the response variable (Y) and predictor variable (X) in each given relationship of interest.

  1. We want to use a person’s arm length to understand their height.
  2. We want to predict a person’s carbon footprint (in annual CO2 emissions) with the distance between their home and work.
  3. We want to understand how a child’s vocabulary level might increase with age.
  4. We want to use information about a person’s sleep habits to predict their reaction time.

Answer

  1. X = arm length and Y = height.
  2. X = distance between home and work and Y = carbon footprint.
  3. X = age and Y = child’s vocabulary.
  4. X = sleep habits and Y = reaction time.

Exercise 9.3 (Interpreting coefficients)

In each situation below, suppose that the typical relationship between the given response variable Y and predictor X can be described by β0+β1X. Interpret the meaning of β0 and β1 and indicate whether your prior understanding suggests that β1 is negative or positive.

  1. Y = height in cm of a baby kangaroo, X = its age in months.
  2. Y = a data scientist’s number of GitHub followers, X = their number of GitHub commits in the past week.
  3. Y = number of visitors to a local park on a given day, X = rainfall in inches on that day.
  4. Y = the daily hours of Netflix that a person watches, X = the typical number of hours that they sleep.

Answer

  1. For every one-month increase in age, we expect height to change by β1 cm; β0 denotes the expected height at age zero months. Our prior understanding suggests β1 is positive (a worked example follows this list).
  2. For every additional GitHub commit in the past week, we expect the data scientist’s number of GitHub followers to change by β1; β0 denotes the expected number of followers for someone with zero commits. We suspect β1 is positive.
  3. For every one-inch increase in rainfall, we expect the number of visitors to the local park to change by β1; β0 denotes the expected number of visitors on a day with no rainfall. We suspect β1 is negative.
  4. For every additional hour of typical sleep, we expect the daily hours of Netflix a person watches to change by β1; β0 denotes the expected hours of Netflix watched by someone who gets zero hours of sleep. We suspect β1 is negative.
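As a concrete (hypothetical) illustration of part 1: if β0 = 30 and β1 = 5, then a newborn kangaroo has an expected height of 30 cm, a 10-month-old has an expected height of 30 + 5 × 10 = 80 cm, and each additional month adds 5 cm to the expected height.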

Exercise 9.4 (Deviation from the average)

Consider the Normal regression model (9.6). Explain in one or two sentences, in a way that one of your non-stats friends could understand, how σ is related to the strength of the relationship between a response variable Y and predictor X.

Answer

σ measures our data’s variability around the model line: the smaller σ is, the more tightly the data cluster around the trend. For example, if the average is 100 and σ is 10, observations typically deviate from the average by only about 10, so the data hug the line and the predictor X tells us a lot about Y. If σ is 50 instead, observations can stray from the average by as much as 50 or so, and the relationship between X and Y is correspondingly weaker.
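A minimal simulation sketch of this idea, reusing the numbers above (an average near 100 and σ of 10 versus 50; the slope of 5 is arbitrary, chosen just for illustration):

library(ggplot2)
set.seed(84735)

# Same underlying line, two different sigmas
sim <- data.frame(
  x = rep(runif(100, 0, 10), 2),
  sigma = rep(c(10, 50), each = 100)
)
sim$y <- 100 + 5 * sim$x + rnorm(200, sd = sim$sigma)

# Small sigma: points hug the line (strong relationship);
# large sigma: points scatter widely (weak relationship)
ggplot(sim, aes(x, y)) + 
  geom_point(size = 0.5) + 
  geom_smooth(method = "lm", se = FALSE) + 
  facet_wrap(~ sigma)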

Exercise 9.5 (Bayesian model building: Part I)

A researcher wants to use a person’s age (in years) to predict their annual orange juice consumption (in gallons). Here you’ll build up a relevant Bayesian regression model, step by step.

  1. Identify the Y and X variables in this study.
  2. Use mathematical notation to specify an appropriate structure for the model of data Y (ignoring X for now).
  3. Rewrite the structure of the data model to incorporate information about predictor X. In doing so, assume there’s a linear relationship between Y and X.
  4. Identify all unknown parameters in your model. For each, indicate the values the parameter can take.
  5. Identify and tune suitable prior models for your parameters. Explain your rationale.

Answer

  1. Y = annual orange juice consumption in gallons; X = age in years.
  2. Ignoring the predictor, Yi | μ, σ ~ N(μ, σ²).
  3. Incorporating the predictor linearly, Yi | β0, β1, σ ~ N(μi, σ²) with μi = β0 + β1Xi.
  4. The unknown parameters are β0 and β1, which can take any real value, and σ, which must be positive.
  5. Normal priors for β0 and β1 and an Exponential prior for σ are suitable: the Normal priors allow the intercept and slope to be positive or negative, while the Exponential respects the constraint that σ is positive. We would center β0 on a rough guess of typical consumption and, lacking a strong hunch about the age effect, center β1 near 0 with a wide, weakly informative scale.

Exercise 9.6 (Bayesian model building: Part II)

Repeat the above exercise for the following scenario. A researcher wishes to predict tomorrow’s high temperature by today’s high temperature.

Answer

Following the same steps: Y = tomorrow’s high temperature and X = today’s high temperature; Yi | β0, β1, σ ~ N(μi, σ²) with μi = β0 + β1Xi; β0 and β1 can take any real value while σ must be positive; and Normal priors for β0 and β1 with an Exponential prior for σ are suitable. Since consecutive days tend to have similar highs, centering the prior for β1 near 1 would be reasonable.

Exercise 9.7 (Posterior simulation T/F)

Mark each statement about posterior regression simulation as True or False.

  • MCMC provides the exact posterior model of our regression parameters (β0, β1, σ).
  • MCMC allows us to avoid complicated mathematical derivations.

Answer

  • False. MCMC produces an approximation of the posterior model from simulated draws, not the exact posterior.
  • True. MCMC allows us to approximate the posterior without complicated mathematical derivations.

Exercise 9.8 (Posterior simulation)

For each situation, specify the appropriate stan_glm() syntax for simulating the Normal regression model using 4 chains, each of length 10000. (You won’t actually run any code.)

  1. X = age; Y = height; dataset name: bunnies.
  2. Clicksi | β0, β1, σ ~ N(μi, σ²) with μi = β0 + β1Snapsi; dataset name: songs.
  3. Happinessi | β0, β1, σ ~ N(μi, σ²) with μi = β0 + β1Agei; dataset name: dogs.
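Answer

A sketch of the syntax (not actually run). The lowercase column names assumed for the songs and dogs datasets (clicks, snaps, happiness, age) are read off the model notation, not confirmed against the data:

# 1. height by age in the bunnies data
stan_glm(height ~ age, data = bunnies, family = gaussian,
         chains = 4, iter = 10000)

# 2. clicks by snaps in the songs data
stan_glm(clicks ~ snaps, data = songs, family = gaussian,
         chains = 4, iter = 10000)

# 3. happiness by age in the dogs data
stan_glm(happiness ~ age, data = dogs, family = gaussian,
         chains = 4, iter = 10000)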

Applied exercises

Exercise 9.9 (How humid is too humid: model building)

Throughout this chapter, we explored how bike ridership fluctuates with temperature. But what about humidity? In the next exercises, you will explore the Normal regression model of rides (Y) by humidity (X) using the bikes dataset. Based on past bikeshare analyses, suppose we have the following prior understanding of this relationship:

  • On an average humidity day, there are typically around 5000 riders, though this average could be somewhere between 1000 and 9000.
  • Ridership tends to decrease as humidity increases. Specifically, for every one percentage point increase in humidity level, ridership tends to decrease by 10 rides, though this average decrease could be anywhere between 0 and 20.
  • Ridership is only weakly related to humidity. At any given humidity, ridership will tend to vary with a large standard deviation of 2000 rides.
  1. Tune the Normal regression model (9.6) to match our prior understanding. Use careful notation to write out the complete Bayesian structure of this model.
  2. To explore our combined prior understanding of the model parameters, simulate the Normal regression prior model with 5 chains run for 8000 iterations each. HINT: You can use the same stan_glm() syntax that you would use to simulate the posterior, but include prior_PD = TRUE.
  3. Plot 100 prior plausible model lines (β0+β1Xi) and 4 datasets simulated under the priors.
  4. Describe our overall prior understanding of the relationship between ridership and humidity.

Answer

a.

$$
\begin{aligned}
Y_i \mid \beta_0, \beta_1, \sigma &\sim N(\mu_i, \sigma^2) \quad \text{with} \quad \mu_i = \beta_0 + \beta_1 X_i \\
\beta_{0c} &\sim N(5000, 2000^2) \\
\beta_1 &\sim N(-10, 5^2) \\
\sigma &\sim \text{Exp}(1/2000)
\end{aligned}
$$

b.

Code
library(bayesrules)
library(tidyverse)
library(rstanarm)
library(tidybayes)
library(bayesplot)
library(broom.mixed)
library(patchwork)
library(janitor)

data(bikes)
bike_model_prior <- stan_glm(rides ~ humidity, data = bikes,
                       family = gaussian,
                       prior_intercept = normal(5000, 2000),
                       prior = normal(-10, 5), 
                       prior_aux = exponential(1/2000),
                       chains = 5, iter = 8000, prior_PD = TRUE)
bikes %>% 
  add_epred_draws(bike_model_prior, ndraws = 100) %>% 
  ggplot(aes(humidity, rides)) + 
  geom_line(aes(y = .epred, group = .draw), size = 0.1)

set.seed(12)

bikes %>%
  add_predicted_draws(bike_model_prior, ndraws = 4) %>% 
  ggplot(aes(humidity, rides)) + 
  geom_point(aes(y = .prediction, group = .draw), size = 0.2) + 
  facet_wrap(~ .draw)

  d. Our overall prior understanding is that, on average, ridership decreases as humidity increases, though the wide spread of prior model lines and simulated datasets reflects considerable remaining uncertainty about this relationship.

Exercise 9.10 (How humid is too humid: data)

With the priors in place, let’s examine the data.

  1. Plot and discuss the observed relationship between ridership and humidity in the bikes data.
  2. Does simple Normal regression seem to be a reasonable approach to modeling this relationship? Explain.

Answer

  a. The relationship between humidity and ridership is not very clear from the scatterplot alone; however, fitting a linear model to the data shows that, on average, ridership decreases slightly as humidity increases.
ggplot(bikes, aes(x = humidity, y = rides)) + 
  geom_point(size = 0.5) + 
  geom_smooth(method = "lm", se = FALSE)

  b. Yes. Based on the figure above, there is a slight negative, roughly linear relationship between humidity and ridership, so simple Normal regression is a reasonable modeling approach.

Exercise 9.11 (How humid is too humid: posterior simulation)

We can now simulate our posterior model of the relationship between ridership and humidity, a balance between our prior understanding and the data.

  1. Use stan_glm() to simulate the Normal regression posterior model. Do so with 5 chains run for 8000 iterations each. HINT: You can either do this from scratch or update() your prior simulation from Exercise 9.9 using prior_PD = FALSE.
  2. Perform and discuss some MCMC diagnostics to determine whether or not we can “trust” these simulation results.
  3. Plot 100 posterior model lines for the relationship between ridership and humidity. Compare and contrast these to the prior model lines from Exercise 9.9.

Answer

bike_model <- update(bike_model_prior, prior_PD = FALSE)
  b. The results below show that the effective sample size ratios and R-hat values are very close to 1, indicating that the chains are stable and well-mixed. The trace and density plots confirm this visually.
# Effective sample size ratio and Rhat
neff_ratio(bike_model)
(Intercept)    humidity       sigma 
    0.97620     0.97205     0.98455 
rhat(bike_model)
(Intercept)    humidity       sigma 
  0.9999065   0.9999237   0.9998410 
# Trace plots of parallel chains
trace_plot <- mcmc_trace(bike_model, size = 0.1)

# Density plots of parallel chains
density_plot <- mcmc_dens_overlay(bike_model)

trace_plot / density_plot

  c. Compared to the prior, the posterior relationship between humidity and rides is steeper than we expected. Moreover, the posterior model lines are far less variable than the prior lines: after conditioning on the data, we are much more confident about the relationship between humidity and ridership.
bikes %>%
  add_epred_draws(bike_model, ndraws = 100) %>% 
  ggplot(aes(x = humidity, y = rides)) +
    geom_line(aes(y = .epred, group = .draw), alpha = 0.15) + 
    geom_point(data = bikes, size = 0.05)

Exercise 9.12 (How humid is too humid: posterior interpretation)

Finally, let’s dig deeper into our posterior understanding of the relationship between ridership and humidity.

  1. Provide a tidy() summary of your posterior model, including 95% credible intervals.
  2. Interpret the posterior median value of the σ parameter.
  3. Interpret the 95% posterior credible interval for the humidity coefficient, β1.
  4. Do we have ample posterior evidence that there’s a negative association between ridership and humidity? Explain.

Answer

# Posterior summary statistics
tidy(bike_model, effects = c("fixed", "aux"),
     conf.int = TRUE, conf.level = 0.95)
# A tibble: 4 × 5
  term        estimate std.error conf.low conf.high
  <chr>          <dbl>     <dbl>    <dbl>     <dbl>
1 (Intercept)  4019.      226.     3575.    4461.  
2 humidity       -8.42      3.38    -15.0     -1.83
3 sigma        1573.       49.6    1481.    1674.  
4 mean_PPD     3483.       98.5    3292.    3676.  
  b. Based on the table above, σ has a posterior median of 1573 with a 95% credible interval of [1481, 1674]. That is, at any given humidity level, we expect observed ridership on a given day to typically deviate from the average ridership by around 1573 rides.

  c. There is a 95% posterior probability that the humidity coefficient β1 is somewhere between -15.0 and -1.83; that is, for every one percentage point increase in humidity, ridership decreases by somewhere between roughly 2 and 15 rides on average.

  d. Yes, we have ample evidence of a negative association between ridership and humidity. First, visual inspection of the posterior model lines plotted above shows a negative association; had there been no association or a positive one, the figure would have shown flat or positive slopes. Second, the posterior summary for the humidity coefficient shows a 95% credible interval that does not overlap zero. Lastly, a quick tabulation of the posterior draws shows that the association is almost certainly negative.

bike_model_df <- as.data.frame(bike_model)

bike_model_df %>% 
  mutate(exceeds_0 = humidity > 0) %>% 
  tabyl(exceeds_0)
 exceeds_0     n percent
     FALSE 19871 0.99355
      TRUE   129 0.00645
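Equivalently, the posterior probability that the humidity coefficient is negative can be computed directly; it matches the 0.994 from the tabulation above:

# Direct posterior probability of a negative humidity coefficient
bike_model_df %>% 
  summarize(prob_negative = mean(humidity < 0))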

Exercise 9.13 (How humid is too humid: prediction)

Tomorrow is supposed to be 90% humidity in Washington, D.C. What levels of ridership should we expect?

  1. Without using the posterior_predict() shortcut function, simulate two posterior models:
  • the posterior model for the typical number of riders on 90% humidity days; and
  • the posterior predictive model for the number of riders tomorrow.
  2. Construct, discuss, and compare density plot visualizations for the two separate posterior models in part a.
  3. Calculate and interpret an 80% posterior prediction interval for the number of riders tomorrow.
  4. Use posterior_predict() to confirm the results from your posterior predictive model of tomorrow’s ridership.

Answer

  a. As shown in the table below, the posterior median of the typical number of riders on a 90% humidity day is about 3260, with a 95% credible interval of [3037, 3482]. The posterior predictive median for the number of riders tomorrow is also around 3260, but with a much wider 95% credible interval of roughly [193, 6346], since individual days vary far more than the underlying average.
predict_90 <- bike_model_df %>% 
  mutate(mu = `(Intercept)` + humidity * 90,
         y_new = rnorm(20000, mean = mu, sd = sigma))

predict_90 %>% 
    summarize(
      median_mu = quantile(mu, 0.5),
      lower_mu = quantile(mu, 0.025),
      upper_mu = quantile(mu, 0.975),
      median_new = quantile(y_new, 0.5),
      lower_new = quantile(y_new, 0.025),
      upper_new = quantile(y_new, 0.975)
      )
  median_mu lower_mu upper_mu median_new lower_new upper_new
1  3260.318 3037.016 3482.108   3259.729  193.3223  6345.985
mu_plot <- predict_90 %>% 
  ggplot(aes(mu)) + 
  geom_density()

y_new_plot <- predict_90 %>% 
  ggplot(aes(y_new)) + 
  geom_density()

mu_plot + y_new_plot

prediction <- 
  posterior_predict(bike_model, newdata = data.frame(humidity = 90))

posterior_interval(prediction, prob = 0.95)
      2.5%    97.5%
1 216.9485 6364.144
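Note that part c asks for an 80% interval, while the 95% intervals are reported above. A minimal sketch using the same objects:

# 80% posterior prediction interval for tomorrow's ridership
posterior_interval(prediction, prob = 0.80)

# Equivalent computation from the manual simulation
predict_90 %>% 
  summarize(lower_80 = quantile(y_new, 0.10),
            upper_80 = quantile(y_new, 0.90))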

Exercise 9.14 (On your own: Part I)

Temperature and humidity aren’t the only possible weather factors in ridership. Let’s explore the relationship between ridership (Y) and windspeed (X).

  1. Describe your own prior understanding of the relationship between ridership and wind speed.
  2. Tune the Normal regression model (9.6) to match your prior understanding. Use careful notation to write out the complete Bayesian structure of this model.
  3. Plot and discuss 100 prior plausible model lines (β0+β1X) and 4 datasets simulated under the priors.
  4. Construct and discuss a plot of rides versus windspeed using the bikes data. How consistent are the observed patterns with your prior understanding of this relationship?

Answer

  a. Our prior understanding of the relationship between wind speed and ridership is as follows:
  • On a day of average wind speed, there are typically around 5000 riders, though this average could be somewhere between 2000 and 8000.
  • Ridership tends to decrease as wind speed increases. For every 1 unit increase in wind speed, we suspect that ridership decreases on average by 30 rides, though the average decrease could be anywhere between 10 and 50.
  • Ridership has a relatively weak relationship with wind speed. At any given wind speed, ridership will tend to vary with a large standard deviation of 1500 rides.
  b. Plugging in our priors, the mathematical notation is written as follows:

$$
\begin{aligned}
Y_i \mid \beta_0, \beta_1, \sigma &\sim N(\mu_i, \sigma^2) \quad \text{with} \quad \mu_i = \beta_0 + \beta_1 X_i \\
\beta_{0c} &\sim N(5000, 1500^2) \\
\beta_1 &\sim N(-30, 10^2) \\
\sigma &\sim \text{Exp}(1/1500)
\end{aligned}
$$

c.

wind_model_prior <- stan_glm(rides ~ windspeed, data = bikes, family = gaussian, 
                       prior_intercept = normal(5000, 1500), 
                       prior = normal(-30, 10), 
                       prior_aux = exponential(1/1500), 
                       prior_PD = TRUE)
bikes %>% 
  add_epred_draws(wind_model_prior, ndraws = 100) %>% 
  ggplot(aes(windspeed, rides)) + 
  geom_line(aes(y = .epred, group = .draw), size = 0.1)

set.seed(12)

bikes %>% 
  add_predicted_draws(wind_model_prior, ndraws = 4) %>% 
  ggplot(aes(windspeed, rides)) + 
  geom_point(aes(y = .prediction, group = .draw), size = 0.1) + 
  facet_wrap(~ .draw)

ggplot(bikes, aes(x = windspeed, y = rides)) + 
  geom_point(size = 0.5) + 
  geom_smooth(method = "lm", se = FALSE)

Exercise 9.15 (On your own: Part II)

In this open-ended exercise, conduct a posterior analysis of the relationship between ridership (Y) and windspeed (X). This should include a discussion of your posterior understanding of this relationship along with supporting evidence.

Answer

wind_model <- update(wind_model_prior, prior_PD = FALSE)
neff_ratio(wind_model)
(Intercept)   windspeed       sigma 
    0.98875     0.96925     0.94625 
rhat(wind_model)
(Intercept)   windspeed       sigma 
   1.000475    1.001087    1.000832 
# Trace plots of parallel chains
trace_plot <- mcmc_trace(wind_model, size = 0.1)

# Density plots of parallel chains
density_plot <- mcmc_dens_overlay(wind_model)

trace_plot / density_plot

bikes %>% 
  add_epred_draws(wind_model, ndraws = 100) %>% 
  ggplot(aes(windspeed, rides)) + 
  geom_line(aes(y = .epred, group = .draw), alpha = 0.15) + 
  geom_point(size = 0.1)

tidy(wind_model, effects = c("fixed", "aux"), 
     conf.int = TRUE, conf.level = 0.95)
# A tibble: 4 × 5
  term        estimate std.error conf.low conf.high
  <chr>          <dbl>     <dbl>    <dbl>     <dbl>
1 (Intercept)   4005.     123.     3769.     4241. 
2 windspeed      -40.0      7.74    -55.6     -24.0
3 sigma         1548.      48.0    1457.     1649. 
4 mean_PPD      3483.      95.8    3299.     3671. 
wind_model_df <- as.data.frame(wind_model)

wind_model_df %>% 
  mutate(exceeds_0 = windspeed > 0) %>% 
  tabyl(exceeds_0)
 exceeds_0    n percent
     FALSE 4000       1
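In sum: the diagnostics above (effective sample size ratios and R-hat values near 1, along with well-mixed trace and density plots) suggest we can trust the simulation, and the posterior points to a negative association between ridership and wind speed. The windspeed coefficient has a posterior median of roughly -40 rides per unit of wind speed, with a 95% credible interval of [-55.6, -24.0] that lies entirely below zero, and all 4000 posterior draws of the coefficient are negative.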

Exercise 9.16 (Penguins: model building and simulation)

The penguins_bayes dataset in the bayesrules package includes data on 344 penguins. In the next exercises, you will use this data to model the length of penguin flippers in mm (Y) by the length of their bills in mm (X). We have a general sense that the average penguin has flippers that are somewhere between 150mm and 250mm long. Beyond that, we don’t have a strong understanding of the relationship between flipper and bill length, and thus will otherwise utilize weakly informative priors.

  1. Simulate the Normal regression prior model of flipper_length_mm by bill_length_mm using 4 chains for 10000 iterations each. HINT: You can use the same stan_glm() syntax that you would use to simulate the posterior, but include prior_PD = TRUE.
  2. Check the prior_summary() and use this to write out the complete structure of your Normal regression model (9.6).
  3. Plot 100 prior plausible model lines (β0+β1X) and 4 datasets simulated under the priors.
  4. Summarize your weakly informative prior understanding of the relationship between flipper and bill length.

Answer

data(penguins_bayes)

flipper_prior <- stan_glm(
  flipper_length_mm ~ bill_length_mm, 
  data = penguins_bayes, 
  family = gaussian, 
  prior_intercept = normal(200, 2.5, autoscale = TRUE),
  prior = normal(0, 2.5, autoscale = TRUE), 
  prior_aux = exponential(1, autoscale = TRUE), 
  prior_PD = TRUE,
  chains = 4,
  iter = 10000
)

prior_summary(flipper_prior)
Priors for model 'flipper_prior' 
------
Intercept (after predictors centered)
  Specified prior:
    ~ normal(location = 200, scale = 2.5)
  Adjusted prior:
    ~ normal(location = 200, scale = 35)

Coefficients
  Specified prior:
    ~ normal(location = 0, scale = 2.5)
  Adjusted prior:
    ~ normal(location = 0, scale = 6.4)

Auxiliary (sigma)
  Specified prior:
    ~ exponential(rate = 1)
  Adjusted prior:
    ~ exponential(rate = 0.071)
------
See help('prior_summary.stanreg') for more details
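Reading the adjusted priors off this summary, the complete structure of the Normal regression model (the second task) can be written as:

$$
\begin{aligned}
Y_i \mid \beta_0, \beta_1, \sigma &\sim N(\mu_i, \sigma^2) \quad \text{with} \quad \mu_i = \beta_0 + \beta_1 X_i \\
\beta_{0c} &\sim N(200, 35^2) \\
\beta_1 &\sim N(0, 6.4^2) \\
\sigma &\sim \text{Exp}(0.071)
\end{aligned}
$$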
penguins_bayes %>% 
  drop_na() %>% 
  add_epred_draws(flipper_prior, ndraws = 100) %>% 
  ggplot(aes(bill_length_mm, flipper_length_mm)) + 
  geom_line(aes(y = .epred, group = .draw), size = 0.1)

set.seed(12)

penguins_bayes %>% 
  drop_na() %>% 
  add_predicted_draws(flipper_prior, ndraws = 4) %>% 
  ggplot(aes(bill_length_mm, flipper_length_mm)) + 
  geom_point(aes(y = .prediction, group = .draw), size = 0.1) + 
  facet_wrap(~.draw)
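In short (the final task): these weakly informative priors commit only to typical flipper lengths somewhere in the general range of 150mm to 250mm. The prior model lines and simulated datasets span a wide range of both positive and negative slopes, reflecting no strong prior conviction about how flipper length relates to bill length.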

Exercise 9.17 (Penguins: data)

With the priors in place, let’s examine the data.

  a. Plot and discuss the observed relationship between flipper_length_mm and bill_length_mm among the 344 sampled penguins.
  b. Does simple Normal regression seem to be a reasonable approach to modeling this relationship? Explain.

Answer

  a. The scatterplot below suggests a positive, roughly linear relationship: penguins with longer bills tend to have longer flippers.
  b. Yes. Given this roughly linear trend, simple Normal regression seems to be a reasonable approach to modeling the relationship.

ggplot(penguins_bayes, aes(bill_length_mm, flipper_length_mm)) +
  geom_point() + 
  geom_smooth(method = lm, se = FALSE)
