The first of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to run an intercept-only regression model.
The second of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to run a regression with a single predictor.
The third of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to model a correlation.
The fourth of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to model the difference between two groups.
This is the first post in a series on understanding regression. We focus on the main question we should be asking when using regression.
In Part 1, we proposed that regression is about choosing and fitting distributions to data. In this post, we explain why the normal distribution is often the right choice.
We've chosen the normal distribution to describe heights. Now we need to estimate its parameters μ and σ from our sample. This post shows that lm() does exactly this: the intercept estimates μ (the mean) and the residual standard error estimates σ.
Every estimate is based on a sample, and a different sample would give different results. This post builds intuition for what that means by simulating how much the sample mean varies across studies and by deriving the standard error formula.
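The sampling idea in that last post can be sketched with a short simulation (shown here in Python for brevity, though the series itself uses R; the values of μ, σ, and n are illustrative assumptions): draw many samples, compute each sample's mean, and compare the spread of those means to the theoretical standard error σ/√n.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 170.0, 10.0, 50  # hypothetical height distribution and sample size
n_studies = 10_000              # number of simulated "studies"

# Each row is one study: n heights drawn from Normal(mu, sigma).
# Taking the row means gives one sample mean per study.
sample_means = rng.normal(mu, sigma, size=(n_studies, n)).mean(axis=1)

# The standard deviation of the sample means approximates the
# standard error of the mean, sigma / sqrt(n).
empirical_se = sample_means.std()
theoretical_se = sigma / np.sqrt(n)
print(empirical_se, theoretical_se)
```

With 10,000 simulated studies the empirical spread of the means should sit close to σ/√n ≈ 1.41, which is the intuition the formula captures.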