1. Probabilities
After completion of this topic you will be able to
- Describe and distinguish between continuous and discrete random variables
- Define & distinguish between the probability density function, the cumulative distribution function and the inverse cumulative distribution function
- Calculate the probability of an event given a discrete probability function
- Distinguish between independent and mutually exclusive events
- Define joint probability, describe a probability matrix and calculate joint probabilities using probability matrices
- Define and calculate a conditional probability and distinguish between conditional & unconditional probabilities
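As an illustration of the joint, unconditional and conditional probability calculations listed above, here is a minimal Python sketch using a hypothetical probability matrix (rating changes vs. economic states — all numbers invented):

```python
# Hypothetical probability matrix for two discrete events: a bond's rating
# change (rows) and the state of the economy (columns). Entries are joint
# probabilities and sum to 1.
joint = {
    ("upgrade",   "expansion"): 0.15, ("upgrade",   "recession"): 0.05,
    ("unchanged", "expansion"): 0.30, ("unchanged", "recession"): 0.20,
    ("downgrade", "expansion"): 0.10, ("downgrade", "recession"): 0.20,
}

# Unconditional (marginal) probability of a recession: sum over all ratings.
p_recession = sum(p for (r, e), p in joint.items() if e == "recession")

# Conditional probability P(downgrade | recession) = joint / marginal.
p_down_given_rec = joint[("downgrade", "recession")] / p_recession

# Independence check: does P(downgrade and recession) equal
# P(downgrade) * P(recession)? If not, the events are dependent.
p_downgrade = sum(p for (r, e), p in joint.items() if r == "downgrade")
independent = abs(joint[("downgrade", "recession")]
                  - p_downgrade * p_recession) < 1e-12
```

Note that dependent events are not the same as mutually exclusive events: here a downgrade and a recession can occur together (joint probability 0.20), so they are not mutually exclusive even though they are dependent.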
2. Basic Statistics
After completion of this topic you will be able to
- Interpret and apply the mean, standard deviation and variance of a random variable
- Calculate the mean, standard deviation and variance of a discrete random variable
- Interpret and calculate the expected value of a discrete random variable
- Calculate and interpret the covariance and correlation between two random variables
- Calculate the mean and variance of sums of variables
- Describe the four central moments of a statistical variable or distribution: mean, variance, skewness and kurtosis
- Interpret the skewness and kurtosis of a statistical distribution and interpret the concepts of coskewness and cokurtosis
- Describe and interpret the best linear unbiased estimator
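The moment calculations for a discrete random variable can be sketched as follows; the distribution of portfolio returns below is hypothetical:

```python
import math

# Hypothetical discrete distribution: portfolio return (%) with probabilities.
outcomes = [(-5.0, 0.2), (0.0, 0.3), (5.0, 0.4), (10.0, 0.1)]

# First central moment: the mean (expected value).
mean = sum(p * x for x, p in outcomes)

# Second central moment: the variance; its square root is the standard deviation.
variance = sum(p * (x - mean) ** 2 for x, p in outcomes)
std_dev = math.sqrt(variance)

# Third and fourth standardized central moments: skewness and kurtosis.
skewness = sum(p * (x - mean) ** 3 for x, p in outcomes) / std_dev ** 3
kurtosis = sum(p * (x - mean) ** 4 for x, p in outcomes) / std_dev ** 4
```

For these numbers the distribution is slightly left-skewed (skewness below zero) and has kurtosis below the normal distribution's value of 3.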
3. Distributions
After completion of this topic you will be able to
- Distinguish the key properties among the following distributions: uniform distribution, Bernoulli distribution, binomial distribution, Poisson distribution, normal distribution, lognormal distribution, chi-squared distribution, Student's t-distribution and F-distribution, and identify common occurrences of each distribution
- Describe the central limit theorem and the implications it has when combining independent and identically distributed (i.i.d.) random variables
- Describe i.i.d. random variables and the implications of the i.i.d. assumption when combining random variables
- Describe a mixture distribution and explain the creation and characteristics of mixture distributions
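The central limit theorem for i.i.d. variables can be demonstrated by simulation: averages of n i.i.d. uniform draws (each with mean 0.5 and variance 1/12) have mean 0.5 and variance 1/(12n), and their distribution approaches a normal as n grows. A sketch:

```python
import random
import statistics

random.seed(42)  # fixed seed for reproducibility

# Average n i.i.d. uniform(0, 1) draws, many times over.
n, trials = 30, 10_000
sample_means = [statistics.fmean(random.random() for _ in range(n))
                for _ in range(trials)]

# By the CLT these should cluster near 0.5 with variance near 1/(12*30).
clt_mean = statistics.fmean(sample_means)
clt_var = statistics.variance(sample_means)
```

The same averaging argument fails if the draws are not i.i.d. — dependence or changing distributions breaks the 1/(12n) variance scaling.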
4. Bayesian Analysis
After completion of this topic you will be able to
- Describe Bayes' theorem and apply this theorem in the calculation of conditional probabilities
- Compare the Bayesian approach to the frequentist approach
- Apply Bayes' theorem to scenarios with more than two possible outcomes and calculate posterior probabilities
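Bayes' theorem with more than two outcomes can be sketched as below; the three manager types, priors and likelihoods are all hypothetical:

```python
# Hypothetical: a fund manager is one of three unobserved types. Priors and
# the probability that each type beats the market in a given year:
priors = {"skilled": 0.2, "average": 0.5, "unskilled": 0.3}
p_beat = {"skilled": 0.7, "average": 0.5, "unskilled": 0.3}

# Law of total probability: P(beat) summed over all types.
p_evidence = sum(priors[t] * p_beat[t] for t in priors)

# Bayes' theorem: P(type | beat) = P(beat | type) * P(type) / P(beat).
posterior = {t: priors[t] * p_beat[t] / p_evidence for t in priors}
```

Observing a beat shifts probability toward the "skilled" type relative to its prior; this updating of priors into posteriors is the core of the Bayesian approach, in contrast to the frequentist view of probability as a long-run frequency.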
5. Hypothesis Testing and Confidence Intervals
After completion of this topic you will be able to
- Calculate and interpret the sample mean and sample variance
- Construct and interpret a confidence interval
- Construct an appropriate null and alternative hypothesis and calculate an appropriate test statistic
- Differentiate between a one-tailed and a two-tailed test and identify when to use each test
- Interpret the results of hypothesis tests with a specific level of confidence
- Demonstrate the process of backtesting VaR by calculating the number of exceedances
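The sample statistics, confidence interval, test statistic and VaR exceedance count above can be sketched in Python (returns and the VaR level are hypothetical; the normal critical value 1.96 is used as an approximation to the t critical value):

```python
import math
import statistics

# Hypothetical sample of daily returns (%).
returns = [0.5, -1.2, 0.8, 0.3, -0.7, 1.1, -0.4, 0.2, 0.9, -0.5]

n = len(returns)
sample_mean = statistics.fmean(returns)
sample_var = statistics.variance(returns)   # unbiased: divides by n - 1
se = math.sqrt(sample_var / n)

# Approximate 95% confidence interval for the mean (1.96 is the normal
# critical value; a t critical value is more accurate for small n).
ci = (sample_mean - 1.96 * se, sample_mean + 1.96 * se)

# Two-tailed test of H0: mean = 0 — reject at 5% if |t| > 1.96.
t_stat = (sample_mean - 0) / se

# VaR backtest: count exceedances, i.e. days the loss exceeded the 1-day VaR.
var_level = 1.0                             # hypothetical daily VaR (%)
exceedances = sum(1 for r in returns if -r > var_level)
```

A one-tailed test would instead compare t against a single critical value (e.g. 1.645 at 5%), appropriate when the alternative hypothesis is directional.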
6. Linear Regression with One Regressor
After completion of this topic you will be able to
- Explain how regression analysis in econometrics measures the relationship between dependent and independent variables
- Interpret a population regression function, regression coefficients, parameters, slope, intercept and the error term
- Interpret a sample regression function, regression coefficients, parameters, slope, intercept and the error term
- Describe the key properties of a linear regression
- Define an ordinary least squares (OLS) regression and calculate the intercept and slope of the regression
- Describe the method and three key assumptions of OLS for estimation of parameters
- Summarize the benefits of using OLS estimators
- Describe the properties of OLS estimators and their sampling distributions and explain the properties of consistent estimators in general
- Interpret the explained sum of squares, the total sum of squares, the residual sum of squares, the standard error of the regression and the regression R²
- Interpret the results of an OLS regression
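The OLS intercept, slope and sums-of-squares decomposition can be computed by hand; the data below (market excess return vs. a stock's excess return) are hypothetical:

```python
import statistics

# Hypothetical data: market excess return (x) vs. a stock's excess return (y).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

x_bar, y_bar = statistics.fmean(x), statistics.fmean(y)

# OLS: slope = sample cov(x, y) / sample var(x); the intercept makes the
# fitted line pass through the point of means.
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
s_xx = sum((xi - x_bar) ** 2 for xi in x)
slope = s_xy / s_xx
intercept = y_bar - slope * x_bar

# Decompose TSS = ESS + RSS and form R² = 1 - RSS / TSS.
fitted = [intercept + slope * xi for xi in x]
tss = sum((yi - y_bar) ** 2 for yi in y)
rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
r_squared = 1 - rss / tss
```

The standard error of the regression is sqrt(RSS / (n - 2)); here R² is close to 1 because the hypothetical data lie almost exactly on a line.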
7. Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals
After completion of this topic you will be able to
- Calculate and interpret confidence intervals for regression coefficients
- Interpret the p-value
- Interpret hypothesis tests about regression coefficients
- Evaluate the implications of homoskedasticity and heteroskedasticity
- Determine the conditions under which OLS is the best linear conditionally unbiased estimator
- Explain the Gauss-Markov theorem and its limitations and alternatives to OLS
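A confidence interval and hypothesis test for a regression slope can be sketched as follows, assuming homoskedastic errors. All inputs are hypothetical summary numbers from a single-regressor OLS fit:

```python
import math

# Hypothetical OLS summary: n observations, estimated slope, residual sum of
# squares and sum of squared deviations of the regressor.
n, slope, rss, s_xx = 32, 0.8, 4.5, 18.0

s_squared = rss / (n - 2)            # estimated error variance
se_slope = math.sqrt(s_squared / s_xx)

# Approximate 95% confidence interval (1.96 approximates the t critical
# value for n - 2 = 30 degrees of freedom).
ci = (slope - 1.96 * se_slope, slope + 1.96 * se_slope)

# Two-tailed test statistic for H0: slope = 0.
t_stat = slope / se_slope
reject_at_5pct = abs(t_stat) > 1.96
```

Under heteroskedasticity this standard error formula is invalid and a robust (heteroskedasticity-consistent) standard error should be used instead.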
8. Linear Regression with Multiple Regressors
After completion of this topic you will be able to
- Define and interpret omitted variable bias and describe the methods for addressing this bias
- Distinguish between single and multiple regression
- Interpret the slope coefficient in a multiple regression
- Describe homoskedasticity and heteroskedasticity in a multiple regression
- Describe the OLS estimator in a multiple regression
- Calculate and interpret measures of fit in multiple regression
- Explain the assumptions of the multiple linear regression model
- Explain the concepts of imperfect and perfect multicollinearity and their implications
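Perfect multicollinearity can be made concrete with a small sketch: if one regressor is an exact linear function of another, the X'X matrix is singular and the OLS coefficients are not identified. The regressors below are hypothetical (shown demeaned, intercept omitted):

```python
# Hypothetical demeaned regressors; x2 is exactly 2 * x1.
x1 = [1.0, -2.0, 3.0, -1.0, -1.0]
x2 = [2.0, -4.0, 6.0, -2.0, -2.0]

# Elements of the 2x2 X'X matrix.
s11 = sum(a * a for a in x1)
s22 = sum(b * b for b in x2)
s12 = sum(a * b for a, b in zip(x1, x2))

# Zero determinant means X'X cannot be inverted: perfect multicollinearity.
det = s11 * s22 - s12 ** 2
perfectly_collinear = abs(det) < 1e-9
```

With imperfect multicollinearity the determinant is small but nonzero: OLS is still identified, but coefficient standard errors become large.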
9. Hypothesis Tests and Confidence Intervals in Multiple Regression
After completion of this topic you will be able to
- Construct, apply and interpret hypothesis tests and confidence intervals for a single coefficient in a multiple regression
- Construct, apply and interpret joint hypothesis tests and confidence intervals for multiple coefficients in a multiple regression
- Interpret the F-statistic
- Interpret tests of a single restriction involving multiple coefficients
- Interpret confidence sets for multiple coefficients
- Identify examples of omitted variable bias in multiple regressions
- Interpret the R² and adjusted R² in a multiple regression
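The adjusted R² and the overall F-statistic from the objectives above are short formulas; a sketch with hypothetical numbers:

```python
# Hypothetical: n observations, k slope coefficients, regression R².
n, k, r_squared = 50, 3, 0.40

# Adjusted R² penalizes extra regressors:
# adj R² = 1 - (1 - R²)(n - 1)/(n - k - 1).
adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - k - 1)

# Overall F-statistic for the joint hypothesis that all k slopes are zero.
f_stat = (r_squared / k) / ((1 - r_squared) / (n - k - 1))
```

Adjusted R² is always below R² once k > 0, and can fall when a regressor that adds little explanatory power is included.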
10. Modeling and Forecasting Trend
After completion of this topic you will be able to
- Describe linear and nonlinear trends
- Describe trend models to estimate and forecast trends
- Compare and evaluate model selection criteria, including mean squared error (MSE), s², the Akaike information criterion (AIC) and the Schwarz information criterion (SIC)
- Explain the necessary conditions for a model selection criterion to demonstrate consistency
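The selection criteria above can be sketched in one common formulation, as penalized versions of the mean squared error (the inputs below are hypothetical):

```python
import math

# Hypothetical: T observations, k estimated parameters, sum of squared residuals.
T, k, ssr = 100, 3, 40.0

mse = ssr / T
s2 = ssr / (T - k)               # degrees-of-freedom-corrected variant of MSE
aic = math.exp(2 * k / T) * mse  # Akaike information criterion
sic = T ** (k / T) * mse         # Schwarz information criterion
```

SIC applies a harsher penalty per parameter than AIC (for any reasonably large T), which is what makes SIC a consistent selection criterion: as T grows it selects the true model with probability approaching one, while AIC tends to choose models that are too large.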
11. Modeling and Forecasting Seasonality
After completion of this topic you will be able to
- Describe the sources of seasonality and how to deal with it in time series analysis
- Explain how to use regression analysis to model seasonality
- Explain how to construct an h-step-ahead point forecast
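Modeling seasonality with a full set of seasonal dummies (and no intercept) reduces to computing one coefficient per season — each is simply the mean of the observations in that season — and an h-step-ahead point forecast projects that pattern forward. A sketch with hypothetical quarterly data:

```python
import statistics

# Hypothetical quarterly sales, three years (s = 4 seasons per year).
sales = [10, 14, 18, 12,  11, 15, 19, 13,  12, 16, 20, 14]
s = 4

# With s dummies and no intercept, the OLS coefficient on each dummy is the
# mean of the observations falling in that season.
gamma = [statistics.fmean(sales[q::s]) for q in range(s)]

# h-step-ahead point forecast: the coefficient of the season that period
# T + h falls in.
T = len(sales)
def forecast(h):
    return gamma[(T + h - 1) % s]
```

In practice a trend term would be added alongside the dummies; here the pure seasonal model is shown in isolation for clarity.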
12. Characterizing Cycles
After completion of this topic you will be able to
- Define covariance stationary, autocovariance function, autocorrelation function, partial autocorrelation function and autoregression
- Describe the requirements for a series to be covariance stationary
- Explain the implications of working with models that are not covariance stationary
- Define white noise and describe independent white noise and normal (Gaussian) white noise
- Explain the characteristics of the dynamic structure of white noise
- Explain how a lag operator works
- Describe Wold's theorem
- Define a general linear process
- Relate rational distributed lags to Wold's theorem
- Calculate the sample mean and sample autocorrelation and describe the Box-Pierce Q-statistic and the Ljung-Box Q-Statistic
- Describe sample partial autocorrelation
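The sample autocorrelation and the Ljung-Box Q-statistic can be sketched as follows, applied to simulated Gaussian white noise (for true white noise, Q over m lags is approximately chi-squared with m degrees of freedom):

```python
import random
import statistics

random.seed(0)  # fixed seed for reproducibility

# Simulated Gaussian (normal) white noise.
T = 500
y = [random.gauss(0.0, 1.0) for _ in range(T)]

def sample_autocorr(series, lag):
    """Sample autocorrelation at the given lag."""
    mean = statistics.fmean(series)
    denom = sum((v - mean) ** 2 for v in series)
    num = sum((series[t] - mean) * (series[t - lag] - mean)
              for t in range(lag, len(series)))
    return num / denom

# Autocorrelations at lags 1..m and the Ljung-Box Q-statistic.
m = 10
rho = [sample_autocorr(y, k) for k in range(1, m + 1)]
q_lb = T * (T + 2) * sum(r ** 2 / (T - k)
                         for k, r in enumerate(rho, start=1))
```

For white noise each sample autocorrelation is approximately N(0, 1/T), so all of them should be small; a large Q relative to the chi-squared(m) distribution is evidence against the white-noise hypothesis. The Box-Pierce statistic is the simpler variant T * Σ r², without the finite-sample correction factors.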
13. Modeling Cycles: MA, AR and ARMA Models
After completion of this topic you will be able to
- Describe the properties of the first-order moving average (MA(1)) process and distinguish between autoregressive representation and moving average representation
- Describe the properties of a general finite-order moving average (MA(q)) process
- Describe the properties of the first-order autoregressive (AR(1)) process and define and explain the Yule-Walker equation
- Describe the properties of a general pth order autoregressive (AR(p)) process
- Define and describe the properties of the autoregressive moving average (ARMA) process
- Describe the application of AR and ARMA processes
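The AR(1) and MA(1) processes above can be simulated and their first autocorrelations checked against theory (phi for the AR(1), theta / (1 + theta²) for the MA(1); beyond lag 1 the MA(1) autocorrelation cuts off to zero):

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Gaussian white-noise shocks driving both processes.
T, phi, theta = 5_000, 0.6, 0.5
eps = [random.gauss(0.0, 1.0) for _ in range(T + 1)]

# AR(1): y_t = phi * y_{t-1} + eps_t
ar1 = [0.0]
for t in range(1, T + 1):
    ar1.append(phi * ar1[-1] + eps[t])

# MA(1): y_t = eps_t + theta * eps_{t-1}
ma1 = [eps[t] + theta * eps[t - 1] for t in range(1, T + 1)]

def acf1(series):
    """Sample autocorrelation at lag 1."""
    mean = sum(series) / len(series)
    denom = sum((v - mean) ** 2 for v in series)
    num = sum((series[t] - mean) * (series[t - 1] - mean)
              for t in range(1, len(series)))
    return num / denom

ar1_acf1 = acf1(ar1)   # theory: phi = 0.6
ma1_acf1 = acf1(ma1)   # theory: theta / (1 + theta**2) = 0.4
```

The AR(1) autocorrelation decays geometrically across lags rather than cutting off, which is the key diagnostic for distinguishing AR from MA structure in the correlograms.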
14. Volatility
After completion of this topic you will be able to
- Define and distinguish between volatility, variance rate and implied volatility
- Describe the power law
- Explain how various weighting schemes can be used in estimating volatility
- Apply the exponentially weighted moving average (EWMA) model to estimate volatility
- Describe the generalized autoregressive conditional heteroskedasticity (GARCH(p,q)) model for estimating volatility and its properties
- Calculate volatility using the GARCH(1,1) model
- Explain mean reversion and how it is captured in the GARCH(1,1) model
- Explain the weights in the EWMA and GARCH(1,1) models
- Explain how GARCH models perform in volatility forecasting
- Describe the volatility term structure and the impact of volatility changes
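The EWMA and GARCH(1,1) one-step variance updates are short recursions; a sketch with hypothetical parameter values (lambda = 0.94 is the RiskMetrics choice; omega, a and b below are invented but satisfy a + b < 1):

```python
import math

# Yesterday's return and variance estimate (1% return, 2% daily volatility).
u, sigma2 = 0.01, 0.0004

# EWMA: weights on past squared returns decline exponentially at rate lambda.
lam = 0.94
ewma_var = lam * sigma2 + (1 - lam) * u ** 2

# GARCH(1,1): like EWMA but with an extra weight on the long-run variance
# V_L = omega / (1 - a - b), which produces mean reversion (needs a + b < 1).
omega, a, b = 0.000002, 0.13, 0.86
garch_var = omega + a * u ** 2 + b * sigma2
long_run_var = omega / (1 - a - b)

ewma_vol = math.sqrt(ewma_var)
garch_vol = math.sqrt(garch_var)
```

Because GARCH forecasts revert toward V_L, its volatility term structure slopes toward the long-run level, whereas EWMA (the a + b = 1 limit with omega = 0) forecasts a flat term structure.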
15. Correlations and Copulas
After completion of this topic you will be able to
- Define correlation and covariance and differentiate between correlation and dependence
- Calculate covariance using the EWMA and GARCH(1,1) models
- Apply the consistency condition to covariance
- Describe the procedure of generating samples from a bivariate normal distribution
- Describe properties of correlations between normally distributed variables when using a one-factor model
- Define copula and describe the key properties of copulas and copula correlation
- Describe the Gaussian copula, Student's t-copula, multivariate copula and one factor copula
- Explain tail dependence
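Generating samples from a bivariate standard normal with correlation rho reduces to: draw independent z1, e ~ N(0, 1) and set z2 = rho·z1 + sqrt(1 - rho²)·e. This is the two-variable case of the Cholesky construction, and the same mechanism underlies a one-factor Gaussian copula. A sketch (rho is hypothetical):

```python
import math
import random
import statistics

random.seed(7)  # fixed seed for reproducibility

rho, n = 0.7, 20_000
xs, ys = [], []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    e = random.gauss(0.0, 1.0)
    xs.append(z1)
    ys.append(rho * z1 + math.sqrt(1 - rho ** 2) * e)

# Sample correlation of the generated pairs should be close to rho.
mx, my = statistics.fmean(xs), statistics.fmean(ys)
sample_rho = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
              / math.sqrt(sum((a - mx) ** 2 for a in xs)
                          * sum((b - my) ** 2 for b in ys)))
```

Note that the Gaussian copula built this way exhibits no tail dependence; a Student's t-copula, by contrast, does, which is why it is often preferred for modeling joint extreme losses.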
16. Simulation Methods
After completion of this topic you will be able to
- Describe the basic steps to conduct a Monte Carlo simulation
- Describe ways to reduce Monte Carlo sampling error
- Explain how to use the antithetic variate technique to reduce Monte Carlo sampling error
- Explain how to use control variates to reduce Monte Carlo sampling error and when it is effective
- Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them
- Describe the bootstrapping method and its advantage over Monte Carlo simulation
- Describe situations where the bootstrapping method is ineffective
- Describe the pseudo-random number generation method and explain how a good simulation design alleviates the effect that the choice of seed has on the properties of the generated series
- Describe disadvantages of the simulation approach to financial problem solving
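The antithetic variate technique above can be sketched on a toy problem: estimating E[f(U)] for f(u) = u² with U uniform on (0, 1), where the true value is 1/3. Each draw u is paired with 1 - u, so for a monotone f the errors in the two halves partially cancel:

```python
import random
import statistics

random.seed(123)  # fixed seed for reproducibility

def f(u):
    return u * u   # toy integrand; E[f(U)] = 1/3 for U ~ uniform(0, 1)

n = 10_000
draws = [random.random() for _ in range(n)]

# Plain Monte Carlo estimate.
plain = statistics.fmean(f(u) for u in draws)

# Antithetic variates: average f over each draw and its mirror 1 - u.
antithetic = statistics.fmean(0.5 * (f(u) + f(1 - u)) for u in draws)
```

Both estimators are unbiased, but the antithetic version has a much smaller sampling variance here because f(u) and f(1 - u) are negatively correlated; a control variate achieves a similar reduction by subtracting a correlated quantity with known expectation.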