
Pset on Inference

Download notebook

Goal of the problem set:

  • understand heteroskedasticity and clustered standard errors
  • familiarize yourself with pandas, grouping and formatting
  • learn about numba to accelerate your code for large datasets

Some useful references:

We are going to reproduce an exercise similar to the example for the computation of standard errors. Start by downloading the CPS data from here. We first load the needed libraries, my solutions (which are hidden from you), and the data.

%cd ..
%load_ext autoreload
%autoreload 2
from solutions.sol_pset2 import *
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
from tqdm import tqdm

data = pd.read_stata("~/Downloads/files to share/CPS_2012_micro.dta")

Next, generate a fictitious policy that is randomly assigned at the state level. Run the regression and report the standard errors for one draw of the policy.

np.random.seed(60356548) # I fix the seed to make sure the draws are reproducible

# we draw the policy for each state
fpol = { k:np.random.uniform() > 0.5 for k in np.unique(data['statefip']) }
data['fp'] = data['statefip'].map(fpol)

data

Question 1

As an exercise to get acquainted with pandas, here I would like for you to implement the following procedure:

  1. compute the average wage in each state for each age decile (less than 20, between 20 and 30, ...)
  2. pivot the table with states in rows and age deciles in columns
  3. using pandas styling, highlight the lowest wage in each age decile and format the average wage with only 2 decimal points.
# we tabulate the average wage by age group and state
tibo_question1(data)
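
If you want a starting point, here is a minimal sketch of one way to implement these three steps. It is an illustration, not the hidden tibo_question1: the function name and the 10-year bin edges are my own choices, and it assumes the relevant columns are lnwage, age and statefip.

# a possible sketch (not the hidden solution); assumes columns 'lnwage', 'age', 'statefip'
def question1_sketch(data):
    # 1. average wage by state and 10-year age group (first bin plays the role of "less than 20")
    age_group = pd.cut(data['age'], bins=np.arange(10, 101, 10))
    table = data.groupby(['statefip', age_group])['lnwage'].mean().unstack()
    # 2. after unstack(), states are in rows and age groups in columns
    # 3. highlight the lowest wage in each age group and format with 2 decimals
    return table.style.highlight_min(axis=0).format("{:.2f}")

question1_sketch(data)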

A first regression

import statsmodels.api as sm
import statsmodels.formula.api as smf

results = smf.ols('lnwage ~ fp', data=data).fit()
print(results.summary())

Note: We do not control for state-specific fixed effects as these would be perfectly collinear with the policy.

Now this is surprising. We generated fp randomly across states, yet we find a significant effect of the policy on log wages. To understand what is happening, we will generate our own data in a way where we control exactly how it is generated.

IID errors

Let's start by reassuring ourselves. Let's use an IID data generating process (DGP), run the regression and check the significance.

  1. compute the variance of lnwage in the sample. This is an estimate of our homoskedastic error.
  2. simulate a fictitious outcome y2 equal to a normal error with the estimated variance, truly independent across individuals. Use .assign(y2 = ...) on the pandas DataFrame.
  3. regress this outcome y2 on fp, our fictitious policy, collect the coefficient, and also record whether the coefficient is significant at 5%.
  4. run steps (2,3) 500 times.

Question 2

Follow the previous steps and report how often we would consider the coefficient on fp significant at 5%. You should find something close to 5% and you should feel better about the theory!

# I collect the t-stats across 500 replications and plot the histogram
tstats = tibo_question2(data)
plt.hist(tstats)
plt.axvline(1.96,color='red',linestyle=":")
plt.axvline(-1.96,color='red',linestyle=":")
# I check that the distribution falls outside of the 0.025 to 0.975 quantiles of the normal with probability close to 0.05
(np.abs(tstats)>1.96).mean()
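
For reference, a minimal sketch of the replication loop behind this kind of figure could look as follows. This is an illustration, not the hidden tibo_question2; the function name and the use of tvalues.iloc[-1] to pick out the fp coefficient are my own choices.

# a possible sketch of the IID replication loop (not the hidden solution)
def iid_tstats_sketch(data, R=500):
    sigma = data['lnwage'].std()             # homoskedastic error scale estimated from the sample
    tstats = np.zeros(R)
    for r in tqdm(range(R)):
        # simulate an outcome that is pure iid noise, independent of fp
        d = data.assign(y2=sigma * np.random.normal(size=len(data)))
        res = smf.ols('y2 ~ fp', data=d).fit()
        tstats[r] = res.tvalues.iloc[-1]     # t-stat on fp (last coefficient)
    return tstats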

Back to using wages in data

We go back and we do the same exercise, only this time we redraw the policy in each replication, regress the wages on it, and collect the t-stats. We then plot the histogram of the t-stats and compute the probability of rejecting the null that the coefficient on the policy is 0.

Question 3

Replicate 500 times the very first regression to recreate a histogram of t-stats similar to the following one. Compute the probability of rejecting the null of no effect and comment on its value given the way you simulated the data (2 sentences max).

tstats = tibo_question3(data)
plt.hist(tstats)
plt.axvline(1.96,color='red',linestyle=":")
plt.axvline(-1.96,color='red',linestyle=":")
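
A sketch of one way to produce such t-stats, redrawing the state-level policy in each replication, is below. Again this is an illustration, not the hidden tibo_question3.

# a possible sketch (not the hidden solution): redraw the policy and regress lnwage on it
def redraw_policy_tstats_sketch(data, R=500):
    states = np.unique(data['statefip'])
    tstats = np.zeros(R)
    for r in tqdm(range(R)):
        fpol = {k: np.random.uniform() > 0.5 for k in states}    # new state-level policy draw
        d = data.assign(fp2=data['statefip'].map(fpol))
        res = smf.ols('lnwage ~ fp2', data=d).fit()
        tstats[r] = res.tvalues.iloc[-1]                         # t-stat on the redrawn policy
    return tstats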

Heteroskedastic errors

Now we want to use heteroskedasticity-robust standard errors, which requires us to use some covariates. We then want to repeat the previous procedure, but using a different simulation code and a new significance test. statsmodels can do that for you using the cov_type argument of the fit method.

We want to check this by simulating from a model with heteroskedastic errors. To do so we are going to use a linear model for the variance:

y_{i} = 0 + s(x_i) \cdot \epsilon_i

We are going to use a linear specification for s^2(x_i).

  1. run the following regression lnwage ~ yrseduc + age + age^2 to get the m(x_i) = x_i' \beta where the x_i are education, age and age squared.
  2. extract the residuals \hat{u}_i from the previous regression. Then regress \hat{u}^2_i on yrseduc + age + age^2, to get the s^2(x_i) = x_i' \gamma model.
  3. using s^2(x_i) = x_i' \gamma, construct y_{i} = 0 + s(x_i) \cdot \epsilon_i where \epsilon_i is drawn iid Normal(0,1).
  4. replicate this 500 times, evaluate the significance of fp using heteroskedasticity-robust inference by calling .fit(cov_type='HC0'), and also save the result without using the robust errors.

Question 4

Follow the steps and report the distributions of t-stats, as well as the rejection rates for each of the two variance specifications.

tstats = tibo_question4(data,R=500)

plt.subplot(1,2,1)
plt.hist(tstats[:,0])
plt.axvline(1.96,color='red',linestyle=":")
plt.axvline(-1.96,color='red',linestyle=":")
plt.title('Homoskedastic')

plt.subplot(1,2,2)
plt.hist(tstats[:,1])
plt.axvline(1.96,color='red',linestyle=":")
plt.axvline(-1.96,color='red',linestyle=":")
plt.title('Heteroskedastic')

plt.show()
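
One possible sketch of the DGP and replication loop behind these histograms is given below. It is not the hidden tibo_question4: the function name, the use of I(age**2) in the patsy formula for the age-squared term, and the clipping of fitted variances at zero are my own choices.

# a possible sketch (not the hidden solution): heteroskedastic DGP plus two tests
def het_tstats_sketch(data, R=500):
    # 1-2. fit the mean model, then regress squared residuals on the same covariates to get s^2(x_i)
    mean_fit = smf.ols('lnwage ~ yrseduc + age + I(age**2)', data=data).fit()
    var_fit = smf.ols('u2 ~ yrseduc + age + I(age**2)', data=data.assign(u2=mean_fit.resid**2)).fit()
    s = np.sqrt(np.clip(var_fit.fittedvalues, 0, None))   # guard against negative fitted variances
    tstats = np.zeros((R, 2))
    for r in tqdm(range(R)):
        # 3. simulate y with heteroskedastic errors, independent of fp
        d = data.assign(y=s * np.random.normal(size=len(data)))
        model = smf.ols('y ~ fp', data=d)
        tstats[r, 0] = model.fit().tvalues.iloc[-1]                 # homoskedastic standard errors
        tstats[r, 1] = model.fit(cov_type='HC0').tvalues.iloc[-1]   # heteroskedasticity-robust standard errors
    return tstats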

State clustered errors

We are going to simulate correlated errors within state. To do so we continue to draw independent error terms u_i, but we add to them a common draw at the state level. In practice you can do this in a way similar to how we drew fp. We then create the outcome as

y_{ig} = (1-\rho) u_i + \rho v_g

Here is a way of doing this:

rho=0.5
data['u'] = np.random.normal(size=len(data))
data['v'] = data.groupby('statefip')['u'].transform(lambda x : (1-rho)*x.to_numpy() + rho*np.random.normal(size=1))

Question 5

First, explain the groupby expression. Next, find another way of doing lines 2 and 3, but try to minimize the number of characters (see python golfing or, more generally, Wikipedia).

Before we get started estimating the effect of the policy, let's write a function that computes the within state correlation in the error term.

To that end, draw 500 pairs of people where the 2 people in a pair belong to the same state. This gives you 2 vectors of length 500. Compute the correlation between these 2 vectors. Use the sample and query methods of pandas. Similarly, construct a placebo function that just takes random pairs of workers without imposing that they come from the same state.

Question 6

Write your own version of the tibo_pair_cor and tibo_pair_cor_placebo functions, then see what my version delivers:

rho=0.5
data['u'] = np.random.normal(size=len(data))
data['v'] = data.groupby('statefip')['u'].transform(lambda x : (1-rho)*x.to_numpy() + rho*np.random.normal(size=1))

print(tibo_pair_cor(data))
print(tibo_pair_cor_placebo(data))

rho=0.8
data['u'] = np.random.normal(size=len(data))
data['v'] = data.groupby('statefip')['u'].transform(lambda x : (1-rho)*x.to_numpy() + rho*np.random.normal(size=1))

print(tibo_pair_cor(data))
print(tibo_pair_cor_placebo(data))
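
A minimal sketch of how such functions could be written, using sample and query as suggested, is below. This is an illustration, not the hidden implementations: the function names are my own and it assumes the error of interest is stored in column v, as in the snippet above.

# a possible sketch (not the hidden tibo_pair_cor): correlation between pairs drawn within the same state
def pair_cor_sketch(data, npairs=500):
    v1, v2 = np.zeros(npairs), np.zeros(npairs)
    states = np.unique(data['statefip'])
    for i in range(npairs):
        s = np.random.choice(states)                     # pick a state at random
        pair = data.query('statefip == @s').sample(2)    # two workers from that state
        v1[i], v2[i] = pair['v'].iloc[0], pair['v'].iloc[1]
    return np.corrcoef(v1, v2)[0, 1]

# placebo: the two workers in a pair are drawn from the whole sample, regardless of state
def pair_cor_placebo_sketch(data, npairs=500):
    v1 = data.sample(npairs)['v'].to_numpy()
    v2 = data.sample(npairs)['v'].to_numpy()
    return np.corrcoef(v1, v2)[0, 1]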

We are then going to replicate the data construction 500 times.

Question 7

For \rho = 0.2, 0.5, 0.8, run 500 replications and report, for each value of \rho, the proportion of replications for which the coefficient on fp is significant at 5%. Report the results for three different regressions: using the regular fit, using cov_type='HC0', and using cov_type='cluster'.

# here is an example with clustered standard errors
rho=0.8
data['u'] = np.random.normal(size=len(data))
data['v'] = data.groupby('statefip')['u'].transform(lambda x : (1-rho)*x.to_numpy() + rho*np.random.normal(size=1))

data['y'] =  data.v
results = smf.ols('y ~ fp', data=data).fit(cov_type='cluster',cov_kwds={"groups":data['statefip']})
print(results.summary())
# and here is the regression without clustered standard errors
rho=0.8
data['u'] = np.random.normal(size=len(data))
data['v'] = data.groupby('statefip')['u'].transform(lambda x : (1-rho)*x.to_numpy() + rho*np.random.normal(size=1))

data['y'] =  data.v
results = smf.ols('y ~ fp', data=data).fit()
print(results.summary())
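
To give a sense of the structure, here is a possible sketch of the replication loop for Question 7. It is an illustration, not my hidden solution: the function name is my own and the fixed 1.96 rejection threshold is a simplification.

# a possible sketch of the Question 7 loop: rejection rates for three variance estimators
def cluster_rejection_sketch(data, rhos=(0.2, 0.5, 0.8), R=500):
    rates = {}
    for rho in rhos:
        rej = np.zeros((R, 3))
        for r in tqdm(range(R)):
            d = data.copy()
            d['u'] = np.random.normal(size=len(d))
            d['v'] = d.groupby('statefip')['u'].transform(
                lambda x: (1 - rho) * x.to_numpy() + rho * np.random.normal(size=1))
            d['y'] = d['v']
            model = smf.ols('y ~ fp', data=d)
            fits = [model.fit(),
                    model.fit(cov_type='HC0'),
                    model.fit(cov_type='cluster', cov_kwds={"groups": d['statefip']})]
            rej[r, :] = [np.abs(f.tvalues.iloc[-1]) > 1.96 for f in fits]
        rates[rho] = rej.mean(axis=0)   # rejection rates for [regular, HC0, cluster]
    return rates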