OLS
A very simple notebook to get started with Python, simulating and estimating. In this homework, we hope to learn to:
 set up a Python environment (see Getting Started);
 familiarize ourselves with basic numpy;
 familiarize ourselves with Keras.
We consider the simple linear model
$$ Y_i = \sum_{k=1}^K X_{ik} \beta_k + \epsilon_i $$
for observations i = 1, ..., N.

Simulate data
We start by simulating a data set of N observations and K regressors. We keep everything normally distributed to keep things simple.
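A minimal sketch of what this simulation cell might look like (the dimensions N = 1000, K = 10 and the seed are my choices, not necessarily the notebook's):

```python
import numpy as np

np.random.seed(1234)
N, K = 1000, 10

# Draw regressors, coefficients, and noise from standard normals.
X = np.random.normal(size=(N, K))
beta = np.random.normal(size=K)
eps = np.random.normal(size=N)

# Generate the outcome from the linear model Y = X beta + eps.
Y = X @ beta + eps
```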

As you can see, we make extensive use of vectorized operations from the numpy package.

Solve using numpy and linear algebra
Question
This is the first question. Write your own numpy_ols function, which takes (Y, X) as inputs and returns the OLS estimates. You only need three numpy functions: np.transpose, np.matmul, and np.linalg.solve. The function should be just a few lines.
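One possible sketch of such a function, built from only the three numpy routines named above (treat it as a reference point rather than the official solution):

```python
import numpy as np

def numpy_ols(Y, X):
    # Solve the normal equations (X'X) beta = X'Y for beta.
    XtX = np.matmul(np.transpose(X), X)
    XtY = np.matmul(np.transpose(X), Y)
    return np.linalg.solve(XtX, XtY)

# Quick sanity check on noiseless data: the estimates should
# recover the coefficients exactly (up to floating-point error).
X = np.random.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
beta_hat = numpy_ols(X @ beta_true, X)
```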
We then use this function (this document calls the one that I wrote) and plot the estimated \beta against the true \beta.
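A hedged sketch of this comparison step (the simulation sizes, seed, and use of matplotlib are assumptions on my part):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

np.random.seed(0)
N, K = 1000, 10
X = np.random.normal(size=(N, K))
beta = np.random.normal(size=K)
Y = X @ beta + np.random.normal(size=N)

# OLS estimate via the normal equations.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Scatter estimated coefficients against the true ones;
# points should line up on the 45-degree line.
plt.scatter(beta, beta_hat)
plt.plot([beta.min(), beta.max()], [beta.min(), beta.max()])
plt.xlabel("true beta")
plt.ylabel("estimated beta")
```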

Solve using Keras
Now for some overkill: we are going to use Keras to solve this linear problem. Keras lets us build models by composing layers. For instance, if I wanted to fit a neural net to the data I have generated, I would do the following:
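A sketch of what such a two-layer Keras model could look like (the tensorflow.keras import path, the choice K = 10, and the use_bias=False option are my assumptions):

```python
from tensorflow import keras

K = 10  # number of regressors (assumed to match the simulation)

# A single Dense unit followed by a sigmoid activation, mirroring
# the two-layer construction described below.
model = keras.Sequential([
    keras.Input(shape=(K,)),
    keras.layers.Dense(1, use_bias=False),
    keras.layers.Activation("sigmoid"),
])
```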

The first layer is a Dense layer, and it implements the following transformation:
$$ l_i = \sum_{k=1}^K w_k Z_{ik} $$
for a set of weights w_k that we want to estimate and some input Z_{ik}. Later on we will feed batches from X into it.
Because we are using a sequential construction, the second layer takes the output l_i of the first layer and generates an output
$$ \hat{y}_i = g(l_i) $$
for some nonlinear function g, referred to as the activation function.
See this description of activation functions, including the one we have specified.
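To make these two steps concrete, here is the same transformation written out in plain numpy (the function name and example values are mine):

```python
import numpy as np

def dense_then_sigmoid(Z, w):
    # First layer: linear combination l_i = sum_k w_k Z_ik.
    l = Z @ w
    # Second layer: sigmoid activation g(l) = 1 / (1 + exp(-l)).
    return 1.0 / (1.0 + np.exp(-l))

Z = np.array([[0.0, 0.0], [1.0, 1.0]])
w = np.array([2.0, 2.0])
out = dense_then_sigmoid(Z, w)  # l = [0, 4], so out = [0.5, sigmoid(4)]
```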
The next command initializes the model, specifies the optimizer to use, and specifies the loss function to minimize. Here we will try to minimize the squared loss between the provided output and the model output:
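That step might look like the following (the sgd optimizer and mean_squared_error loss names are my guesses at the course's choices; the model is rebuilt here so the snippet runs on its own):

```python
from tensorflow import keras

K = 10  # number of regressors (assumption)
model = keras.Sequential([
    keras.Input(shape=(K,)),
    keras.layers.Dense(1, use_bias=False),
    keras.layers.Activation("sigmoid"),
])

# Stochastic gradient descent on the mean squared error between
# the provided output and the model output.
model.compile(optimizer="sgd", loss="mean_squared_error")
```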

It is finally time to feed the data to our model. We specify the input X, the output Y, the number of passes we want to make over the data (epochs), as well as the batch size, meaning the number of observations i fed in at once to compute one update step of the optimizer.
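A self-contained sketch of this fitting call (the data, epoch count, and batch size are placeholders to experiment with):

```python
import numpy as np
from tensorflow import keras

np.random.seed(0)
N, K = 200, 10
X = np.random.normal(size=(N, K)).astype("float32")
Y = np.random.normal(size=N).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(K,)),
    keras.layers.Dense(1, use_bias=False),
    keras.layers.Activation("sigmoid"),
])
model.compile(optimizer="sgd", loss="mean_squared_error")

# One pass over the data (epochs=1), feeding 32 observations at a
# time to compute each optimizer update step.
history = model.fit(X, Y, epochs=1, batch_size=32, verbose=0)
```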

We then extract the estimated weights w_k and compare them to the \beta that generated the data.
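One way to pull out the weights (the model is rebuilt here so the snippet is self-contained; with real training the flattened array would be compared to \beta):

```python
from tensorflow import keras

K = 10
model = keras.Sequential([
    keras.Input(shape=(K,)),
    keras.layers.Dense(1, use_bias=False),
    keras.layers.Activation("sigmoid"),
])

# get_weights() returns one array per trainable tensor; here the
# only one is the (K, 1) weight matrix of the Dense layer.
w_hat = model.get_weights()[0].flatten()
```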

Question
Question 2: Clearly the neural net that we used doesn't deliver the right estimates for \beta. The functional form that we specified when creating the model resembles a logit more than a linear model. Given what you know about the shape of the Dense layer and the sigmoid activation, modify the model so that its functional form is identical to the linear one. Either replace the model2 = kera_linear(K) in the following code or write your own to try to get a good fit.
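For reference, here is one way kera_linear could be written; this is a sketch of a possible answer, not necessarily the intended one:

```python
from tensorflow import keras

def kera_linear(K):
    # A single Dense unit with a linear (identity) activation and no
    # bias reproduces exactly y_hat_i = sum_k w_k X_ik.
    model = keras.Sequential([
        keras.Input(shape=(K,)),
        keras.layers.Dense(1, use_bias=False, activation="linear"),
    ])
    model.compile(optimizer="sgd", loss="mean_squared_error")
    return model

model2 = kera_linear(10)
```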

Fitting nonlinear functions
Let's step it up! Let's try to fit a nonlinear function of one variable. We are going to use the cos function.
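A minimal sketch of such a data-generating cell (the sample size, the range of X, and the noise level are my choices):

```python
import numpy as np

np.random.seed(0)
N = 1000

# One regressor; the outcome is a noisy cosine of it.
X = np.random.uniform(-4.0, 4.0, size=N)
Y = np.cos(X) + 0.1 * np.random.normal(size=N)
```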

As we can see, this is a function which is neither linear nor a simple activation function:

This is where multilayers shine! We create the following model:
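The multilayer model might be built along these lines (the layer widths, relu activations, and adam optimizer are my guesses, not the notebook's exact architecture):

```python
from tensorflow import keras

# A small multilayer perceptron: two hidden layers pass the single
# input through nonlinear activations before a linear output layer.
model3 = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="linear"),
])
model3.compile(optimizer="adam", loss="mean_squared_error")
```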

Question
Explain in your own words the structure of this neural net! How many inputs does it have, and what does each layer do? Next, write your own fit_model3 function, which takes X, Y, and model3 and fits the model. You need to adapt the few lines of code that we used in the previous sections. You will also need to play a bit with batch sizes and epochs to achieve a good fit, as in the following figure.
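A hedged sketch of what fit_model3 could look like, bundled with toy data and a model so it runs on its own (all settings here are starting points to tune, not the course's official values):

```python
import numpy as np
from tensorflow import keras

# Small synthetic data set (a stand-in for the notebook's X and Y).
np.random.seed(0)
X = np.random.uniform(-4.0, 4.0, size=500)
Y = np.cos(X)

model3 = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="linear"),
])
model3.compile(optimizer="adam", loss="mean_squared_error")

def fit_model3(X, Y, model3, epochs=200, batch_size=64):
    # Epochs and batch size usually need tuning before the fit looks
    # as good as the figure; these values are only a starting point.
    return model3.fit(X, Y, epochs=epochs, batch_size=batch_size, verbose=0)

history = fit_model3(X, Y, model3, epochs=20)
```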

We then construct the prediction from the model and plot it on top of the data. Voila!
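A sketch of this last step (the grid of points and plotting details are mine; with a trained model3 the predicted curve would trace the cosine):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs anywhere
import matplotlib.pyplot as plt
from tensorflow import keras

# Rebuild a model3 so the snippet is self-contained (untrained here).
model3 = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="linear"),
])

# Predict on a fine grid and overlay the curve on the raw data.
X_grid = np.linspace(-4.0, 4.0, 200).reshape(-1, 1)
Y_grid = model3.predict(X_grid, verbose=0)

X = np.random.uniform(-4.0, 4.0, size=500)
plt.scatter(X, np.cos(X), s=5)
plt.plot(X_grid, Y_grid)
```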
