The code and dataset will be provided. For each of the following tasks, report the performance of each network in terms of test accuracy, and plot validation loss vs. training loss and validation accuracy vs. training accuracy.
For all of the following variants you need to add an Embedding layer as the first layer. Here is a good explanation of what an embedding layer does:
https://stats.stackexchange.com/questions/270546/h…
For TensorFlow you can use:
https://keras.io/api/layers/core_layers/embedding/
For PyTorch use:
https://pytorch.org/docs/stable/generated/torch.nn…
The parameters for the embedding layer: embedding_dim=64 and num_embeddings / input_dim (Keras) = 10000, since we only kept the 10,000 most frequent words. (Please refer to the Jupyter notebook provided in the attached zip file.)
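For reference, a minimal sketch of this configuration in Keras (the variable name is illustrative; the PyTorch equivalent would be torch.nn.Embedding(num_embeddings=10000, embedding_dim=64)):

    from tensorflow.keras import layers

    # Maps each of the 10,000 most frequent word indices to a dense 64-dimensional vector.
    embedding = layers.Embedding(input_dim=10000, output_dim=64)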
1. a) Use a Vanilla RNN with hidden_dimension=64, followed by a one-neuron FC layer with a sigmoid activation.
   b) Use a Vanilla RNN with hidden_dimension=64, followed by Global maxpool 1D, followed by an FC layer with 16 neurons and ReLU, followed by an FC layer with a single output and a sigmoid activation. (See the sketch below.)
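A minimal Keras sketch of both variants; the model names, optimizer, and loss below are assumptions for illustration, not part of the task specification:

    from tensorflow.keras import layers, models

    # 1a: the final hidden state of the RNN feeds a single sigmoid neuron.
    model_1a = models.Sequential([
        layers.Embedding(input_dim=10000, output_dim=64),
        layers.SimpleRNN(64),                        # returns only the last hidden state
        layers.Dense(1, activation="sigmoid"),
    ])

    # 1b: return_sequences=True keeps per-timestep outputs so global max pooling can be applied.
    model_1b = models.Sequential([
        layers.Embedding(input_dim=10000, output_dim=64),
        layers.SimpleRNN(64, return_sequences=True),
        layers.GlobalMaxPooling1D(),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model_1b.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])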
2. a) Use an LSTM with hidden_dimension=64, followed by a one-neuron FC layer with a sigmoid activation.
   b) Use an LSTM with hidden_dimension=64, followed by Global maxpool 1D, followed by an FC layer with 16 neurons and ReLU, followed by an FC layer with a single output and a sigmoid activation.
   c) Stack two layers of LSTM; the output of the stacked LSTM goes to Global maxpool 1D, followed by an FC layer with 16 neurons and ReLU, followed by an FC layer with a single output and a sigmoid activation. (See the sketch of the stacked variant below.)
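A Keras sketch of the stacked variant 2c (2a and 2b follow the same pattern as 1a/1b with SimpleRNN replaced by LSTM); the hidden size of the second LSTM, the optimizer, and the loss are assumptions, since the task does not specify them:

    from tensorflow.keras import layers, models

    model_2c = models.Sequential([
        layers.Embedding(input_dim=10000, output_dim=64),
        # Both LSTM layers return sequences: the first feeds the second,
        # and the second feeds global max pooling over the time dimension.
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(64, return_sequences=True),
        layers.GlobalMaxPooling1D(),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model_2c.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])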
3. a) Use a GRU with hidden_dimension=64, followed by a one-neuron FC layer with a sigmoid activation.
   b) Use a GRU with hidden_dimension=64, followed by Global maxpool 1D, followed by an FC layer with 16 neurons and ReLU, followed by an FC layer with a single output and a sigmoid activation. (See the sketch below.)
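A Keras sketch of variant 3b (3a simply drops the pooling layer and the 16-neuron FC layer); as above, the optimizer and loss are assumptions:

    from tensorflow.keras import layers, models

    model_3b = models.Sequential([
        layers.Embedding(input_dim=10000, output_dim=64),
        layers.GRU(64, return_sequences=True),   # per-timestep outputs for max pooling
        layers.GlobalMaxPooling1D(),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model_3b.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])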