Python Part
You are given code (file attached below), developed in Google Colab, that builds an 8-3-8 autoencoder. There is also code to extract the outputs of the hidden layer. If the autoencoder was learned well, the outputs of the 3 hidden units for the 8 examples will enable you to build a classifier that uses those outputs as inputs and correctly classifies the examples.
1- You must build a neural network (at least a single layer) that does this and show the results. The results will be the predictions of the new network. Report both the raw values and the values rounded to the nearest integer (i.e. 0 or 1).
Weka Part
You must take the outputs of the original hidden layer and transfer them to an arff file. Then load that into Weka. Your arff header will be something like the one below (I called the classes px, one for each position where the 1 could be).
@ATTRIBUTE f1 REAL
@ATTRIBUTE f2 REAL
@ATTRIBUTE f3 REAL
@ATTRIBUTE class {p1,p2,p3,p4,p5,p6,p7,p8}
Use J48 to test on training data.
1- Show the result in terms of accuracy and confusion matrix.
2- Comment on the quality of the extracted features.
import numpy as np
from tensorflow import keras
from tensorflow.keras import backend as K
X = np.array([[1,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1],[0,0,0,0,0,0,1,0], [0,0,0,0,0,1,0,0], [0,0,0,0,1,0,0,0],
[0,0,0,1,0,0,0,0],[0,0,1,0,0,0,0,0],[0,1,0,0,0,0,0,0]])
y = np.array([[1,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1],[0,0,0,0,0,0,1,0], [0,0,0,0,0,1,0,0], [0,0,0,0,1,0,0,0],
[0,0,0,1,0,0,0,0],[0,0,1,0,0,0,0,0],[0,1,0,0,0,0,0,0]])
COST_FUNCTION = "mean_squared_error"
# Design network
model = keras.Sequential()
model.add(keras.layers.Dense(3, input_dim=8, activation='tanh'))
# Output layer
model.add(keras.layers.Dense(8, activation='sigmoid'))
opt = keras.optimizers.Adam(learning_rate=0.08)
model.compile(loss=COST_FUNCTION, optimizer=opt)  # , metrics=['mae'])
print(model.summary())
#print(X.shape)
#print(y.shape)
history = model.fit(X, y, validation_data=(X, y), epochs=800, batch_size=1,
                    verbose=2, shuffle=False)
model.save('autoenc.h5')
predictions = model.predict(X)
print("Raw predictions are\n", predictions)
predictions = np.round(predictions)
print("Rounded predictions are\n", predictions)
print("Model weights")
print(model.get_weights())
# Extract the hidden-layer outputs (this works in Jupyter/Colab)
get_1st_layer_output = K.function([model.layers[0].input],
                                  [model.layers[0].output])
layer_output = get_1st_layer_output(X)[0]
print("Hidden output:\n", layer_output)
new_input=layer_output
# Build model to predict from this compressed input (of 3 values) with a single or more layers
# neural network
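A minimal sketch of such a classifier follows. A single softmax layer taking the 3 hidden values as input is one reasonable choice; the learning rate, epoch count, and the stand-in arrays are assumptions — in practice use `new_input` and `y` from the code above.

```python
import numpy as np
from tensorflow import keras

# Hypothetical stand-ins; in practice use new_input (8x3 hidden
# activations) and y (8x8 one-hot targets) from the code above.
new_input = np.random.rand(8, 3)
y = np.eye(8)

# Single-layer classifier: 3 inputs -> 8-way softmax output.
clf = keras.Sequential([
    keras.layers.Dense(8, input_dim=3, activation="softmax")
])
clf.compile(loss="categorical_crossentropy",
            optimizer=keras.optimizers.Adam(learning_rate=0.05))
clf.fit(new_input, y, epochs=500, batch_size=1, verbose=0)

# Report raw and rounded predictions, as the assignment asks.
raw = clf.predict(new_input)
print("Raw predictions:\n", raw)
print("Rounded predictions:\n", np.round(raw))
```

With well-separated hidden codes, the rounded predictions should reproduce the 8 one-hot targets exactly.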