Thursday, 6 May 2021

What is the correct last layer for a concatenate multi-input deep neural network in Keras?

I am trying to implement a multi-input model in Keras for a multiclass classification problem with 3 possible outputs, but I can't tell whether any layer is acceptable as the last one, or whether it has to match the number of classes.
So which of the two architectures below is correct (if a correct one can even be defined)?

1)

import numpy as np
import tensorflow.keras as ks  # ks is my alias for tensorflow.keras


def createModel(numData, boolData, ordData):
    numIn = ks.Input(shape=numData.shape[1:3], name='numeric')
    # input_shape is not needed here: the shape comes from numIn
    x = ks.layers.Masking(mask_value=np.float64(0))(numIn)
    mod1 = ks.layers.LSTM(128, return_sequences=True)(x)
    mod1 = ks.layers.LSTM(128)(mod1)
    model1 = ks.Model(numIn, mod1)

    boolIn = ks.Input(shape=(boolData.shape[1],), name='boolean')  # shape must be a tuple
    mod2 = ks.layers.Dense(128, activation='relu')(boolIn)
    mod2 = ks.layers.Dense(128, activation='relu')(mod2)
    model2 = ks.Model(boolIn, mod2)

    ordIn = ks.Input(shape=(ordData.shape[1],), name='ordinal')  # shape must be a tuple
    mod3 = ks.layers.Dense(128, activation='relu')(ordIn)
    mod3 = ks.layers.Dense(128, activation='relu')(mod3)
    model3 = ks.Model(ordIn, mod3)

    finMod = ks.layers.concatenate([model1.output, model2.output, model3.output])

    out = ks.layers.Dense(3, activation='softmax', name='out')(finMod)
    model = ks.Model(inputs=[model1.input, model2.input, model3.input], outputs=[out])
    return model
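In this first variant, `concatenate` simply joins the three 128-unit branch outputs into a single 384-dimensional feature vector that feeds the final `Dense(3, activation='softmax')` layer. A minimal numpy sketch of that joining step (the branch values are made-up stand-ins, not real model outputs):

```python
import numpy as np

# stand-ins for the three branch outputs: a batch of 2 samples, 128 features each
lstm_out = np.random.rand(2, 128)
bool_out = np.random.rand(2, 128)
ord_out = np.random.rand(2, 128)

# ks.layers.concatenate joins along the last (feature) axis by default
merged = np.concatenate([lstm_out, bool_out, ord_out], axis=-1)
# merged now has shape (2, 384): the final Dense(3) layer sees 384 inputs per sample
```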
2)

def createModel(numData, boolData, ordData):
    numIn = ks.Input(shape=numData.shape[1:3], name='numeric')
    x = ks.layers.Masking(mask_value=np.float64(0))(numIn)
    mod1 = ks.layers.LSTM(128, return_sequences=True)(x)
    mod1 = ks.layers.LSTM(128)(mod1)
    mod1 = ks.layers.Dense(3, activation='softmax')(mod1)  # added layer
    model1 = ks.Model(numIn, mod1)

    boolIn = ks.Input(shape=(boolData.shape[1],), name='boolean')
    mod2 = ks.layers.Dense(128, activation='relu')(boolIn)
    mod2 = ks.layers.Dense(128, activation='relu')(mod2)
    mod2 = ks.layers.Dense(3, activation='softmax')(mod2)  # added layer
    model2 = ks.Model(boolIn, mod2)

    ordIn = ks.Input(shape=(ordData.shape[1],), name='ordinal')
    mod3 = ks.layers.Dense(128, activation='relu')(ordIn)
    mod3 = ks.layers.Dense(128, activation='relu')(mod3)
    mod3 = ks.layers.Dense(3, activation='softmax')(mod3)  # added layer
    model3 = ks.Model(ordIn, mod3)

    finMod = ks.layers.concatenate([model1.output, model2.output, model3.output])

    out = ks.layers.Dense(3, activation='softmax', name='out')(finMod)
    model = ks.Model(inputs=[model1.input, model2.input, model3.input], outputs=[out])
    return model

Obviously, if there are other big mistakes, please point them out, because I am still learning Keras and deep learning and may have misunderstood some basics.
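For reference, whichever variant is used, a `Dense(3, activation='softmax')` layer turns its 3 raw outputs (logits) into a probability distribution over the 3 classes. A small numpy illustration of the softmax itself (the logit values are hypothetical):

```python
import numpy as np

def softmax(z):
    # shift by the max for numerical stability; result is unchanged
    e = np.exp(z - np.max(z))
    return e / e.sum()

# hypothetical logits for the 3 classes
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
# probs has 3 entries, all positive, summing to 1; the largest logit
# gets the largest probability
```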



