I'm trying to extend a matching algorithm across a sequence. My matches are 20 units long and have 4 channels at each timepoint. I have built a model that encapsulates the matching; I just can't figure out how to use it in a sliding window to apply it across a longer sequence and find the matches within that sequence.
I have two (20, 4) input tensors (query and target) that I concatenate, add, flatten, and then apply a simple dense layer. I have data at this stage to train with: 100K query/target pairs.
from keras import backend as K
from keras.layers import Input, Lambda, Concatenate, Flatten, Dropout, Dense
from keras.models import Model

def sum_seqs(seqs):
    # Sum over the stacking axis, i.e. add query and target element-wise
    return K.sum(seqs, axis=3)

def pad_dims(seq):
    # Add a trailing singleton axis so the two inputs can be stacked
    return K.expand_dims(seq, axis=3)

def pad_outshape(in_shape):
    return (in_shape[0], in_shape[1], in_shape[2], 1)

query = Input((20, 4))
query_pad = Lambda(pad_dims, output_shape=pad_outshape, name='gpad')(query)

target = Input((20, 4))
target_pad = Lambda(pad_dims, output_shape=pad_outshape)(target)

matching = Concatenate(axis=3)([query_pad, target_pad])
matching = Lambda(sum_seqs)(matching)
matching = Flatten()(matching)
matching = Dropout(0.1)(matching)
matching = Dense(1, activation='sigmoid')(matching)

match_model = Model([query, target], matching)
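For context, training is just the standard compile/fit on those pairs; a minimal sketch of that step (the array names, optimizer, and loss below are placeholders, not my exact settings):

match_model.compile(optimizer='adam', loss='binary_crossentropy')
# query_train, target_train: arrays of shape (100000, 20, 4)
# labels: array of shape (100000, 1) with 1 = match, 0 = no match
match_model.fit([query_train, target_train], labels, epochs=10, batch_size=128)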
This works perfectly. Now I want to use this pre-trained model to search a longer target sequence with varying query sequences.
It seems it should be something like:
long_target = Input((100, 4))
short_target = Input((20, 4))
choose_query = Input((20, 4))
spec_match = match_model([choose_query, short_target])
mdl = TimeDistributed(spec_match)(long_target)
But TimeDistributed takes a Layer, not a Tensor. Is there a wrapper I'm missing? Am I going about this the wrong way? Do I need to reformulate this as a convolution problem somehow?
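To make the mismatch concrete, here is roughly what TimeDistributed does expect versus what I handed it (the pre-sliced windows input is hypothetical, just for illustration):

from keras.layers import Input, TimeDistributed, Flatten

# What TimeDistributed expects: a Layer instance, applied over the time axis
# of a SINGLE input that already has that extra axis.
windows = Input((81, 20, 4))                 # hypothetical: pre-sliced windows
flat = TimeDistributed(Flatten())(windows)   # OK -> (batch, 81, 80)

# What I tried: spec_match is already an output tensor, not a Layer, and even
# TimeDistributed(match_model) leaves no slot to feed the query alongside each window.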
Continued experimentation: After a day of beating my head against the keyboard, it is clear that both TimeDistributed and backend.rnn only allow you to apply a model/layer to a single time-slice of the data. It doesn't seem like there is a way to do this. It looks like the only thing that can "walk" across multiple slices of the time dimension is a Conv1D.
So I reframed my problem as a convolution, but that doesn't work well either. I was able to build a Conv1D filter that would match a specific query. This worked reasonably well and it did allow me to scan longer sequences and get matches. BUT each filter is unique to a single query tensor, and there doesn't seem to be a way to go from a novel query to the appropriate filter weights without training a whole new Conv1D layer. Since my goal is to find new queries which match the most targets, this doesn't help much.
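For reference, the per-query convolution I trained looked roughly like this (a minimal sketch; the single-filter setup and sigmoid activation are assumptions for illustration):

from keras.layers import Conv1D, Input
from keras.models import Model

# One filter spanning the full 20-unit window, slid across the long target,
# so each of the 81 output positions scores one window against ONE fixed query
# that is baked into the kernel weights.
long_target = Input((100, 4))
scores = Conv1D(filters=1, kernel_size=20, activation='sigmoid')(long_target)
conv_model = Model(long_target, scores)      # output shape: (batch, 81, 1)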
Since my "matching" requires the interaction of the target AND the query at each window there doesn't seem to be a way I can get an interaction of a 20-length query
tensor at each window across a 100-length target
tensor through Conv1D
.
Is there any way to do this sliding window type evaluation in Keras/tensorflow? It seems like something so simple yet so far away. Is there a way I can do this that I'm not finding?
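To make the desired computation concrete, here is the kind of brute-force scan I'm after, written with plain slicing Lambdas and repeated calls to the pre-trained match_model (this is only a sketch of the semantics I want, not something I expect to be efficient; the 81 window count just follows from a 100-long target and a 20-long query):

from keras.layers import Input, Lambda, Concatenate
from keras.models import Model

long_target = Input((100, 4))
choose_query = Input((20, 4))

window_scores = []
for i in range(100 - 20 + 1):                        # 81 windows
    # Slice one 20-long window out of the long target
    window = Lambda(lambda t, i=i: t[:, i:i + 20, :],
                    output_shape=(20, 4))(long_target)
    # Re-use the pre-trained match_model (shared weights) on [query, window]
    window_scores.append(match_model([choose_query, window]))

scores = Concatenate(axis=1)(window_scores)          # (batch, 81)
scan_model = Model([choose_query, long_target], scores)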
from Evaluate a function in a sliding window with Keras