How to use kernel method in autotest

Best Python code snippets using autotest_python

multi_lead_branch_fusion.py

Source: multi_lead_branch_fusion.py Github


```python
# -*- coding: utf-8 -*-
"""Multi Lead Branch Fusion

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1RfMexPUadW9T6txCWITOdRUBhsP-DOE8
"""
from keras.layers import (Input, Conv1D, LeakyReLU, Dropout, Bidirectional,
                          GRU, BatchNormalization, Dense, concatenate)

inputs = Input(shape=(30000, 6), dtype='float32', name='input')


def conv_block(x, pool_kernel, dropout_rate=0.2):
    """Two stride-1 convs followed by a wide stride-2 downsampling conv."""
    x = Conv1D(12, kernel_size=3, strides=1, padding='same')(x)
    x = LeakyReLU(alpha=0.3)(x)
    x = Conv1D(12, kernel_size=3, strides=1, padding='same')(x)
    x = LeakyReLU(alpha=0.3)(x)
    x = Conv1D(12, kernel_size=pool_kernel, strides=2, padding='same')(x)
    x = LeakyReLU(alpha=0.3)(x)
    return Dropout(dropout_rate)(x)


def lead_branch(x, second_block_dropout=0.2):
    """One lead branch: five conv blocks feeding a bidirectional GRU.

    Returns the GRU output (used for the fusion below) and the per-branch
    Dense head (built in the original notebook but never consumed).
    """
    x = conv_block(x, pool_kernel=24)
    x = conv_block(x, pool_kernel=24, dropout_rate=second_block_dropout)
    x = conv_block(x, pool_kernel=24)
    x = conv_block(x, pool_kernel=24)
    x = conv_block(x, pool_kernel=48)
    # Sequence length here is ceil(30000 / 2**5) = 938 timesteps.
    gru = Bidirectional(GRU(12, return_sequences=True, return_state=False),
                        merge_mode='concat')(x)
    h = LeakyReLU(alpha=0.3)(gru)
    h = Dropout(0.2)(h)
    h = AttentionWithContext()(h)  # custom layer defined elsewhere in the notebook
    h = BatchNormalization()(h)
    h = LeakyReLU(alpha=0.3)(h)
    h = Dropout(0.2)(h)
    head = Dense(7, activation='sigmoid')(h)
    return gru, head


# Six parallel branches over the same input; only the second branch uses
# Dropout(0.5) in its second conv block.
x1, _ = lead_branch(inputs)
x2, _ = lead_branch(inputs, second_block_dropout=0.5)
x3, _ = lead_branch(inputs)
x4, _ = lead_branch(inputs)
x5, _ = lead_branch(inputs)
x6, _ = lead_branch(inputs)

# MERGE INPUT MODELS
z = concatenate([x1, x2, x3, x4, x5, x6])
print(z.shape)

# FINAL ATTENTION MODULE
x = AttentionWithContext()(z)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.3)(x)
x = Dropout(0.2)(x)
x = Dense(50, activation='sigmoid')(x)
output = Dense(7, activation='sigmoid')(x)
```
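The `(938, 6)` shape hint attached to each GRU is consistent with the conv stack's downsampling: with `padding='same'`, each stride-2 conv maps a length-L sequence to ceil(L/2), so five such layers shrink 30000 timesteps to 938. A quick arithmetic check (plain Python, no Keras required):

```python
import math


def downsampled_length(length, num_stride2_layers):
    """Output length after stacked stride-2 convs with 'same' padding."""
    for _ in range(num_stride2_layers):
        length = math.ceil(length / 2)
    return length


print(downsampled_length(30000, 5))  # -> 938
```

Note the odd intermediate length: 30000 → 15000 → 7500 → 3750 → 1875 → 938, since 'same' padding rounds 937.5 up.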


blocks_masked_conv2d_test.py

Source: blocks_masked_conv2d_test.py Github


```python
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests of the 2D masked convolution blocks."""
from __future__ import division
from __future__ import unicode_literals

import numpy as np
from six.moves import xrange
import tensorflow as tf

import blocks_masked_conv2d


class MaskedConv2DTest(tf.test.TestCase):

  def testRasterScanKernel(self):
    kernel_size = 5
    input_depth = 1
    output_depth = 1
    kernel_shape = [kernel_size, kernel_size, input_depth, output_depth]
    # pylint: disable=bad-whitespace
    kernel_feed = [[ 1.0,  2.0,  3.0,  4.0,  5.0],
                   [ 6.0,  7.0,  8.0,  9.0, 10.0],
                   [11.0, 12.0, 13.0, 14.0, 15.0],
                   [16.0, 17.0, 18.0, 19.0, 20.0],
                   [21.0, 22.0, 23.0, 24.0, 25.0]]
    kernel_feed = np.reshape(kernel_feed, kernel_shape)
    kernel_expected = [[ 1.0,  2.0,  3.0,  4.0,  5.0],
                       [ 6.0,  7.0,  8.0,  9.0, 10.0],
                       [11.0, 12.0,  0.0,  0.0,  0.0],
                       [ 0.0,  0.0,  0.0,  0.0,  0.0],
                       [ 0.0,  0.0,  0.0,  0.0,  0.0]]
    kernel_expected = np.reshape(kernel_expected, kernel_shape)
    # pylint: enable=bad-whitespace
    init_kernel = lambda s, t: tf.constant(kernel_feed, dtype=t, shape=s)
    masked_conv2d = blocks_masked_conv2d.RasterScanConv2D(
        output_depth, [kernel_size] * 2, [1] * 2, 'SAME',
        initializer=init_kernel)
    x = tf.placeholder(dtype=tf.float32, shape=[10] * 3 + [input_depth])
    _ = masked_conv2d(x)
    with self.test_session():
      tf.global_variables_initializer().run()
      kernel_value = masked_conv2d._kernel.eval()
      self.assertAllEqual(kernel_expected, kernel_value)

  def testDepthOrderKernel(self):
    kernel_size = 1
    input_depth = 7
    output_depth = input_depth
    kernel_shape = [kernel_size, kernel_size, input_depth, output_depth]
    kernel_feed = np.ones(kernel_shape)
    x_shape = [5] * 3 + [input_depth]
    x_feed = np.ones(x_shape)
    y_expected = np.zeros(x_shape[0:3] + [output_depth])
    y_expected[:, :, :] = np.arange(output_depth)
    init_kernel = lambda s, t: tf.constant(kernel_feed, dtype=t, shape=s)
    masked_conv2d = blocks_masked_conv2d.DepthOrderConv2D(
        output_depth, [kernel_size] * 2, [1] * 2, 'SAME',
        strict_order=True,
        initializer=init_kernel)
    x = tf.placeholder(dtype=tf.float32, shape=x_shape)
    y = masked_conv2d(x)
    with self.test_session():
      tf.global_variables_initializer().run()
      y_value = y.eval(feed_dict={x: x_feed})
      self.assertAllEqual(y_expected, y_value)

  def testGroupRasterScanKernel(self):
    kernel_size = 3
    input_depth = 4
    input_group_size = 2
    output_depth = 2
    output_group_size = 1
    kernel_shape = [kernel_size, kernel_size, input_depth, output_depth]
    kernel_feed = np.ones(shape=kernel_shape)
    height = 5
    width = 5
    x_shape = [1, height, width, input_depth]
    x_feed = np.ones(shape=x_shape)
    # pylint: disable=bad-whitespace
    y_expected = [
        [[ 0,  2], [ 4,  6], [ 4,  6], [ 4,  6], [ 4,  6]],
        [[ 8, 10], [16, 18], [16, 18], [16, 18], [12, 14]],
        [[ 8, 10], [16, 18], [16, 18], [16, 18], [12, 14]],
        [[ 8, 10], [16, 18], [16, 18], [16, 18], [12, 14]],
        [[ 8, 10], [16, 18], [16, 18], [16, 18], [12, 14]],
    ]
    y_expected = np.reshape(y_expected, [1, height, width, output_depth])
    # pylint: enable=bad-whitespace
    init_kernel = lambda s, t: tf.constant(kernel_feed, dtype=t, shape=s)
    masked_conv2d = blocks_masked_conv2d.GroupRasterScanConv2D(
        output_depth, [kernel_size] * 2, [1] * 2, 'SAME',
        strict_order=True,
        input_group_size=input_group_size,
        output_group_size=output_group_size,
        initializer=init_kernel)
    x = tf.placeholder(dtype=tf.float32, shape=x_shape)
    y = masked_conv2d(x)
    with self.test_session():
      tf.global_variables_initializer().run()
      y_value = y.eval(feed_dict={x: x_feed})
      self.assertAllEqual(y_expected, y_value)

  def testInFillingKernel(self):
    kernel_size = 5
    input_depth = 1
    output_depth = 1
    kernel_shape = [kernel_size, kernel_size, input_depth, output_depth]
    # pylint: disable=bad-whitespace
    kernel_feed = [[ 1.0,  2.0,  3.0,  4.0,  5.0],
                   [ 6.0,  7.0,  8.0,  9.0, 10.0],
                   [11.0, 12.0, 13.0, 14.0, 15.0],
                   [16.0, 17.0, 18.0, 19.0, 20.0],
                   [21.0, 22.0, 23.0, 24.0, 25.0]]
    kernel_feed = np.reshape(kernel_feed, kernel_shape)
    kernel_expected = [[ 1.0,  2.0,  3.0,  4.0,  5.0],
                       [ 6.0,  7.0,  8.0,  9.0, 10.0],
                       [11.0, 12.0,  0.0, 14.0, 15.0],
                       [16.0, 17.0, 18.0, 19.0, 20.0],
                       [21.0, 22.0, 23.0, 24.0, 25.0]]
    kernel_expected = np.reshape(kernel_expected, kernel_shape)
    # pylint: enable=bad-whitespace
    init_kernel = lambda s, t: tf.constant(kernel_feed, dtype=t, shape=s)
    masked_conv2d = blocks_masked_conv2d.InFillingConv2D(
        output_depth, [kernel_size] * 2, [1] * 2, 'SAME',
        initializer=init_kernel)
    x = tf.placeholder(dtype=tf.float32, shape=[10] * 3 + [input_depth])
    _ = masked_conv2d(x)
    with self.test_session():
      tf.global_variables_initializer().run()
      kernel_value = masked_conv2d._kernel.eval()
      self.assertAllEqual(kernel_expected, kernel_value)

  def testConv2DMaskedNumerics(self):
    kernel_size = 5
    input_shape = [1, 10, 10, 1]
    filter_shape = [kernel_size, kernel_size, 1, 1]
    strides = [1, 1, 1, 1]
    output_shape = [1, 10, 10, 1]
    conv = blocks_masked_conv2d.RasterScanConv2D(
        depth=filter_shape[-1],
        filter_size=filter_shape[0:2],
        strides=strides[1:3],
        padding='SAME',
        initializer=tf.constant_initializer(value=1.0))
    x = tf.placeholder(dtype=tf.float32, shape=input_shape)
    y = conv(x)
    x_feed = -np.ones(input_shape, dtype=float)
    y_expected = np.ones(output_shape, dtype=float)
    for i in xrange(input_shape[1]):
      for j in xrange(input_shape[2]):
        x_feed[0, i, j, 0] = 10 * (j + 1) + i
        v = 0
        ki_start = max(i - kernel_size // 2, 0)
        kj_start = max(j - kernel_size // 2, 0)
        kj_end = min(j + kernel_size // 2, input_shape[2] - 1)
        for ki in range(ki_start, i + 1):
          for kj in range(kj_start, kj_end + 1):
            if ki > i:
              continue
            if ki == i and kj >= j:
              continue
            v += 10 * (kj + 1) + ki
        y_expected[0, i, j, 0] = v
    with self.test_session():
      tf.global_variables_initializer().run()
      y_value = y.eval(feed_dict={x: x_feed})
      self.assertAllEqual(y_expected, y_value)


if __name__ == '__main__':
  ...  # truncated in the original snippet
```
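The expected kernel in `testRasterScanKernel` follows from the raster-scan constraint: a kernel tap at `(ki, kj)` survives only if it comes strictly before the center position in row-major order. A minimal sketch of that mask in plain Python (my reconstruction for illustration, not the library's actual implementation):

```python
def raster_scan_mask(kernel_size):
    """1.0 for taps strictly before the kernel center in row-major order."""
    center = kernel_size // 2
    return [[1.0 if (ki < center or (ki == center and kj < center)) else 0.0
             for kj in range(kernel_size)]
            for ki in range(kernel_size)]


# Same 5x5 feed as the test: values 1..25 in row-major order.
kernel_feed = [[ki * 5 + kj + 1.0 for kj in range(5)] for ki in range(5)]
masked = [[kernel_feed[i][j] * m for j, m in enumerate(row)]
          for i, row in enumerate(raster_scan_mask(5))]
print(masked[2])  # -> [11.0, 12.0, 0.0, 0.0, 0.0], matching kernel_expected
```

This reproduces the test's `kernel_expected`: the first two rows pass through, the center row is cut off at the center tap, and everything below is zeroed.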


models.py

Source: models.py Github


```python
from keras.layers import Dense, Input, LSTM, Dropout
from keras.models import Model, Sequential
from keras.initializers import Constant, RandomNormal, RandomUniform
from keras.optimizers import Adam
from keras.regularizers import l1, l2


def create_model_stateless(seq_len, seq_dim, kernel_initializer=RandomUniform(), kernel_regularizer=None,
                           hidden_units=50, hidden_layers=1, activation="tanh", regularization=0.01):
    x = Input(shape=(seq_len, seq_dim))
    if hidden_layers == 1:
        h = LSTM(hidden_units, return_sequences=False, activation=activation,
                 kernel_initializer=kernel_initializer, kernel_regularizer=kernel_regularizer)(x)
    else:
        h = LSTM(hidden_units, return_sequences=True, activation=activation,
                 kernel_initializer=kernel_initializer, kernel_regularizer=kernel_regularizer)(x)
        for i in range(1, hidden_layers):
            # Only the last LSTM collapses the sequence to a single vector.
            return_sequences = (i != hidden_layers - 1)
            h = LSTM(hidden_units, return_sequences=return_sequences, activation=activation,
                     kernel_initializer=kernel_initializer, kernel_regularizer=kernel_regularizer)(h)
    y = Dense(seq_dim, activation="linear", kernel_initializer=kernel_initializer, kernel_regularizer=None)(h)
    model = Model(x, y)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model


def create_model_stateful(batch_size, seq_len, seq_dim, kernel_initializer=RandomUniform(), kernel_regularizer=None,
                          hidden_units=10, hidden_layers=1, activation="relu"):
    x = Input(batch_shape=(batch_size, seq_len, seq_dim))
    if hidden_layers == 1:
        h = LSTM(hidden_units, activation=activation, stateful=True, return_sequences=False,
                 kernel_regularizer=kernel_regularizer, kernel_initializer=kernel_initializer)(x)
    else:
        h = LSTM(hidden_units, activation=activation, stateful=True, return_sequences=True,
                 kernel_regularizer=kernel_regularizer, kernel_initializer=kernel_initializer)(x)
        for i in range(1, hidden_layers):
            return_sequences = (i != hidden_layers - 1)
            h = LSTM(hidden_units, activation=activation, stateful=True, return_sequences=return_sequences,
                     kernel_regularizer=kernel_regularizer, kernel_initializer=kernel_initializer)(h)
    y = Dense(seq_dim, activation="linear", kernel_initializer=kernel_initializer)(h)
    model = Model(x, y)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model


def create_dropout_rnn(seq_len, seq_dim, kernel_initializer=RandomNormal(), kernel_regularizer=l2(0.0001),
                       hidden_units=100, hidden_layers=1, activation="relu", dropout_recurrent=0.1,
                       dropout_dense=0.1, dropout_input=0.1, dropout_lstm=0.1):
    x = Input(shape=(seq_len, seq_dim))
    h = Dropout(dropout_input)(x)
    h = LSTM(hidden_units, activation=activation, kernel_regularizer=kernel_regularizer,
             recurrent_regularizer=kernel_regularizer, recurrent_dropout=dropout_recurrent,
             dropout=dropout_lstm, bias_regularizer=kernel_regularizer,
             kernel_initializer=kernel_initializer, recurrent_initializer=kernel_initializer,
             bias_initializer=kernel_initializer)(h)
    h = Dropout(dropout_dense)(h)
    y = Dense(1, activation="linear", kernel_regularizer=kernel_regularizer, bias_regularizer=kernel_regularizer,
              kernel_initializer=kernel_initializer, bias_initializer=kernel_initializer)(h)
    model = Model(x, y)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model


def create_dropout_rnn_stateful(seq_len, seq_dim, kernel_initializer=RandomNormal(), kernel_regularizer=l2(0.0001),
                                hidden_units=100, hidden_layers=1, activation="relu", dropout_recurrent=0.1,
                                dropout_dense=0.1, dropout_input=0.1, dropout_lstm=0.1):
    x = Input(batch_shape=(1, seq_len, seq_dim))
    # Input dropout must come before the LSTM stack for every depth; the
    # original only applied it in the single-layer branch, leaving h
    # undefined when hidden_layers > 1.
    h = Dropout(dropout_input)(x)
    for i in range(hidden_layers - 1):
        h = LSTM(hidden_units, activation=activation, kernel_regularizer=kernel_regularizer,
                 recurrent_regularizer=kernel_regularizer, recurrent_dropout=dropout_recurrent,
                 dropout=dropout_lstm, bias_regularizer=kernel_regularizer, stateful=True,
                 kernel_initializer=kernel_initializer, recurrent_initializer=kernel_initializer,
                 bias_initializer=kernel_initializer, return_sequences=True)(h)
    h = LSTM(hidden_units, activation=activation, kernel_regularizer=kernel_regularizer,
             recurrent_regularizer=kernel_regularizer, recurrent_dropout=dropout_recurrent,
             dropout=dropout_lstm, bias_regularizer=kernel_regularizer, stateful=True,
             kernel_initializer=kernel_initializer, recurrent_initializer=kernel_initializer,
             bias_initializer=kernel_initializer, return_sequences=False)(h)
    h = Dropout(dropout_dense)(h)
    y = Dense(1, activation="linear", kernel_regularizer=kernel_regularizer, bias_regularizer=kernel_regularizer,
              kernel_initializer=kernel_initializer, bias_initializer=kernel_initializer)(h)
    model = Model(x, y)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model


def reinstantiate_model(config, weights):
    model = Model.from_config(config)
    model.set_weights(weights)
    return model


# class CustomLSTM(LSTM):
#     def call(self, inputs, mask=None, training=None, initial_state=None):
#         self.cell._generate_dropout_mask(inputs, training=training)
#         self.cell._generate_recurrent_dropout_mask(inputs, training=training)
#         return RNN.call(inputs, mask=mask, training=training, ...
```
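The `create_model_*` builders all follow the standard LSTM stacking rule: every layer except the last returns full sequences (so the next LSTM receives a 3D tensor), while the last returns only the final state vector for the Dense head. Independent of Keras, the flag pattern reduces to:

```python
def return_sequences_flags(hidden_layers):
    """return_sequences flag for each stacked LSTM: True for all but the last."""
    return [i < hidden_layers - 1 for i in range(hidden_layers)]


print(return_sequences_flags(1))  # -> [False]
print(return_sequences_flags(3))  # -> [True, True, False]
```

Getting this wrong in either direction raises a shape error: a middle layer with `return_sequences=False` starves the next LSTM of its time axis, and a final layer with `return_sequences=True` feeds a 3D tensor to the Dense output.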


Blogs

Check out the latest blogs from LambdaTest on this topic:

Scala Testing: A Comprehensive Guide

Before we discuss Scala testing, let us understand the fundamentals of Scala and how this programming language is a preferred choice for your development requirements. The popularity and usage of Scala are rapidly rising, evident from the ever-increasing open positions for Scala developers.

What Agile Testing (Actually) Is

So, now that the first installment of this two fold article has been published (hence you might have an idea of what Agile Testing is not in my opinion), I’ve started feeling the pressure to explain what Agile Testing actually means to me.

How To Choose The Right Mobile App Testing Tools

Did you know that according to Statista, the number of smartphone users will reach 18.22 billion by 2025? Let's face it: digital transformation is skyrocketing and will continue to do so. This swamps the mobile app development market with various options and gives rise to the need for the best mobile app testing tools.

A Complete Guide To CSS Houdini

As a developer, checking the cross browser compatibility of your CSS properties is of utmost importance when building your website. I have often found myself excited to use a CSS feature only to discover that it’s still not supported on all browsers. Even if it is supported, the feature might be experimental and not work consistently across all browsers. Ask any front-end developer about using a CSS feature whose support is still in the experimental phase in most prominent web browsers.

Appium Testing Tutorial For Mobile Applications

The count of mobile users is on a steep rise. According to the research, by 2025, it is expected to reach 7.49 billion users worldwide. 70% of all US digital media time comes from mobile apps, and to your surprise, the average smartphone owner uses ten apps per day and 30 apps each month.

Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub: right from setting up the prerequisites to run your first automation test, to following best practices and diving deeper into advanced test scenarios. LambdaTest Learning Hubs compile step-by-step guides to help you become proficient with different test automation frameworks, i.e., Selenium, Cypress, TestNG, etc.

LambdaTest Learning Hubs:

YouTube

You can also refer to video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run autotest automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

Try LambdaTest Now!

Get 100 automation test minutes FREE!

Next-Gen App & Browser Testing Cloud
