


```python
# A list of the soil type values in the dataset.
soil_type_values = [f"soil_type_{idx+1}" for idx in range(0, 40)]

# Default values for parsing each CSV column: 0.0 for the numeric columns
# (including the target), and "NA" for the string columns.
COLUMN_DEFAULTS = [
    [0.0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ["NA"]
    for feature_name in CSV_HEADER
]

# The number of target classes.
NUM_CLASSES = len(TARGET_FEATURE_LABELS)
```

Next, let's define an input function that reads and parses the file, then converts features and labels into a `tf.data.Dataset` for training or evaluation.

```python
import math

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers import StringLookup


def encode_inputs(inputs, use_embedding=False):
    encoded_features = []
    for feature_name in inputs:
        if feature_name in CATEGORICAL_FEATURE_NAMES:
            vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Create a lookup to convert string values to integer indices.
            # Since we are not using a mask token nor expecting any out of vocabulary
            # (oov) token, we set mask_token to None and num_oov_indices to 0.
            lookup = StringLookup(
                vocabulary=vocabulary,
                mask_token=None,
                num_oov_indices=0,
                output_mode="int" if use_embedding else "binary",
            )
            if use_embedding:
                # Convert the string input values into integer indices.
                encoded_feature = lookup(inputs[feature_name])
                embedding_dims = int(math.sqrt(len(vocabulary)))
                # Create an embedding layer with the specified dimensions.
                embedding = layers.Embedding(
                    input_dim=len(vocabulary), output_dim=embedding_dims
                )
                # Convert the index values to embedding representations.
                encoded_feature = embedding(encoded_feature)
            else:
                # Convert the string input values into a one hot encoding.
                encoded_feature = lookup(tf.expand_dims(inputs[feature_name], -1))
        else:
            # Use the numerical features as-is.
            encoded_feature = tf.expand_dims(inputs[feature_name], -1)
        encoded_features.append(encoded_feature)
    all_features = layers.concatenate(encoded_features)
    return all_features
```

In the first experiment, let's create a multi-layer feed-forward network, where the categorical features are one-hot encoded.
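To make the two `output_mode` settings in `encode_inputs` concrete, here is a plain-Python analogue of what the lookup produces for a single value. This is illustrative only — the real work is done by the Keras `StringLookup` layer — and the three-value vocabulary below is hypothetical (the real vocabularies come from `CATEGORICAL_FEATURES_WITH_VOCABULARY`):

```python
# Hypothetical vocabulary; illustrative stand-in for a real feature vocabulary.
vocabulary = ["soil_type_1", "soil_type_2", "soil_type_3"]


def lookup_int(value):
    # Analogue of output_mode="int" with num_oov_indices=0:
    # each vocabulary term maps to its index in the vocabulary.
    return vocabulary.index(value)


def lookup_binary(value):
    # Analogue of output_mode="binary": a multi-hot vector over the
    # vocabulary, which for a single value is a one-hot vector.
    return [1 if term == value else 0 for term in vocabulary]


print(lookup_int("soil_type_2"))     # 1
print(lookup_binary("soil_type_2"))  # [0, 1, 0]
```

With `use_embedding=True`, the integer index is fed to an `Embedding` layer rather than being expanded into a one-hot vector.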

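The embedding width in `encode_inputs` comes from a common square-root heuristic, `int(math.sqrt(len(vocabulary)))`: wider embeddings for larger vocabularies, growing sub-linearly. A quick sketch of the sizes it yields for a few hypothetical vocabulary sizes:

```python
import math


def embedding_width(vocab_size):
    # Square-root heuristic used in encode_inputs to size the
    # Embedding layer's output dimension from the vocabulary size.
    return int(math.sqrt(vocab_size))


for vocab_size in (4, 16, 40):
    print(vocab_size, "->", embedding_width(vocab_size))
# 4 -> 2, 16 -> 4, 40 -> 6
```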