MultilayerPerceptron

class imagepypelines.builtin_blocks.MultilayerPerceptron(neurons=512, dropout=0.5, num_hidden=1, learning_rate=0.01, decay=1e-06, momentum=0.9, batch_size=128, label_type='integer', validation=0.0, num_epochs=1)[source]

Bases: imagepypelines.core.block_subclasses.BatchBlock

Simple Neural Network Classifier to predict the label of input features

This block uses keras as its backend. The multilayer perceptron supports both integer and one-hot (categorical) labeling during training.

Parameters:
  • neurons (int) – the number of neurons in each of the first and hidden layers
  • dropout (float) – the fraction of neurons dropped out after each layer to mitigate network overfitting
  • num_hidden (int) – number of layers containing ‘neurons’ fully-connected neurons between the first and last layers. This is the parameter to tweak to make the network _deeper_
  • learning_rate (float) – initial learning rate for the SGD optimizer
  • decay (float) – learning rate decay for the SGD optimizer
  • momentum (float) – momentum of the SGD optimizer, this affects the descent rate and oscillation dampening
  • batch_size (int) – number of datums to train on in each batch; larger values improve speed but increase the memory footprint. default is 128
  • label_type (string) – the type of labels passed in, must be either ‘categorical’ (one-hot) labels or ‘integer’ labels. default is ‘integer’
  • validation (float) – the fraction of training data that will be used for validating the model. default is 0.0
  • num_epochs (int) – the number of epochs to train this model for (the number of times the model is trained on the training data). a higher value usually yields better results but linearly increases training time. default is 1
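Taken together, `neurons`, `dropout`, and `num_hidden` describe the layer stack the block builds. The following is a minimal pure-Python sketch of that layout, not the block's actual internals; `num_classes` is a hypothetical stand-in, since the real block infers the number of output classes from the training labels.

```python
def mlp_layout(neurons=512, dropout=0.5, num_hidden=1, num_classes=10):
    """Sketch the layer stack implied by the constructor parameters.

    num_classes is hypothetical; the real block infers it from the labels.
    """
    layers = []
    # first fully-connected layer, followed by dropout
    layers.append(("Dense", neurons))
    layers.append(("Dropout", dropout))
    # num_hidden additional fully-connected layers, each followed by dropout
    for _ in range(num_hidden):
        layers.append(("Dense", neurons))
        layers.append(("Dropout", dropout))
    # final classification layer, one output per class
    layers.append(("Softmax", num_classes))
    return layers

layout = mlp_layout(neurons=512, dropout=0.5, num_hidden=1)
```

Increasing `num_hidden` appends more Dense/Dropout pairs between the first and last layers, which is what makes the network deeper.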
neurons

the number of neurons in each of the first and hidden layers

Type:int
dropout

the fraction of neurons dropped out after each layer to mitigate network overfitting

Type:float
num_hidden

number of layers containing ‘neurons’ fully-connected neurons between the first and last layers

Type:int
learning_rate

initial learning rate for the SGD optimizer

Type:float
decay

learning rate decay for the SGD optimizer

Type:float
momentum

momentum of the SGD optimizer; this affects the descent rate and oscillation dampening

Type:float
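The `learning_rate`, `decay`, and `momentum` parameters are all passed to the SGD optimizer. The sketch below illustrates a classic SGD update with momentum and time-based learning-rate decay, in the style of keras' legacy SGD optimizer; it is an illustrative sketch under that assumption, not the block's actual internals.

```python
def sgd_step(param, grad, velocity, learning_rate=0.01, decay=1e-6,
             momentum=0.9, iteration=0):
    """One SGD update with momentum and time-based learning-rate decay.

    Illustrative sketch in the style of keras' legacy SGD optimizer.
    """
    # time-based decay: the effective rate shrinks as iterations accumulate
    lr = learning_rate / (1.0 + decay * iteration)
    # momentum carries over a fraction of the previous step,
    # speeding descent along consistent directions and damping oscillation
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity

p, v = sgd_step(param=1.0, grad=0.5, velocity=0.0)
```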
batch_size

number of datums to process in each batch; larger values improve speed but increase the memory footprint. default is 128

Type:int
label_type

the type of labels passed in, must be either ‘categorical’ (one-hot) labels or ‘integer’ labels. default is ‘integer’

Type:string
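The two label types encode the same information: an integer label is a class index, while a ‘categorical’ label is a one-hot vector with a 1 at that index. A small illustrative converter (not part of imagepypelines; it mirrors what keras' to_categorical utility does):

```python
def to_one_hot(integer_labels, num_classes=None):
    """Convert integer class labels to one-hot ('categorical') vectors.

    Illustrative helper, not part of the imagepypelines API.
    """
    if num_classes is None:
        # assume classes are 0..max(label)
        num_classes = max(integer_labels) + 1
    one_hot = []
    for label in integer_labels:
        row = [0] * num_classes
        row[label] = 1  # a single 1 at the class index
        one_hot.append(row)
    return one_hot

to_one_hot([0, 2, 1])
```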
validation

the fraction of training data that will be used for validating the model. default is 0.0

Type:float
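A validation fraction of, say, 0.2 means one fifth of the training data is held out to evaluate the model instead of being trained on. A minimal sketch of such a split, assuming the trailing fraction is held out (as keras' validation_split does); the block's actual splitting behavior may differ:

```python
def split_validation(data, labels, validation=0.0):
    """Hold out the trailing `validation` fraction for model validation.

    Illustrative sketch only; assumes the trailing fraction is held out.
    """
    n_val = int(len(data) * validation)
    if n_val == 0:
        # validation=0.0: train on everything, validate on nothing
        return data, labels, [], []
    return data[:-n_val], labels[:-n_val], data[-n_val:], labels[-n_val:]
```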
model

the keras model being used in this block

Type:keras.models.Sequential
io_map

object that maps inputs of this block to outputs. subclass of tuple where I/O is stored as: ( (input1,output1), (input2,output2), … )

Type:IoMap
name

unique name for this block

Type:str
notes

a short description of this block

Type:str
requires_training

whether or not this block will require training

Type:bool
trained

whether or not this block has been trained, True by default if requires_training = False

Type:bool
printer

printer object for this block, registered to ‘name’

Type:ip.Printer
num_epochs

the number of epochs to train this model for (the number of times the model is trained on the training data). a higher value usually yields better results but linearly increases training time. default is 1

Type:int

Attributes Summary

EXTANT

Methods Summary

after_process() (optional overload) function that runs after processing for optional functionality.
batch_process(data) generates the predicted label given input features
before_process(data[, labels]) (optional overload) function that runs before processing for optional functionality.
label_strategy(labels) runs self.labels
labels(labels) (optional overload) returns all labels for input datums or None
process_strategy(data) runs self.batch_process
rename(name) Renames this block to the given name
train(data, labels) (optional or required overload) trains the block.

Attributes Documentation

EXTANT = {}

Methods Documentation

after_process()

(optional overload) function that runs after processing; intended for optional use as a cleanup function

Parameters:None
batch_process(data)[source]

generates the predicted label given input features
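The network emits one probability per class for each datum, and the predicted label is the index of the largest probability. A sketch of that final argmax reduction (illustrative only, not the block's actual implementation):

```python
def predict_labels(class_probabilities):
    """Reduce per-class probability rows to integer label predictions.

    Each row holds one probability per class; the predicted label is
    the index of the largest entry (argmax).
    """
    return [max(range(len(row)), key=row.__getitem__)
            for row in class_probabilities]

predict_labels([[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]])
```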

before_process(data, labels=None)

(optional overload) function that runs before processing. takes in the full data list and label list; does nothing unless overloaded

Parameters:
  • data (list) – list of datums to process
  • labels (list,None) – corresponding label for each datum, None by default (for unsupervised systems)
label_strategy(labels)

runs self.labels

labels(labels)

(optional overload) returns all labels for input datums or None

process_strategy(data)

runs self.batch_process

rename(name)

Renames this block to the given name

Parameters:name (str) – the new name for your Block
Returns:object reference to this block (self)
Return type:ip.Block

Note

unlike naming your block using the name parameter at instantiation, imagepypelines will not guarantee that this name is unique. It is the user’s responsibility to ensure that this will not cause problems in your pipeline.

train(data, labels)[source]

(optional or required overload) trains the block. overloading is required if the ‘requires_training’ parameter is set to True

users are expected to save pertinent variables as instance variables

Parameters:
  • data (list) – list of datums to train on
  • labels (list,None) – corresponding label for each datum, None by default (for unsupervised systems)
Returns:

None