fast::ImageToImageNetwork class

Image-to-Image neural network process object.

This class is a convenience class for a neural network that performs image-to-image transformation: it takes one input image and outputs one image. Internally it uses TensorToImage. If you need multi-input or multi-output support, use NeuralNetwork with TensorToImage instead.

Base classes

class NeuralNetwork
Neural network process object.

Public functions

auto create(std::string modelFilename, float scaleFactor, int iterations, bool residualNetwork, bool resizeBackToOriginalSize, bool castBackToOriginalType, std::vector<int> channelsToExtract, float meanIntensity, float standardDeviationIntensity, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins) -> std::shared_ptr<ImageToImageNetwork>
Create instance.
auto create(std::string modelFilename, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins) -> std::shared_ptr<ImageToImageNetwork>
Create instance. C++-friendly create with only the parameters that must be set before loading.
void loadAttributes() virtual
void setIterations(int iterations)
auto getIterations() const -> int
void setResidualNetwork(bool residual)
void setResizeOutput(bool resizeOutput)
void setCastOutput(bool castOutput)
void setChannels(std::vector<int> channels)
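
The setters above can be used to configure the network after creating it with the minimal create() overload. The following is an illustrative sketch only; the model path is hypothetical, and it assumes a working FAST installation:

```cpp
#include <FAST/Algorithms/NeuralNetwork/ImageToImageNetwork.hpp>

using namespace fast;

// Create with the minimal overload, then configure via setters
auto network = ImageToImageNetwork::create("model.onnx"); // hypothetical model file
network->setIterations(3);          // run the network 3 times
network->setResidualNetwork(true);  // add network output to the input image
network->setResizeOutput(true);     // resize output back to input image size
network->setCastOutput(true);       // cast output back to input image type
network->setChannels({0});          // extract only the first output channel
```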

Private functions

void execute() virtual

Function documentation

std::shared_ptr<ImageToImageNetwork> fast::ImageToImageNetwork::create(std::string modelFilename, float scaleFactor, int iterations, bool residualNetwork, bool resizeBackToOriginalSize, bool castBackToOriginalType, std::vector<int> channelsToExtract, float meanIntensity, float standardDeviationIntensity, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins)

Create instance.

Parameters
modelFilename Path to model to load
scaleFactor A value which is multiplied with each pixel of the input image before it is sent to the neural network. Use this to scale your pixel values. Default: 1.0
iterations Number of iterations to run the network
residualNetwork Whether this image-to-image network is a residual network. If true, the output is added to the input image to create the final output image.
resizeBackToOriginalSize Whether to resize the output image back to the size of the input image
castBackToOriginalType Whether to cast the output image back to the data type of the input image
channelsToExtract Which channels to extract from the output tensor. Default (empty list) is to extract all channels.
meanIntensity Mean intensity to subtract from each pixel of the input image
standardDeviationIntensity Standard deviation to divide each pixel of the input image by
inputNodes Specify names, and potentially shapes, of input nodes. Not necessary unless you only want to use certain inputs or specify the input shape manually.
outputNodes Specify names, and potentially shapes, of output nodes to use. Not necessary unless you only want to use certain outputs or specify the output shape manually.
inferenceEngine Specify which inference engine to use (TensorFlow, TensorRT, OpenVINO). By default, FAST will select the best inference engine available on your system.
customPlugins Specify path to any custom plugins/operators to load
Returns instance
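
Put together, a pipeline using this overload might look like the sketch below. The model and image file paths are hypothetical, and the ImageFileImporter/connect() usage assumes FAST's standard process-object pipeline API:

```cpp
#include <FAST/Algorithms/NeuralNetwork/ImageToImageNetwork.hpp>
#include <FAST/Importers/ImageFileImporter.hpp>

using namespace fast;

int main() {
    // Hypothetical input image
    auto importer = ImageFileImporter::create("input.png");

    // Scale pixels to [0, 1], run one iteration, no residual connection,
    // and resize/cast the output back to match the input image
    auto network = ImageToImageNetwork::create(
        "model.onnx",   // hypothetical model file
        1.0f / 255.0f,  // scaleFactor
        1,              // iterations
        false,          // residualNetwork
        true,           // resizeBackToOriginalSize
        true,           // castBackToOriginalType
        {},             // channelsToExtract: empty = all channels
        0.0f,           // meanIntensity
        1.0f            // standardDeviationIntensity
    )->connect(importer);

    // Run the pipeline and retrieve the output image
    auto result = network->runAndGetOutputData<Image>();
    return 0;
}
```

Parameters after standardDeviationIntensity (inputNodes, outputNodes, inferenceEngine, customPlugins) are left at their defaults here, letting FAST pick the best available inference engine.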

std::shared_ptr<ImageToImageNetwork> fast::ImageToImageNetwork::create(std::string modelFilename, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins)

Create instance. C++-friendly create with only the parameters that must be set before loading.

Parameters
modelFilename Path to model to load
inputNodes Specify names, and potentially shapes, of input nodes. Not necessary unless you only want to use certain inputs or specify the input shape manually.
outputNodes Specify names, and potentially shapes, of output nodes to use. Not necessary unless you only want to use certain outputs or specify the output shape manually.
inferenceEngine Specify which inference engine to use (TensorFlow, TensorRT, OpenVINO). By default, FAST will select the best inference engine available on your system.
customPlugins Specify path to any custom plugins/operators to load
Returns instance