fast::ImageToImageNetwork class

Image-to-Image neural network process object.

This class is a convenience class for a neural network that performs image-to-image transformation: it takes one input image and outputs one image. Internally it uses TensorToImage. If you need multi-input or multi-output support, use NeuralNetwork with TensorToImage instead.
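A minimal usage sketch is shown below. It assumes the usual FAST pipeline pattern (an importer connected to the network via connect(), and runAndGetOutputData() to execute), and that the remaining create() parameters have default values; the include paths and file names are illustrative placeholders, not part of this reference.

    #include <FAST/Importers/ImageFileImporter.hpp>                    // assumed include path
    #include <FAST/Algorithms/NeuralNetwork/ImageToImageNetwork.hpp>   // assumed include path

    using namespace fast;

    int main() {
        // Import an input image (placeholder filename)
        auto importer = ImageFileImporter::create("input_image.mhd");

        // Create the network from a model file (placeholder filename) and connect it to the importer
        auto network = ImageToImageNetwork::create("model.onnx")->connect(importer);

        // Run the pipeline and retrieve the transformed image
        auto outputImage = network->runAndGetOutputData<Image>();
        return 0;
    }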

Base classes

class NeuralNetwork
Neural network process object.

Public types

enum class Normalization { CLIP_0_1 = 0, CLIP_0_SQUEEZE, NONE }
Normalization method of image after each iteration.

Public functions

auto create(std::string modelFilename, float scaleFactor, int iterations, bool residualNetwork, bool resizeBackToOriginalSize, bool castBackToOriginalType, std::vector<int> channelsToExtract, Normalization normalization, float meanIntensity, float standardDeviationIntensity, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins) -> std::shared_ptr<ImageToImageNetwork>
Create instance.
auto create(std::string modelFilename, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins) -> std::shared_ptr<ImageToImageNetwork>
Create instance. C++-friendly create with parameters that must be set before loading.
void loadAttributes() virtual
void setIterations(int iterations)
auto getIterations() const -> int
void setResidualNetwork(bool residual)
void setResizeOutput(bool resizeOutput)
void setCastOutput(bool castOutput)
void setChannels(std::vector<int> channels)
void setNormalization(Normalization norm)
Specify normalization to be performed after each iteration.
void setDisabled(bool disabled)
Set disabled state. When disabled, this process object will simply forward the input image instead of processing it.
auto isDisabled() const -> bool
Check if disabled.

Private functions

void execute() virtual

Enum documentation

enum class fast::ImageToImageNetwork::Normalization

Normalization method of image after each iteration.

Enumerators
CLIP_0_1

Clip image intensities at 0 and 1

CLIP_0_SQUEEZE

Clip image intensities at 0 and squeeze intensity range if max is above 1.0

NONE

No normalization
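For illustration, the normalization method can be selected as follows; the network variable is assumed to be an existing ImageToImageNetwork instance:

    // Clip intensities at 0 and squeeze the intensity range if the maximum exceeds 1.0
    network->setNormalization(ImageToImageNetwork::Normalization::CLIP_0_SQUEEZE);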

Function documentation

std::shared_ptr<ImageToImageNetwork> fast::ImageToImageNetwork::create(std::string modelFilename, float scaleFactor, int iterations, bool residualNetwork, bool resizeBackToOriginalSize, bool castBackToOriginalType, std::vector<int> channelsToExtract, Normalization normalization, float meanIntensity, float standardDeviationIntensity, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins)

Create instance.

Parameters
modelFilename path to model to load
scaleFactor A value which is multiplied with each pixel of the input image before it is sent to the neural network. Use this to scale your pixel values. Default: 1.0
iterations Number of iterations to run the network
residualNetwork Whether this image-to-image network is a residual network. If true, the output is added to the input image to create the final output image.
resizeBackToOriginalSize Whether to resize the output image to its original input image size
castBackToOriginalType Whether to cast the output image to its input image type
channelsToExtract Which channels to extract from the output tensor. Default (empty list) is to extract all channels.
normalization Normalization method to apply to the image after each iteration
meanIntensity Mean intensity to subtract from each pixel of the input image
standardDeviationIntensity Standard deviation to divide each pixel of the input image by
inputNodes Specify names, and potentially shapes, of input nodes. Not necessary unless you only want to use certain inputs or specify the input shape manually.
outputNodes Specify names, and potentially shapes, of output nodes to use. Not necessary unless you only want to use certain outputs or specify the output shape manually.
inferenceEngine Specify which inference engine to use (TensorFlow, TensorRT, OpenVINO). By default, FAST will select the best inference engine available on your system.
customPlugins Specify path to any custom plugins/operators to load
Returns instance
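A sketch of calling this overload with explicit values is shown below; the model path, scale factor, and the use of empty vectors/strings to mean "use the default" are assumptions made for illustration only.

    auto network = ImageToImageNetwork::create(
            "model.onnx",                                   // modelFilename (placeholder path)
            1.0f / 255.0f,                                  // scaleFactor: scale 8-bit pixels to [0, 1]
            1,                                              // iterations
            false,                                          // residualNetwork
            true,                                           // resizeBackToOriginalSize
            true,                                           // castBackToOriginalType
            {},                                             // channelsToExtract: empty = all channels
            ImageToImageNetwork::Normalization::CLIP_0_1,   // normalization
            0.0f,                                           // meanIntensity
            1.0f,                                           // standardDeviationIntensity
            {},                                             // inputNodes
            {},                                             // outputNodes
            "",                                             // inferenceEngine: empty = let FAST pick (assumed)
            {}                                              // customPlugins
    );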

std::shared_ptr<ImageToImageNetwork> fast::ImageToImageNetwork::create(std::string modelFilename, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins)

Create instance. C++-friendly create with parameters that must be set before loading.

Parameters
modelFilename Path to model to load
inputNodes Specify names, and potentially shapes, of input nodes. Not necessary unless you only want to use certain inputs or specify the input shape manually.
outputNodes Specify names, and potentially shapes, of output nodes to use. Not necessary unless you only want to use certain outputs or specify the output shape manually.
inferenceEngine Specify which inference engine to use (TensorFlow, TensorRT, OpenVINO). By default, FAST will select the best inference engine available on your system.
customPlugins Specify path to any custom plugins/operators to load
Returns instance
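Because the remaining parameters must be set before the model is loaded, a typical (sketched) pattern is to call this overload and then the setters listed under Public functions. The model path is a placeholder, and the node/engine/plugin parameters are assumed to have defaults:

    auto network = ImageToImageNetwork::create("model.onnx");  // placeholder path
    network->setIterations(3);
    network->setResidualNetwork(true);
    network->setResizeOutput(true);
    network->setCastOutput(true);
    network->setNormalization(ImageToImageNetwork::Normalization::CLIP_0_1);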

void fast::ImageToImageNetwork::setNormalization(Normalization norm)

Specify normalization to be performed after each iteration.

Parameters
norm Normalization method to apply after each iteration

void fast::ImageToImageNetwork::setDisabled(bool disabled)

Set disabled state. When disabled, this process object will simply forward the input image instead of processing it.

Parameters
disabled Whether to disable this process object

bool fast::ImageToImageNetwork::isDisabled() const

Check if disabled.
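As a sketch, the disabled state can be toggled to bypass processing, for example to compare the unprocessed input with the network output; the network variable is an assumed existing instance:

    // Bypass the network: input images are forwarded unchanged
    network->setDisabled(true);
    if(network->isDisabled()) {
        // ... display or export the unprocessed image here ...
    }
    // Re-enable processing
    network->setDisabled(false);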