fast::NeuralNetwork class

Neural network process object.

This is the base class for neural network process objects. All other neural network process objects should extend this class.

Base classes

class ProcessObject
Abstract base class for all process objects.

Derived classes

class BoundingBoxNetwork
Neural network process object for bounding box detection.
class FlowNetwork
A neural network for optical flow estimation.
class ImageClassificationNetwork
Image classification neural network.
class SegmentationNetwork
Segmentation neural network process object.

Constructors, destructors, conversion operators

~NeuralNetwork() virtual
NeuralNetwork() protected

Public functions

auto create(std::string modelFilename, float scaleFactor, float meanIntensity, float standardDeviationIntensity, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins) -> std::shared_ptr<NeuralNetwork>
void load(std::vector<uint8_t> model, std::vector<uint8_t> weights, std::vector<std::string> customPlugins = std::vector<std::string>())
void setInferenceEngine(InferenceEngine::pointer engine)
void setInferenceEngine(std::string engine)
auto getInferenceEngine() const -> InferenceEngine::pointer
void setInputNode(uint portID, std::string name, NodeType type = NodeType::IMAGE, TensorShape shape = TensorShape())
void setOutputNode(uint portID, std::string name, NodeType type = NodeType::IMAGE, TensorShape shape = TensorShape())
void setScaleFactor(float scale)
void setMeanAndStandardDeviation(float mean, float std)
void setMinAndMaxIntensity(float min, float max)
void setSignedInputNormalization(bool signedInputNormalization)
void setPreserveAspectRatio(bool preserve)
void setHorizontalFlipping(bool flip)
void setTemporalWindow(uint window)
void setInputSize(std::string name, std::vector<int> size) virtual
void loadAttributes() virtual

Public variables

modelFilename
inputNodes
outputNodes
inferenceEngine
customPlugins

Protected functions

void run() virtual
auto processInputData() -> std::unordered_map<std::string, Tensor::pointer>
auto resizeImages(const std::vector<std::shared_ptr<Image>>& images, int width, int height, int depth) -> std::vector<std::shared_ptr<Image>>
auto convertImagesToTensor(std::vector<std::shared_ptr<Image>> image, const TensorShape& shape, bool temporal) -> Tensor::pointer
auto standardizeOutputTensorData(Tensor::pointer tensor, int sample = 0) -> Tensor::pointer

Protected variables

bool mPreserveAspectRatio
bool mHorizontalImageFlipping
bool mSignedInputNormalization
int mTemporalWindow
int m_batchSize
float mScaleFactor
float mMean
float mStd
float mMinIntensity
float mMaxIntensity
bool mMinAndMaxIntensitySet
Vector3f mNewInputSpacing
std::unordered_map<std::string, std::vector<int>> mInputSizes
std::unordered_map<int, DataObject::pointer> m_processedOutputData
std::shared_ptr<InferenceEngine> m_engine
std::unordered_map<std::string, std::vector<std::shared_ptr<Image>>> mInputImages
std::unordered_map<std::string, std::vector<std::shared_ptr<Tensor>>> mInputTensors

Private functions

void execute() virtual

Function documentation

void fast::NeuralNetwork::load(std::vector<uint8_t> model, std::vector<uint8_t> weights, std::vector<std::string> customPlugins = std::vector<std::string>())

Parameters
model
weights
customPlugins paths to custom plugins/operators which can be libraries (.so/.dll) or in the case of GPU/VPU OpenVINO: .xml files.

Load a network from memory, provided as two byte vectors: model and weights. The customPlugins argument can be used to specify files for loading custom plugins/operators needed by the network model.

void fast::NeuralNetwork::setInferenceEngine(InferenceEngine::pointer engine)

Parameters
engine

Specify which inference engine to use

void fast::NeuralNetwork::setInferenceEngine(std::string engine)

Parameters
engine

Specify which inference engine to use

InferenceEngine::pointer fast::NeuralNetwork::getInferenceEngine() const

Retrieve current inference engine

void fast::NeuralNetwork::setScaleFactor(float scale)

Parameters
scale

For each input value i: new_i = i*scale

void fast::NeuralNetwork::setMeanAndStandardDeviation(float mean, float std)

Parameters
mean
std

For each input value i: new_i = (i - mean)/std. This is applied after the scale factor.

void fast::NeuralNetwork::setMinAndMaxIntensity(float min, float max)

Parameters
min
max

Intensities of input image will be clipped at these values

void fast::NeuralNetwork::setHorizontalFlipping(bool flip)

Parameters
flip

Setting this parameter to true will flip the input image horizontally. For pixel classification the output image will be flipped back.

void fast::NeuralNetwork::setTemporalWindow(uint window)

Parameters
window

Set the temporal window for dynamic mode. If window > 1, assume the second dimension of the input tensor is the number of timesteps. If the window is set to 4, the frames t-3, t-2, t-1 and t, where t is the current timestep, will be given as input to the network.

Tensor::pointer fast::NeuralNetwork::standardizeOutputTensorData(Tensor::pointer tensor, int sample = 0) protected

Parameters
tensor
sample

Converts a tensor to channel-last image ordering and takes care of frame data and spacing