class NeuralNetwork
Neural network process object.
This is the neural network process object base. All other neural network process objects should extend this class.
Base classes
- class ProcessObject
- Abstract base class for all process objects.
Derived classes
- class BoundingBoxNetwork
- Neural network process object for bounding box detection.
- class FlowNetwork
- A neural network for optical flow estimation.
- class ImageClassificationNetwork
- Image classification neural network.
- class ImageToImageNetwork
- Image-to-Image neural network process object.
- class SegmentationNetwork
- Segmentation neural network process object.
Constructors, destructors, conversion operators
- ~NeuralNetwork() virtual
- NeuralNetwork() protected
Public functions
- auto create(std::string modelFilename, float scaleFactor, float meanIntensity, float standardDeviationIntensity, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins) -> std::shared_ptr<NeuralNetwork>
- Create an instance. Python-friendly constructor with almost all parameters.
- auto create(std::string modelFilename, std::vector<NeuralNetworkNode> inputNodes, std::vector<NeuralNetworkNode> outputNodes, std::string inferenceEngine, std::vector<std::string> customPlugins) -> std::shared_ptr<NeuralNetwork>
- Create an instance. C++-friendly create with parameters that must be set before loading.
- void load(std::string filename, std::vector<std::string> customPlugins = std::vector<std::string>())
- void load(std::vector<uint8_t> model, std::vector<uint8_t> weights, std::vector<std::string> customPlugins = std::vector<std::string>())
- void setInferenceEngine(InferenceEngine::pointer engine)
- void setInferenceEngine(std::string engine)
- auto getInferenceEngine() const -> InferenceEngine::pointer
- void setInputNode(uint portID, std::string name, NodeType type = NodeType::IMAGE, TensorShape shape = TensorShape())
- void setOutputNode(uint portID, std::string name, NodeType type = NodeType::IMAGE, TensorShape shape = TensorShape())
- void setInputNode(NeuralNetworkNode node)
- void setOutputNode(NeuralNetworkNode node)
- void setScaleFactor(float scale)
- void setMeanAndStandardDeviation(float mean, float std)
- void setMinAndMaxIntensity(float min, float max)
- void setSignedInputNormalization(bool signedInputNormalization)
- void setPreserveAspectRatio(bool preserve)
- void setHorizontalFlipping(bool flip)
- auto getInputNodes() -> std::map<std::string, NeuralNetworkNode>
- auto getOutputNodes() -> std::map<std::string, NeuralNetworkNode>
- auto getNode(std::string name) -> NeuralNetworkNode
- void setTemporalWindow(uint window)
- Set the temporal window for dynamic mode. If window > 1, assume the second dimension of the input tensor is the number of timesteps. If the window is set to 4, the frames t-3, t-2, t-1 and t, where t is the current timestep, will be given as input to the network.
- void addTemporalState(std::string inputNodeName, std::string outputNodeName, TensorShape shape = TensorShape())
- Add a temporal state that uses input and output nodes to remember state between runs.
- void setInputSize(std::string name, std::vector<int> size) virtual
- void loadAttributes() virtual
Protected functions
- void runNeuralNetwork() virtual
- auto processInputData() -> std::unordered_map<std::string, Tensor::pointer>
- auto resizeImages(const std::vector<std::shared_ptr<Image>>& images, int width, int height, int depth) -> std::vector<std::shared_ptr<Image>>
- auto convertImagesToTensor(std::vector<std::shared_ptr<Image>> image, const TensorShape& shape, bool temporal) -> Tensor::pointer
- auto standardizeOutputTensorData(Tensor::pointer tensor, int sample = 0) -> Tensor::pointer
- void processOutputTensors()
Protected variables
- bool mPreserveAspectRatio
- bool mHorizontalImageFlipping
- bool mSignedInputNormalization
- int mTemporalWindow
- int m_batchSize
- float mScaleFactor
- float mMean
- float mStd
- float mMinIntensity
- float mMaxIntensity
- bool mMinAndMaxIntensitySet
- Vector3f mNewInputSpacing
- Vector3i m_newInputSize
- std::unordered_map<std::string, std::vector<int>> mInputSizes
- std::unordered_map<int, DataObject::pointer> m_processedOutputData
- std::shared_ptr<InferenceEngine> m_engine
- std::unordered_map<std::string, std::vector<std::shared_ptr<Image>>> mInputImages
- std::unordered_map<std::string, std::vector<std::shared_ptr<Tensor>>> mInputTensors
- std::map<std::string, Tensor::pointer> m_temporalStateNodes
- std::vector<std::pair<std::string, std::string>> m_temporalStateLinks
Private functions
- void execute() virtual
Function documentation
std::shared_ptr<NeuralNetwork> fast::NeuralNetwork::create(std::string modelFilename,
float scaleFactor,
float meanIntensity,
float standardDeviationIntensity,
std::vector<NeuralNetworkNode> inputNodes,
std::vector<NeuralNetworkNode> outputNodes,
std::string inferenceEngine,
std::vector<std::string> customPlugins)
Create an instance. Python-friendly constructor with almost all parameters.
Parameters | |
---|---|
modelFilename | Path to model to load |
scaleFactor | A value which is multiplied with each pixel of input image before it is sent to the neural network. Use this to scale your pixels values. Default: 1.0 |
meanIntensity | Mean intensity to subtract from each pixel of the input image |
standardDeviationIntensity | Standard deviation to divide each pixel of the input image by |
inputNodes | Specify names, and potentially shapes, of input nodes. Not necessary unless you only want to use certain inputs or specify the input shape manually. |
outputNodes | Specify names, and potentially shapes, of output nodes to use. Not necessary unless you only want to use certain outputs or specify the output shape manually. |
inferenceEngine | Specify which inference engine to use (TensorFlow, TensorRT, OpenVINO). By default, FAST will select the best inference engine available on your system. |
customPlugins | Specify path to any custom plugins/operators to load |
Returns | instance |
std::shared_ptr<NeuralNetwork> fast::NeuralNetwork::create(std::string modelFilename,
std::vector<NeuralNetworkNode> inputNodes,
std::vector<NeuralNetworkNode> outputNodes,
std::string inferenceEngine,
std::vector<std::string> customPlugins)
Create an instance. C++-friendly create with parameters that must be set before loading.
Parameters | |
---|---|
modelFilename | Path to model to load |
inputNodes | Specify names, and potentially shapes, of input nodes. Not necessary unless you only want to use certain inputs or specify the input shape manually. |
outputNodes | Specify names, and potentially shapes, of output nodes to use. Not necessary unless you only want to use certain outputs or specify the output shape manually. |
inferenceEngine | Specify which inference engine to use (TensorFlow, TensorRT, OpenVINO). By default, FAST will select the best inference engine available on your system. |
customPlugins | Specify path to any custom plugins/operators to load |
Returns | instance |
void fast::NeuralNetwork::load(std::string filename,
std::vector<std::string> customPlugins = std::vector<std::string>())
Parameters | |
---|---|
filename | Path to network model file. |
customPlugins | Paths to custom plugins/operators which can be libraries (.so/.dll) or in the case of GPU/VPU OpenVINO: .xml files. |
Load a given network model file. This takes time. The second argument can be used to specify files for loading custom plugins/operators needed by the network model.
void fast::NeuralNetwork::load(std::vector<uint8_t> model,
std::vector<uint8_t> weights,
std::vector<std::string> customPlugins = std::vector<std::string>())
Parameters | |
---|---|
model | |
weights | |
customPlugins | paths to custom plugins/operators which can be libraries (.so/.dll) or in the case of GPU/VPU OpenVINO: .xml files. |
Load a network from memory, provided as two byte vectors: model and weights. The third argument can be used to specify files for loading custom plugins/operators needed by the network model.
void fast::NeuralNetwork::setInferenceEngine(InferenceEngine::pointer engine)
Parameters | |
---|---|
engine |
Specify which inference engine to use
void fast::NeuralNetwork::setInferenceEngine(std::string engine)
Parameters | |
---|---|
engine |
Specify which inference engine to use
InferenceEngine::pointer fast::NeuralNetwork::getInferenceEngine() const
Retrieve the current inference engine.
void fast::NeuralNetwork::setScaleFactor(float scale)
Parameters | |
---|---|
scale |
For each input value i: new_i = i*scale
void fast::NeuralNetwork::setMeanAndStandardDeviation(float mean,
float std)
Parameters | |
---|---|
mean | |
std |
For each input value i: new_i = (i - mean)/std. This is applied after the scale factor.
void fast::NeuralNetwork::setMinAndMaxIntensity(float min,
float max)
Parameters | |
---|---|
min | |
max |
Intensities of the input image will be clipped at these values.
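Taken together, setMinAndMaxIntensity, setScaleFactor and setMeanAndStandardDeviation define a simple per-pixel formula. The following plain-C++ sketch illustrates the documented formulas only; it is not FAST's implementation, and the clip-before-scale ordering is an assumption:

```cpp
#include <algorithm>

// Illustrative per-pixel preprocessing (assumed order):
//   1. clip to [min, max]              (setMinAndMaxIntensity)
//   2. multiply by the scale factor    (setScaleFactor)
//   3. subtract mean, divide by std    (setMeanAndStandardDeviation)
float preprocessPixel(float i, float minIntensity, float maxIntensity,
                      float scale, float mean, float stdDev) {
    float clipped = std::min(std::max(i, minIntensity), maxIntensity);
    float scaled = clipped * scale;
    return (scaled - mean) / stdDev;
}
```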
void fast::NeuralNetwork::setHorizontalFlipping(bool flip)
Parameters | |
---|---|
flip |
Setting this parameter to true will flip the input image horizontally. For pixel classification the output image will be flipped back.
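As a minimal illustration of what horizontal flipping does to the pixel data (a sketch, not FAST's implementation), each row of the image is mirrored:

```cpp
#include <algorithm>
#include <vector>

// Mirror each row of a row-major image with `cols` pixels per row.
// Illustrative only; FAST flips internally and, for pixel
// classification, flips the output image back.
std::vector<int> flipHorizontally(std::vector<int> image, std::size_t cols) {
    for (std::size_t r = 0; r + cols <= image.size(); r += cols)
        std::reverse(image.begin() + r, image.begin() + r + cols);
    return image;
}
```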
void fast::NeuralNetwork::setTemporalWindow(uint window)
Set the temporal window for dynamic mode. If window > 1, assume the second dimension of the input tensor is the number of timesteps. If the window is set to 4, the frames t-3, t-2, t-1 and t, where t is the current timestep, will be given as input to the network.
Parameters | |
---|---|
window |
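The temporal window behaviour can be sketched with a small buffer (hypothetical names, not the FAST API): with a window of 4, the input at timestep t is the frames t-3, t-2, t-1 and t.

```cpp
#include <deque>
#include <vector>

// Keep the last `window` frames and hand them to the network in
// chronological order. Sketch only; FAST stacks real image frames
// along the second tensor dimension.
std::vector<int> temporalInput(std::deque<int>& buffer, int frame, std::size_t window) {
    buffer.push_back(frame);
    if (buffer.size() > window)
        buffer.pop_front();
    return std::vector<int>(buffer.begin(), buffer.end());
}
```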
void fast::NeuralNetwork::addTemporalState(std::string inputNodeName,
std::string outputNodeName,
TensorShape shape = TensorShape())
Add a temporal state that uses input and output nodes to remember state between runs.
Parameters | |
---|---|
inputNodeName | Name of the input node for the given temporal state |
outputNodeName | Name of the output node for the given temporal state |
shape | Shape of the temporal state tensor. If empty, FAST will try to find the shape automatically. |
Not all inference engines support stateful temporal neural networks directly. Stateful LSTM/GRU/ConvLSTM layers remember their internal state from one run to the next. Statefulness can still be enabled by having an additional input and output node for each temporal state in the neural network, and then copying the temporal state from the output node to the input node for the next run. On the first run, the input nodes for the temporal state are all zeros.
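The state-copy mechanism described above can be sketched as follows (hypothetical names and a stand-in tensor type, not the FAST implementation):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

using StateTensor = std::vector<float>; // stand-in for FAST's Tensor

// After each run, copy every temporal output node to its linked
// input node so the next run sees the remembered state. Before the
// first run, the temporal input states hold all zeros.
void copyTemporalStates(
        const std::vector<std::pair<std::string, std::string>>& links, // {inputNode, outputNode}
        const std::map<std::string, StateTensor>& outputs,
        std::map<std::string, StateTensor>& inputs) {
    for (const auto& link : links)
        inputs[link.first] = outputs.at(link.second);
}
```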
Tensor::pointer fast::NeuralNetwork::standardizeOutputTensorData(Tensor::pointer tensor,
int sample = 0) protected
Parameters | |
---|---|
tensor | |
sample |
Converts a tensor to channels-last image ordering and takes care of frame data and spacing.