Model Conversion¶
Commands for converting model files from various frameworks into the SoftNeuro DNN file format.
About preprocess, postprocess¶
In SoftNeuro, preprocess and postprocess are separate networks attached to the beginning and the end of the main network, respectively. These networks can be attached as a JSON file when converting a model. A preset JSON file is provided, but it can be edited for custom settings.
Currently, adding preprocess and postprocess networks is only available for import-onnx and import-tensorflow, and not available for import-ver3.
Preprocess specification¶
The preprocess network can be used for resizing images, specifying the resize mode, specifying the color format, normalizing data, and so on. Each element of the preprocess definition is described below.
layers¶
Each layer consists of four elements: type, params, weights, and attrs.
Properties | Required | Description | Type | Example |
---|---|---|---|---|
type | Mandatory | String specifying the layer type. source, sink, madd, or permute can be specified. | String | "madd" |
params | Optional | JSON object specifying layer parameters. | Object | example |
weights | Optional | JSON object specifying layer weights. | Object | example |
attrs | Optional | JSON object specifying layer attributes. | Object | example |
Source Layer¶
Input layer of the network.
- params
Properties | Required | Description | Type | Example |
---|---|---|---|---|
shape | Mandatory | Array of integers specifying the shape of the input data. | Array of Integer | [224, 224, 3] |
- weights: None
- attrs
Properties | Required | Description | Type | Example |
---|---|---|---|---|
format | Optional | String specifying the color format of the input image. Only "rgb" can be specified. | String | "rgb" |
resize_mode | Optional | String specifying the resize mode. "bilinear" (default) or "nearest" can be specified. | String | "bilinear" |
keep_ar | Optional | Boolean specifying whether or not to keep aspect ratio when resizing. Default is false. | Boolean | true |
padding_color | Optional | Array of numbers specifying the padding color filling the margin space when resizing. | Array of Numbers | [255, 255, 255] |
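Putting these elements together, a minimal source layer sketch might look like the following (the shape, resize, and padding values are illustrative only):
{
"type": "source",
"params": {
"shape": [224, 224, 3]
},
"attrs": {
"format": "rgb",
"resize_mode": "bilinear",
"keep_ar": true,
"padding_color": [255, 255, 255]
}
}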
Sink Layer¶
Output layer of the network.
- params: None
- weights: None
- attrs: None
Permute Layer¶
Permutes the data order along the specified axis. Commonly used for channel swapping.
- params
Properties | Required | Description | Type | Example |
---|---|---|---|---|
axis | Mandatory | Integer specifying the axis to be permuted. | Integer | 2 |
order | Mandatory | Array of integers containing the original positions ordered by their new indexes. | Array of Integer | [2, 1, 0] |
- weights: None
- attrs: None
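For illustration, a permute layer that reverses the channel order (for example, converting BGR input to RGB) could be sketched as follows; the axis value depends on the input shape, and the full preprocess example later in this section uses axis 3 because its shape includes a batch dimension:
{
"type": "permute",
"params": {
"axis": 2,
"order": [2, 1, 0]
}
}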
Madd Layer¶
Multiply-add (scale * x + bias) layer. Commonly used for normalizing input data, subtracting the mean, and so on.
- params
Properties | Required | Description | Type | Example |
---|---|---|---|---|
has_relu | Optional | Boolean specifying whether or not to execute Relu activation. Default is false. | Boolean | false |
relu_max_value | Optional | Maximum output value after Relu activation. Only valid when has_relu is true. | Number | 6.0 |
- weights
Properties | Required | Description | Type | Example |
---|---|---|---|---|
scale | Mandatory | Number or array of numbers by which the input data is multiplied. Must be an array if a different value is specified for each channel. Must have the same number of elements as bias. | Number or Array of Number | [0.01712475383, 0.0175070028, 0.01742919389] |
bias | Mandatory | Number or array of numbers to be added to the input data. Must be an array if a different value is specified for each channel. Must have the same number of elements as scale. | Number or Array of Number | [2.11790393013, 2.03571428571, 1.80444444444] |
- attrs: None
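As a worked sketch: if the original framework normalizes the input as (x / 255 - mean) / std per channel, the equivalent madd weights are scale = 1 / (255 * std) and bias = -mean / std. With mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225] (the common ImageNet values, used here only for illustration), this gives roughly:
{
"type": "madd",
"weights": {
"scale": [0.017125, 0.017507, 0.017429],
"bias": [-2.117904, -2.035714, -1.804444]
}
}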
preprocess example¶
[
{
"name": "preprocess",
"layers": [
{
"type": "source",
"params": {
"shape": [
1,
320,
320,
3
]
},
"attrs": {
"format": "rgb",
"attrs": {
"format": "rgb",
"resize_mode": "bilinear",
"keep_ar": true,
"padding_clor": [
0,
0,
0
]
}
}
},
{
"type": "permute",
"params": {
"axis": 3,
"dims": [
2,
1,
0
]
}
},
{
"type": "madd",
"weights": {
"scale": [
0.0135694,
0.0143123,
0.0141064
],
"bias": [
-1.41176,
-1.63139,
-1.69065
]
}
},
{
"type": "sink"
}
]
}
]
Postprocess specification¶
The postprocess network can be used for labeling outputs, specifying the decoding type for tasks such as object detection, and so on. Each element of the postprocess definition is described below.
layers¶
Each layer consists of four elements: type, params, weights, and attrs.
Properties | Required | Description | Type | Example |
---|---|---|---|---|
type | Mandatory | String specifying the layer type. source, sink, decode_centernet, decode_pelee, decode_ssd, decode_yolov3, decode_yolov4 can be specified. | String | "decode_ssd" |
params | Optional | JSON object specifying layer parameters. | Object | example |
weights | Optional | JSON object specifying layer weights. | Object | example |
attrs | Optional | JSON object specifying layer attributes. | Object | example |
Source Layer¶
Input layer of the network.
- params
Properties | Required | Description | Type | Example |
---|---|---|---|---|
shape | Mandatory | Array of integers specifying the shape of the input data. | Array of Integer | [224, 224, 3] |
- weights: None
- attrs: None
Sink Layer¶
Output layer of the network.
- params: None
- weights: None
- attrs
Properties | Required | Description | Type | Example |
---|---|---|---|---|
label_list | Optional | Array of strings representing output labels. | Array of String | ["cat", "dog", "bird"] |
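A minimal sink layer carrying output labels might be written as follows (the labels are placeholders):
{
"type": "sink",
"attrs": {
"label_list": ["cat", "dog", "bird"]
}
}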
Decode Layer¶
Executes the decode operation for object detection tasks. decode_centernet, decode_pelee, decode_ssd, decode_yolov3, or decode_yolov4 can be specified for type.
- params
Properties | Applicable decode types | Description | Type | Example |
---|---|---|---|---|
keep_top_k | centernet, pelee, ssd, yolov3, yolov4 | Integer specifying the maximum number of bounding boxes to keep. Bounding boxes are selected in descending order of score. Default is 300. | Integer | 300 |
do_nms | centernet, pelee, ssd, yolov3, yolov4 | Boolean specifying whether or not to execute NMS (Non-Maximum Suppression). Default is true. | Boolean | true |
nms_thresh | centernet, pelee, ssd, yolov3, yolov4 | IoU threshold for NMS. | Float | 0.5 |
conf_thresh | centernet, pelee, ssd, yolov3, yolov4 | Confidence threshold for detecting bounding boxes. | Float | 0.3 |
background_label_id | pelee, ssd | Label number of background. Default is 0. | Integer | 0 |
img_width | yolov3 | Width of input image. | Integer | 416 |
img_height | yolov3 | Height of input image. | Integer | 416 |
num_anchors_per_out | yolov3 | Number of anchor boxes for each grid cell. | Integer | 3 |
anchors | yolov3 | Array of numbers representing the sizes of the anchor boxes, as [w1, h1, w2, h2, ...]. | Array of Number | [10, 13, 16, 30] |
- weights: None
- attrs: None
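The postprocess example below uses decode_ssd; for comparison, a hypothetical decode_yolov3 layer using the yolov3-specific parameters could be sketched as follows (the image size, anchor sizes, and thresholds are placeholders, not recommended values):
{
"type": "decode_yolov3",
"params": {
"keep_top_k": 300,
"do_nms": true,
"nms_thresh": 0.45,
"conf_thresh": 0.3,
"img_width": 416,
"img_height": 416,
"num_anchors_per_out": 3,
"anchors": [10, 13, 16, 30, 33, 23]
}
}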
postprocess example¶
[
{
"name": "postprocess",
"layers": [
{
"name": "input",
"type": "source",
"params": {
"shape":[1, 300, 15]
}
},
{
"name": "decode",
"type": "decode_ssd",
"params": {
"keep_top_k" : 300,
"background_label_id" : 0,
"nms_thresh" : 0.45,
"conf_thresh" : 0.01
}
},
{
"name": "output",
"type": "sink"
}
]
}
]
import-onnx¶
Convert an ONNX model to DNN.
Usage
softneuro import-onnx [--naive] [--extract]
[--preprocess_json PREPROCESS_JSON]
[--postprocess_json POSTPROCESS_JSON] [--help]
INPUT OUTPUT
Arguments
Argument | Description |
---|---|
INPUT | ONNX model file to be converted. |
OUTPUT | Resulting DNN file path. If the extract flag is used, this is the folder where the model JSON will be saved. |
--preprocess_json PREPROCESS_JSON | A JSON file containing a preprocess network definition to be added to the DNN model. |
--postprocess_json POSTPROCESS_JSON | A JSON file containing a postprocess network definition to be added to the DNN model. |
Flags
Flag | Description |
---|---|
--naive | Save the DNN file without FuseRelu-type optimization. |
--extract | Save the model information as a JSON file. |
--help | Shows the command help. |
Example
After the model is converted, the mobilenet_v2.dnn file will be created.
$ softneuro import-onnx mobilenet_v2.onnx mobilenet_v2.dnn
import model
converting initializers: done
converting nodes: done
rectify model
optimize model
save model
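To attach preprocess and postprocess networks during conversion, the JSON files described earlier can be passed with the corresponding options (the file names below are illustrative):
$ softneuro import-onnx --preprocess_json preprocess.json --postprocess_json postprocess.json mobilenet_v2.onnx mobilenet_v2.dnn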
import-tensorflow¶
Convert a TensorFlow model to DNN.
Usage
softneuro import-tensorflow [-h] [--keras] [--list-keras] [--naive] [--extract] [--preprocess_json PREPROCESS_JSON] [--postprocess_json POSTPROCESS_JSON] [--output_attrs OUTPUT_ATTRS]
[--input_node [INPUT_NODE_NAME]] [--input_shape [INPUT_SHAPE]] [--output_node [OUTPUT_NODE_NAME]] [--fix_shape_inf]
INPUT OUTPUT
Arguments
Argument | Description |
---|---|
INPUT | Protocol buffer (.pb) or HDF5 format model file. For HDF5, the file must contain the model structure as well as the weights. |
OUTPUT | Resulting DNN file path. If the extract flag is used, this is the folder where the model JSON will be saved. |
Flags
Flag | Description |
---|---|
--keras | Convert a pretrained model from tf.keras.applications by giving the model name as the input. |
--list-keras | Show the list of available model names for the --keras option. |
--naive | Save the DNN file without FuseRelu-type optimization. |
--extract | Save the model information as a JSON file. |
--preprocess_json PREPROCESS_JSON | A json file containing a preprocess network definition to be added to the DNN model. |
--postprocess_json POSTPROCESS_JSON | A json file containing a postprocess network definition to be added to the DNN model. |
--output_attrs OUTPUT_ATTRS | A JSON containing the output attributes (output labels), or a preset name. |
--input_node [INPUT_NODE_NAME] | Model input node name. |
--input_shape [INPUT_SHAPE] | Model input shape. |
--output_node [OUTPUT_NODE_NAME] | Model output node name. |
--fix_shape_inf | Turn off shape inference. |
-h, --help | Shows the command help. |
Example
After the model is converted, the vgg16.dnn file will be created.
softneuro import-tensorflow vgg16.pb vgg16.dnn
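If the --keras option is used as described above, a pretrained tf.keras.applications model can be converted by giving the model name in place of a model file; the invocation below is a sketch (use --list-keras to check the accepted model names):
softneuro import-tensorflow --list-keras
softneuro import-tensorflow --keras VGG16 vgg16.dnn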
import-ver3¶
Convert a SoftNeuro V3 DNN model file to a SoftNeuro V5 DNN model file.
Usage
softneuro import-ver3 [--naive] [--extract] [--help] INPUT OUTPUT
Arguments
Argument | Description |
---|---|
INPUT | V3 DNN file. |
OUTPUT | Resulting DNN file path. If the extract flag is used, this is the folder where the model JSON will be saved. |
Flags
Flag | Description |
---|---|
--naive | Save the DNN file without FuseRelu-type optimization. |
--extract | Save the model information as a JSON file. |
--help | Shows the command help. |
Example
After the model is converted, the vgg16_v5.dnn file will be created.
softneuro import-ver3 vgg16_v3.dnn vgg16_v5.dnn
import model
rectify model
optimize model
save model