CIANNA Python Interface API

[Up to date with V-1.0.1+ stable]

This page lists all the functions available in the CIANNA Python interface.
As new versions are released, we will provide direct links to the versions of this page that correspond to each release.

All these functions are declared in the src/python_module.c file, which should always be the ultimate reference for each sub-version of CIANNA. Default values specified in the API documentation are either set in the Python interface or in the corresponding low-level C function.

Table of contents



Initialization



CIANNA.init(in_dim, in_nb_ch, out_dim, bias=0.1, b_size=8, comp_meth="C_CUDA", network=nb_networks-1,
            dynamic_load=1, mixed_precision="off", inference_only=0, no_logo=0, adv_size=30)

Create and initialize a network object with all the parameters that must be set before using any other function. The various dimensional parameters (in_dim, in_nb_ch, out_dim, b_size) must be set in advance so CIANNA can construct properly sized arrays when calling dataset or layer construction functions.


CIANNA.free_network(network=0)

[V-1.0.1+] Free all network components, including all individual layers.

Note: Individual layer destruction is not supported at the API level.

Dataset management



CIANNA.create_dataset(dataset, size, input, target, network=nb_networks-1, silent=0)

Create and initialize a dataset object, create the required low-level C arrays, and fill them using the provided Numpy arrays. The function handles data movement from host to GPU if needed based on parameters specified in the init function.

Note: The memory usage during the execution of this function can reach up to twice the size of the dataset. The Numpy arrays can be deleted after the call, as their content has been copied into the low-level C arrays. If the compute method is "C_CUDA" and a mixed precision is set, the low-level C arrays will be of the reduced type both on the host and on the GPU, lowering the memory requirements of the datasets.

Note: Data should always be 2D Numpy arrays regardless of the real input dimension. The data layout for dense outputs is of the following format: [N_examples, output_neurons], with the fastest index on the right. For convolutional inputs, the format is: [N_examples, N_channels, Depth, Height, Width], with the last 4 dimensions flattened to form a 2D array of shape [N_examples, N_channels*Depth*Height*Width]. This means that for a typical RGB image that follows the [Height, Width, N_channels] layout, each channel must be flattened using the row-major convention and appended to the input one after the other to obtain the expected data layout.
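The RGB reordering described above takes a couple of NumPy calls; the array names and sizes here are illustrative:

```python
import numpy as np

# Illustrative example: convert one RGB image stored as [Height, Width, N_channels]
# into the flat [N_channels*Height*Width] row expected by CIANNA (channel blocks
# appended one after the other, row-major inside each channel).
h, w, c = 4, 5, 3
image = np.arange(h * w * c, dtype="float32").reshape(h, w, c)

# Move channels first, then flatten in row-major (C) order.
flat_row = np.transpose(image, (2, 0, 1)).reshape(-1)

# A dataset of N such images is then a 2D array of shape [N, c*h*w].
input_data = flat_row[np.newaxis, :]
print(input_data.shape)  # (1, 60)
```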


CIANNA.delete_dataset(dataset, network=nb_networks-1, silent=0)

Free all the low-level C arrays of the dataset and reset the dataset object.


CIANNA.swap_data_buffers(dataset, network=nb_networks-1)

Swap the dataset properties and low-level array pointers between a dataset and its "buffer" version.

Note: The "DATA_buf" versions of the different datasets are used to parallelize data loading or augmentation with network training on a previously loaded dataset. No data is moved; only the pointers behind "DATA" and "DATA_buf" are exchanged.

|  Steps  |    Thread A    |    Thread B    | 
|---------|----------------|----------------| 
|    0    |   Load DATA    |        /       | 
|    1    | Train on Data  | Load DATA_buf  | 
|    2    | Swap DATA/Buf  |      Join      | 
|    3    |  Loop from 1   |        /       |
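The schedule above can be mimicked with plain Python threads; the `load` and `train` functions below are simple stand-ins, and the final swap imitates the pointer exchange performed by CIANNA.swap_data_buffers():

```python
import threading

# Stand-ins for a dataset and its buffer; in CIANNA these are low-level C
# arrays, and the swap exchanges pointers, not data.
data = {"DATA": None, "DATA_buf": None}
log = []

def load(slot, batch_id):
    data[slot] = f"batch_{batch_id}"  # placeholder for loading/augmentation

def train():
    log.append(f"trained on {data['DATA']}")  # placeholder for CIANNA.train()

load("DATA", 0)                       # step 0: initial load
for batch_id in range(1, 4):
    loader = threading.Thread(target=load, args=("DATA_buf", batch_id))
    loader.start()                    # thread B loads the buffer...
    train()                           # ...while thread A trains on DATA
    loader.join()                     # step 2: join
    data["DATA"], data["DATA_buf"] = data["DATA_buf"], data["DATA"]  # swap

print(log)
```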

Layers creation



CIANNA.dense(nb_neurons, activation="RELU", bias=None, prev_layer=nb_layers-1, drop_rate=0.0, 
             strict_size=0, init_fct="xavier", init_scaling=None, network=nb_networks-1)

Create and initialize a layer object of type "dense". Most of the layer properties are set automatically based on the layer's position in the network and the surrounding layers, but some parameters can be customized at call time.

Note: The layers are sequentially added to the network. If it is the first layer, it is assumed to be connected to the input, and if it is declared after another layer, it will take the previous layer as its input. This process also takes into account the type of successive layers. For example, setting a "dense" layer after a "conv" or a "pool" layer will automatically flatten the previous output to be used as input for the present "dense" layer. The last layer to be declared (or the highest layer id) is assumed to be the output layer of the network.
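The sequential construction described above can be sketched as follows; the layer parameters and the exact argument types are illustrative assumptions, and the import is guarded so the snippet is harmless when CIANNA is not installed:

```python
# Illustrative sketch of sequential layer declaration; values are arbitrary.
try:
    import CIANNA as cnn
except ImportError:
    cnn = None  # CIANNA not available in this environment

if cnn is not None:
    cnn.init(in_dim=[28, 28], in_nb_ch=1, out_dim=10, b_size=16, comp_meth="C_CUDA")
    cnn.conv(f_size=[5, 5], nb_filters=8, padding=[2, 2])  # connected to the input
    cnn.pool(p_size=[2, 2])                # takes the conv layer as input
    cnn.dense(nb_neurons=256)              # previous output flattened automatically
    cnn.dense(nb_neurons=10, activation="SMAX")  # last declared layer = output
```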


CIANNA.conv(f_size=[...], nb_filters, stride=[...], padding=[...], int_padding=[...], activation="RELU", 
            bias=None, prev_layer=nb_layers-1, input_shape=[...], drop_rate=0.0, init_fct="xavier", 
            init_scaling=None, network=nb_networks-1)

Create and initialize a layer object of type "conv". Most of the layer properties are set automatically based on the layer's position in the network and the surrounding layers, but some parameters can be customized at call time.

Note: There is presently no automatic option to preserve the spatial size between the input and the output of the layer. A padding carefully chosen with respect to the filter size and the stride must be used to preserve dimensionality. The network behavior is not guaranteed if the input cannot be fully decomposed into an integer number of convolution regions; this raises a warning but does not stop the execution.
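The decomposition constraint can be checked by hand with the usual convolution output-size formula; the helper below is a hypothetical utility, not part of the CIANNA API:

```python
def conv_out_size(in_size, f_size, stride, padding):
    """Output size of a convolution along one dimension.

    The input decomposes into an integer number of convolution regions
    only when (in_size + 2*padding - f_size) is divisible by stride.
    """
    span = in_size + 2 * padding - f_size
    if span % stride != 0:
        raise ValueError("input does not split into whole convolution regions")
    return span // stride + 1

# f_size=3, stride=1, padding=1 preserves the spatial size:
print(conv_out_size(32, 3, 1, 1))  # 32
```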


CIANNA.pool(p_size=[...], stride=[...], padding=[...], prev_layer=nb_layers-1, drop_rate=0.0,
            p_type="MAX", activation="LIN", p_global=0, network=nb_networks-1)

Create and initialize a layer object of type "pool". Most of the layer properties are set automatically based on the layer's position in the network and the surrounding layers, but some parameters can be customized at call time.

Note: The network behavior is not guaranteed if the input cannot be fully decomposed into an integer number of pooling regions. This raises a warning but does not stop the script.


CIANNA.norm(normalization, activation="LIN", prev_layer=nb_layers-1, group_size=8, set_off=0, network=nb_networks-1)

Create and initialize a layer object of type "norm". Most of the layer properties are set automatically based on the layer's position in the network and the surrounding layers, but some parameters can be customized at call time.

Note: Normalization layers are only supported between "conv" or "pool" layers. It is also recommended to use dropout only after the last normalization layer in the network.

Model saving/loading


Note: The save format is stable from V-0.9.3.4 to V-1.0.1+. It might change in the experimental version (V-1.1+) in order to accommodate new functionalities or to solve some performance issues. Any change to the save format will be listed in the patch notes of each version. Format conversion tools might be provided at some point for major version changes, to avoid retraining networks that would benefit from performance updates in the latest versions.


CIANNA.save(file, network=nb_networks-1, bin=0)

Save the network model in its current state to a file, using a custom ASCII or binary format.

Note: The save file contains all the architectural information required to reload the network without any configuration file.


CIANNA.load(file, iteration, network=nb_networks-1, nb_layers=0, nb_skip_layers=0, bin=0)

Load a network model from a file using a custom ASCII or binary format.

Note: The save file contains all the architectural information required to load the network without any configuration file.


CIANNA.print_arch_tex(path, file_name, size=1, in_size=1, f_size=1, out_size=1, stride=1, padding=1, 
                      in_padding=0, activation=0, bias=0, dropout=0, param_count=0, network=nb_networks-1)

Write the current network architecture as a LaTeX-formatted table (output file_name.tex) and compile it into a .pdf. This function allows easy integration into papers/reports using a standard table format to ease comparisons. The columns to include can be selected through keyword flags.

Network training/inference



CIANNA.train(nb_iter, learning_rate, end_learning_rate=0.0, control_interv=1, momentum=0.0, lr_decay=0.0,
             weight_decay=0.0, confmat=0, save_every=0, save_bin=0, network=nb_networks-1, shuffle_gpu=0,
             shuffle_every=1, TC_scale_factor=1.0, silent=0)

High-level interface to a custom training procedure. This function will handle all the aspects of the training on the current "TRAIN" dataset, including data movement, gradient optimization, network forward and back propagation loops, saving the model, etc.

Note: This function loops over all the self-constructed batches of the "TRAIN" dataset and monitors the error on the "VALID" dataset. The "TEST" dataset is never used in this function.
While a number of iterations can be specified, advanced training procedures (e.g., for dynamic augmentation, dynamic data loading, multiple networks co-training, etc.) can usually be constructed by invoking this function for a single iteration in a loop with other operations on the datasets or other networks. All the modifications applied to the model are preserved between calls, and a global iteration value is updated so every element of the training procedure that depends on it evolves appropriately between calls.
This function was specifically designed to run in parallel in the main Python thread while other threads handle data movement or data augmentation. See the dynamic augmentation example.
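The single-iteration pattern described above can be sketched like this (hyperparameter values are illustrative, and the import is guarded so the snippet is harmless when CIANNA is not installed):

```python
# Illustrative sketch: decompose training into single-iteration calls so that
# dataset updates (augmentation, dynamic loading, ...) can be interleaved.
try:
    import CIANNA as cnn
except ImportError:
    cnn = None  # CIANNA not available in this environment

def update_train_dataset():
    pass  # placeholder for augmentation / dynamic data loading

if cnn is not None:
    for _ in range(100):
        # One iteration per call; the model state and the global iteration
        # counter are preserved between calls.
        cnn.train(nb_iter=1, learning_rate=2e-4, momentum=0.9, silent=1)
        update_train_dataset()
```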


CIANNA.forward(saving=1, drop_mode="AVG_MODEL", no_error=0, repeat=1, network=nb_networks-1, silent=0)

High-level interface to a custom inference and output saving procedure. This function will handle all the aspects of the inference on the current "TEST" dataset, including data movement, network forward loop, output saving, etc.

Note: This function can be used to save the inference output on the validation dataset during training by initializing the "TEST" dataset object with the same content as "VALID". It is most often used to perform the final prediction on the underlying test dataset. When deploying a trained model, this function allows inference on unlabelled data: the data must be loaded into the "TEST" dataset object and associated with a zero-filled target of the appropriate dimension. Notably, this is the only function that allows selecting the dropout behavior between model averaging and MC-dropout.

Note: The output data layout follows the same convention as the dataset input: always a 2D array. For dense outputs, the format is [N_examples, output_neurons], with the fastest index on the right. For convolutional/pool outputs, the format is [N_examples, N_channels, Depth, Height, Width], with the last 4 dimensions flattened to [N_examples, N_channels*Depth*Height*Width].
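Recovering the spatial structure of a convolutional output from the flattened 2D array is a single NumPy reshape; the sizes below are illustrative:

```python
import numpy as np

# Suppose a convolutional output of 8 channels on a 16x16 grid (Depth=1),
# stored as a 2D array of shape [N_examples, N_channels*Depth*Height*Width].
n, c, d, h, w = 10, 8, 1, 16, 16
flat_output = np.zeros((n, c * d * h * w), dtype="float32")

# Recover [N_examples, N_channels, Depth, Height, Width]:
output = flat_output.reshape(n, c, d, h, w)
print(output.shape)  # (10, 8, 1, 16, 16)
```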


CIANNA.perf_eval(network=nb_networks-1)

Display a table with the per-layer compute performance in microseconds per input. Forward and backpropagation performance are evaluated separately. This function should be called after a CIANNA.train() or CIANNA.forward() call so it has compute times to measure.

Note: The displayed time is averaged over up to 1000 performance measurements per layer. The compute time is no longer updated once the 1000 measurements have been reached.

Fine-tuned activation functions



CIANNA.linear(None)

Create a string always equal to "LIN".

Note: Equivalent to no activation.


CIANNA.relu(saturation=800.0, leaking=0.05)

Create a string of the following format: "RELU_S[saturation]_L[leaking]".

Note: Corresponds to a custom leaky ReLU with saturation. In practice, the activation is the identity between 0 and the saturation value, and is linear with a slope equal to the leaking factor both above the saturation value and below 0.
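As an illustration, the described activation can be reproduced in NumPy (a reimplementation for visualization purposes, not the CIANNA kernel):

```python
import numpy as np

def relu_s_l(x, saturation=800.0, leaking=0.05):
    """Leaky ReLU with saturation: identity on [0, saturation],
    slope `leaking` below 0 and above the saturation value."""
    x = np.asarray(x, dtype="float64")
    return np.where(
        x < 0.0, leaking * x,
        np.where(x <= saturation, x, saturation + leaking * (x - saturation)))

print(relu_s_l([-10.0, 100.0, 1000.0]))  # -0.5, 100.0, 810.0
```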


CIANNA.logistic(saturation=6.0, beta=1.0)

Create a string of the following format: "LOGI_S[saturation]_B[beta]".

Note: The logistic derivative approaches 0 toward the two edges. This behavior can result in no gradient propagation for very contrasted activations, even if the error is large. To circumvent this behavior, we added a customized saturation value for the weighted sum before it goes through the sigmoid activation. With the default values, saturation=6 and beta=1, the activation stays in the range of [0.00247,0.99753].
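A plausible NumPy reading of this saturated logistic (the exact clipping order inside CIANNA may differ) is:

```python
import numpy as np

def logi_s_b(x, saturation=6.0, beta=1.0):
    """Sigmoid applied to the weighted sum clipped at +/- saturation
    (illustrative reimplementation, not the CIANNA kernel)."""
    x = np.clip(np.asarray(x, dtype="float64"), -saturation, saturation)
    return 1.0 / (1.0 + np.exp(-beta * x))

# With the defaults, the output stays inside [0.00247, 0.99753]:
print(logi_s_b([-100.0, 0.0, 100.0]))
```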


CIANNA.softmax(None)

Create a string always equal to "SMAX".


CIANNA.YOLO(None)

Create a string always equal to "YOLO".



CIANNA.set_yolo_params(nb_box=0, nb_class=0, nb_param=0, max_nb_obj_per_image=0, prior_size=[...],
                       prior_noobj_prob=[...], error_scales=[...], slopes_and_maxes=[...],
                       param_ind_scales=[...], IoU_limits=[...], fit_parts=[...], 
                       IoU_type="GIoU", strict_box_size=0, prior_dist_type="SIZE", fit_dim=0, 
                       rand_startup=64000, rand_prob_best_box_assoc=0.0, rand_prob=0.0,
                       min_prior_forced_scaling=0.0, class_softmax=0, diff_flag=0, error_type="natural",
                       no_override=0, raw_output=0, network=nb_networks-1)

Configure all the non-architectural parameters for a YOLO network. This function must be invoked before starting to declare/load the network architecture. The function returns the number of required filters for the YOLO output with the provided configuration.

Note: This function is mandatory for any YOLO network to operate properly, but it can be called with no argument when deploying a saved YOLO network for inference only. Many parameters have automated default values that are context-dependent (e.g., the default IoU_limits depends on the IoU_type). The output log should be examined carefully in case of doubt about which values were actually used by CIANNA.

Note: When using a YOLO network, the output and target sizes differ. The output is (N+1)-dimensional (with N being the number of input dimensions), with the last dimension representing all the box parameters per output grid element. In contrast, the target dimension that must be set in CIANNA.init() depends on the parameters through the following formula: [1+max_nb_obj_per_image*(7+nb_param+diff)]. In practice, for all YOLO datasets, the provided target should be a 2D array with one row per image in the dataset, each row representing all the boxes contained in the corresponding image in the form [Nb_box␣box_1_elements␣box_2_elements␣[box_..._elements]␣[0_fill]], with zero filling to preserve a unique row size for all images. For each box, the elements are of the form [box_class_int␣xmin␣ymin␣zmin␣xmax␣ymax␣zmax␣diff_flag]. The box_class element is an id ranging from 1 to nb_class+1, always present even if nb_class=0. The 3D box coordinates are always mandatory regardless of the actual dimensionality of the problem (see the prior settings for details), and the diff_flag element is only present if the diff_flag keyword is set to 1.
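As an illustration, a target row for one image could be assembled as follows (sizes and box values are arbitrary):

```python
import numpy as np

# Illustrative construction of a YOLO target row for one image with
# nb_param=0, diff=0 and max_nb_obj_per_image=3, hence a row size of
# 1 + max_nb_obj_per_image*(7 + nb_param + diff) = 22.
max_nb_obj, nb_param, diff = 3, 0, 0
row_size = 1 + max_nb_obj * (7 + nb_param + diff)

# Two boxes of 7 elements each: [box_class, xmin, ymin, zmin, xmax, ymax, zmax];
# 3D coordinates are mandatory even for a 2D problem (zmin=0, zmax=1 here).
boxes = [[1, 10, 12, 0, 40, 50, 1],
         [2, 60, 20, 0, 90, 44, 1]]

target = np.zeros((1, row_size), dtype="float32")  # 0-fill for unused slots
target[0, 0] = len(boxes)                          # Nb_box comes first
for i, box in enumerate(boxes):
    target[0, 1 + i * 7: 1 + (i + 1) * 7] = box

print(target.shape)  # (1, 22)
```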


CIANNA.set_fit_parts(position=1, size=1, probability=1, objectness=1, classes=1, parameters=1)

Use explicit keywords to create an array of integers formatted in the proper order to be given to the CIANNA.set_yolo_params() function with the "fit_parts" keyword.

Note: For each parameter, there are 3 possible flag values. 1: the corresponding loss part is fitted normally; 0: the corresponding loss part has a constant target corresponding to the center of the activation interval; -1: the corresponding loss part is not fitted at all, and the resulting output is manually set to 0 but is unconstrained internally.


CIANNA.set_error_scales(position=2.0, size=2.0, probability=1.0, objectness=2.0, classes=1.0, parameters=1.0)

Use explicit keywords to create an array of scaling values for each part of the YOLO loss in the proper order to be given to the CIANNA.set_yolo_params() function with the "error_scales" keyword.


CIANNA.set_IoU_limits(good_IoU_lim, low_IoU_best_box_assoc, min_prob_IoU_lim, min_obj_IoU_lim,
                      min_class_IoU_lim, min_param_IoU_lim, diff_IoU_lim, diff_obj_lim)

Use explicit keywords to create an array of threshold values for the different association refinement processes in the proper order to be given to the CIANNA.set_yolo_params() function with the "IoU_limits" keyword.

Note: Most of these parameters are part of what we call the "cascading loss" process. It refers to the fact that some box properties are not fitted until the prediction overlaps sufficiently with the target. Position and size predictions are not affected, so the network can always improve its raw detection. This helps ensure that the network can sufficiently identify the object features in the image before trying to predict the corresponding class or the regression parameters. In most cases, this cascading loss process speeds up and stabilizes the training; in some cases, it even improves the prediction results (especially for the regression parameters).

Note: The IoU and objectness values are expressed with respect to the IoU association function chosen in CIANNA.set_yolo_params(). For example, the classical IoU only spans from 0 to 1, while GIoU and DIoU are in the interval -1 to 1. This also affects the objectness values: even though objectness always spans from 0 to 1, what can be considered a good prediction objectness threshold depends on the IoU function. It is therefore natural that the default threshold values depend on the selected IoU function. Default values are provided for each parameter in the following order: IoU, GIoU, DIoU, DIoU2.


CIANNA.set_sm_single(slope=1.0, fmin, fmax)

Use explicit keywords to create an array of one slope and two extremum values. This function can be used to set any of the individual YOLO loss parts in the CIANNA.set_slopes_and_maxes() function.


CIANNA.set_slopes_and_maxes(position=[...], size=[...], probability=[...], 
                            objectness=[...], classes=[...], parameters=[...])

Use explicit keywords to create a 2D array of slopes and extremum values in the proper order to be given to the CIANNA.set_yolo_params() function with the "slopes_and_maxes" keyword. Each argument of the present function can be filled with an array returned by the CIANNA.set_sm_single() function.