AVRawRA – application for video raw record acquisition for neuroimaging and videoregistration research
- Authors: Suchkov D.S.1, Shumkova V.V.2, Sitdikova V.R.2, Silaeva V.M.2, Logashkin A.E.2, Mamleev A.R.2, Popova Y.V.2, Sharipzyanova L.S.2, Minlebaev M.G.1,2
- Affiliations:
- INSERM UMR1249, INMED, Aix-Marseille University
- Kazan Federal University
- Issue: Vol 74, No 3 (2024)
- Pages: 369-382
- Section: METHODS
- URL: https://ogarev-online.ru/0044-4677/article/view/267295
- DOI: https://doi.org/10.31857/S0044467724030094
- ID: 267295
Abstract
Application for Video Raw Record Acquisition – AVRawRA [ɔːvˈrɔːrə] – is software for acquiring and recording video from cameras into raw binary and compressed video formats. AVRawRA supports a wide range of camera devices across various neuroimaging applications, which allows expensive video registration equipment to be reused for several tasks with a single piece of software. The software concept allows any camera device to be added without rebuilding the main code pipeline. The presented software has a user-friendly interface with interactive elements for adjusting acquisition and recording parameters in real time, without stopping the video stream. Simultaneous real-time visualization, analysis and recording can be performed without loss of efficiency or missed frames. AVRawRA supports recording from camera devices with both external and internal triggers. The size of the saved video file is not restricted by recording time and is limited only by available storage space. Our software is well suited both to neuroimaging applications and to experiments with supplementary videoregistration. To summarize, AVRawRA represents a universal platform for operating various videoregistration devices, performing real-time analysis and making high-speed recordings in raw and compressed video formats.
1. INTRODUCTION
Nowadays, the vast majority of videoregistration equipment (cameras) comes with the manufacturer's own software. This software allows the user to acquire and store data in commonly used video formats such as *.mp4, *.avi, etc. Multimedia compression relies, among other things, on a limitation of the human eye: it cannot detect high-frequency intensity changes (Venkatesh, 2018; Hubel, 1962). Therefore, a reasonable loss of information from the raw image data will not be noticed by our brain. However, changes in the recorded visual image below the sensitivity level of the eye can carry important, even crucial, information in scientific applications.

In a large body of neuroimaging research, the qualitative description of neuronal activity critically depends on the spatial resolution, signal-to-noise ratio and acquisition rate. In calcium imaging, loss of spatial resolution can affect the subsequent separation of active neurons (Hendel, 2008; de Melo Reis, 2020; Dard, 2022; Oh, 2019; Mues, 2013). Voltage-sensitive dye imaging can provide good temporal resolution of the optical signal from a neuron, but modification of individual frames can destroy this advantage (Petersen, 2001; Grinvald, 2004; Baker, 2005; Popovic, 2015). In optical intrinsic signal (OIS) imaging, the useful signal amounts to only fractions of a percent of the background intensity, so loss of information through compression can leave no result at all (Aitken, 1999; Vincis, 2015; Suchkov, 2022; Sintsov, 2017). In behavioral studies, videoregistration of animal movements can require high temporal and spatial precision, to within several milliseconds, for correlation with neuronal activity (Tiriac, 2015; Akhmetshina, 2016; Inácio, 2016). Therefore, recording video files in raw format is essential for neurobiological research.

Cameras are sometimes supplied with software that is limited by its specialization to a given field of research. In practice, such software also has low compatibility with other cameras, which prevents reuse of the same cameras or software across applications. Proprietary software commonly used outside science mostly records in compressed formats and supports only widely used cameras (Salem, 2020). Since scientific video equipment can be very costly, the need for software that covers at least the basic functions of video acquisition across different kinds of research is evident.

Here we present AVRawRA, software that provides the main functions of video acquisition and recording in raw data format for a large set of modern cameras, including web cameras. Our software is fully compatible with FireWire (IEEE 1394), GigE (Gigabit Ethernet) and USB cameras (USB3 Vision or DirectShow). AVRawRA allows the user to record video files in several formats, including raw binary (*.bin) and a compressed video format (*.avi). Depending on the technical features of the camera, video recordings can be performed in several modes using both internal and external triggers in various configurations. The frame rate and spatial region of interest (ROI) can be adjusted manually if the camera allows it. A basic intensity analysis is included in this version of the software to demonstrate the possibility of real-time analysis without a significant cost to the camera's time resources.
2. AVRAWRA GRAPHICAL USER INTERFACE
The AVRawRA graphical user interface contains a panel for setting up the main camera features, a panel for setting the region of interest (ROI), a panel for record configuration, a panel for real-time image visualization and a panel for real-time analysis (Fig. 1).
Fig. 1. General view of the AVRawRA user interface. The following panels are presented: 1) “Camera setup” panel; 2) real-time image visualization panel; 3) “Region of interest (ROI)” panel; 4) “Record control” panel; 5) “Real-time analysis” panel.
2.1. “Camera setup” panel. The first step for the user is to define the video adapter, the camera and its operating mode. First, the user should check availability and select the camera's video adapter in the “Adapter” combo box. The absence of the adapter of interest may indicate incorrect installation of the camera drivers. Second, the user selects which camera device, listed in the “Camera” combo box, will be used for acquisition. Only cameras that are connected and supported by the chosen adapter appear in the “Camera” combo box. Third, the user should choose the camera's operating mode in the “Mode” combo box; this defines the resolution and color mode. Some cameras with a non-adjustable frame rate also offer different modes with various predefined frame rates. Finally, the “Exposure” numeric control and “Unit” combo box become available if the camera's hardware architecture allows the frame rate to be controlled. When all combo boxes contain valid (non-blank) information, the “Start” button is enabled; pressing it starts acquisition, with the image shown in the real-time image visualization panel.
2.1.1. “Adapter” combo box. Combo box for selecting among available video adapters. To view them, the user should press the button with the ∨ symbol to expand the list of automatically detected adapters. The first video adapter is “IMAQ”, which is part of the LabVIEW Vision Acquisition Module. The “IMAQ” video adapter is based on the NI-IMAQdx package of functions for acquiring and recording video. The LabVIEW Vision Acquisition Module covers the vast majority of cameras from modern manufacturers that support FireWire (IEEE 1394), GigE (Gigabit Ethernet) or USB (USB3 Vision or DirectShow) connection interfaces. The second available video adapter is “Qimaging”, which represents the drivers for the Qimaging line of CCD cameras. This package of functions was designed by Qimaging for acquiring data using LabVIEW resources, and the adapter was added as an example of acquiring data without the NI-IMAQdx package. Selecting a video adapter automatically populates the list of available camera devices in the “Camera” combo box.
2.1.2. “Camera” combo box. Combo box for camera device selection, available only after the video adapter has been defined in the “Adapter” combo box. To view the devices, the user should press the button with the ∨ symbol to expand the list of automatically detected cameras. Selecting a camera device automatically populates the list of available camera operation modes in the “Mode” combo box.
2.1.3. “Mode” combo box. Combo box for camera operation mode selection. To view the modes, the user should press the button with the ∨ symbol to expand the list of modes automatically extracted for the camera selected in the “Camera” combo box. Camera operation mode selection is available only after the camera device has been defined in the “Camera” combo box. The camera operation mode can be changed during video acquisition (after pressing the “Start” button on the “Camera setup” panel), but not during recording (after pressing the “Rec” button on the “Record control” panel). Selecting the camera operation mode is the last obligatory action needed to unlock the “Start” button.
2.1.4. “Exposure” numeric control and “Unit” combo box. Numeric control and combo box that set the value and units, respectively, of the exposure time for each acquired frame. To change the numeric value, the user should type the frame exposure time manually or press the button with the ∧ or ∨ symbol (to increase or decrease the value by one step, respectively). To change the time units, the user should press the button with the ∨ symbol to expand the list of automatically defined time units. The exposure time is automatically set to 16 ms; however, that value may be out of range for the camera in use. Users should therefore verify the acceptable exposure time range before operation: an invalid exposure time will cause the application to abort. The “Exposure” numeric control and “Unit” combo box are not available for cameras with a constant frame rate.
2.1.5. “Start” button and “Acquiring” indicator. Button to start/stop acquisition and indicator representing the corresponding mode. To start acquisition, press the “Start” button once. Normally, the “Acquiring” indicator will turn light green, the “Start” button will be highlighted in yellow, and an image from the camera will appear on the image display in the real-time image visualization panel. A second press on the “Start” button stops acquisition, deactivates the “Start” button (yellow highlight removed) and switches off the “Acquiring” indicator (dark green). It is highly recommended to stop acquisition before switching between camera devices to avoid an unpredictable abort of the application.
2.2. Real-time image visualization panel. The real-time image visualization panel is designed to display acquired frames, provide spatial navigation through the image, set or change the ROI, and present information about the current camera operation mode and writing status.
2.2.1. Image display. Interactive display for acquired image frames, enabled automatically after the “Start” button is pressed. The interactive display allows the user to observe the real-time image from the camera, change the image color map (visually only), and snap and save an image to file (*.png). By default, the ROI selection is represented as a red contour covering the image. The color map change and snap/save functions can be called by right-clicking on the image display area and selecting the corresponding function.
2.2.2. Navigation and ROI toolbar. Toolbar with tools for spatial navigation through the image and for setting the ROI. The toolbar contains a zooming tool (magnifying glass symbol), selection tool (arrow symbol), dragging tool (hand symbol), rectangle ROI tool (rectangle symbol), freehand ROI tool (blot symbol) and ellipse ROI tool (ellipse symbol). The zooming tool allows the user to zoom the displayed image in and out; it does not affect the recorded image size and is used only for visual zoom. The selection tool is a neutral tool, which overrides the actions of the other tools and indicates the pointer position on the image display in the status string (see Graphical user interface → Real-time image visualization panel → Status string). The dragging tool allows the user to move along a zoomed image. The rectangle, freehand and ellipse ROI tools allow the user to draw the area from which the ROI region will be calculated, using the corresponding interactive shapes.
2.2.3. “Bit depth” string. Text string that shows the bit depth of the image used for recording (except for compressed video; see the Methodology section).
2.2.4. “max FPS” string. Text string that shows the maximum frames-per-second (FPS) value, evaluated from the camera parameters.
2.2.5. “max FI” string. Text string that shows the maximum frame interval (FI) value, evaluated from the camera parameters.
2.2.6. “real FPS” string. Text string that shows the real frames-per-second (FPS) value. The real FPS value is evaluated as the reciprocal of the frame time interval (in seconds) between the current and previous frame timestamps extracted during acquisition.
2.2.7. “real FI” string. Text string that shows the real frame interval (FI) value. The real FI value is evaluated as the frame time interval (in seconds) between the current and previous frame timestamps extracted during acquisition.
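The computation behind these two strings is a single difference and its reciprocal. A minimal sketch (Python; the function name and floating-point timestamp format are illustrative, not AVRawRA internals):

import math

# Sketch of the "real FPS" / "real FI" computation described above, assuming
# timestamps arrive as floating-point seconds (an illustrative convention).
def real_fps_fi(prev_timestamp: float, curr_timestamp: float) -> tuple[float, float]:
    """Return (real FPS, real FI) from two consecutive frame timestamps."""
    fi = curr_timestamp - prev_timestamp        # frame interval, seconds
    fps = 1.0 / fi if fi > 0 else math.nan      # reciprocal of the interval
    return fps, fi

fps, fi = real_fps_fi(10.000, 10.028)
print(f"real FPS = {fps:.1f}, real FI = {fi * 1e3:.1f} ms")  # ~35.7 FPS, 28.0 ms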
2.2.8. Status string. Text string that shows the currently used camera operation mode, the zoom factor, the bit depth of the acquisition stream and the pointer position on the displayed image.
2.2.9. “Writing” indicator. Indicator showing whether acquired frames are currently being written to a file. The indicator turns light green while recording is in progress (after the “Rec” button on the “Record control” panel is pressed) and dark green otherwise (see “Record control” panel → “Rec” button).
2.2.10. “Trial #” string. Text string that shows the number of the current recording trial. The string resets to zero when the number reaches the value defined by the “Trials #” numeric control.
2.2.11. “Trigger #” string. Text string that shows the number of the current received camera trigger. The string resets to zero when the number reaches the value defined by the “Triggers per trial” numeric control.
2.2.12. “Time” string. Text string that shows the time of the current recording trial in seconds. The string resets to zero when the time reaches the value defined by the “Trial duration” numeric control.
2.3. “Region of interest (ROI)” panel. The region of interest panel is designed to set/unset the ROI and display the ROI parameters. The ROI is calculated as a rectangular area whose left, top, width and height parameters are extracted from the minimum and maximum x-y values of the interactive shape mask borders (see Graphical user interface → Real-time image visualization panel → Navigation and ROI toolbar). The interactive shape mask is the rectangle that circumscribes the interactive shape.
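A sketch of this bounding-box calculation (Python with NumPy for illustration; the function name is ours, and the example reproduces the 88x112-pixel ROI from the Results section):

import numpy as np

# Illustrative sketch (not AVRawRA's LabVIEW code): the ROI rectangle as the
# bounding box of a boolean shape mask, returned as (left, top, width, height).
def roi_from_mask(mask: np.ndarray) -> tuple[int, int, int, int]:
    ys, xs = np.nonzero(mask)               # pixels covered by the interactive shape
    left, top = xs.min(), ys.min()
    width = xs.max() - left + 1
    height = ys.max() - top + 1
    return int(left), int(top), int(width), int(height)

mask = np.zeros((260, 348), dtype=bool)     # frame-sized mask
mask[40:152, 100:188] = True                # e.g. an 88x112-pixel freehand area
print(roi_from_mask(mask))                  # (100, 40, 88, 112)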
2.3.1. “Set ROI” button and “ROI on” indicator. Button to apply the ROI and indicator showing that the ROI is in use. To set the ROI, press the “Set ROI” button. Normally, the “ROI on” indicator will turn light green and the image on the real-time image visualization panel will be reduced to the ROI values.
2.3.2. ROI array indicator. Indicator of the ROI values: the left, top, width and height of the rectangle that circumscribes the interactive ROI shape.
2.3.3. “Reset ROI” button. Button to reset the ROI back to the original size of the camera operation mode. To reset the ROI, press the “Reset ROI” button. Normally, the “ROI on” indicator will turn dark green and the image on the real-time image visualization panel will be restored to the camera operation mode values.
2.4. “Record control” panel. The “Record control” panel is designed to manage record parameters, select the type of record file and start recording. Record triggering parameters are extracted from the camera device properties and presented in the “Trigger” combo box. The “Record type” combo box is used to specify the record file type (raw or compressed). The record can also be binned using a 4x4 square kernel, if necessary. Each record is formed by repetitive trials with a predefined duration and number of triggers. Trial recording stops when the duration or the number of triggers per trial reaches the predefined value. The record stops automatically when the number of trials has been reached or when the “Rec” button is manually released by the user. The number of trials, trial duration and number of camera triggers are defined by the user in the “Trials #”, “Trial duration” and “Triggers per trial” numeric controls. Additionally, the time units for the trial duration can be specified in the “Unit” combo box. After adjusting all necessary parameters, the “Rec” button can be pressed to start recording. The record starts immediately if the user selected the “Free” file type. Otherwise, a dialog asks for the saving folder path: the user should specify the folder and press the “Current folder” button in the dialog. Afterward, AVRawRA automatically creates a subfolder named after the current date (format YYYY-MM-DD) and places the record file there. A minimal sketch of this trial bookkeeping is given below.
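The sketch (Python; the function, parameter and folder names are illustrative, not AVRawRA's internals):

import datetime
import pathlib
import time

# Sketch of the trial bookkeeping described above: a trial ends when either
# its duration or its trigger count reaches the values set in the
# "Trial duration" and "Triggers per trial" controls.
def trial_finished(t_start: float, n_triggers: int,
                   trial_duration_s: float, triggers_per_trial: int) -> bool:
    elapsed = time.monotonic() - t_start
    return elapsed >= trial_duration_s or n_triggers >= triggers_per_trial

# The record file lands in an automatically created subfolder named after
# the current date, as described above (the base folder is hypothetical).
save_dir = pathlib.Path("D:/records") / datetime.date.today().isoformat()  # YYYY-MM-DD
save_dir.mkdir(parents=True, exist_ok=True)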
2.4.1. “Trigger” combo box. Combo box for selecting among available camera trigger modes. The “Free” mode is set by default for cameras without external triggering capability. To change the trigger type, the user should press the button with the ∨ symbol to expand the list of automatically detected trigger modes. Trigger modes are designed differently for different devices (cameras) and depend on the specific model. The most common type is the free (or internal) trigger (“Free”), in which the image is read from the camera once the exposure time has elapsed. In the other modes, the trigger determines when information must be read from the camera: this can be either the time of each frame determined by the trigger or, for example, a set of frames received while the trigger is held in the active state (+5 V). The source is an external signal generator connected to the corresponding functional port of the camera (if available). Thus, the operation of the trigger depends on the selected camera mode, as described in the camera documentation. The list of trigger modes is read automatically from the device during camera initialization and displayed here.
2.4.2. “Record type” combo box. Combo box for selecting among record file types. The record can be saved in raw binary (*.bin) or compressed (*.avi) format. An experimental IOS3 format is also present in the current software release and will be used as the format for RGB-colored video (see Future directions). The “Free” format allows the user to record in testing mode without creating any files. To change the record type, the user should press the button with the ∨ symbol to expand the list of available record types.
2.4.3. “Binning record” switch with indicator. Switch with an indicator to set/unset the binning mode for the record file. The indicator turns light green in “binning on” mode and dark green in “binning off” mode.
2.4.4. “Trials #” numeric control. Numeric control to set the number of repetitive trials. Users can change this value interactively during recording; it is checked only after the current trial finishes recording. The trial number is displayed in the “Trial #” string (see Real-time image visualization panel → “Trial #” string). To change the numeric value, the user should type the number of trials manually or press the button with the ∧ or ∨ symbol (to increase or decrease the value by one step, respectively).
2.4.5. “Triggers per trial” numeric control. Numeric control to set the number of triggers in a single trial. Users can change this value interactively during recording; the new value is taken into account only if it exceeds the current value displayed in the “Trigger #” string (see Real-time image visualization panel → “Trigger #” string). To change the numeric value, the user should type the number of triggers manually or press the button with the ∧ or ∨ symbol (to increase or decrease the value by one step, respectively).
2.4.6. “Trial duration” numeric control. Numeric control to set the duration of a single trial. Users can change this value interactively during recording; the new value is taken into account only if it exceeds the current value displayed in the “Time” string (see Real-time image visualization panel → “Time” string). To change the numeric value, the user should type the duration manually or press the button with the ∧ or ∨ symbol (to increase or decrease the value by one step, respectively).
2.4.7. “Unit” combo box. Combo box for selecting the time units of the trial duration presented in the “Trial duration” numeric control. To change the time units, the user should press the button with the ∨ symbol to expand the list of automatically defined time units. Changing the units does not affect the presentation of time in the “Time” string (see Real-time image visualization panel → “Time” string), which is always displayed in seconds.
2.4.8. “Rec” button. Button to start recording to a file. To start recording, press the “Rec” button once. Normally, the “Writing” indicator (see Real-time image visualization panel → “Writing” indicator) will turn light green and the “Rec” button will be highlighted in yellow. A second press on the “Rec” button stops recording, deactivates the “Rec” button (yellow highlight removed) and switches off the “Writing” indicator (dark green). However, the camera will still wait for the last trigger, owing to the mechanics of stream triggering.
2.4.9. “Pause” button. Non-functional experimental button; pause functionality will be implemented in the next release.
2.5. “Real-time analysis” panel. The “Real-time analysis” panel is designed to show the user additional information about image properties. This panel is still under development and currently offers two options: analysis of the pixel intensity distribution (“Intensity histogram” graph) and a preview of image binning (“Software binning” switch with indicator).
2.5.1. “On/off” switch with indicator. Switch with an indicator to enable/disable real-time analysis. The indicator turns light green in “analysis on” mode and dark green in “analysis off” mode.
2.5.2. “Software binning” switch with indicator. Switch with an indicator to enable/disable software binning of the data. The indicator turns light green in “binning on” mode and dark green in “binning off” mode. Software binning disables the “Rec” button. When software binning is enabled, the original images in the video stream are replaced by spatially smoothed versions computed as the cumulative intensity over a 4x4 square kernel.
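A minimal sketch of this 4x4 cumulative binning (Python with NumPy for illustration; AVRawRA itself is implemented in LabVIEW, and the function name is ours). It assumes frame dimensions that are multiples of 4:

import numpy as np

# Each output pixel is the cumulative intensity of a 4x4 block of input pixels.
# A uint32 accumulator is used because 16 uint16 values can overflow uint16.
def bin_4x4(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape
    return (frame.reshape(h // 4, 4, w // 4, 4)
                 .sum(axis=(1, 3), dtype=np.uint32))

frame = np.random.randint(0, 2**16, size=(260, 348), dtype=np.uint16)
print(bin_4x4(frame).shape)   # (65, 87)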
2.5.3. “Intensity histogram” graph. Graph showing the distribution of image pixel intensities. The full image is used by default; if an ROI contour has been defined by the user, only pixels inside the ROI contour are used to estimate the pixel intensity distribution. The histogram is evaluated automatically once real-time analysis is enabled. The bit depth of each pixel is defined by the video stream parameters: the x-axis represents intensity within the available bit range, and the y-axis represents the number of pixels with that intensity. Numeric labels on the y-axis are switched off to avoid rapidly changing values during autoscaling.
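The underlying computation can be expressed compactly. The sketch below (Python, illustrative names, not the LabVIEW implementation) assumes one histogram bin per intensity value of the stream's bit depth:

import numpy as np

# counts[i] = number of pixels whose intensity equals i; a boolean ROI mask,
# if given, restricts the histogram to pixels inside the ROI contour.
def intensity_histogram(frame: np.ndarray, bit_depth: int = 16,
                        roi_mask: np.ndarray | None = None) -> np.ndarray:
    pixels = frame[roi_mask] if roi_mask is not None else frame.ravel()
    return np.bincount(pixels.astype(np.int64), minlength=2 ** bit_depth)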
3. METHODOLOGY
AVRawRA follows a simple workflow. In “Prepare device mode”, the user selects the necessary camera device and operation mode and defines the frame rate, if possible. The interface for setting acquisition parameters is located in the “Camera setup” panel (see “Camera setup” panel). The “Acquisition session” then starts after the “Start” button is pressed (see “Camera setup” panel → “Start” button and “Acquiring” indicator). During the “Acquisition session”, the user can monitor the real-time image from the camera device (see Real-time image visualization panel), set the ROI (see “Region of interest (ROI)” panel) and change the camera operation mode and frame rate (see “Camera setup” panel), if necessary. Real-time analysis can optionally be switched on to visualize parameters of the image (see “Real-time analysis” panel) or of the image sequence (see Future directions). To move to the “Recording session”, the user should set the record properties (see “Record control” panel) and press the “Rec” button. After the record stops (manually or according to the record parameters), the program returns to the “Acquisition session”. AVRawRA does not open or play back already recorded files. Files with the AVI extension can be played in any media player that supports this format (provided the appropriate decoding libraries, i.e. codecs, are installed). The binary format can be read in any programming language that can read binary files (for example, with the “fread” function in the MATLAB environment).
3.1. “Prepare device mode”. “Prepare device mode” is the first and an important stage of the workflow: it fully describes the device that will be used in the “Acquisition session” and “Recording session”. The necessity of this stage follows from the logic of interaction between devices and software. First, each camera device uses a corresponding video adapter to identify itself to the operating system; the software cannot find the device without correctly installed camera drivers for the adapter. This is the key to allowing the software to find the camera device. Second, the camera device itself and its operation mode must be selected to create a correct video stream (see “Acquisition session”). The created video stream corresponds to the camera device and must be rebuilt after the device is changed. It is therefore highly recommended to stop the “Acquisition session” before changing the camera device or adapter, though not before changing the camera operation mode or frame rate. This recommendation follows from the architecture of the program: after a change of camera device or adapter, the dependent parameters (camera names and operation modes) are fully reset and left undefined, so the algorithms will not recognize the camera name or operation mode and the program will abort. The operation mode and frame rate have no critical dependencies and can be changed at any time. During “Prepare device mode” no video stream is created, as indicated by the “Start” button and the “Acquiring” indicator.
3.2. “Acquisition session”. The “Acquisition session” is the main and most used stage of the workflow, in which a real-time video stream from the camera device is created and processed. At this stage the user can work with the video stream in “Live view” mode to inspect its features (such as FPS) while preparing for the record. The video stream is a sequence of images continuously collected from the camera. Once created, the video stream corresponds to the parameters set in “Prepare device mode”. The internal logic of AVRawRA allows video streams to be built for any camera in a universal way. This is achieved by isolating the handling of manufacturer-specific camera parameter names in blocks with universal inputs from “Prepare device mode” and output in the form of a video stream. This approach makes it simple to build and encapsulate the necessary camera block, despite the large variety of parameter names from company to company. If a bit depth is specified among the internal camera acquisition parameters, it is used to build the video stream; otherwise a 16-bit depth is used for each image in the video stream by default. Only monochrome video streams are supported in the presented version of the software: camera operation modes providing colored images are automatically converted to monochrome. Support for colored images will be restored in a future release (see Future directions).

The image display (see Real-time image visualization panel → Image display) shows the current image from the sequence and refreshes it according to the frame rate. Parameters of the stream are shown in the status string below the image display (see Real-time image visualization panel → Status string). All real-time analysis in the presented software is based on the current image properties (see “Real-time analysis” panel); the “On/off” switch with indicator toggles real-time analysis. When software binning is in use (see “Real-time analysis” panel → “Software binning” switch with indicator), each subsequent image is extracted from the video stream, recalculated as the cumulative intensity of the 16 pixels forming each 4x4 square, and placed back into the video stream. This option serves only the display and real-time analysis functions (the “Recording session” is blocked), because it consumes machine time. For a correct recording with binning, the user should instead use the corresponding switch on the “Record control” panel (see “Record control” panel → “Binning record” switch with indicator), which performs the binning while saving to file rather than during acquisition.

The “Acquisition session” also lets the user optionally define an ROI with the Navigation and ROI toolbar instruments (see Real-time image visualization panel → Navigation and ROI toolbar). The ROI contour is used to select the pixels from which the intensity histogram is built (see “Real-time analysis” panel → “Intensity histogram” graph). The user can also use the ROI as a reference for the image size via the “Region of interest (ROI)” panel: the rectangle that circumscribes the interactive ROI shape becomes the new frame size, and the video stream is reconfigured if the camera device properties allow an ROI to be defined (as described in the camera device manual). Finally, the “Acquisition session” allows the user to start the “Recording session” with the “Rec” button (see “Record control” panel → “Rec” button).
3.3. “Recording session”. The “Recording session” is the final stage of the workflow, in which the user saves acquired frames to a file. To start a “Recording session”, the user defines the record parameters using the “Record control” panel (see “Record control” panel). According to the set parameters, each record contains trials composed of frames. Frame acquisition can be triggered internally (in “Free” record mode) by the camera device's own pacemaker, or externally through the trigger input on the camera device (if it exists). The full set of trigger modes is extracted from the camera device properties at the stage of forming the camera block (see “Acquisition session”) and is presented in the “Trigger” combo box (see “Record control” panel → “Trigger” combo box). When the camera device receives a trigger, the image is acquired and placed into a buffer. The buffer is created at the beginning of the record and runs as a process parallel to the video streaming; a third parallel process extracts images from the buffer and writes them to the file. Acquisition and recording therefore work independently, providing high-speed simultaneous visualization and saving of the data. Finally, files are saved in raw binary or compressed video format. The raw binary file (*.bin) consists of two parts, a header and a data array, with the structure presented in Table 1. The “Header” section of the file contains the following sequential information (93 bytes): 1) service data (80 bytes); 2) number of light sources (optional, 1 byte); 3) frame width (number of pixels, 2 bytes); 4) frame height (number of pixels, 2 bytes); 5) parameters of the rectangular region of interest (ROI) used in the recording (coordinates of the upper left corner of the ROI relative to the upper left corner of the frame, in pixels, 4 bytes; ROI width, 2 bytes; ROI height, 2 bytes). In the “Data array” section of the file, each frame corresponds to a data block that sequentially contains: 1) the intensity value of each pixel of the frame (2 bytes per pixel); 2) a timestamp in seconds (2 bytes), milliseconds (2 bytes) and microseconds (2 bytes); 3) the trial number (2 bytes). In the case of an AVI file, the information is organized according to the AVI container structure, with frames recorded sequentially, trial by trial.
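The producer-consumer organization described above can be pictured with a short, language-neutral sketch (Python threading here stands in for LabVIEW's parallel loops; all names are illustrative). The file layout itself is given in Table 1 below.

import queue
import threading
import numpy as np

# Acquisition and writing run independently, coupled only through a buffer:
# the acquisition loop puts frames in, a writer thread drains them to disk.
frame_buffer: "queue.Queue[np.ndarray | None]" = queue.Queue()

def writer(path: str) -> None:
    with open(path, "ab") as f:
        while (frame := frame_buffer.get()) is not None:   # None marks end of record
            f.write(frame.astype("<u2").tobytes())          # little-endian uint16 pixels

w = threading.Thread(target=writer, args=("record.bin",))
w.start()
for _ in range(100):                                        # simulated acquisition loop
    frame_buffer.put(np.zeros((260, 348), dtype=np.uint16))
frame_buffer.put(None)                                      # stop the writer
w.join()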
Table 1. The raw binary file (*.bin) file structure
Variable name           | Number of elements         | Type
------------------------|----------------------------|-------
HEADER                  |                            |
Free space              | 80                         | char
Number of colors        | 1                          | uint8
Image width             | 1                          | uint16
Image height            | 1                          | uint16
ROI elements            | 4                          | uint16
DATA ARRAY              |                            |
Frame 1                 | image width × image height | uint16
Timestamp 1 (s, ms, us) | 3                          | uint16
Trial number 1          | 1                          | uint16
...                     | ...                        | ...
Frame N                 | image width × image height | uint16
Timestamp N (s, ms, us) | 3                          | uint16
Trial number N          | 1                          | uint16
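For reference, a minimal Python reader for this layout (a sketch: little-endian byte order, 16-bit pixels and row-major frame storage are assumptions consistent with the uint16 columns of Table 1; MATLAB's fread can be used analogously, as noted above):

import numpy as np

def read_avrawra_bin(path: str):
    with open(path, "rb") as f:
        f.read(80)                                      # free space (service header)
        n_colors = int(np.fromfile(f, np.uint8, 1)[0])  # number of light sources
        width, height = (int(v) for v in np.fromfile(f, "<u2", 2))
        roi = np.fromfile(f, "<u2", 4)                  # ROI: left, top, width, height
        frames, stamps, trials = [], [], []
        while True:
            frame = np.fromfile(f, "<u2", width * height)
            if frame.size < width * height:             # end of file reached
                break
            frames.append(frame.reshape(height, width))
            stamps.append(np.fromfile(f, "<u2", 3))     # timestamp: s, ms, us
            trials.append(int(np.fromfile(f, "<u2", 1)[0]))
    return n_colors, roi, np.array(frames), np.array(stamps), np.array(trials)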
4. RESULTS
We tested AVRawRA in several applications. First, AVRawRA performed excellently when registering light reflected from the animal skull (Fig. 2). The record was performed with external triggering of the camera under green LED illumination. AVRawRA was able to capture, visualize and analyze every frame simultaneously at 36 frames per second with a resolution of 348x260 pixels. The video stream was recorded to a raw binary file, with a final size of 1.5 GB, without any lost data points. This demonstrates the stability of the recording even with visualization and real-time analysis switched on.
Fig. 2. Real-time intensity analysis during recording with the Qcam 1394 fast camera. The image display shows a single frame with an image of the rat skull surface under green (525 nm) LED illumination. The real-time intensity analysis shows the distribution of pixel intensity for the current frame in the image display.
Second, we tested the ROI instrument, using the experiment shown in Fig. 2 as a demonstration. The ROI allowed the user to estimate the pixel intensity distribution in the targeted area (Fig. 3). Once the current ROI had been chosen, the “Set ROI” button was pressed. The image size in the stream was reduced from 348x260 pixels (Fig. 3) to 88x112 pixels (Fig. 4), which increased the FPS from 36 to 50 frames per second. The “ROI on” indicator was highlighted in light green, while the ROI coordinates were shown as non-editable values in the right part of the “Region of interest (ROI)” panel.
Fig. 3. Real-time intensity analysis using the Qcam 1394 fast camera with an ROI marked on the video stream from Fig. 2. The image display shows a single frame with an image of the mouse skull surface under green (525 nm) LED illumination. The ROI region is marked with a red contour drawn with the “freehand” tool. The real-time intensity analysis shows the distribution of pixel intensity inside the ROI region. The frame rate is 36 frames per second.
Fig. 4. Real-time intensity analysis using the Qcam 1394 fast camera after setting an ROI on the video stream from Fig. 2. The frame was reduced using the ROI from 348 × 260 pixels to 88 × 112 pixels. The frame rate is 50 frames per second.
To verify the efficiency of the IMAQ adapter, we connected a web camera to AVRawRA (Fig. 5). The software was able to perform all types of video recording. Web cameras usually do not support external triggering, so all recordings were made using internal triggers.
Fig. 5. Acquiring and recording with a web camera device (Logitech C270) to the compressed video format (AVI). In the bottom right corner, an inset shows a fragment of the recorded video file opened in a media player.
Testing showed several results. The AVRawRA software supports a variety of camera devices, and it allows one camera device to be used for different scientific neuroimaging approaches. Real-time analysis is fully functional and compatible with AVRawRA features such as the ROI and binning. In summary, AVRawRA performs high-speed recordings with simultaneous visualization and real-time analysis without loss of efficiency.
Future directions
AVRawRA demonstrates a well-organized environment and data workflow concept. However, despite its obvious benefits, the presented software has some restrictions and deficiencies. Future releases of AVRawRA will therefore cover the following key improvements: 1) acquiring and recording in both monochrome and colored modes; 2) moving from NI-IMAQdx functions (which require additional licensing after installation of the compiled application) to NI-IMAQ functions (which keep the compiled application fully free); 3) extension of the real-time analysis to display image sequence features; and 4) the possibility of pausing a recording. The authors also plan to develop an application for creating camera blocks, which will let users build camera blocks independently as virtual instruments and upload them to AVRawRA.
5. DISCUSSION
Modern scientific videoregistration equipment can be very expensive. The ability to use one camera device in various imaging techniques can therefore be essential for the researcher. We developed AVRawRA in a universal way, to give the user the opportunity to connect a camera device in any configuration and perform the various kinds of scientific research that require videoregistration. However, the diversity of camera device parameters among modern manufacturers complicates the design of universal software. AVRawRA was conceived as a tool for fast video recording in raw binary format for a wide range of cameras, which demands high performance in acquiring, visualizing and saving the video data. We therefore used LabVIEW as the development environment best suited to interaction with hardware. The greatest advantages of LabVIEW are its large and convenient functionality for direct connection to hardware devices and its built-in ability to create parallel processes. First, stable interaction with various hardware through LabVIEW libraries eliminates the need to develop a support package for each device. Second, the universal set of functions for each type of device enables a convenient code architecture, in which every interaction with a device can be formed as a block with fixed inputs and outputs, despite differing numbers and names of internal device parameters. Third, parallel execution of various tasks is part of the LabVIEW language logic, whereas in other development environments it is only an option. The combination of these factors fully satisfied the requirements of our task; however, the synchronization and accuracy of interactions between parallel processes must be handled carefully. Using these advantages of LabVIEW, we designed the concept of the “camera block”: a virtual instrument with standard inputs and outputs, in which all differences in camera device parameter names are confined to the inside of the virtual instrument (the “camera block”). This allows the user to add a “camera block” to the software without reorganizing the main interface.
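As an illustration of the “camera block” contract (the real blocks are LabVIEW virtual instruments; the Python class below is only a language-neutral analogy with invented names):

from abc import ABC, abstractmethod
import numpy as np

# A fixed interface hides vendor-specific parameter names: the main pipeline
# talks only to CameraBlock, never to a concrete camera API.
class CameraBlock(ABC):
    @abstractmethod
    def open(self, mode: str, exposure_s: float) -> None: ...
    @abstractmethod
    def grab(self) -> tuple[np.ndarray, float]:
        """Return (monochrome frame, timestamp in seconds)."""
    @abstractmethod
    def close(self) -> None: ...

class QicamBlock(CameraBlock):
    # Vendor-specific calls and parameter names live only inside this class,
    # so adding a new camera never touches the main acquisition pipeline.
    def open(self, mode: str, exposure_s: float) -> None: ...
    def grab(self) -> tuple[np.ndarray, float]: ...
    def close(self) -> None: ...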
Among available free software, Micro-Manager is the closest analogue in terms of functionality. However, Micro-Manager is a professional solution for a wide range of tasks, which leads to a more complex interface as well as some limitations (for example, in organizing video recording in timed trials). The functionality implemented in AVRawRA is specialized for using video cameras to record video files: the user configures video recording and camera parameters in a single interface. The flexible modes for setting up and organizing video recording in AVRawRA allow a record to be organized both as a continuous stream and as partial trials with fixed time intervals.
In the presented software we designed and integrated “camera blocks” for three cameras (Qimaging Qicam 1394 fast, Photonfocus MV1-R1280-50-G2-16, Teledyne FLIR Blackfly S BFS-U3-04S2M). The Qimaging Qicam 1394 fast requires a unique adapter, which is provided with the application as a self-extracting, installation-free package. The Photonfocus MV1-R1280-50-G2-16, the Teledyne FLIR Blackfly S BFS-U3-04S2M and a large set of similar cameras are supported by the resources of the NI-IMAQdx module. That is quite convenient and removes hardware-level problems of interacting with camera devices; however, a principal limitation should be mentioned: the NI-IMAQdx module requires additional licensing, which can be a large budgetary burden for a laboratory. Our team therefore plans to move from the NI-IMAQdx module to the simple NI-IMAQ and NI-IMAQ I/O functions, which are covered by the development license. Another option is to use LabVIEW virtual instruments already provided by some manufacturers in the camera device support package; however, those virtual instruments are still specific to certain camera devices, so the universal “camera block” approach again solves the problem. For now, all “camera block” virtual instruments can be designed on request. In future software releases we plan to create an application that will allow users to create their own “camera block” virtual instruments and upload them to AVRawRA.
6. CONCLUSION
The AVRawRA software concept tries to keep a delicate balance between the complexity of camera device software/hardware interaction and a user-friendly interface. Moreover, AVRawRA provides high-speed raw video data acquisition and recording. Additionally, AVRawRA demonstrates low-resource real-time image analysis that can be essential for neuroimaging research. In summary, AVRawRA is a convenient tool for video acquisition and recording, though for now with some additional licensing limitations.
7. HARDWARE AND SOFTWARE REQUIREMENTS
Disk space: 1 GB; RAM: 1 GB;
IBM PC-compatible;
OS: Windows 7.0/8.0/8.1/10
8. LICENSE
The software can be freely used for scientific and educational purposes. If used for commercial purposes, it is necessary to notify the first author (Dmitrii Suchkov). AVRawRA is developed in LabVIEW, therefore a run-time module for LabVIEW 2020 is included in the installation executable file. The installation file is designed for direct installation on the Windows operating system without additional preparation. Attention: the IMAQ adapter requires additional licensing (https://www.ni.com/en/support/documentation/supplemental/18/licensingnational-instruments-vision-software.html).
To install AVRawRA, all files should be downloaded from the repository. Link to the free repository with the installation files: https://gitlab.com/lab-equipment-assemblies1/avrawra
9. ETHICAL APPROVAL
All animal-use protocols followed the guidelines of the Kazan Federal University on the use of laboratory animals (ethical approval by the Institutional Animal Care and Use Committee of Kazan State Medical University N9-2013).
10. ACKNOWLEDGMENTS
This work was supported by RSF grant 22-25-00225.
About the authors
D. S. Suchkov
INSERM UMR1249, INMED, Aix-Marseille University
Author for correspondence.
Email: suchkov.dmitriy.ksu@gmail.com
France, Marseille
V. V. Shumkova
Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
Russian Federation, Kazan
V. R. Sitdikova
Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
Russian Federation, Kazan
V. M. Silaeva
Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
Russian Federation, Kazan
A. E. Logashkin
Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
Russian Federation, Kazan
A. R. Mamleev
Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
Russian Federation, Kazan
Y. V. Popova
Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
Russian Federation, Kazan
L. S. Sharipzyanova
Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
Russian Federation, Kazan
M. G. Minlebaev
INSERM UMR1249, INMED, Aix-Marseille University; Kazan Federal University
Email: suchkov.dmitriy.ksu@gmail.com
France, Marseille; Russian Federation, Kazan
References
- Aitken P.G., Fayuk D., Somjen G.G., Turner D.A. Use of intrinsic optical signals to monitor physiological changes in brain tissue slices. Methods 1999. 18: 91–103. https://doi.org/10.1006/meth.1999.0762
- Akhmetshina D., Nasretdinov A., Zakharov A., Valeeva G. The Nature of the Sensory Input to the Neonatal Rat Barrel Cortex. The Journal of Neuroscience 2016. 36 (38): 9922–9932. https://doi.org/10.1523/JNEUROSCI.1781-16.2016
- Baker B.J., Kosmidis E.K., Vucinic D., Falk C.X., Cohen L.B., Djurisic M., Zecevic D. Imaging brain activity with voltage- and calcium-sensitive dyes. Cellular and Molecular Neurobiology 2005. 25 (2): 245–282. https://doi.org/10.1007/s10571-005-3059-6
- Dard R., Leprince E., Denis J., Rao Balappa S., Suchkov D., Boyce R. et al. The rapid developmental rise of somatic inhibition disengages hippocampal dynamics from self-motion. eLife 2022. 11: e78116. https://doi.org/10.7554/eLife.78116
- Grinvald A., Hildesheim R. VSDI: a new era in functional imaging of cortical dynamics. Nature Reviews Neuroscience 2004. 5 (11): 874–885. https://doi.org/10.1038/nrn1536
- Hendel T., Mank M., Schnell B., Griesbeck O., Borst A., Reiff D.F. Fluorescence changes of genetic calcium indicators and OGB-1 correlated with neural activity and calcium in vivo and in vitro. The Journal of Neuroscience 2008. 28 (29): 7399–7411. https://doi.org/10.1523/JNEUROSCI.1038-08.2008
- Hubel D.H., Wiesel T.N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology 1962. 160: 106–154. https://doi.org/10.1113/jphysiol.1962.sp006837
- Inácio A.R., Nasretdinov A., Lebedeva J., Khazipov R. Sensory feedback synchronizes motor and sensory neuronal networks in the neonatal rat spinal cord. Nature Communications 2016. 7: 13060. https://doi.org/10.1038/ncomms13060
- de Melo Reis R.A., Freitas H.R., de Mello F.G. Cell Calcium Imaging as a Reliable Method to Study Neuron–Glial Circuits. Frontiers in Neuroscience 2020. 14: 975. https://doi.org/10.3389/fnins.2020.569361
- Mues M., Bartholomäus I., Thestrup T., Griesbeck O., Wekerle H., Kawakami N., Krishnamoorthy G. Real-time in vivo analysis of T cell activation in the central nervous system using a genetically encoded calcium indicator. Nature Medicine 2013. 19 (6): 778–783. https://doi.org/10.1038/nm.3180
- Oh J., Lee C., Kaang B.K. Imaging and analysis of genetically encoded calcium indicators linking neural circuits and behaviors. The Korean Journal of Physiology & Pharmacology 2019. 23 (4): 237–249. https://doi.org/10.4196/kjpp.2019.23.4.237
- Petersen C.C., Sakmann B. Functionally independent columns of rat somatosensory barrel cortex revealed with voltage-sensitive dye imaging. The Journal of Neuroscience 2001. 21 (21): 8435–8446. https://doi.org/10.1523/JNEUROSCI.21-21-08435.2001
- Popovic M.A., Carnevale N., Rozsa B., Zecevic D. Electrical behaviour of dendritic spines as revealed by voltage imaging. Nature Communications 2015. 6 (1): 8436. https://doi.org/10.1038/ncomms9436
- Salem G., Krynitsky J., Cubert N., Pu A., Anfinrud S., Pedersen J. et al. Digital video recorder for Raspberry Pi cameras with multi-camera synchronous acquisition. HardwareX 2020. 8: e00160. https://doi.org/10.1016/j.ohx.2020.e00160
- Sintsov M., Suchkov D., Khazipov R., Minlebaev M. Developmental Changes in Sensory-Evoked Optical Intrinsic Signals in the Rat Barrel Cortex. Frontiers in Cellular Neuroscience 2017. 11: 392. https://doi.org/10.3389/fncel.2017.00392
- Suchkov D., Shumkova V., Sitdikova V., Minlebaev M. Simple and efficient 3D-printed superfusion chamber for electrophysiological and neuroimaging recordings in vivo. eNeuro 2022. 9 (5). https://doi.org/10.1523/eNeuro.0305-22.2022
- Tiriac A., Sokoloff G., Blumberg M.S. Myoclonic twitching and sleep-dependent plasticity in the developing sensorimotor system. Current Sleep Medicine Reports 2015. 1: 74–79. https://doi.org/10.1007/s40675-015-0009-9
- Venkatesh M., Victor S.P. Video Compression based on Visual Perception of Human Eye. IJRASET 2018. 6 (1): 2661–2664. https://doi.org/10.22214/ijraset.2018.1365
- Vincis R., Lagier S., Van De Ville D., Rodriguez I., Carleton A. Sensory-evoked intrinsic imaging signals in the olfactory bulb are independent of neurovascular coupling. Cell Reports 2015. 12: 1–13. https://doi.org/10.1016/j.celrep.2015.06.016