General questions about IREX
№ | Module | Description |
---|---|---|
1 | Motion detection | The module is used to detect the intrusion of unauthorized persons into the protected area, as well as for video monitoring of small and medium-sized objects: apartments, entrances, offices, shops, etc. Motion detection is used to determine the presence of relevant motion in the observed scene. |
2 | Objects tracking | The main application of the module is the protection of perimeters of any length, designed to prevent intrusion into the protected area from the outside or escape from the inside. It is used to detect people, vehicles, and objects left in the surveillance zone, and to protect the perimeters of special facilities (borders, warehouses, prisons), industrial and manufacturing plants, oil terminals, gas stations, health facilities, schools, and kindergartens. The module helps control loitering on a site, attempted vehicle theft in parking lots, the installation of an explosive device on a railway track, unauthorized photo or video shooting, and wall graffiti. Abandoned item detection (orphaned or forgotten objects) protects the surveillance area against the appearance of potentially threatening items (for example, an explosive device) and helps find the owner of an abandoned object at public, retail, and transport infrastructure: railway and bus stations, airports, the underground, and shopping centers. |
3 | Face recognition | The module captures all faces in the field of view, matches them against the database, and reports the similarity rate for the matched faces. |
4 | Number plate recognition | The module recognizes state registration vehicle number plates and provides control, registration, and identification of vehicles at sites with traffic of varying intensity: parking lots, industrial enterprises, critical facilities, road patrol services and municipal authorities, city traffic flow control, search for stolen cars, etc. The module is able to recognize number plates issued in more than 20 countries all over the world. |
5 | Smoke detection | The early smoke detection module is designed to detect smoke in the image from a video camera, allowing the user to locate the source of a fire at a considerable distance from the camera much faster and more accurately, whereas conventional fire detectors work only when fire or smoke is in close proximity to them. The module is used to automatically detect smoke and fires. |
6 | Traffic jam and parking | The traffic jam detection module is designed to analyze traffic conditions and congestion, monitor traffic flows, and identify traffic jams, as well as to control parking zones and count the number of vacant parking spots. The module is used in parking lots, private compounds, on highways, and on city roads during special public events. |
7 | Vehicles and traffic conditions | The traffic violation detection and number plate recognition module is used to detect various types of violations in real time and to calculate the average vehicle speed on roads with a speed limit. The PDDTrack module helps detect violations where there are no traffic police officers and speeds up the analysis of the situation during a traffic accident. The module is used on highways, at crossroads, and on dedicated vehicle lanes. |
8 | Crowd detection | The Crowd detection module is used for indoor and outdoor perimeter protection and for early warning of public order violations. The module helps to detect and prevent dangerous situations in a timely manner: mass riots, crowding, stampedes, etc. Crowd detection is ideal for monitoring public spaces, event venues, streets, parks, squares, railway stations, underground passages, shopping malls, educational institutions, and capacity-restricted environments. |
9 | Sound detection | The audio detector is capable of detecting and recognizing various sounds, such as gunshots, breaking glass, screams, or loud sounds like a car alarm, loud music, etc. Sound detection can be used by security personnel in residential areas, social facilities, shopping malls, and other public places, significantly reducing response time long before any visual clues appear. |
10 | Video quality | The video stream quality detector provides a comprehensive analysis of video signal quality. It is used at all types of protected sites. It is designed to provide continuous monitoring of the end devices (cameras) in order to detect technical malfunctions, unauthorized external interference or sudden violation of observation conditions, sabotage, blocking of the camera’s field of view by a foreign object, overexposure, camera defocusing, and video signal loss. |
Ireland:
1500+ cameras – gas stations, oil refining facilities
The Republic of Azerbaijan:
800+ cameras – crowded places, sports facilities
The Republic of Kazakhstan:
500+ cameras – railway stations, sports, and cultural facilities
- Remote security monitoring 24/7 by a single monitoring center;
- Biometric identification system and face recognition;
- Automatic number plate recognition;
- Abandoned and stolen item detection, ensuring the safekeeping of your property;
- Security upgrade, intrusion prevention;
- Visitor control by black and white lists, loyalty cards;
- Vehicle control;
- Prevention of unlawful acts involving theft, damage to property and equipment.
- Innovative and linear scalability
- Cloud solution
- Open source code
- No undeclared capabilities
- 1M+ cameras (unlimited)
- 1M+ sensors (unlimited)
- Unlimited number of users
- Big Data Analysis
- Artificial Intelligence: OpenCV, Caffe, Tensorflow, CoreML + internal AI
- JS Web Client, HTML5
- Open API
 | IREX | Other video surveillance systems |
---|---|---|
Server platforms | Linux family; free software | Windows family; proprietary software |
Client platforms | Full-fledged work through the web interface; Linux, Windows, Android, and iOS clients | Windows client is required |
Virtualization | Docker containers; no cost for additional VM software; fewer hardware resources required | Virtual machines like VMware, etc.; significant costs for additional VM software; more hardware resources required |
Analytics | Video and audio analytics based on deep learning; high accuracy and generalization ability | Video and audio analytics based on a motion detector; low accuracy and limited use |
Search in video arrays | Global search across many sites; Big Data indexing | Search within one site; no technology to work with Big Data |
Storage | Ceph object storage; low cost, scalable and self-healing | RAID-based file storage; expensive, non-scalable, complex recovery |
Solution class | Carrier-grade solution; unlimited number of users; usable on different devices | Object-level solution; limited number of users; tied to a certain workplace |
Infrastructure | Cloud infrastructure in private or public networks | Non-cloud infrastructure in private or public networks |
Database | Spark big data processing; Cassandra and Ignite distributed DBMS; horizontal (linear) scaling and self-healing | No work with big data; centralized DBMS such as MS SQL Server; complicated scaling without self-healing |
Orchestration | Kubernetes orchestration; automatic deployment, scaling, and control; fewer admins required | No Kubernetes orchestration; no automatic deployment, scaling, and control; more admins required |
Geographic information system | Self-contained scalable map; data sources are grouped; search for objects and events on the map | Map based on an external service (Yandex, etc.); data sources “crawl” over each other; no possibility to search for objects and events on the map |
- System manager
- Camera manager
- Group manager
- Operator
- Member
- Observer
Below are the basic rules to follow when creating a user group tree and assigning user roles (a small illustrative sketch follows the list):
- Each user must be assigned to a user group.
- All user groups are organized into a multi-level hierarchy, called a “tree”. At the very top of the tree is a root group.
- The role assigned to a user determines his/her permissions in the System. Role permissions cannot be changed.
- None of the System’s roles provides full access to all System settings, functions, and resources.
- One user can be assigned to different user groups and have different roles in them. In this case, the user will have access to the resources of the groups he belongs to.
- Each user can have only one role in a user group.
- Permissions of a user assigned to a particular user group are propagated hierarchically to all subordinate (nested) user groups.
- Users belonging to the root group have access to all System resources.
- A user can have individually assigned resources (cameras, lists of persons, number plates lists, etc.) regardless of the hierarchical position of his/her user group.
- Individually assigned resources (cameras, lists of persons, number plates lists, etc.) are view-only.
- No user can remove himself/herself from a user group and, therefore, from the System.
- No user can change his/her role in a user group or assign resources to himself/herself.
- The System manager is present in the root group only. The settings made by this user apply to the entire System and all users.
- Only the System manager can assign, edit or delete the profile of another System manager.
- The sole System manager and the root group cannot be deleted from the System.
- A System manager can add any user to any group and change the role of any user (see the restrictions in p. 18).
- Only the System manager can assign, delete or edit the Camera manager profile.
- A Camera manager can only appear in the root group.
- If a user belongs to only one user group, removing him/her from that group deletes him/her from the System. If necessary, a user with the same email can be added to the System again.
- To delete a user group, it must be empty: all assigned resources, as well as all users, must first be removed from the group. Only then can the user group be deleted.
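The hierarchy and propagation rules above can be illustrated with a short sketch. This is not IREX code; the group names, e-mail address, and the override behaviour for a user who is both a direct and an inherited member are assumptions made for illustration only.

```python
# Minimal sketch of the rules above: each user has exactly one role per group,
# and a membership propagates to all subordinate (nested) groups.
# Group names, roles, and e-mails below are hypothetical.

class Group:
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.roles = {}                      # user e-mail -> role in this group
        if parent:
            parent.children.append(self)

def reachable_groups(user_email, root):
    """Return {group name: effective role} applying downward propagation."""
    result = {}

    def walk(group, inherited_role):
        # Assumption: a direct assignment overrides an inherited role.
        role = group.roles.get(user_email, inherited_role)
        if role:
            result[group.name] = role
        for child in group.children:
            walk(child, role)

    walk(root, None)
    return result

root = Group("Root")
hq = Group("HQ", parent=root)
site_a = Group("Site A", parent=hq)
hq.roles["op@example.com"] = "Operator"

print(reachable_groups("op@example.com", root))
# {'HQ': 'Operator', 'Site A': 'Operator'}
```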
Below are the performance indicators of the video analytics modules in the System:
Module | Accuracy, % |
---|---|
Motion detection. An intelligent motion detection module | 95.83 |
Objects tracking (side view). Multi-purpose video analytics module for side view cameras and sparse events | 88.46 |
Objects tracking (side view) + rule. Multi-purpose video analytics module for side view cameras and sparse events with the application of rules and classifiers: | |
Abandoned item | 86.96 |
Motion in region | 85.38 |
Events classification | 91.67 |
Line crossing | 83.46 |
Loitering | 82.93 |
Face recognition. Module for detecting, tracking and identifying people according to biometric features of a person | 94.21 |
Number plate recognition. License plate recognition module | |
Republic of Belarus | 97.46 |
Republic of Azerbaijan | 89.82 |
Republic of Kazakhstan | 94.00 |
Russian Federation | 96.66 |
Ukraine | 91.69 |
Smoke detection. Intelligent smoke detection module | 87.72 |
Traffic congestion detection. Traffic jam detection module | 99.76 |
Traffic conditions. Traffic violation detection module | |
Line crossing | 98.52 |
Motion in region | 99.39 |
Crowd detection. Crowd detection module outside and inside | 76.28 |
Sound detection. Audio detector for detecting cries, noise, gunshots and glass breaking | 93.76 |
Video quality. Video stream quality detector | 95.74 |
In the table below you can see how video and audio analytics modules are compatible with each other.
| Motion detection | Object tracking (side view) | Traffic jam and parking | Face recognition | Number plate recognition | Traffic conditions | Smoke detection | Crowd detection | Video quality | Sound detection |
---|---|---|---|---|---|---|---|---|---|---|
Video quality | yes | yes | yes | yes | yes | yes | yes | yes | yes | |
Sound detection | yes | yes | yes | yes | yes | yes | yes | yes | yes | |
Motion detection | yes | yes | yes | |||||||
Object tracking (side view) | yes | yes | yes | |||||||
Traffic jam and parking | yes | yes | ||||||||
Traffic conditions | yes | yes | yes | |||||||
Face recognition | yes | yes | yes | |||||||
Number plate recognition | yes | yes | yes | |||||||
Smoke detection | yes | yes | yes | |||||||
Crowd detection | yes | yes | yes | yes | yes | yes | yes | yes | yes |
There are two ways to view and try IREX functionality:
- To watch video tutorials on our YouTube channel;
- To get a temporary account for IREX demo zone (you’ll get access to up to 7 cameras with a set of basic functionality: detection of persons, number plates, abandoned items, situation analysis, etc.).
IREX Architecture and hardware equipment
- Storage System – Ceph
- Orchestration – Kubernetes, Swarm
- Database – PostgreSQL, Cassandra, Redis
- Bus – Kafka
- Data Processing – Ignite, Spark
- Monitoring system – Elastic + Kibana (logging), InfluxDB + Grafana (metrics)
At different stages and in different components of the Kipod platform, we use an open-source technology stack and several programming languages. The main languages are C++, Java, Go, and Python.
VA Machines (K8S-VA) – the IREX video processing subsystem. This group of servers is designed for the deployment and management (orchestration) of Docker containers, providing self-healing mechanisms, performance control, task re-planning, and load balancing when processing data with the IREX platform video analytics modules (VA modules). It is implemented on Kubernetes. The main task of the subsystem is to run a container on server hardware with a sufficient amount of available resources (CPU, RAM).
Resource metering is based on the resources requested in the pod and their availability on the server. Kubernetes automatically restarts an application when it crashes. When a Kubernetes machine fails, its containers are migrated to the available machines.
Kubernetes records the used and available resources on the servers, so a newly created container for video processing from an added camera automatically starts processing the video stream on the least loaded server.
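The placement behaviour described above can be sketched roughly as follows. This is a simplified illustration, not Kubernetes internals; the node names and resource figures are hypothetical.

```python
# Simplified illustration of the placement logic described above: a container
# with declared CPU/RAM requests goes to the node with the most free capacity.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    cpu_free: float   # free CPU cores
    ram_free: float   # free RAM, GiB

def place(pod_cpu: float, pod_ram: float, nodes: List[Node]) -> Optional[Node]:
    """Pick the least loaded node that can satisfy the pod's resource requests."""
    candidates = [n for n in nodes if n.cpu_free >= pod_cpu and n.ram_free >= pod_ram]
    if not candidates:
        return None                       # no node fits; the pod stays pending
    best = max(candidates, key=lambda n: (n.cpu_free, n.ram_free))
    best.cpu_free -= pod_cpu              # account for the newly placed container
    best.ram_free -= pod_ram
    return best

nodes = [Node("va-01", cpu_free=4, ram_free=16), Node("va-02", cpu_free=10, ram_free=32)]
print(place(pod_cpu=2, pod_ram=4, nodes=nodes).name)   # -> va-02 (least loaded)
```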
BE Server Nodes (K8S-BE) – the IREX data storage and processing subsystem (BackEnd).
The IREX storage and processing subsystem performs the main roles needed to build horizontally (linearly) scalable stateful and stateless services.
The composition of the data processing and storage subsystem includes:
- ETL (Extract, Transform, Load) stateless services for transformation and event placement in Cassandra, Apache Ignite, PostgreSQL databases.
- Services providing API for searching and displaying data.
- User traffic balancing nodes.
The list of stateful applications (DB):
- Cassandra
- PostgreSQL
- Kafka
- Ignite.
AUX is the IREX platform control subsystem.
AUX servers are responsible for monitoring various indicators of the current platform state, including storage usage, performance indicators, system utilization. They also provide tools for collecting and processing system information, metrics, logs, and managing the platform as a whole.
Subsystems include:
- AUX-MM – node for collecting metrics and logs and for cluster management;
- AUX-MAAS – MAAS node (equipment consolidation): software repositories, network interface configuration, disk partitioning, and RAID setup. MAAS stores the configuration of all system nodes, allows resources to be centrally distributed between pools, and performs basic configuration of all registered servers, including operating system installation.
- Logging – a system for collecting and analyzing logs.
- Monitoring – a system for monitoring equipment and applications.
- Management – platform service management system.
- At the VA Machines (K8S-VA) level, fault tolerance is provided by the Kubernetes processing servers: when a Kubernetes server crashes, its containers are migrated to available machines.
- At the BE Server Nodes (K8S-BE) level, fault tolerance is provided both by the Kubernetes processing servers and by triple database replication, as well as by the fault tolerance of individual BE servers. RAID is used to withstand local disk failures. Additionally, dedicated BE Server Nodes machines handle load balancing for the data storage and data processing subsystems. Fault tolerance and data integrity are provided by the distributed data storage system based on Ceph.
To ensure optimal data routing between all components of the IREX software and hardware complex, routing is implemented using the two-level Leaf-Spine topology. The topology is composed of leaf switches (to which servers and storage connect) and spine switches (to which leaf switches connect).
The Leaf level consists of switches to which servers connect.
Spine switches form the core of the architecture. Every leaf switch connects to every spine switch in the network fabric. The network traffic path is balanced in such a way that the network load is evenly distributed among spine switches. Failure of one of spine switches will only slightly degrade network performance in the cluster.
No matter which leaf switch a server is connected to, its traffic crosses the same number of devices every time it reaches another server (the only exception is when the other server is on the same leaf). This approach is most efficient because it minimizes latency and bottlenecks.
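As a rough illustration of this property (the hop counts are generic to the topology, not figures for a specific IREX deployment):

```python
# In a leaf-spine fabric every leaf connects to every spine, so traffic between
# servers on different leaves always crosses exactly three switches.
def switch_hops(leaf_a: str, leaf_b: str) -> int:
    """Number of switches traversed between two servers attached to the given leaves."""
    return 1 if leaf_a == leaf_b else 3    # same leaf, or leaf -> spine -> leaf

print(switch_hops("leaf-1", "leaf-5"))     # -> 3
print(switch_hops("leaf-2", "leaf-2"))     # -> 1 (the same-leaf exception)
```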
dBrain is an open-source hardware and software platform for cloud computing. It allows creating a private cloud, an analog of Amazon EC2 and Google Cloud, in a closed loop for government customers and corporations.
It includes container virtualization (Docker), orchestration (Kubernetes), distributed storage (Ceph), and many other services necessary for the operation of cloud applications. It is used to run both in-house and third-party products.
1. Ceph is an open-source distributed storage: no vendor lock-in. You can combine equipment from different manufacturers to optimize costs.
NFS: equipment from certain vendors with proprietary software, lack of compatibility.
2. Ceph scaling is implemented at the software level, which allows horizontal scaling: the capacity of the entire cluster can be increased without limit, and additional server equipment can be put into operation without replacing the existing system.
NFS scaling is implemented at the level of the equipment and the system as a whole. There are limits for both vertical and horizontal scaling, which over time can lead to the replacement of hardware and software.
3. Ceph is an open-source solution that greatly facilitates debugging and speeds up the solution of critical problems.
NFS is a closed solution. Changes are made directly by the technical specialists of the vendor.
4. Ceph is fault-tolerant storage by design. There are various levels of fault tolerance (a storage overhead comparison is sketched below):
- 2/3/n replication
- Erasure coding: 8+3, 8+2, k+m
NFS: fault tolerance is implemented at the hardware and software level, which increases the cost of the technology.
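A back-of-the-envelope comparison of the storage overhead of these protection schemes (plain arithmetic, not Ceph code):

```python
# Raw bytes stored per byte of user data for the schemes listed above.
def replication_overhead(replicas: int) -> float:
    return float(replicas)                 # n-way replication keeps n full copies

def erasure_overhead(k: int, m: int) -> float:
    return (k + m) / k                     # k data chunks + m coding chunks

print(replication_overhead(3))             # 3.0   -> 3x raw capacity
print(erasure_overhead(8, 3))              # 1.375 -> 8+3 erasure coding
print(erasure_overhead(8, 2))              # 1.25  -> 8+2 erasure coding
```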
5. Ceph takes into account the physical location of the media to reduce the likelihood of simultaneous failure of several replicas at once:
- the server where the disk is located
- the rack where the server is placed
- the row where the server is located
- data center
6. Ceph is used by CERN, Cisco, DigitalOcean, Deutsche Telekom, Yahoo!, Bloomberg, Mail.ru, Wargaming, Huawei, QCT.
IREX software is responsible for the initial processing of data received from CCTV cameras and specialized sensors using artificial intelligence algorithms and the subsequent sending of all the processed data to the dBrain distributed storage system.
IREX is responsible for the online and post-factum search for events through a web interface and a mobile application. IREX contains user databases (lists of persons, number plate lists, etc.); here you can specify user rights and roles, configure CCTV cameras, and navigate through map services.
The minimum required amount of computing resources for processing 10 HD video streams is 2 servers with Intel Xeon E5 v4 (12 cores each) and RAM.
The total storage volume is calculated from the required archive depth and the number of replicas (data copies). The following basic parameters are accepted for the calculation: an HD video channel with permanent archive recording for 30 days + metadata takes up to 1.1 TB.
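A minimal sizing sketch based on the figure above (the camera count and the replica factor below are hypothetical placeholders, not IREX recommendations):

```python
# Rule of thumb from the text: one HD channel with a 30-day archive + metadata
# takes up to ~1.1 TB. Camera count and replica factor below are placeholders.
TB_PER_HD_CHANNEL_30_DAYS = 1.1

def total_storage_tb(cameras: int, archive_days: int = 30, replicas: int = 3) -> float:
    per_camera = TB_PER_HD_CHANNEL_30_DAYS * (archive_days / 30)
    return cameras * per_camera * replicas

print(round(total_storage_tb(100), 1))     # 100 HD cameras, 30 days, 3 copies -> 330.0 TB
```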
To calculate computing and server resources, fill in the questionnaire and send it to [email protected].
IREX doesn’t provide any equipment for testing, but it can be tried on Google Cloud or on the customer’s server equipment.
The execution period of a pilot project is 1 to 1.5 months (describe hardware, cloud requirements, etc.)
Camera requirements
See below the basic requirements for video parameters to perform video content analysis:
Parameter | Value |
---|---|
Video stream resolution | At least [email protected] (for TamperAlarm, AudioAlarm, BasicTrack, SideTrack, TopTrack, RailTrack, SmokeTrack, CrowdTrack);At least [email protected] (for TrafficTrack); At least 1280х[email protected] (for FaceTrack, when using the transmission system); At least 1920х[email protected] (for FaceTrack, when using in the streaming mode); At least 1920х[email protected] (for NumberTrack). |
Video bitrate | Constant bitrate: for resolution 1920x1080px – 6 MBit/sec; for resolution 1280x720px – 3.0 MBit/sec; for resolution 720x576px – 1 MBit/sec. |
Video stream type | RTSP (packet type TCP/IP, UDP); codec H.264; ONVIF compatibility is required (Profile S, Profile G). |
Electronic shutter lag (Detection of plate numbers) | from 1/250s – at car speeds up to 30 km/h; from 1/500s – at speeds up to 60 km/h; from 1/1000s – at speeds up to 90 km/h; from 1/5000s – at speeds up to 200 km/h. |
Electronic shutter lag (Face detection) | at least 1/100 s. |
Electronic shutter lag (Motion detection) | at least 1/50 s. |
Audio codec | G.711 u-law (8-bit, 8 kHz, mono), S/N ratio: at least 15 dB. |
Image type | Both b/w and color images are supported. |
Distortion (fish-eye effect) | The relative distortion of the image is not more than 1%. |
S/N ratio | > 50 dB. |
Video quality | Observed objects should be well distinguishable, in focus, clear, and contrasting with respect to the background. Digital noise (a superimposed mask of pixels of random color and brightness) should not distort the observed objects or their boundaries. The camera compression algorithms should not distort the scene and the observed objects. Over the entire area of the frame and the camera lens, there should be no defocused zones or foreign objects (for example, cobwebs). In difficult weather conditions, such as snow, fog, rain, and dust, the quality of all video analysis modules can be significantly reduced. Floodlights must not be directed into the camera lens; lighting devices, including flashing and stroboscopic types, are highly undesirable and lead to distortion of the video image and a decrease in video analytics accuracy. |
Linear dimensions of objects | The width and height of objects in the image should not be less than 1% of the frame size (at a resolution of 1920x1080px) or 15x15px. The width and height of objects in the image should not exceed 70% of the frame size. Linear dimensions of objects in the image for the face recognition, plate number recognition, and traffic violation modules are determined by the requirements for those modules. |
Features of object motion | The speed of the object should not be less than 1 pixel per second. For video analytics to detect the movement of an object, the duration of its visibility in the frame must be at least 1 second. For continuous tracking, the object must move between two adjacent frames in the direction of travel by a distance not exceeding its size. When identifying and recognizing faces, at least 20 frames per person (over at least 2 seconds) must be captured from the initial position to the final one. When identifying and recognizing car numbers, at least 15 frames per number (over at least 1 second) must be captured from the initial position to the final one. |
Video streams characteristics of 1 camera | Minimum bandwidth per video camera, Mbps | Burst, % | Burstable bandwidth, Mbps |
---|---|---|---|
FullHD 1920×1080, constant bitrate: 4.0 Mbps, 25 fps | 4.0 | 25 | 5.00 |
HD 1280×720, constant bitrate 2.5 Mbps, 25 fps | 2.5 | 25 | 3.13 |
QHD 2592×1944, constant bitrate avg: 8.0 Mbps, 25 fps | 8.0 | 25 | 10.00 |
Example:
You need to connect 3 video cameras to the monitoring system: 2 HD cameras 1280×720 and 1 FullHD camera 1920×1080. It means the total channel capacity from the monitoring site will be: 2 x 3.13 Mbps + 1 x 5.00 Mbps = 11.26 Mbps.
However, taking into account potentially heavy traffic in the scenes of the local CCTV cameras, as well as the tariff policies of the main telecom and “last mile” operators, the optimal bandwidth to order when connecting the site to the IREX monitoring system in this example is up to 15 Mbps.
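The same calculation expressed as a small sketch (camera counts are taken from the example above; the 25% burst figure comes from the table):

```python
# Burstable bandwidth = constant bitrate * (1 + burst).
def burstable_mbps(constant_bitrate_mbps: float, burst_percent: float = 25.0) -> float:
    return constant_bitrate_mbps * (1 + burst_percent / 100)

hd = burstable_mbps(2.5)        # 3.125 Mbps (rounded to 3.13 in the table)
fullhd = burstable_mbps(4.0)    # 5.0 Mbps

total = 2 * hd + 1 * fullhd     # 2 HD cameras + 1 FullHD camera from the example
print(round(total, 2))          # -> 11.25 (11.26 when using the table's rounded values)
```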
Requirements to communication links
Parameter | Value |
---|---|
Data transmission and network connection readiness factor | min 99.50 % |
Latency | max 100 ms |
Jitter | max 20 ms |
Packet loss | max 1 % |
Percentage of connections satisfying data rate and latency | min 99.00 % |
QoS, traffic prioritization requirements from cameras of local CCTV systems (descending) | 1. In-house traffic 2. Video traffic 3. The rest of the traffic (telemetry, SNMP) |
Face recognition
A certain camera position and view angle are required for high-quality face recognition.
Minimum camera parameter requirements for facial recognition:
Parameters | Value |
---|---|
Number of streams | Support for dual stream with independent configuration. |
Stream resolution | With a recognition distance of 4 m or more – at least 1920x1080px; with a recognition distance of up to 4 m – at least 1280x720px. |
Stream properties | The aspect ratio of the video should match the aspect ratio of the actual scene to prevent geometric distortion. HD 1280×720 (constant bitrate 2.5 MBit/sec, 25 frames/sec) – 16:9; FullHD 1920×1080 (constant bitrate 4.0 MBit/sec, 25 frames/sec) – 16:9. If the image is distorted, the camera should transmit the correct SAR (DAR) parameter in the stream. |
Frame rate | 25fps. |
Keyframe (i-frame) interval | at least 1 keyframe per second |
Focal length of the lens | from 4mm. |
Lens type | Fixed (motorized varifocal is recommended). |
Matrix | 1/1.8’’ Progressive Scan CMOS. |
Aperture | Auto/Manual. |
S/N ratio | > 50 dB. |
Photosensitivity | 0.002 lx (F1.2) – 0.0002 lx (F1.4). |
Image enhancement | Hardware WDR from 120dB. |
Electronic shutter lag | from 1/100 s; slow shutter support. |
Video compression | H.264. |
Video bitrate | for the resolution of 1920x1080px – 4 MBit/sec; for the resolution of 1280x720px – 2.5 MBit/sec. |
Working conditions | |
Environmental conditions | from -40 °C to +55 °C (for outdoor video surveillance cameras) and from 0 °C to +55 °C (for indoor cameras). |
The accuracy of face recognition is 94.21%.
Yes. IREX is able to recognize a person’s age by photo or from the camera.
IREX has also developed some extra services for face recognition (Telegram chatbots).
- Recognition of a buyer’s age by photo, with a passport check if there is any doubt that the person is under 18 – @RetailKYC_bot.
- Matching a person’s passport or other ID with a live picture – @ksensormatcher_bot.
Yes. IREX recognizes additional features like having a beard or mustache or wearing glasses. It can also recognize a person’s race and gender.
No, there are no limits on the number of people in the database. You can add as many people as you wish and upload multiple images for each person.
Yes. Depending on the location of the surveillance camera, IREX recognizes up to 25 persons in the frame simultaneously.
Yes, it can.
Recommended – 1.7–1.8 m; minimum – 1.4 m; maximum – 3.5 m. Note: installation parameters are selected individually for each camera.
The minimum face image size for detection must be at least 45x45px.
The minimum face image size for recognition must be at least 100x100px; the distance between the eyes in images from front-facing cameras must be at least 50px (a rough estimation sketch follows the angle limits below).
±15° up/down
±45° left/right
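A rough way to check whether a given installation meets the face-size figures above is a pinhole-camera estimate. The focal length, distance, sensor width, and face/eye dimensions below are hypothetical example values; only the 100px face and 50px eye-distance thresholds come from the requirements above.

```python
# Pinhole estimate: image size on sensor = focal_length * object_size / distance;
# size in pixels = image size / pixel pitch. All inputs are example values.
def pixels_on_target(object_m: float, distance_m: float,
                     focal_mm: float, sensor_width_mm: float, h_res_px: int) -> float:
    image_mm = focal_mm * object_m / distance_m
    pixel_pitch_mm = sensor_width_mm / h_res_px
    return image_mm / pixel_pitch_mm

# Example: 1/1.8" sensor (~7.2 mm wide), 1920px horizontal, 12 mm lens, 4 m distance.
face_px = pixels_on_target(0.16, 4.0, 12, 7.2, 1920)    # face width ~0.16 m
eyes_px = pixels_on_target(0.063, 4.0, 12, 7.2, 1920)   # eye distance ~63 mm
print(round(face_px), round(eyes_px))                    # -> 128 50 (>=100px and >=50px)
```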
The IREX server architecture assumes linear scaling of the system for the operation of an unlimited number of cameras and users. Depending on the available server hardware, the maximum number of cameras for face recognition (equipment capacity) is calculated individually for the specific CPU and memory of the hardware.
Number plate recognition
No, there are no limits on the number of vehicles in the database.
A certain camera position and view angle are required for high-quality number plate recognition.
Minimum camera parameter requirements for number plate recognition:
Parameters | Value |
---|---|
Number of streams | Support for dual stream with independent configuration. |
Stream resolution | At least 1920×1080px. |
Frame rate | At least 25 f/s. |
Focal length of the lens | With a recognition distance of 3.4m – from 3.8mm; With a recognition distance of 9.8m – from 8mm. |
Lens type | Motorized zoom lens. |
Matrix | 1/1.8’’ Progressive Scan CMOS. |
Aperture | Manual (optionally – Auto). |
S/N ratio | > 50 dB. |
Photosensitivity | 0.002 lx (F1.2) – 0.00027 lx (F1.4). |
Image enhancement | Hardware WDR, 140dB (with the implementation of the transmission system). |
IR illumination in the area of the plate number | at least 100 lx (at speeds up to 30 km/h); at least 200 lx (at speeds of 30 km/h and above). |
Electronic shutter lag | from 1/250s – at car speeds up to 30 km/h; from 1/500s – at speeds up to 60 km/h; from 1/1000s – at speeds up to 90 km/h; from 1/5000s – at speeds up to 200 km/h; slow shutter support. |
Video bitrate | Constant bitrate – 4 Mb/s. |
Working conditions | |
Environmental conditions | -40 °C…+60 °C, humidity not more than 95% (without condensation). |
Protection | IP67; a thermal housing with heating is mandatory. |
Built-in heater | mandatory. |
Types of recognizable numbers for each recognition profile:
Recognition profile is a combination of settings that provide the highest recognition accuracy for the selected country, group of countries or place. For example, profile Texas is the most suitable for number plates registered in Texas and its neighboring states.
- US:
- Texas
- Texas (priority)
- California
- California
- California (priority)
- Texas
- Other countries:
- Belarus
- Lithuania
- Poland
- Spain
- UK (Ireland)
- etc.
Number plate recognition | Accuracy % |
---|---|
Republic of Azerbaijan | 89.82 |
Republic of Belarus | 97.46 |
Republic of Kazakhstan | 94.00 |
Russian Federation | 96.66 |
Ukraine | 91.69 |
The IREX server architecture assumes linear scaling of the system for the operation of an unlimited number of cameras and users. Depending on the available server hardware, the maximum number of cameras for number plate recognition (equipment capacity) is calculated individually for the specific CPU and memory of the hardware.
The maximum number plate reading angle is 20°.
If camera settings and camera position meet all the requirements, IREX is able to read a number plate at any speed.
IREX monitors all 4 lanes simultaneously with 4K cameras.
IREX monitors 1-2 lanes with HD cameras.
IREX classifies vehicles by their type and color only.
Individual for each camera (for example, different parameters for different scenarios: along the carriageway or at entrances).
The number plate must be at least 150px in width and at least 20px in height to be recognized.
Licensing
There are 2 ways to purchase the IREX license:
- a one-time purchase of an IREX license (CapEx)
- a monthly subscription to IREX (OpEx)
The license is purchased at the rate of 1 camera = 1 license.
Yes. The cost of a one-time software purchase depends on the total number of cameras/licenses. The greater the number of cameras, the lower the cost of the license.
No, you don’t. There are no limits on the number of persons or vehicles in the database.
With a monthly subscription, technical support is provided free of charge (it is already included in the monthly subscription price). With a one-time purchase of licenses, technical support is provided free of charge for the first 12 months from the date of sale. For each following 12-month period (after the first year), the cost of technical support is 15% of the initial cost of the IREX license.
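A hedged illustration of this pricing model (the per-camera license price is a made-up placeholder, not an IREX price):

```python
# Support model from the paragraph above: the first 12 months are included,
# then 15% of the initial license cost per year. The price is a placeholder.
PRICE_PER_LICENSE = 100.0                  # hypothetical currency units per camera

def total_cost(cameras: int, years: int) -> float:
    license_cost = cameras * PRICE_PER_LICENSE   # 1 camera = 1 license
    support_years = max(0, years - 1)            # first year of support is free
    return license_cost + 0.15 * license_cost * support_years

print(total_cost(cameras=50, years=3))           # -> 6500.0 placeholder units
```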
No. When purchasing an IREX license, the customer receives all the functionality of the system. The license cost depends on the number of cameras only.
Monthly IREX software technical support includes:
- Software updates and level 2 technical support 24×7.
- Software optimization by installing new versions: reducing the computing resources consumed per channel, reducing the resource capacity needed for processing and storing information on existing and new server equipment, ensuring an adequate level of fault tolerance and system health, and minimizing downtime and version side effects during and after upgrades to a new version.
- Ensuring maximum stability of the IREX application software with the infrastructure software (dBrain), minimizing functional regression, keeping monitoring system services 100% available, and increasing the number of concurrent user requests on the existing monitoring system cluster equipment.
- Improving the interface and usability of the IREX software, developing existing and adding new search and event management functions: web interface, video wall, mobile