
IVE-DP8000-GU2-HW EoL-L

Dahua Intelligent Management Server

> Supports the unified management of algorithms. Multiple types of algorithms form algorithm warehouses that expose the capabilities of the algorithms stored in them. You can view the details of algorithm packages and perform operations on them, such as upload and delete (see the sketch after this feature list).

> Supports allocating computing resources, including CPU, memory, and other hardware resources. Computing resources can be centrally scheduled based on service requirements, eliminating hardware differences and presenting a unified resource pool to the outside.

> The system supports the unified management and flexible allocation of human, vehicle, and metadata services, and comes with a variety of built-in algorithms. It provides a unified call interface to the application side and allows multiple functions to be selected at the same time.

> With cluster management, you can manage multiple service modules, and enjoy access to non-stop services through cluster scheduling. The service load of each node can be adjusted based on its actual workload.

> Collects and manages service registration and configuration information, and provides a unified O&M portal and an automated O&M module.

> Supports authentication management of computing power.

> With resource management, the system can integrate and manage fragmented idle computing power and release it for other analysis tasks, improving the utilization of hardware resources.

> Service platforms can specify the algorithm for task analysis. The intelligent view engine allocates computing power and schedules tasks.

> Schedules algorithms across different types of architectures. Algorithms can run on two or more architectures at the same time, such as Arm and x86.
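
The algorithm-package operations mentioned in the first bullet above (upload, view details, delete) could, for example, be driven through a management API. The sketch below is a rough illustration only: the base URL, endpoint paths, response fields, and authentication header are hypothetical assumptions, not Dahua's actual interface.

```python
# Minimal sketch of algorithm-package management against a HYPOTHETICAL
# management API; endpoint paths and response fields are assumptions.
import requests

BASE_URL = "https://ive.example.com/api/v1"   # hypothetical management endpoint
AUTH = {"Authorization": "Bearer <token>"}    # hypothetical auth header

def upload_algorithm_package(path: str) -> str:
    """Upload an algorithm package and return its assigned package ID."""
    with open(path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/algorithm-packages",
                             headers=AUTH, files={"package": f}, timeout=60)
    resp.raise_for_status()
    return resp.json()["package_id"]

def list_algorithm_packages() -> list:
    """View details of all packages currently in the algorithm warehouse."""
    resp = requests.get(f"{BASE_URL}/algorithm-packages", headers=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["packages"]

def delete_algorithm_package(package_id: str) -> None:
    """Remove a package that is no longer needed."""
    resp = requests.delete(f"{BASE_URL}/algorithm-packages/{package_id}",
                           headers=AUTH, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    pkg_id = upload_algorithm_package("face_recognition_v2.1.alg")  # example file name
    for pkg in list_algorithm_packages():
        print(pkg_id, pkg)
```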


Specifications

    System

    Main Processor

    Two Intel Xeon Silver 4114 processors (10C/20T, 2.2 GHz, 13.75 MB cache)

    Operating System

    Linux

    Memory

    Four 16 GB DDR4 memory modules with up to 24 slots

    Disk

    Two 3.5" 4 TB HDDs, expandable to a maximum of 32 TB (4 TB per HDD) with up to 8 slots.
    7.2K RPM SATA 6 Gbps 512n 3.5"

    RAID

    Optional:
    1. SATA: RAID 0, 1, 5, 10
    2. SAS: RAID 0, 1, 5, 6, 10, 50, 60
    Optional: no/1 GB/2 GB cache
    Optional: power failure protection of cache

    Port

    Network Port

    2 × 10000/1000 Mbps self-adaptive network ports

    USB

    2 × front USB 3.0 ports and 3 × rear USB 3.0 ports

    VGA

    2 × VGA ports

    PCIe

    6 × half-height PCIe expansion slots (3 × PCIe 3.0 x8, 2 × PCIe 3.0 x16, 1 × PCIe 3.0 x4)

    Others

    1 × RJ-45 management network port

    General

    Power Supply

    100–127 V/200–240 V, 50/60 Hz, 10 A/5 A

    Power Redundancy

    Dual

    Power Consumption

    ≤ 800 W

    Operating Temperature

    10°C to 35°C (50°F to 95°F)

    Operating Humidity

    35%–80% (RH); maximum relative humidity 90% (RH) at 40°C (104°F)

    Storage Temperature

    –40°C to 60°C (–40°F to 140°F)

    Storage Humidity

    20%–93% (RH)

    Gross Weight

    29.05 kg (64.04 lb)

    Net Weight

    19.25 kg (42.44 lb)

    Product Dimensions

    87.1 mm × 447.6 mm × 735.0 mm (3.43" × 17.62" × 28.94") (H × W × D)

    Packaging Dimensions

    273.0 mm × 754.0 mm × 1069.0 mm (10.74" × 29.68" × 42.09") (H × W × D)

    Installation

    Standard 19'' rack installation with guide rail

    Optional

    Product Type

    Hardware

    Computer Vision Intelligent Engine

    Resource Management

    Supports allocating computing resources, including CPU, memory, GPU, and other hardware resources, and manages them in a unified way to supply the analysis system. Computing resources can be centrally scheduled based on service requirements, eliminating hardware differences and presenting a unified resource pool to the outside. Supports monitoring the registration process and the running status of computing resources, with operation and maintenance available as needed. Virtualizes computing cards into multiple fragments for fine-grained control of computing resources.
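
As a rough sketch of the resource-pool idea described above, the following shows one way computing cards could be virtualized into fragments and allocated or released on demand. The class names, fragment granularity, and first-fit policy are illustrative assumptions, not the product's internal design.

```python
# Illustrative sketch only: a unified pool that virtualizes computing cards
# into fragments. Names and the first-fit policy are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ComputingCard:
    card_id: str
    arch: str                  # e.g. "x86" or "arm" host architecture
    total_fragments: int       # card is virtualized into N equal fragments
    used_fragments: int = 0

    @property
    def free_fragments(self) -> int:
        return self.total_fragments - self.used_fragments

@dataclass
class ResourcePool:
    cards: List[ComputingCard] = field(default_factory=list)

    def register(self, card: ComputingCard) -> None:
        """Add a card to the unified pool when its node registers."""
        self.cards.append(card)

    def allocate(self, fragments_needed: int) -> Optional[str]:
        """First-fit allocation of fragments for an analysis task."""
        for card in self.cards:
            if card.free_fragments >= fragments_needed:
                card.used_fragments += fragments_needed
                return card.card_id
        return None            # no card has enough idle capacity

    def release(self, card_id: str, fragments: int) -> None:
        """Return fragments so idle computing power can serve other tasks."""
        for card in self.cards:
            if card.card_id == card_id:
                card.used_fragments = max(0, card.used_fragments - fragments)

pool = ResourcePool()
pool.register(ComputingCard("gpu-0", "x86", total_fragments=8))
slot = pool.allocate(fragments_needed=2)   # -> "gpu-0"
pool.release("gpu-0", fragments=2)
```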

    Service Scheduling

    The system supports the unified management and flexible allocation of algorithms, and comes with a variety of built-in algorithms. It provides a unified call interface to the application side and allows multiple functions to be selected at the same time. Schedules algorithms across different types of architectures; algorithms can run on two or more architectures at the same time, such as Arm and x86.
    The supported functions are: face recognition, license plate recognition, pedestrian recognition, human metadata, and motor vehicle and non-motor vehicle metadata.
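
To make the idea of one call interface with several functions selected at once concrete, here is a minimal sketch of how such a request might be composed. The request structure and function identifiers are assumptions for illustration, not the engine's real interface.

```python
# Sketch of composing one analysis request that enables several functions;
# the AnalysisRequest structure and function names are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class AnalysisRequest:
    source: str               # stream or picture source to analyze
    functions: List[str]      # several functions may be enabled at once

def build_request(source: str, functions: List[str]) -> AnalysisRequest:
    """Validate the selected functions against the supported options."""
    supported = {
        "face_recognition", "license_plate_recognition",
        "pedestrian_recognition", "human_metadata", "vehicle_metadata",
    }
    unknown = set(functions) - supported
    if unknown:
        raise ValueError(f"unsupported functions: {unknown}")
    return AnalysisRequest(source=source, functions=functions)

# Example: request face recognition and human metadata for the same stream.
req = build_request("rtsp://camera-01/stream1",
                    ["face_recognition", "human_metadata"])
print(req)
```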

    Intelligent Task Scheduling

    Manages analysis and retrieval tasks, and executes strategies for different tasks through the resource scheduling module and task scheduling strategies. Adjusts task strategies based on the actual workload and task priorities, and assigns intelligent analysis tasks to the corresponding analysis engines for processing.
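
A minimal sketch of priority-driven task dispatch as described above, assuming a simple rule that a lower number means a more urgent task; the scheduler class and its methods are illustrative only.

```python
# Sketch of priority-based task scheduling; the priority rule and the
# engine-slot model are assumptions for illustration.
import heapq
import itertools
from dataclasses import dataclass, field
from typing import List

@dataclass(order=True)
class Task:
    priority: int                     # 0 = most urgent
    seq: int                          # tie-breaker: keep submission order
    name: str = field(compare=False)

class TaskScheduler:
    def __init__(self) -> None:
        self._queue: List[Task] = []
        self._counter = itertools.count()

    def submit(self, name: str, priority: int) -> None:
        """Queue an analysis or retrieval task with a priority."""
        heapq.heappush(self._queue, Task(priority, next(self._counter), name))

    def dispatch(self, engine_slots: int) -> List[str]:
        """Assign the most urgent tasks to the free analysis-engine slots."""
        assigned = []
        while self._queue and len(assigned) < engine_slots:
            assigned.append(heapq.heappop(self._queue).name)
        return assigned

scheduler = TaskScheduler()
scheduler.submit("realtime face analysis", priority=0)
scheduler.submit("offline video retrieval", priority=2)
print(scheduler.dispatch(engine_slots=1))   # urgent real-time task goes first
```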

    Cluster Management

    With cluster management, you can manage multiple service modules, and enjoy access to non-stop services through cluster scheduling. The service load of each node can be adjusted based on its actual workload.
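
The load-based adjustment mentioned above might, for instance, amount to routing new work to the least-loaded healthy node. The sketch below illustrates that idea with assumed node fields and a deliberately simple policy; it is not the product's scheduling logic.

```python
# Sketch of load-aware node selection within a cluster; node fields and the
# "least-loaded healthy node" policy are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    name: str
    load: float        # current service load, 0.0-1.0
    healthy: bool      # unhealthy nodes are skipped so service continues

def pick_node(nodes: List[Node]) -> Node:
    """Route new work to the least-loaded healthy node."""
    candidates = [n for n in nodes if n.healthy]
    if not candidates:
        raise RuntimeError("no healthy node available in the cluster")
    return min(candidates, key=lambda n: n.load)

cluster = [Node("engine-01", 0.72, True),
           Node("engine-02", 0.35, True),
           Node("engine-03", 0.10, False)]   # down: excluded from scheduling
print(pick_node(cluster).name)               # -> engine-02
```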

    Network Authentication Management

    Conveniently authorizes algorithms and the number of cards for computing power servers through network authentication.
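
As a loose illustration of how such an authorization could gate algorithm loading, the sketch below checks an assumed license record for the permitted algorithms and card count; the record format is hypothetical and not taken from the product.

```python
# Sketch of an authorization check; the LicenseGrant record is a purely
# hypothetical stand-in for whatever the network authentication returns.
from dataclasses import dataclass
from typing import Set

@dataclass
class LicenseGrant:
    algorithms: Set[str]     # algorithm types the server is authorized to run
    max_cards: int           # number of computing cards covered by the grant

def can_load(grant: LicenseGrant, algorithm: str, cards_in_use: int) -> bool:
    """Allow loading only if the algorithm and card count are both authorized."""
    return algorithm in grant.algorithms and cards_in_use < grant.max_cards

grant = LicenseGrant(algorithms={"face_recognition", "human_metadata"},
                     max_cards=4)
print(can_load(grant, "face_recognition", cards_in_use=2))        # True
print(can_load(grant, "license_plate_recognition", cards_in_use=2))  # False
```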

    Third-Party Algorithm Access

    Manages third-party algorithm access. The system provides a standard interface for connecting third-party algorithms to the algorithm warehouses of the intelligent view engine. You can allocate and load third-party algorithms onto computing resources to instantiate them. Service platforms can also specify the algorithm used for task analysis; the intelligent view engine allocates computing power and schedules the tasks.
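
A "standard interface" for third-party algorithms might resemble a plugin contract like the sketch below; the abstract methods, registry, and vendor class are assumptions used to illustrate registration and instantiation, not the actual SDK.

```python
# Sketch of what a third-party algorithm interface could look like; the
# abstract methods and registry are assumptions, not the product API.
from abc import ABC, abstractmethod

class ThirdPartyAlgorithm(ABC):
    """Contract a vendor algorithm implements to join the algorithm warehouse."""

    name: str = "unnamed"

    @abstractmethod
    def load(self, device: str) -> None:
        """Instantiate the algorithm on an allocated computing resource."""

    @abstractmethod
    def analyze(self, frame: bytes) -> dict:
        """Run inference on one frame and return structured results."""

ALGORITHM_WAREHOUSE: dict = {}

def register(algorithm_cls) -> None:
    """Add a third-party implementation to the warehouse by name."""
    ALGORITHM_WAREHOUSE[algorithm_cls.name] = algorithm_cls

class VendorFaceAlgorithm(ThirdPartyAlgorithm):
    """Hypothetical vendor implementation of the contract."""
    name = "vendor_face_v1"

    def load(self, device: str) -> None:
        print(f"loading {self.name} on {device}")

    def analyze(self, frame: bytes) -> dict:
        return {"faces": []}    # placeholder result

register(VendorFaceAlgorithm)
algo = ALGORITHM_WAREHOUSE["vendor_face_v1"]()   # instantiation on demand
algo.load("gpu-fragment-0")
```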