
S-Series Enterprise Quick Start Guide

S-Series-Enterprise-Networked-Video-Storage.pdf


exacqVision S-Series Network Video Storage

S-Series.pdf

How to Enable Edge Storage on Axis Cameras

The instructions below differ depending on the firmware version on your Axis cameras. Check your camera’s firmware version before continuing.

Edge Storage recording is currently supported only for recording based on motion events. Setting the camera(s) to continuous recording may produce unpredictable results.

Adding the camera to exacqVision:

  • When adding your camera(s) to exacqVision on the ‘Add IP Cameras’ page, you must use HTTP. HTTPS connections are not currently supported with Axis cameras for the Edge Storage feature.
  • On the ‘Add IP Cameras’ page, append the following string to the Hostname/IP Address entered for the camera: #transport=udp (for example, 192.168.0.5#transport=udp). Note: Some camera firmware versions do not require this override. If the camera does not show “Edge Storage Status: Supported”, remove this string and disable/enable the camera.

Axis firmware 7.x and above

  1. Log in to the camera’s web interface.
  2. Enter the Settings menu.
  3. Go to the System tab.
  4. Open the Events page.
  5. Under the Action Rule List, click ‘Add…’
  6. Enter a name for the event such as ‘NetLossRecording’.
  7. In the Trigger drop-down menu, the option to choose depends on the type of motion detection you are using.

    For VMD1, which is traditional motion detection, select ‘Detectors’. In the menu below this, select ‘Motion Detection’ or ‘Motion Alarm’.
    In the final drop-down menu, select the Motion Window to use.

    For VMD4, select ‘Applications’. In the drop-down menu below this, select ‘VMD 4’.
    If you have multiple profiles configured you need to choose the same one being used by exacqVision. 
  8. Leave the Schedule set to ‘Always’.
  9. The stream profile must be set to the same stream that the exacqVision Server is recording.
  10. Check the duration boxes but leave them at their default settings.
  11. Change the Storage option to your SD card. 
  12. Click ‘OK’.
  13. You may examine the SD card to confirm the camera populates recordings locally. 
    In firmware 7.x and above, this is performed by minimizing the Settings menu to make the Storage icon appear at the bottom of the Live View.
  14. To confirm or test the Edge Storage function, you must simulate a network disconnection that does not cause the camera itself to lose power. When the camera connection is re-established, the server will write entries to the server logs and begin copying video files from the camera.
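One way to stage such a disconnection from a Linux server is to temporarily drop traffic to the camera rather than unplugging cables. This is only a hedged sketch — the address 192.168.0.5 and the 120-second window are placeholders; adjust them to your environment:

```shell
#!/bin/sh
# Simulate a network outage to the camera without cutting its power by
# blocking outbound traffic from the server (requires root privileges).
simulate_outage() {
  cam_ip="$1"; seconds="$2"
  iptables -A OUTPUT -d "$cam_ip" -j DROP   # begin the "outage"
  sleep "$seconds"                          # camera records to its SD card
  iptables -D OUTPUT -d "$cam_ip" -j DROP   # restore connectivity
}
# Example (as root): simulate_outage 192.168.0.5 120
```

After connectivity returns, watch the server logs for the Edge Storage download entries described above.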


Axis firmware 6.x

  1. Log in to the camera’s web interface.
  2. Enter the Setup page.
  3. Open Events > Action Rules.
  4. Click on ‘Add’. 
  5. Enter a name for the event, such as ‘NetLossRecording’.
  6. In the Trigger drop-down menu, the option to choose depends on the type of motion detection you are using.

    For VMD1, which is traditional motion detection, select ‘Detectors’. In the menu below this, select ‘Motion Detection’ or ‘Motion Alarm’.
    In the final drop-down menu, select the Motion Window to use.

    For VMD4, select ‘Applications’. In the drop-down below this, select ‘VMD 4’.
    If you have multiple profiles configured you need to choose the same one being used by exacqVision.
  7. Leave Schedule set to ‘Always’. 
  8. Under Actions, set the Type to ‘Record Video’. 
  9. The stream profile must be set to the same stream that the exacqVision Server is recording. 
  10. Check the duration boxes but leave them at their default settings. 
  11. Change the Storage option to your SD card. 
  12. Click ‘OK’. 
  13. You may examine the SD card to confirm that the camera populates recordings locally. 
  14. To confirm or test the Edge Storage function, you must simulate a network disconnection that does not cause the camera itself to lose power. When the camera connection is re-established, the server will write entries to the server logs and begin copying video files from the camera.


How-to-Enable-Edge-Storage-on-Axis-Cameras.pdf

ExacqVision IP Plugin IDs

ExacqVision data files are stored with a specific file naming format that includes a four-hex-digit plugin ID identifying the recording device. The following is a list of device and integration plugin IDs:

Access Control, Capture Boards and Cameras/Encoders

Name             Plugin ID   Description
XDVAPI           0004        eDVR capture board
AXISIP           0007        IP camera
IQEYE            0011        IP camera
SONY             0012        IP camera
PANASONIC        0015        IP camera
ACTI             0016        IP camera
IPCAMDETECT      0017        General IP camera detection
ARECONT          0018        IP camera
VIVOTEK          0019        IP camera
ONVIFNVC         001A        IP camera
ARECONTTFTP      0021        IP camera
IOIMAGE          0022        IP camera
STARDOT          0023        IP camera
BOSCH            0024        IP camera
CANON            0025        IP camera
IPX              0026        IP camera
STRETCH          0027        Stretch and SDVR capture boards
BASLER           0028        IP camera
GANZ             0029        IP camera
EXACQRTSP        0030        IP camera
SANYO            0031        IP camera
PELCO            0032        IP camera
ILLUSTRA         0034        IP camera
HIKVISION        0035        IP camera
UDP              0036        IP camera
DAHUA            0040        Dahua capture board
HANWHA/SAMSUNG   0041        IP camera
PIXELPRO         0042        IP camera
ILLUSTRA_FLEX    0045        IP camera
ILLUSTRA3        0046        IP camera
TDVR             0047        tDVR capture board
DAHUAIP          0049        IP camera
KANTECH          004A        Access control
ITV2             004B        Intrusion panel
NEO              004B        Intrusion panel
DYNACOLOR        004D        M-series
HONEYWELL        004E        Access control
BENTEL           004F        Intrusion panel
BOSCHSEC         0050        Access control
ANALYTIC         0051        Analytic appliance(s)
DMP              0052        Intrusion panel
CCURE            0053        Access control
NAPCO            0055        Intrusion panel
AXISBW           0057        Bodycam
BRIVO            0059        Access control
ILLUSTRAMULTI    005A        IP camera
ILLUSTRABW       005B        Bodycam
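Because the IDs are hexadecimal, a small lookup helper makes it easy to translate an ID found in a data file name back to a device type. This is a minimal sketch using a few entries from the table above (the table itself is authoritative; extend the cases as needed):

```shell
# Map a four-hex-digit plugin ID (case-insensitive) to its device name,
# using a subset of the table above.
plugin_name() {
  id=$(printf '%s' "$1" | tr 'a-f' 'A-F')
  case "$id" in
    0004) echo "XDVAPI (eDVR capture board)" ;;
    0007) echo "AXISIP (IP camera)" ;;
    001A) echo "ONVIFNVC (IP camera)" ;;
    004A) echo "KANTECH (Access control)" ;;
    0057) echo "AXISBW (Bodycam)" ;;
    *)    echo "unknown plugin ID: $id" ;;
  esac
}
plugin_name 0007   # prints: AXISIP (IP camera)
plugin_name 001a   # prints: ONVIFNVC (IP camera)
```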



Motherboard drives displaying “Red” on Storage Hardware tab

Description

Motherboard drives (including OS SSD) displaying “Red” on Storage Hardware tab


Platform

Windows A-Series with HBA (more than 4 drives)


Steps to reproduce

Shut down the Exacq server and remove power.

Note: Removing power is the key step; a normal shutdown alone will not reproduce this issue.

Expected result

All drives show “Green” and Healthy on the Storage Hardware tab.


Actual result

Only the drives on the motherboard (including the OS SSD) show “Red” (but healthy) on the Storage Hardware tab.

Note: All drives on the Storage Drive tab show Green and Healthy.

Note: The drives are good; the system continues recording to them while in this state. There is no known adverse effect to the customer.

Workaround

Rebooting the system will return the drives to “Green” and Healthy on the Storage Hardware tab.

Fix

Rebooting the system fixes the problem until the next time the system loses power. While the system is still in the state where the drives show offline, open Windows Device Manager, expand Storage Controllers (this may be labeled IDE ATA/ATAPI Controllers), and disable the Standard Dual Channel PCI IDE Controller. This controller is only listed while the drives are in the offline state; if you have already rebooted, you must shut the system down, pull power, and then boot back up before you can disable it.

Motherboard-drives-displaying-Red-on-Storage-Hardware-tab.pdf

ExacqVision A-Series 4U Drive Replacement Guide

NOTE: This article assumes that you already know which drive number must be replaced, and that you have a replacement drive ready to install in the system.

NOTE: The photos in this article illustrate the replacement of Drive 3.

To replace a hard drive in an exacqVision A-Series 4U system manufactured on or after 07/30/2015, complete the following steps:

  1. Shut down the system and remove all power cables.
  2. Use a screwdriver to remove the screws that hold the system lid in place. Remove the lid.
  3. Locate the drive that needs to be replaced. Drives are labeled 0 through 7 (the Drive 3 label is circled in the blue oval in the following image).

    Power and communication cables must be removed from all drives in the same cage — drives 0-3 or drives 4-7. Power cables are on top (circled in red) and communication cables are on the bottom (one example is shown in a green oval). Note the labels on the communication cables, as they must be plugged back in later in the same order.
  4. Remove the screws that hold the cage in place. There are two screws on the side of the chassis and four screws on the bottom. For better access to the screws on the bottom, you can tilt the system on its side as shown. CAUTION: If you tilt the system on its side, hold the cage in with your hand as you remove the screws so that the cage does not fall out of the system.
  5. Slide the cage up and out of the chassis.
  6. Remove the screws that hold the drive in its cage. There are two screws on the bottom of the cage and two screws on top of the cage. Remove the drive from the cage.
  7. Slide the replacement drive into the empty slot in the cage. Secure it in the slot using the four screws removed in the previous step.
  8. Slide the cage back into the chassis and affix it with the six screws removed in step 4.
  9. Plug the communication cables into the drives. Note the labels on the cables and plug them in order according to the drive numbers.
  10. Plug the power cables (in any order) into the drives.
  11. Secure the system lid using the screws removed in step 2.
  12. Connect the power cable and start the system.


exacqVision-A-Series-4U-Drive-Replacement-Guide.pdf

Best Practices When Upgrading Hard Drives on exacqVision Servers

The following considerations are for 32-bit Windows-based A-Series systems without a RAID controller:

  • Desktop and 2U A-Series systems do not have a separate physical drive for the operating system. By default, Windows 7 installs with an MBR partition table, which has a maximum addressable space of 2.2TB, so Windows cannot be loaded on a drive larger than that.
  • UEFI is not supported for booting with 32-bit versions of Windows.
  • The operating system needs its own partition (30-60GB) to use the rest of the drive as storage. If the drive containing the operating system is replaced, you will need to back up your settings and other important information, so plan accordingly. 
  • Although it is possible to use a storage drive larger than 2.2TB with a GUID Partition Table (GPT), we do not support mixed-capacity systems.
  • A BIOS upgrade might be required to detect >2TB drives for older systems.

Bottom line: 2TB drives are the maximum supported upgrade for 32-bit Windows-based Desktop and 2U A-Series systems.
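The 2.2TB ceiling follows directly from MBR’s 32-bit sector addressing with 512-byte sectors, as this quick check shows:

```shell
# MBR stores the partition size as a 32-bit sector count; with 512-byte
# sectors, the largest addressable capacity is:
max_bytes=$(( (1 << 32) * 512 ))
echo "$max_bytes"   # prints: 2199023255552 (about 2.2 TB decimal)
```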


The following considerations are for 32-bit Windows-based A-Series and Z-Series systems with a RAID Controller:

  • 4U A-Series systems (JBOD or RAID) use a separate boot partition created by the RAID controller. Windows detects it as a separate drive, so MBR may be used and the operating system may be installed there. If the drive containing the operating system is replaced, you will need to back up your settings and other important information, so plan accordingly.
  • All Z-Series systems use a separate physical drive for the operating system. We use this separate drive exclusively to install the operating system. This drive should not have to be changed when upgrading the storage drives.
  • With JBOD and RAID arrays larger than 2.2TB, GPT must be used to see the entirety of the drive.
  • Although it is possible to add larger drives to an existing RAID array, the controller will base the capacity on the drive with the lowest capacity. Even if all drives are replaced with larger capacity drives, the controller will not automatically adjust to the larger size. You will need to destroy and re-create the RAID array, so plan accordingly.
  • It is also possible to mix capacities with JBOD arrays. However, JBOD arrays are not fault-tolerant and the existing data on the drives will be lost, so plan accordingly. More importantly, we do not support mixed-capacity systems.
  • A BIOS upgrade and/or controller firmware update might be required to detect >2TB drives for older systems.

Bottom line: 6TB drives are the maximum supported upgrade for 32-bit Windows-based 4U A-Series and all Z-Series systems.


The following considerations are for 64-bit Windows-based A-Series and Z-Series systems:

  • We use UEFI partitions and GPT with 64-bit Windows-based systems. This eliminates the 2.2TB limit of MBR partitioning on the operating system drive.
  • However, if the drive containing the operating system is replaced, you will need to back up your settings and other important information, so plan accordingly.
  • Although it is possible to add larger drives to an existing RAID array, the controller will base the capacity on the drive with the lowest capacity. Even if all drives are replaced with larger capacity drives, the controller will not automatically adjust to the larger size. You will need to destroy and re-create the RAID array, so plan accordingly.
  • It is also possible to mix capacities with JBOD arrays or single-drive systems. However, JBOD arrays are not fault-tolerant and the existing data on the drives will be lost, so plan accordingly. More importantly, we do not support mixed-capacity systems.

Bottom line: 6TB drives are the maximum supported upgrade for all 64-bit Windows-based systems.


The following considerations are for Linux-based LC, EL, ELS, ELX, ELP, A-Series, and Z-Series systems:

  • All Linux systems use a separate physical drive for the operating system (except for LC, as explained below).
  • We use either the ext3 or ext4 file system (depending on the Linux version). In either case, the maximum single partition size we use is 16TB. For arrays larger than this (RAID systems), the space must be split into smaller, equal partitions; the diskprep.sh script performs this split automatically.
  • Although it is possible to add larger drives to an existing RAID array, the controller will base the capacity on the drive with the lowest capacity. Even if all drives are replaced with larger capacity drives, the controller will not automatically adjust to the larger size. You will need to destroy and re-create the RAID array, so plan accordingly.
  • It is also possible to mix capacities with JBOD arrays or single-drive systems. However, JBOD arrays and single-drive systems are not fault-tolerant and the existing data on the drives will be lost, so plan accordingly. More importantly, we do not support mixed-capacity systems.
  • LC systems: LC systems do not have a separate boot drive. Before you upgrade the drive, back up the configuration and any other important information. Additionally, the data on the drive will be lost, so plan accordingly. The LC recovery image accounts for the single-drive setup, but it should not be used with other types of systems. LC also uses a different drive model than the other models.
  • A BIOS upgrade might be required to detect >2TB drives for older systems.

Bottom line: 6TB drives are the maximum supported upgrade for all Linux systems if all drives are replaced at the same capacity.
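The equal-partition split described above amounts to a ceiling division. This sketch only illustrates the arithmetic (diskprep.sh’s actual logic may differ), using a hypothetical 40TB array:

```shell
# Split an array into the fewest equal partitions that each stay at or
# below the 16TB single-partition limit (sizes in whole TB for brevity).
array_tb=40
limit_tb=16
parts=$(( (array_tb + limit_tb - 1) / limit_tb ))   # ceiling division -> 3
size_tb=$(( array_tb / parts ))                     # ~13TB each
echo "$parts partitions of ~${size_tb}TB each"
```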


A note about drive speed:

  • Depending on the age of the system, the RAID controller or the SATA ports on the motherboard may only support hard drives at 3 Gb/s (SATA II). This does not prevent the motherboard from detecting a larger drive, but it will reduce performance, as new drives purchased from exacqVision are 6 Gb/s (SATA III) drives.
  • Some motherboards do not have SATA III connectors, and some have only two to four. Older RAID controllers only support SATA II speeds on drives.
  • For assistance on determining which motherboard you have and which connector to use, please contact Technical Support with your system’s serial number (number beginning with ER).


Best-Practices-When-Upgrading-Hard-Drives-on-exacqVision-Servers.pdf

RAID Setup Using 3ware Controller (RAID 5 or RAID 6)

Part One: Enter the 3ware BIOS

  1. Restart the operating system.
  2. When prompted, press Alt+3 to enter the 3ware RAID BIOS.
  3. Type admin256 as the password.
  4. Press any key to acknowledge the warning.

Part Two: Remove the Failed Array

  1. Select the failed RAID array by highlighting the array (Exportable Unit) and pressing the spacebar.
  2. Press Tab to enter the lower menu.
  3. Select Delete Unit and press Enter.
  4. Press Enter to confirm.

Part Three: Create the New Array

  1. Press Tab to navigate to the top of the list of drives listed as Exportable Unit. Highlight Direct Attached and press Enter. Each drive should have an asterisk (*) next to it.
  2. Press Tab to enter the lower menu.
  3. Navigate to Create Array and press Enter.
  4. Select the following sections (Enter) and use the following settings:
    1. Array Name should be Exacq.
    2. RAID Configuration is 5 (if you have 3 to 8 drives) or 6 (if you have more than 8 drives).
    3. Stripe Size should be 256KB.
    4. Write Cache Setting should be Enabled.
    5. Read Cache Setting should be Intelligent.
    6. StorSave Profile should be Performance.
    7. Auto-Verify should be Enabled.
    8. Rapid RAID Recovery should be Fast Rebuild/Shutdown.
  5. Select Advanced, press Enter, and use the following settings:
    1. Boot Volume Size should be blank if the operating system is on a separate device.
    2. If you DO NOT have a separate device for the OS, Boot Volume Size should be 57.
    3. Drive Querying Mode should be Enabled.
    4. Continue on Error When Rebuild should be Disabled.
    5. Initialize Method should be Background.
    6. Navigate to OK and press Enter.
    7. Press Y to confirm cache settings.
    8. Confirm values are correct and press Enter.
    9. Press F8 to lock in settings.
    10. Press Y to save configuration and exit.
  6. You can now load the operating system per the OS Recovery Instructions (if needed).

Enabling iSCSI Support on exacqVision Systems with Windows Embedded

Early versions of exacqVision systems with the Windows Embedded operating system did not include support for iSCSI. These systems were manufactured in January and early February 2014.


To determine whether a system supports iSCSI, complete the following steps:

  1. Open the Start menu.
  2. Right-click Computer.
  3. Select Manage from the pop-up menu.
  4. Double-click Services and Applications.
  5. Double-click Services.
  6. If Microsoft iSCSI Initiator Service is not running, start it.
  7. Click Device Manager.
  8. Expand the Storage Controllers node if necessary.
  9. If you do not see Unknown Device, your system supports iSCSI.
  10. If you do see Unknown Device, right-click it, select Properties, select Details, and select Hardware IDs. If the value is ROOT\ISCSIPRT, your system does not support iSCSI.

To enable iSCSI support on the system, complete the following steps:

  1. Download iSCSIPack.zip from https://exacq.com/files and unzip it to a flash drive or directly to the exacqVision system.
  2. On the exacqVision system, navigate to the directory containing the unzipped installation files.
  3. Double-click the install.bat file.
  4. The installer prompts for administrator rights, installs the files, and restarts the system.
Enabling-iSCSI-Support-on-exacqVision-Systems-with-Windows-Embedded.pdf

Upgrading a Linux-based exacqVision Server with Active iSCSI Configuration to exacqVision 5.8 (Legacy)

When upgrading a Linux-based exacqVision server to exacqVision 5.8, the existing mount point for an active iSCSI connected drive might not be recognized. To work around this issue, complete the following steps after the upgrade is complete:


  1. Using exacqVision Client 5.8, open the Storage page for the upgraded server.
  2. Select the Extended tab.
  3. Look for your iSCSI connection and corresponding mount paths. If they appear as expected, no further action is necessary. Otherwise, continue with the following step.
  4. Note the mount paths that appear on the Extended tab (for example, /mnt/edvr/11/ and three other mounts).
  5. On the Drive tab, deselect the recording drives listed on the Extended tab. Click Apply to disable recording to those mount paths.
  6. Ensure that the originally configured iSCSI mounts are still enabled for recording.
  7. On the server, run sudo /etc/init.d/edvrserver stop in Terminal.
  8. Use the mount command to determine the device name of the iSCSI mount point. The output will look similar to this:
    /dev/sdb1 on / type ext4 (rw,errors=remount-ro)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    /dev/sdc1 on /mnt/edvr/4 type ext4 (rw,_netdev,errors=remount-ro)
    /dev/sdd1 on /mnt/edvr/5 type ext4 (rw,_netdev,errors=remount-ro)
  9. Note the /dev/sdxx device name that corresponds to the /mnt/edvr/x mount path from earlier in the procedure.
  10. Run blkid -o value -s UUID /dev/sdc1 (substituting your device’s name for /dev/sdc1) to determine the UUID for the device.
  11. Open the /etc/fstab file for editing. Find the entry that specifies the iSCSI UUID. Change the mount point in the entry to the pre-upgrade configuration. For example, if the UUID= entry contains /mnt/edvr/4, change the “4” to “2.”
  12. Delete the fstab entry created for the iSCSI device before the upgrade. The file will have multiple entries for the mount point; keep the one specifying the UUID, and delete the other, which will look similar to /dev/sdc1 /mnt/edvr/2 ext4 _netdev,errors=remount-ro 0 0.
  13. Save all changes.
  14. Continue to edit the fstab file for each iSCSI drive on the system.
  15. Run sudo mount -a to reload the fstab file.
  16. In /usr/local/exacq/server, delete archivepi.xml and psfpi.xml.
  17. Run sudo /etc/init.d/edvrserver start.
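The fstab repair in steps 11 and 12 amounts to collapsing the two entries for one device into a single UUID-based entry pointing at the pre-upgrade mount path. A hypothetical before/after, with <your-uuid> standing in for the value blkid reported:

```
# /etc/fstab — before (two entries for the same iSCSI device):
/dev/sdc1         /mnt/edvr/2  ext4  _netdev,errors=remount-ro  0  0
UUID=<your-uuid>  /mnt/edvr/4  ext4  _netdev,errors=remount-ro  0  0

# /etc/fstab — after (one UUID entry, pre-upgrade mount path restored):
UUID=<your-uuid>  /mnt/edvr/2  ext4  _netdev,errors=remount-ro  0  0
```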


exacqVision Client should now display the correct mount paths on the Extended tab on the Storage page.
