Tag: Storage
The instructions below differ based on the firmware versions on your Axis cameras. Please check your camera’s firmware version prior to continuing.
<br>
Edge Storage recording is currently supported only for recording based on motion events. Setting the camera(s) to continuous recording may produce unpredictable results.
<br>
Adding the camera to exacqVision:
- When adding your camera(s) to exacqVision on the ‘Add IP Cameras’ page, you must use HTTP. HTTPS connections are not currently supported with the Edge Storage feature on Axis cameras.
- On the ‘Add IP Cameras’ page, append the following string to the Hostname/IP Address entered for the camera: #transport=udp (e.g., 192.168.0.5#transport=udp). Note: Some camera firmware versions do not require this override. If the camera is not showing “Edge Storage Status: Supported”, remove this string and disable/enable the camera.
<br>
Axis firmware 7.x and above
- Login to the camera’s web interface.
- Enter the Settings menu.
- Go to the System tab.
- Open the Events page.
- Under the Action Rule List, click ‘Add…’
- Enter a name for the event such as ‘NetLossRecording’.
- Under the Trigger drop-down menu, the option to choose depends on the type of motion detection you are using.
For VMD1, which is traditional motion detection, select ‘Detectors’. In the menu below this, select ‘Motion Detection’ or ‘Motion Alarm’.
In the final drop-down menu, select the Motion Window to use.
For VMD4, select ‘Applications’. In the drop-down menu below this, select ‘VMD 4’.
If you have multiple profiles configured, choose the same one being used by exacqVision.
- Leave the Schedule set to ‘Always’.
- The stream profile must be set to the same stream that the exacqVision Server is recording.
- Check the duration boxes but leave them at their default settings.
- Change the Storage option to your SD card.
- Click ‘OK’.
- You may examine the SD card to confirm that the camera is saving recordings locally.
In firmware 7.x and above, this is done by minimizing the Settings menu so that the Storage icon appears at the bottom of the Live View.
- To confirm or test the Edge Storage function, you will need to simulate a network disconnection that does not cause the camera itself to lose power. When the camera connection is re-established, the server will log entries and begin to copy video files from the camera.
<br>
Axis firmware 6.x
- Login to the camera’s web interface.
- Enter the Setup page.
- Open Events > Action Rules.
- Click on ‘Add’.
- Enter a name for the event, such as ‘NetLossRecording’.
- Under the Trigger drop-down menu, the option to choose depends on the type of motion detection you are using.
For VMD1, which is traditional motion detection, select ‘Detectors’. In the menu below this, select ‘Motion Detection’ or ‘Motion Alarm’.
In the final drop-down menu, select the Motion Window to use.
For VMD4, select ‘Applications’. In the drop-down below this, select ‘VMD 4’.
If you have multiple profiles configured, choose the same one being used by exacqVision.
- Leave the Schedule set to ‘Always’.
- Under Actions, set the Type to ‘Record Video’.
- The stream profile must be set to the same stream that the exacqVision Server is recording.
- Check the duration boxes but leave them at their default settings.
- Change the Storage option to your SD card.
- Click ‘OK’.
- You may examine the SD card to confirm that the camera is saving recordings locally.
- To confirm or test the Edge Storage function, you will need to simulate a network disconnection that does not cause the camera itself to lose power. When the camera connection is re-established, the server will log entries and begin to copy video files from the camera.
<br>
How-to-Enable-Edge-Storage-on-Axis-Cameras.pdf
<br>
ExacqVision data files are stored with a specific file naming format. The format includes a four-hex-digit plug-in ID associated with the recording device. Following is a list of device and integration plug-in IDs (a small lookup sketch follows the table):
Access Control, Capture Boards and Cameras/Encoders
Name | Plugin ID | Description
XDVAPI | 0004 | eDVR capture board |
AXISIP | 0007 | IP camera |
IQEYE | 0011 | IP camera |
SONY | 0012 | IP camera |
PANASONIC | 0015 | IP camera |
ACTI | 0016 | IP camera |
IPCAMDETECT | 0017 | General IP camera detection |
ARECONT | 0018 | IP camera |
VIVOTEK | 0019 | IP camera |
ONVIFNVC | 001A | IP camera |
ARECONTTFTP | 0021 | IP camera |
IOIMAGE | 0022 | IP camera |
STARDOT | 0023 | IP camera |
BOSCH | 0024 | IP camera |
CANON | 0025 | IP camera |
IPX | 0026 | IP camera |
STRETCH | 0027 | Stretch and SDVR capture boards |
BASLER | 0028 | IP camera |
GANZ | 0029 | IP camera |
EXACQRTSP | 0030 | IP camera |
SANYO | 0031 | IP camera |
PELCO | 0032 | IP camera |
ILLUSTRA | 0034 | IP camera |
HIKVISION | 0035 | IP camera |
UDP | 0036 | IP camera |
DAHUA | 0040 | Dahua capture board |
HANWHA/SAMSUNG | 0041 | IP camera |
PIXELPRO | 0042 | IP camera |
ILLUSTRA_FLEX | 0045 | IP camera |
ILLUSTRA3 | 0046 | IP camera |
TDVR | 0047 | tDVR capture board |
DAHUAIP | 0049 | IP camera |
KANTECH | 004A | Access control |
ITV2 | 004B | Intrusion panel |
NEO | 004B | Intrusion panel |
DYNACOLOR | 004D | M-series |
HONEYWELL | 004E | Access control |
BENTEL | 004F | Intrusion panel |
BOSCHSEC | 0050 | Access control |
ANALYTIC | 0051 | Analytic appliance(s) |
DMP | 0052 | Intrusion panel |
CCURE | 0053 | Access control |
AXISBW | 0057 | Bodycam |
NAPCO | 0055 | Intrusion panel |
BRIVO | 0059 | Access control |
ILLUSTRAMULTI | 005A | IP camera |
ILLUSTRABW | 005B | Bodycam |
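For quick reference, the four-hex-digit ID found in a data file name can be matched against the table above. The following is a minimal lookup sketch; the script name is hypothetical, only a few IDs from the table are included, and the exact file-naming layout is not reproduced here:
#!/bin/bash
# plugin_lookup.sh (hypothetical name) - map a four-hex-digit plug-in ID to its name.
# Requires bash 4+ (associative arrays). Usage: ./plugin_lookup.sh 0007
declare -A PLUGINS=(
  [0004]="XDVAPI (eDVR capture board)"
  [0007]="AXISIP (IP camera)"
  [001A]="ONVIFNVC (IP camera)"
  [0041]="HANWHA/SAMSUNG (IP camera)"
  [005B]="ILLUSTRABW (Bodycam)"
)
id="${1^^}"   # normalize the ID to uppercase hex
echo "${PLUGINS[$id]:-Unknown plug-in ID: $id}"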
<br>
Description
Motherboard drives (including OS SSD) displaying “Red” on Storage Hardware tab
<br>
Platform
Windows A-Series with HBA (more than 4 drives)
<br>
Steps to reproduce
Shut down the Exacq server and remove power.
Note: Removing the power is the key step; a normal shutdown alone will not reproduce this issue.
<br>
Expected result
All drives show “Green” and Healthy on the Storage Hardware tab.
<br>
Actual result
Only the drives attached to the motherboard (including the OS SSD) show “Red” but Healthy on the Storage Hardware tab.
Note: All drives on the Storage Drive tab show Green and Healthy.
Note: The drives are good; the system continues recording to them while in this state. There is no known adverse effect to the customer.
<br>
Workaround
Rebooting the system will return the drives to Green and Healthy on the Storage Hardware tab.
<br>
Fix
Rebooting the system fixes the problem until the next time the system loses power. While the system is still in the state where the drives show offline, open the Windows Device Manager, expand Storage Controllers (it might be labeled IDE ATA/ATAPI Controllers), and disable the Standard Dual Channel PCI IDE Controller. Note that this controller is only listed while the drives are in the offline state; it will not appear after a reboot. If a reboot has already been performed, you must shut down the system, pull power, and then boot back up to disable the Standard Dual Channel PCI IDE Controller.
<br>
Motherboard-drives-displaying-Red-on-Storage-Hardware-tab.pdf
<br>
NOTE: This article assumes that you already know which drive number must be replaced, and that you have a replacement drive ready to install in the system.
NOTE: The photos in this article illustrate the replacement of Drive 3.
To replace a hard drive in an exacqVision A-Series 4U system manufactured on or after 07/30/2015, complete the following steps:
- Shut down the system and remove all power cables.
- Use a screwdriver to remove the screws that hold the system lid in place. Remove the lid.
- Locate the drive that needs to be replaced. Drives are labeled 0 through 7 (the Drive 3 label is circled in the blue oval in the following image).
Power and communication cables must be removed from all drives in the same cage — drives 0-3 or drives 4-7. Power cables are on top (circled in red) and communication cables are on the bottom (one example is shown in a green oval). Note the labels on the communication cables, as they must be plugged in later in the same order.
- Remove the screws that hold the cage in place. There are two screws on the side of the chassis and four screws on the bottom. For better access to the screws on the bottom, you can tilt the system on its side as shown. CAUTION: If you tilt the system on its side, hold the cage in with your hand as you remove the screws so that the cage does not fall out of the system.
<br>
- Slide the cage up and out of the chassis.
- Remove the screws that hold the drive in its cage. There are two screws on the bottom of the cage and two screws on top of the cage. Remove the drive from the cage.
<br>
- Slide the replacement drive into the empty slot in the cage. Secure it in the slot using the four screws removed in the previous step.
- Slide the cage back into the chassis and affix it with the six screws removed in step 4.
- Plug the communication cables into the drives. Note the labels on the cables and plug them in order according to the drive numbers.
- Plug the power cables (in any order) into the drives.
- Secure the system lid using the screws removed in step 2.
- Connect the power cable and start the system.
<br>
exacqVision-A-Series-4U-Drive-Replacement-Guide.pdf
<br>
The following considerations are for 32-bit Windows-based A-Series systems without a RAID controller:
- Desktop and 2U A-Series systems do not have a separate physical drive for the operating system. This means that Windows cannot be loaded on anything larger than 2.2TB. By default, Windows 7 installs with an MBR partition table; because MBR uses 32-bit sector addresses, its maximum addressable space with 512-byte sectors is 2^32 × 512 bytes, or about 2.2TB.
- UEFI is not supported for booting with 32-bit versions of Windows.
- The operating system needs its own partition (30-60GB) to use the rest of the drive as storage. If the drive containing the operating system is replaced, you will need to back up your settings and other important information, so plan accordingly.
- Although it is possible to use a storage drive larger than 2.2TB with a GUID Partition Table (GPT), we do not support mixed-capacity systems.
- A BIOS upgrade might be required to detect >2TB drives for older systems.
Bottom line: 2TB drives are the maximum supported upgrade for 32-bit Windows-based Desktop and 2U A-Series systems.
<br>
The following considerations are for 32-bit Windows-based A-Series and Z-Series systems with a RAID Controller:
- 4U A-Series systems (JBOD or RAID) use a separate boot partition created by the RAID controller. Windows detects it as a separate drive, so MBR may be used and the operating system may be installed there. If the drive containing the operating system is replaced, you will need to back up your settings and other important information, so plan accordingly.
- All Z-Series systems use a separate physical drive for the operating system. We use this separate drive exclusively to install the operating system. This drive should not have to be changed when upgrading the storage drives.
- With JBOD and RAID arrays larger than 2.2TB, GPT must be used to see the entirety of the drive.
- Although it is possible to add larger drives to an existing RAID array, the controller will base the capacity on the drive with the lowest capacity. Even if all drives are replaced with larger capacity drives, the controller will not automatically adjust to the larger size. You will need to destroy and re-create the RAID array, so plan accordingly.
- Also, it is possible to mix capacities with JBOD arrays. However, JBOD arrays are not fault-tolerant, and the existing data on the drive will be lost, so plan accordingly. More importantly, we do not support mixed-capacity systems.
- A BIOS upgrade and/or controller firmware update might be required to detect >2TB drives for older systems.
Bottom line: 6TB drives are the maximum supported upgrade for 32-bit Windows-based 4U A-Series and all Z-Series systems.
<br>
The following considerations are for 64-bit Windows-based A-Series and Z-Series systems:
- We use UEFI partitions and GPT with 64-bit Windows-based systems. This eliminates the 2.2TB limit of MBR partitioning on the operating system drive.
- However, if the drive containing the operating system is replaced, you will need to back up your settings and other important information, so plan accordingly.
- Although it is possible to add larger drives to an existing RAID array, the controller will base the capacity on the drive with the lowest capacity. Even if all drives are replaced with larger capacity drives, the controller will not automatically adjust to the larger size. You will need to destroy and re-create the RAID array, so plan accordingly.
- Also, it is possible to mix capacities with JBOD arrays or single-drive systems. However, JBOD arrays are not fault-tolerant, and the existing data on the drive will be lost, so plan accordingly. More importantly, we do not support mixed-capacity systems.
Bottom line: 6TB drives are the maximum supported upgrade for all 64-bit Windows-based systems.
<br>
The following considerations are for Linux-based LC, EL, ELS, ELX, ELP, A-Series, and Z-Series systems:
- All Linux systems use a separate physical drive for the operating system (except for LC, as explained below).
- We use either ext3 or ext4 file systems (depending on the Linux version). In either case, we use a maximum single-drive size of 16TB. For volumes larger than this (RAID systems), the storage must be split into smaller, equal partitions. Using the diskprep.sh script will achieve this split automatically (a manual sketch appears after this list).
- Although it is possible to add larger drives to an existing RAID array, the controller will base the capacity on the drive with the lowest capacity. Even if all drives are replaced with larger capacity drives, the controller will not automatically adjust to the larger size. You will need to destroy and re-create the RAID array, so plan accordingly.
- Also, it is possible to mix capacities with JBOD arrays or single-drive systems. However, JBOD arrays and single-drive systems are not fault-tolerant, and the existing data on the drive will be lost, so plan accordingly. More importantly, we do not support mixed-capacity systems.
- LC systems: LC systems do not have a separate boot drive. Before you upgrade the drive, back up the configuration and any other important information. Additionally, the data on the drive will be lost, so plan accordingly. The LC recovery image accounts for the single-drive setup, but it should not be used with other types of systems. LC also uses a different drive model than the other systems.
- A BIOS upgrade might be required to detect >2TB drives for older systems.
Bottom line: 6TB drives are the maximum supported upgrade for all Linux systems if all drives are replaced at the same capacity.
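For reference, the partition split that diskprep.sh performs can be approximated manually with parted. The following is a minimal sketch, assuming a hypothetical 32TB RAID volume at /dev/sdb to be split into four equal ext4 partitions; the device name and sizes are examples only, and these commands destroy any existing data on the volume:
# Label the volume GPT, then create four equal partitions by percentage.
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdb mkpart primary ext4 0% 25%
sudo parted -s /dev/sdb mkpart primary ext4 25% 50%
sudo parted -s /dev/sdb mkpart primary ext4 50% 75%
sudo parted -s /dev/sdb mkpart primary ext4 75% 100%
# Format each partition (ext4 shown; older releases may use ext3).
for n in 1 2 3 4; do sudo mkfs.ext4 /dev/sdb$n; done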
<br>
A note about drive speed:
- Depending on the age of the system, either the RAID controller or the SATA port on the motherboard may only support hard drives at 3 Gb/s (SATA II). While this does not prevent the motherboard from detecting the larger drive, it will reduce performance, as new drives purchased from exacqVision are 6 Gb/s (SATA III) drives.
- Some motherboards do not have SATA III connectors, and some have only two to four. Older RAID controllers support only SATA II speeds.
- For assistance on determining which motherboard you have and which connector to use, please contact Technical Support with your system’s serial number (number beginning with ER).
<br>
Best-Practices-When-Upgrading-Hard-Drives-on-exacqVision-Servers.pdf
<br>
Part One: Enter the 3ware BIOS
- Restart the operating system.
- When prompted, press Alt+3 to enter the 3ware RAID BIOS.
- Type admin256 as the password.
- Press any key to acknowledge the warning.
Part Two: Remove the Failed Array
- Select the failed RAID array by highlighting the array (Exportable Unit) and pressing the spacebar.
- Press Tab to enter the lower menu.
- Select Delete Unit and press Enter.
- Press Enter to confirm.
Part Three: Create the New Array
- Press Tab to navigate to the top of the list of drives listed as Exportable Unit. Highlight Direct Attached and press Enter. Each drive should have an asterisk (*) next to it.
- Press Tab to enter the lower menu.
- Navigate to Create Array and press Enter.
- Select each of the following sections (press Enter) and use the following settings:
- Array Name should be Exacq.
- RAID Configuration is 5 (if you have 3 to 8 drives) or 6 (if you have more than 8 drives).
- Stripe Size should be 256KB.
- Write Cache Setting should be Enabled.
- Read Cache Setting should be Intelligent.
- StorSave Profile should be Performance.
- Auto-Verify should be Enabled.
- Rapid RAID Recovery should be Fast Rebuild/Shutdown.
- Select Advanced, press Enter, and use the following settings:
- Boot Volume Size should be blank if you have a separate device for the OS.
- If you DO NOT have a separate device for the OS, Boot Volume Size should be 57.
- Drive Querying Mode should be Enabled.
- Continue on Error When Rebuild should be Disabled.
- Initialize Method should be Background.
- Navigate to OK and press Enter.
- Press Y to confirm cache settings.
- Confirm values are correct and press Enter.
- Press F8 to lock in settings.
- Press Y to save configuration and exit.
- You can now load the operating system per the OS Recovery Instructions (if needed).
<br>
Early versions of exacqVision systems with the Windows Embedded operating system did not include support for iSCSI. These systems were manufactured in January and early February 2014.
To determine whether a system supports iSCSI, complete the following steps:
- Open the Start menu.
- Right-click Computer.
- Select Manage from the pop-up menu.
- Double-click Services and Applications.
- Double-click Services.
- If Microsoft iSCSI Initiator Service is not running, start it.
- Click Device Manager.
- Expand the Storage Controllers node if necessary.
- If you do not see Unknown Device, your system supports iSCSI.
- If you do see Unknown Device, right-click it, select Properties, select Details, and select Hardware IDs. If the value is ROOT\ISCSIPRT, your system does not support iSCSI.
To enable iSCSI support on the system, complete the following steps:
- Download iSCSIPack.zip from https://exacq.com/files and unzip it to a flash drive or directly to the exacqVision system.
- On the exacqVision system, navigate to the directory containing the unzipped installation files.
- Double-click the install.bat file.
- The installer prompts for administrator rights, installs the files, and restarts the system.
<br>
When upgrading a Linux-based exacqVision server to exacqVision 5.8, the existing mount point for an active iSCSI-connected drive might not be recognized. To work around this issue, complete the following steps after the upgrade is complete:
<br>
- Using exacqVision Client 5.8, open the Storage page for the upgraded server.
- Select the Extended tab.
- Look for your iSCSI connection and corresponding mount paths. If they appear as expected, no further action is necessary. Otherwise, continue with the following step.
- Note the mount paths that appear on the Extended tab. For example, the tab might show /mnt/edvr/11/ (and three other mounts).
- On the Drive tab, deselect the recording drives listed on the Extended tab. Click Apply to disable recording to those mount paths.
- Ensure that the originally configured iSCSI mounts are still enabled for recording.
- On the server, run sudo /etc/init.d/edvrserver stop in Terminal.
- Use the mount command to determine the device name of the iSCSI mount point. The output will look similar to this:
/dev/sdb1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
/dev/sdc1 on /mnt/edvr/4 type ext4 (rw,_netdev,errors=remount-ro)
/dev/sdd1 on /mnt/edvr/5 type ext4 (rw,_netdev,errors=remount-ro)
- Note the /dev/sdxx device name that corresponds to the /mnt/edvr/x mount path from earlier in the procedure.
- Run blkid -o value -s UUID /dev/sdc1 (substituting your device’s name for /dev/sdc1) to determine the UUID for the device.
- Open the /etc/fstab file for editing. Find the entry that specifies the iSCSI UUID. Change the mount point in the entry to the pre-upgrade configuration. For example, if the UUID= entry contains /mnt/edvr/4, change the “4” to “2” (see the before-and-after sketch following these steps).
- Delete the fstab entry created for the iSCSI device before the upgrade. The file will have multiple entries for the mount point; keep the one specifying the UUID, and delete the other, which will look similar to /dev/sdc1 /mnt/edvr/2 ext4 _netdev,errors=remount-ro 0 0.
- Save all changes.
- Continue to edit the fstab file for each iSCSI drive on the system.
- Run sudo mount -a to reload the fstab file.
- In /usr/local/exacq/server, delete archivepi.xml and psfpi.xml.
- Run sudo /etc/init.d/edvrserver start.
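To illustrate the fstab edit described in the steps above, here is a hedged before-and-after sketch; the UUID and device name are placeholders, not values from a real system:
# Before: two entries exist for the iSCSI device after the upgrade.
UUID=1234abcd-0000-4000-8000-000000000000 /mnt/edvr/4 ext4 _netdev,errors=remount-ro 0 0
/dev/sdc1 /mnt/edvr/2 ext4 _netdev,errors=remount-ro 0 0
# After: only the UUID entry remains, pointed at the pre-upgrade mount path.
UUID=1234abcd-0000-4000-8000-000000000000 /mnt/edvr/2 ext4 _netdev,errors=remount-ro 0 0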
<br>
exacqVision Client should now display the correct mount paths on the Extended tab on the Storage page.
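To double-check from the server console, one optional command (the mount paths shown will vary per system):
# Confirm the active recording mounts match what the Extended tab shows.
mount | grep /mnt/edvr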
<br>