I noticed recently that my monthly archiveconfigs have been taking about 20 hours from start to finish. After some time working with technical support (case# 330696), we determined that the bottleneck seemed to be the gzip2 compression of the backup data on the appliance itself. In my case, I'm archiving about 260 GB of data, which then gets compressed to about 130 GB, and all of it ends up on a network file share. I appreciate the good compression ratio, but I'd like the option to dial the compression back to save time and processing power.
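To illustrate the kind of knob I'm asking for: gzip-style compressors already support levels from "fastest" to "best", and on large data the time difference between them is dramatic. Here's a rough sketch of how you'd measure that tradeoff (the sample file path is just a placeholder, not anything on the appliance):

```python
import gzip
import time

# Read a chunk of sample data; in my case the real archive is ~260 GB,
# so even small per-byte savings in compression time add up to hours.
with open("/tmp/archive-sample.bin", "rb") as f:  # placeholder path
    sample = f.read()

for level in (1, 6, 9):  # fastest, default, best compression
    start = time.perf_counter()
    compressed = gzip.compress(sample, compresslevel=level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(sample)
    print(f"level {level}: {elapsed:.2f}s, {ratio:.0%} of original size")
```

Exposing even a simple low/medium/high choice like this in the backup job config would let each admin pick the right balance for their hardware and storage.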
Some possible feature suggestions:
- Allow a compression level to be selected when configuring backup jobs, trading disk space for a faster archive.
- Add the ability to point the backup job to a LEM node by its agent name, and add compression functionality to the agent installed on the backup machine of choice. That way the backup files only have to be copied to that machine once, and compression can be performed locally without copying the data back and forth three times (a rough sketch of the idea follows this list).
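A minimal sketch of what the agent-side step could look like, assuming the raw backup has already landed on the agent machine (the paths and the compression level here are hypothetical placeholders, not anything LEM does today):

```python
import gzip
import shutil

# Hypothetical agent-side flow: the appliance pushes the raw backup to this
# machine once, and compression happens locally, so the compressed result is
# the only thing that ever needs to move again.
raw_path = "/backups/lem-archive.raw"            # placeholder: copied from the appliance once
compressed_path = "/backups/lem-archive.raw.gz"  # placeholder: stays on this machine

with open(raw_path, "rb") as src, \
     gzip.open(compressed_path, "wb", compresslevel=6) as dst:
    # Stream-compress in chunks; no network round trips during compression.
    shutil.copyfileobj(src, dst)
```

The key point is that the data crosses the network once instead of three times, and the CPU cost of compression moves off the appliance onto hardware I control.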