
Notes on Dell Compellent (SC Series) storage arrays

Dell SC Series/Compellent storage arrays make heavy use of storage virtualization features, and that can sometimes lead to a behavior that is opaque to the user and difficult to predict. Here are some notes on their inner workings that may help administrators manage them more effectively.

Disk space allocation

Compellent arrays only allocate space for blocks when you fill them with non-zero data. This means that the following operations DO NOT take space on the array:

- Creating a volume

- Formatting it with almost any possible file system

- Zeroing a file (for example, dd if=/dev/zero ... on Linux or creating an eager-zeroed VMDK in VMware)

That means, for example, that thin, thick, and eager-zeroed thick provisioned VMDKs in VMware allocate physical disk space on the array in (almost) the same way - you can safely use thick provisioning for everything.
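A minimal sketch of this allocation model (illustrative only, not Dell code): the array tracks pages per volume and backs a page with physical space only when the data written to it contains at least one non-zero byte. Zero-filled writes, like `dd if=/dev/zero`, never consume pages.

```python
class ZeroDetectingVolume:
    """Toy model of zero-detection allocation: pages full of zeros
    are tracked logically but consume no physical space."""

    def __init__(self, page_size=4096):
        self.page_size = page_size
        self.pages = {}  # page index -> bytes (only non-zero pages stored)

    def write(self, page_index, data):
        # Only non-zero data allocates a physical page on the array.
        if any(data):
            self.pages[page_index] = bytes(data)
        else:
            # Zeroed page: drop any previous allocation, store nothing.
            self.pages.pop(page_index, None)

    def physical_usage(self):
        """Physical bytes actually backed on the array."""
        return len(self.pages) * self.page_size


vol = ZeroDetectingVolume()
vol.write(0, b"\x00" * 4096)   # zero fill: allocates nothing
vol.write(1, b"real data")     # non-zero: allocates one page
print(vol.physical_usage())    # only the non-zero page counts
```

This is why formatting a volume or eager-zeroing a VMDK costs (almost) nothing on the array: both operations write mostly zeros.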

Storage tiers, RAID levels and rebalancing

Disk space for each storage tier is allocated dynamically. That means it is perfectly normal for the usage statistic of a storage tier to always be close to 90%; it's working as intended.

By default, inside each storage tier, you will have a RAID 10 and a RAID 9 level. When you write to the array, this happens:

- The array tries to write to the RAID 10 level

- If the RAID 10 is full, the array writes to the RAID 9

- If the RAID 9 is full, the array tries to write to the next available (lower) storage tier. First to RAID 10, then to RAID 9.
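The cascade above can be sketched as follows (an illustrative model, not Dell code; tier and level names follow the text):

```python
def place_write(tiers):
    """Simulate where a write lands on the array.

    tiers: ordered list of dicts, fastest tier first; each dict maps a
    RAID level name to the number of free pages remaining in it.
    Returns (tier_index, raid_level) for the page used, or None if
    every level in every tier is full (array must allocate more space).
    """
    for i, tier in enumerate(tiers):
        for raid in ("RAID 10", "RAID 9"):  # preferred level first
            if tier.get(raid, 0) > 0:
                tier[raid] -= 1
                return (i, raid)
    return None


# Tier 1 has one free page in each level; tier 2 has one in RAID 10.
tiers = [{"RAID 10": 1, "RAID 9": 1}, {"RAID 10": 1, "RAID 9": 0}]
print(place_write(tiers))  # lands in tier 1, RAID 10
print(place_write(tiers))  # tier 1 RAID 10 now full: falls to RAID 9
print(place_write(tiers))  # tier 1 full: spills to tier 2, RAID 10
print(place_write(tiers))  # everything full: None
```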

If all the RAID levels on all the storage tiers are full or near full, the system will allocate more disk space. It is perfectly common and intended behavior for a storage type to reach 100% usage; the array will just allocate more disk space to it.

Forcing writes directly to a certain RAID level by using custom storage profiles is discouraged.

Recovering from read-only

If the array reaches 100% usage on all storage types/RAID levels and is unable to allocate more space, it stops accepting writes from the connected servers. The only ways to recover are to:

- Add disks

- Delete snapshots

- Delete volumes
