Local Storage Installation Requirements

This guide lists the specific requirements for installing the Virtuozzo Application Platform using the local storage scenario.

The following diagram shows the requirements for the platform’s local storage installation scenario, divided into four platform types based on their purpose:

  • PoC (Proof-of-Concept) - a platform used for feature demonstration or non-complex testing activities
  • Sandbox - non-production platform for testing purposes
  • Production - a public or private platform that is used for production purposes
  • High-Performance Production - a public cloud platform with extended performance capabilities for demanding end-users
Note: Depending on your needs, the production scenarios can be configured in either HA (doubled number of user hosts) or Capacity (increased per-host capacity) mode.

Virtuozzo PaaS local storage

Naturally, the overall requirements differ for each platform type.

Tip: Check the general concepts and information on other possible installation scenarios.

Server Requirements

Virtuozzo Application Platform is composed of two types of servers:

  • infrastructure (or just infra) hosts - where the platform services run
  • user hosts - where the users’ applications run

infrastructure and user hosts

The requirements for these servers are slightly different:

Infra Hosts

High availability is a mandatory requirement for each Virtuozzo Application Platform, so you must provide at least two servers to be used as infra hosts.

Note: In some cases, you can start with just one server instead of two:

  • for PoC/Sandbox platforms - in this case, the single host should provide double the recommended amount of CPU/RAM and disk resources
  • if you lack resources at the moment of installation (you will need to provide the second host before the commercial launch)

CPU

  • x86-64 platform with Intel VT-x or AMD-V hardware virtualization support (see the check below)
  • low-voltage CPUs (e.g., Intel Atom) are strongly discouraged due to poor performance
  • CPU clock frequency: 2.0 GHz+ for bare-metal or 2.4 GHz+ for VM-based hosts
Note: Prior to the platform 8.0.2 release, only up to 8 CPU sockets were supported due to a Virtuozzo licensing limitation.

PoC: At least 8 Cores / 16 Threads per infra host
Sandbox: At least 10 Cores / 20 Threads per infra host
Production: At least 12 Cores / 24 Threads per infra host
High-Performance Production: 16 Cores / 32 Threads per infra host
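To confirm that a candidate server meets these CPU requirements, you can run a quick check on the host (a minimal sketch assuming a standard Linux environment; grep and lscpu ship with any CentOS/RHEL 7 installation):

# Verify hardware virtualization support: vmx = Intel VT-x, svm = AMD-V
# (a non-zero count means the required CPU flags are present)
grep -Ec 'vmx|svm' /proc/cpuinfo

# Check sockets, cores, and threads against the sizing above
lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'

If the virtualization flags are missing on hardware that should support them, VT-x/AMD-V may simply be disabled in the BIOS/UEFI settings.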

RAM

  • dual/quad-channel memory is highly recommended
  • low-voltage, high-latency RAM modules are strongly discouraged
  • low-end and mid-range DDR3 is not suitable

PoC: At least 32 GB per infra host
Sandbox: At least 48 GB per infra host
Production: At least 64 GB per infra host
High-Performance Production: At least 96 GB per infra host

Network

PoC: External network - 100 Mbit, Internal network - 1 Gbit
Sandbox: External network - 1 Gbit, Internal network - 1 Gbit
Production: External network - 1 Gbit, Internal network - 10 Gbit
High-Performance Production: External network - 10 Gbit, Internal network - 25 Gbit. Network bonding is recommended for both networks.
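Where bonding is recommended, the interfaces can be aggregated with NetworkManager (a sketch assuming CentOS/RHEL 7 and LACP-capable switches; bond0, eth0, and eth1 are example names):

# Create an LACP (802.3ad) bond and enslave two physical interfaces
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad
nmcli con add type bond-slave con-name bond0-port1 ifname eth0 master bond0
nmcli con add type bond-slave con-name bond0-port2 ifname eth1 master bond0
nmcli con up bond0

The bonding mode should match what your switches support; active-backup is a common fallback when LACP is not available.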

Storage

  • low-end SSD (desktop grade) can be used for PoC only
  • high-performance SSD (datacenter grade) is recommended for Sandbox and Production platforms
  • NVMe disks are recommended for High-Performance Production clusters
  • local or SAN disks can be used; disks must belong to a single infra host only
  • in the case of SAN disks, multipathing is strongly recommended

Storage reliability and redundancy are required:

  • hardware RAID1 disk(s) are preferred
  • standard Linux partitioning with software RAID mirroring is recommended
  • LVM is not recommended (due to Virtuozzo limitations for LVM partitioning when the storage size is over 2 TB)
  • hardware RAID is highly recommended (MD RAID can still be used as an alternative)
  • at least ~4,000 IOPS is recommended
  • storage performance of the /vz volume is vital for the overall cluster performance; the block device has to sustain 500 MBps sequential read, 150 MBps sequential write, 16 MBps random read, and 4 MBps random write (see the fio sketch after this list). Using an SSD for the /vz partition is also recommended
  • for SATA/SAS devices, 6 Gbps throughput is strongly recommended
  • depending on the platform type being installed, you need to provide 100-150 GB of usable (already mirrored) storage for the operating system
  • for the /vz partition, you need to provide another 400-1000 GB of usable storage (using the same mirror/RAID1 volume is fine)
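One way to verify the /vz disk numbers before installation is a short fio run (a sketch: fio must be installed separately, and /vz/fio.test is an arbitrary test file that is removed afterward):

# Sequential read throughput (infra host target: 500 MBps)
fio --name=seqread --filename=/vz/fio.test --size=4G --bs=1M --rw=read \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based

# Random write IOPS (infra host target: ~4,000 IOPS at 4k blocks)
fio --name=randwrite --filename=/vz/fio.test --size=4G --bs=4k --rw=randwrite \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based

# Remove the test file when done
rm -f /vz/fio.test

Change --rw to write, randread, or randwrite as needed to cover the remaining targets.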

User Hosts

PoC / Sandbox: At least 1 user host node.

Production (HA): At least 6 low-performance servers for better high availability.
Production (Capacity): At least 3 high-performance hosts for better per-server capacity.
More servers can be added later to cover the growth in users/load.

High-Performance Production (HA): At least 6 high-performance servers for better high availability.
High-Performance Production (Capacity): At least 3 top-performance hosts for better per-server capacity.
If you lack resources during the installation, starting with just two servers is possible, but you’ll need to provide additional hosts before the commercial launch.

Note: For production platforms with the local storage installation scenario and extra demands on platform reliability, an additional spare host is recommended. It will be used for fast container restoration in case of an unexpected hardware failure.

CPU

  • x86-64 platform with Intel VT-x or AMD-V hardware virtualization support
  • low-voltage CPUs (e.g., Intel Atom) are strongly discouraged due to poor performance
  • CPU clock frequency: 2.0 GHz+ for bare-metal or 2.4 GHz+ for VM-based hosts
Note: Prior to the platform 8.0.2 release, only up to 8 CPU sockets were supported due to a Virtuozzo licensing limitation.

PoC: At least 12 Cores / 24 Threads per user host
Sandbox: At least 16 Cores / 32 Threads per user host

Production (HA): At least 16 Cores / 32 Threads per user host
Production (Capacity): At least 24 Cores / 48 Threads per user host

High-Performance Production (HA): At least 24 Cores / 48 Threads per user host
High-Performance Production (Capacity): 40 Cores / 80 Threads per user host

RAM

  • dual/quad-channel memory is highly recommended
  • low-voltage, high-latency RAM modules are strongly discouraged
  • low-end and mid-range DDR3 is not suitable
  • DDR4 is recommended

PoC: At least 48 GB per user host
Sandbox: At least 64 GB per user host

Production (HA): At least 64 GB per user host
Production (Capacity): At least 128 GB per user host

High-Performance Production (HA): At least 128 GB per user host
High-Performance Production (Capacity): At least 256 GB per user host

Network

PoC: External network - 100 Mbit, Internal network - 1 Gbit
Sandbox: External network - 1 Gbit, Internal network - 1 Gbit
Production: External network - 1 Gbit, Internal network - 10 Gbit
High-Performance Production: External network - 10 Gbit, Internal network - 25 Gbit. Network bonding is recommended for both networks.

Storage

  • low-end SSD (desktop grade) can be used for PoC only
  • high-performance SSD (datacenter grade) is recommended for Sandbox and Production platforms
  • NVMe disks are recommended for High-Performance Production clusters
  • local or SAN disks can be used; disks must belong to a single user host only
  • in the case of SAN disks, multipathing is strongly recommended

Storage reliability and redundancy are required:

  • hardware RAID1 disk(s) are preferred
  • standard Linux partitioning with software RAID mirroring is recommended
  • LVM is not recommended (due to Virtuozzo limitations for LVM partitioning when the storage size is over 2 TB)
  • hardware RAID is highly recommended (MD RAID can still be used as an alternative)
  • at least ~8,000 IOPS is recommended
  • storage performance of the /vz volume is vital for the overall cluster performance; the block device has to sustain 600 MBps sequential read, 250 MBps sequential write, 24 MBps random read, and 8 MBps random write (the fio sketch in the Infra Hosts section above applies here as well). Using an SSD for the /vz partition is also recommended
  • for SATA/SAS devices, 6 Gbps throughput is strongly recommended
  • depending on the platform type being installed, you need to provide 100-150 GB of usable (already mirrored) storage for the operating system
  • for the /vz partition, you need to provide another 800-4000 GB of usable storage (using the same mirror/RAID1 volume is fine)

Sizing rules and recommendations for the user containers' file system:

  • one user container occupies from 1.5 GB to 2.2 GB; therefore, to provide the required space for about 1000 containers per host node, you will need at least ~2 TB, plus another 1-2 TB of space for user data inside the containers
  • the usual recommendation is to have 1.5-3 TB or more of usable storage per user host
  • you can consider the “grow /vz fs as you grow” scenario, starting with a 1000+ GB storage size (see the sketch below); please consult the Operations Team in this case
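Since /vz is a plain ext4 file system, it can be grown online once the underlying block device has been enlarged (a sketch under stated assumptions: /dev/sdb and partition 1 are example names, and growpart comes from the cloud-utils-growpart package):

# Extend partition 1 of /dev/sdb to fill the enlarged device
growpart /dev/sdb 1

# Grow the mounted ext4 file system to the new partition size
resize2fs /dev/sdb1

ext4 supports online growing, so /vz does not need to be unmounted for this.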

Running Platform on Virtual Machines

The platform uses Virtuozzo as the underlying virtualization technology, which allows running infra and user hosts on virtual machines. The following virtualization technologies are compatible with the Virtuozzo Application Platform:

  • KVM
  • VMware ESXi
  • Virtuozzo VM
  • Microsoft Hyper-V
Note: Both bare-metal servers and VMs can be used for production deployment, but bare-metal servers usually provide better performance.

Operating System Requirements

Common

CentOS 7, RHEL 7, or Virtuozzo 7 should be installed on all infrastructure and user hosts. The hosts will then be redeployed into Virtuozzo 7, preserving the mandatory system configuration files. Please note that the partitions associated with the /boot, /rootfs, and /vz mount points will be formatted, and all data on them will be lost.

Partitioning

Below are partitioning recommendations for the Virtuozzo Server, covering the storage requirements and partitioning of VZ-based infrastructure and user hosts (an illustrative kickstart sketch follows at the end of this section).

Storage for operating system partitions:

  • /boot - 2 GB, ext4
  • / - 70-100 GB (70 GB minimum, 100 GB recommended), ext4
  • swap - depends on RAM:
    • 4-8 GB - the swap size is equal to the RAM size
    • 8-64 GB - the swap size is half the RAM size
    • 64+ GB - the swap size is 32 GB

Storage for infra containers on the infrastructure hosts:

  • should be a single ext4 file system, mounted as /vz
  • allocate all the remaining storage (after the creation of the /, /boot, and swap partitions) to this /vz file system

Storage for user environments on the user hosts:

  • should be a single ext4 file system, mounted as /vz
  • sizing rules can be seen in the User Hosts section above
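As an illustration, the layout above can be expressed as kickstart-style partitioning (a sketch assuming the CentOS/RHEL 7 installer, a single mirrored volume, and a host with 64+ GB of RAM; adjust sizes per the swap rules above):

# Kickstart partitioning example (sizes in MiB)
part /boot --fstype=ext4 --size=2048
part /     --fstype=ext4 --size=102400   # 100 GB recommended for the root fs
part swap  --size=32768                  # 64+ GB RAM -> 32 GB swap
part /vz   --fstype=ext4 --grow          # all remaining space goes to /vz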

Other

The server timezone must be set to UTC during the platform installation and must not be changed afterward.
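On CentOS/RHEL 7, the timezone can be set and verified with systemd’s timedatectl:

# Set the timezone to UTC and confirm the change
timedatectl set-timezone UTC
timedatectl status | grep 'Time zone'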

Additional Recommendations

Network

  • All servers should have at least two network interfaces: a WAN interface with the public IP address and a LAN interface connected to the managed port switch.
  • The internal (LAN) network should operate at 1 Gbps speed or faster.
  • The allocated internal network subnet mask should be at least /20 (however, /8 or /16 is recommended). Extended subnet requirements are listed below:
    • at least /20 for the default region with /26 for infrastructure and hosts
    • at least /22 for the remote regions with /29 for infrastructure and /28 for hosts
Note: The 10.0.0.0/24 network range is reserved for NGINX HA applications and should never be used for hosts and infra/end-user containers.
  • External (WAN) connection should provide at least 100 Mbps; 1 Gbps speed is recommended.
  • Each host node (both user and infra types) must have a public IP address assigned to the external (WAN) connection.
  • 2 or more public IP addresses for the Shared Load Balancers.
  • One public IP for the Patcher sub-platform.
  • Optionally, 2 or more public IP addresses for the platform SSH Gate.
  • Additional public IP addresses for end-user containers.
  • All outbound traffic should be unblocked.
  • Firewalls should be configured - please contact Operations Team for details.

DNS

DNS zone delegation must be already configured:

  • Both domain names, infra-domain.hosterdomain.com and user-domain.hosterdomain.com, should be delegated to platform SLBs.
  • DNS server names and addresses:
    • ns1.infra-domain.hosterdomain.com and ns2.infra-domain.hosterdomain.com
    • ns1.user-domain.hosterdomain.com and ns2.user-domain.hosterdomain.com
    • 2 IP addresses are allocated for these DNS servers (see the Network section above)
  • Zone records example (make sure it is part of the file for the parent zone hosterdomain.com) - check the four glue records below:
infra-domain.hosterdomain.com.     IN NS ns1.infra-domain.hosterdomain.com.
infra-domain.hosterdomain.com.     IN NS ns2.infra-domain.hosterdomain.com.
ns1.infra-domain.hosterdomain.com. IN A  1.1.1.1 ; glue records, in case
ns2.infra-domain.hosterdomain.com. IN A  2.2.2.2 ; they are needed

user-domain.hosterdomain.com.      IN NS ns1.user-domain.hosterdomain.com.
user-domain.hosterdomain.com.      IN NS ns2.user-domain.hosterdomain.com.
ns1.user-domain.hosterdomain.com.  IN A  1.1.1.1 ; glue records, in case
ns2.user-domain.hosterdomain.com.  IN A  2.2.2.2 ; they are needed

Make sure your DNS servers do not contain an SOA record (i.e., a locally hosted zone) for either domain (infra-domain.hosterdomain.com or user-domain.hosterdomain.com) - otherwise, the delegation will not work properly.
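Once the records are in place, delegation can be verified from any machine with dig available:

# Both queries should return the ns1/ns2 names defined above
dig +short NS infra-domain.hosterdomain.com
dig +short NS user-domain.hosterdomain.com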

SSL

Wildcard SSL certificates for both selected DNS domains and all of their subdomains must be provided: infra-domain.hosterdomain.com, *.infra-domain.hosterdomain.com, user-domain.hosterdomain.com, and *.user-domain.hosterdomain.com.
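A provided certificate can be inspected for the required wildcard entries with openssl (wildcard.crt is an example file name):

# The Subject Alternative Name list should include both the bare domain and the wildcard
openssl x509 -in wildcard.crt -noout -text | grep -A1 'Subject Alternative Name'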

Uploader Storage

Uploader storage is a file system mounted via NFS or imported as SCSI LUN (for example, with iSCSI) with an ext4 file system on top of it. Uploader storage, in some cases, can be shared for use with the Docker templates cache storage.

  • external shared storage for the file system is recommended
  • it is possible (but not recommended) to run this file system on a local disk of one of the infrastructure hosts
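For the NFS variant, the mount is an ordinary fstab entry (a sketch; the server name, export path, and mount point are examples):

# /etc/fstab - mount the uploader storage over NFS at boot
nfs.example.com:/export/uploader  /mnt/uploader  nfs  defaults,_netdev  0 0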

Docker Templates Cache Storage

Docker templates cache storage is a file system imported as SCSI LUN (for example, with iSCSI) with an ext4 file system on top of it. Docker templates storage, in some cases, can be shared for use with the Uploader storage.

  • external shared storage for the file system is recommended
  • it is possible (but not recommended) to run this file system on a local disk of one of the infrastructure hosts
  • all data stored at the Docker templates cache storage is volatile, so no redundancy or backups of the cache contents are required; however, cluster-wide Docker environment creation will become unavailable if this storage fails

Operating System Settings

Virtuozzo Application Platform requires an account with the user ID and group ID set to 0 (the root account) on the VZ-based hosts. Password-based authentication has to be enabled for this account on all of the VZ-based host nodes.
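Assuming the hosts are reached over SSH (an assumption, as the access method is not stated above), the relevant /etc/ssh/sshd_config directives would be:

# Allow root login with a password
PermitRootLogin yes
PasswordAuthentication yes

After editing the file, apply the change with systemctl reload sshd.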

What’s next?