Hardware Regions

Click on the Regions option in your admin panel’s Cluster menu section to see the hardware structure of the platform:

  • Regions (or hardware regions) - independent hardware sets from different data centers; each region can contain multiple host groups
  • Host Groups (or environment regions) - a separate set of servers (hosts) within the confines of a particular region with its own options, efficiency, and rules for resource charging
Note: Hardware regions aren’t visible to end-users. The user dashboard operates with host groups (availability of each group can be configured separately).

PaaS hardware regions

Here, all the crucial information on Regions is displayed through the following columns:

  • Name of a hardware region or comprised host group(s)
  • Domain assigned to the region
  • SSL certificates configuration for the hardware region
  • Subnet provided for the region
  • Migration shows whether users are allowed to migrate environments from/to the current hardware region
  • Status of a region/host group (could be either ACTIVE or under MAINTENANCE)
  • Comment with some optional information on a region or host group
  • Docker Host address of the hardware region

Tip: If you want to benefit from providing multiple regions, read the appropriate documentation before applying any changes.

Use the tools panel above the regions list to perform the operations described in the sections below.

Also, you can select a particular region to view its detailed information and manage domain names.

Add New Region

Follow the next steps to add a new hardware region to your PaaS cluster:

Note: Before adding a new region, consider the following prerequisites:

  • hosts must be configured according to hardware requirements
  • at least two internal and two external IPs must be reserved for shared load balancers (resolvers)
  • the new region domain must be delegated to the IPs from the previous point, according to DNS Zones Delegation
  • firewall should be checked and, if necessary, set up
  • In order to set up the internal and external networks in the new region, you need to fulfill the following two conditions:
    • Internal and external interfaces on the server should not interconnect. You can connect these interfaces to different physical switches in the data center or separate them on the data center network level (Vlan).
    • Every host’s internal and external network interface must be configured in the same segment of the data center network (Vlan or Vrack). For example, all internal interfaces of the region - vlan1, all external - vlan2 (i.e. no interconnection).

1. Click the Add Region button at the top pane of the Regions section:

add region

Within the opened Add Region frame, you need to fill in the required details.

2. Within the first Region Setting section, specify the following information:

  • Unique Name - unique identifier for the region (cannot be changed later)

  • Display Name - changeable region alias, which is displayed in JCA (10 characters max)

  • Domain - hostname assigned to a new region

    Note: The appropriate domain name should be purchased beforehand using any preferred domain registrar.

  • Status - the initial state should be set as MAINTENANCE to avoid false monitoring alerts during region addition

  • Subnet - a dedicated internal subnet for the user nodes and traffic routing between different hardware regions

  • Start and End IP - range of the IP addresses for containers created in this region (cannot exceed the specified subnet)

  • Comment - short information on the current hardware region displayed in the admin panel (optional)

  • Allow migration from/to regions - tick the checkbox to allow environment migration from/to this region by end-users

    Note: This parameter controls the permission for migration across different hardware regions; however, transferring between host groups of the same region cannot be disabled.

3. In the Name Servers section, specify a pair (or several pairs) of Public IPv4 and Internal IPv4 addresses. These addresses will be used by shared load balancers as the region entry point and, at the same time, as its internal and external DNS servers.
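
You can verify that the region domain is actually delegated to these addresses with the standard dig utility (a minimal sketch; the domain and name server names below are placeholders):

dig +short NS newregion.example.com
dig +short A ns1.newregion.example.com

Each listed name server should resolve to one of the public IPv4 addresses specified in this section.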

4. The last Docker Host Settings section configures a separate Docker Engine module for this particular hardware region:

  • Host - domain or IP of your Docker Host
  • SSH and TCP Port - ports for connections via the appropriate protocols
  • Login and Password - access credentials for the Docker Host
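
Before confirming, it may be worth checking that the Docker Host is reachable on the specified ports. A minimal sketch, assuming the hostname docker.example.com, the default SSH port 22, and the Docker Engine API exposed unencrypted on TCP port 2375 (adjust all of these to your actual values):

ssh -p 22 root@docker.example.com 'docker version'
curl -s http://docker.example.com:2375/version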

Once all the settings are specified, confirm the creation by clicking the Add button.

Add New Host Group

To add a new host group, follow the instructions below.

1. Click the Add Host Group button at the top of the Regions panel.

2. Within the opened Add Host Group dialog, fill in the given fields on the required Basic Data tab:

  • Unique Name - unique identifier for the host group (cannot be changed later)
  • Display Name - changeable host group name displayed in the admin panel and in the end-users' dashboard (10 characters max)
  • Status - initial state of the host group, i.e. the one set after creation (ACTIVE or MAINTENANCE)
  • Comment - short information on the current host group displayed in the admin panel (optional)
  • Region - hardware region this host group should be assigned to (use the drop-down list to select an existing one or to jump to the Add Region dialog)
  • Virtual Network Group (VNG) - a value used for the virtual network grouping (host groups with the same VNG will have the same configs)
    For example: You have two regions (for infrastructure containers and user applications), which are physically in the same data center. If you set “Virtual Network Group (VNG)” for all host groups of both regions to the same value, the virtual network configs will be created as if for the same region.

add host group basic data

Optionally, provide additional information via the Advanced Settings tab:

  • Datacenter
    • Country – choose the country of the datacenter
    • City – specify the city of the datacenter
    • Geo Coordinates – set latitude and longitude coordinates of datacenter in the WGS84 decimal degrees format
    • Vendor – specify the datacenter’s vendor name
  • Dashboard Settings
    • Icon 16x16 – provide a URL or use the Browse button to select from the platform resources folder; if not provided, the country flag (based on the country selected above) is used by default
    • Short Description – write a single-line description
    • Description (markdown) - prepare a detailed description for the host group
Tip: Provided descriptions will be displayed in the dashboard if there are no dedicated records for the host group in the localization files.

add host group advanced settings

Click Add to proceed.

3. Next, you need to set up internal routing between regions by following the steps described in the Internal Routing between Regions section below. Skip this step if it is already configured.

4. Add a host to this newly created host group.

4.1. Check /etc/vz/vz.conf. If the VE_ROUTE_SRC_DEV parameter is commented out or indicates an incorrect device, fix the issue and save the file. It should specify the interface name of the server’s internal network.
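
A quick way to inspect the parameter (br1 below is only an assumed name for the internal interface):

grep VE_ROUTE_SRC_DEV /etc/vz/vz.conf
# expected (uncommented) result similar to:
# VE_ROUTE_SRC_DEV="br1"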

4.2. If your DOCKER_HOST is on the docker-engine host and you deploy a vz7 host, run the following command to add the exclusion line to the /etc/yum.conf file:

echo 'exclude=docker-ce' >> /etc/yum.conf

4.3. Check the routes from the new region to infra/user hosts in this and other regions. These routes can be set automatically via the bird daemon.
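
For example, to see which route and device are used to reach a host in another region (the IP below is a placeholder for an infra/user host internal address):

ip route get 10.30.0.11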

4.4. Start the host installation via JCA.

5. Configure shared load balancers (SLB).

5.1. Add a region network to the jelastic.net.subnetworks system settings in JCA.

5.2. Add SLB IPs (both external and internal) to the jelastic.isolation.infra.ips and jelastic.isolation.infra.ips.all system settings in JCA. If isolation is enabled on the platform, you need to disable and re-enable it to apply these new settings.

5.3. In order to create a shared load balancer for the new region, connect to a new host and create the config.ini file:

[general]
VAR_JELASTIC_NETWORK=${PLATFORM_NETWORK}
VAR_JELASTIC_DOMAIN_ZONE=${PLATFORM_DOMAIN}
[zookeeper]
CTID=300
IPS=${ZK_INT_IP}
[jelastic-db]
CTID=301
IPS=${DB_INT_IP}
[resolver5]
CTID=$new_resolver_CTID
IPS=${RSLV_EXT_IP} ${RSLV_INT_IP}
DOMAIN=${REGION_DOMAIN}

Here, ${PLATFORM_NETWORK} is the primary platform network of the main region.

5.4. Download the create_docker.sh script.

wget -qO- http://dot.jelastic.com/download/graf/migration/migration_scripts-5.4.tar.gz | tar -xz

Edit it to specify the platform version in the DOCKER_VERSION=""; line.

For example, if deploying a region to PaaS 5.9-3, set it as follows: DOCKER_VERSION="5.9-3";
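
For instance, the value can be set non-interactively with sed (the version string is the one from the example above):

sed -i 's/DOCKER_VERSION="";/DOCKER_VERSION="5.9-3";/' create_docker.sh
grep DOCKER_VERSION create_docker.sh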

5.5. Run the script to create a new shared load balancer.

./create_docker.sh resolver5

Add all regions' networks to this SLB via the /var/lib/jelastic/customizations/ipconfig.cfg file.

5.6. Update ZooKeeper environment variables (/.jelenv) by adding shared load balancer’s internal IP to OPT_JELASTIC_IPS and new network to JELASTIC_NETWORK. Restart the ZooKeeper service to apply changes:

systemctl restart zookeeper.service
systemctl status zookeeper.service

5.7. Fix nameservers for SLB containers.

vzctl set $new_resolver_CTID --nameserver @resolver1_internal_ip@ --nameserver @resolver2_internal_ip@ --nameserver 8.8.8.8 --save

5.8. Check all infrastructure containers and manually add region network and routes.

Note: Starting with the 6.3 release, the JRouter configurations should be performed on the HCore instances.
For example, suppose 10.100.0.0/16 is the internal network of the new region, and 10.100.1.31/32, 10.100.1.32/32 are the resolver IPs.

Check and add iptables/routes to infra containers:

CT 300 [zookeeper]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m tcp --dport 2181 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m tcp --dport 2181 -j ACCEPT

CT 301 [jelastic-db]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m tcp --dport 3306 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m tcp --dport 3306 -j ACCEPT

CT 304 [gate]
Firewall:
iptables -A LAN -s 10.100.1.31/32 -j ACCEPT
iptables -A LAN -s 10.100.1.32/32 -j ACCEPT

Routes (in case the gate has an external IP):
Was:
CT-304-bash-4.2# cat /etc/sysconfig/network-scripts/route-venet0
192.168.0.0/16 dev venet0 src 192.168.1.55

Now:
CT-304-bash-4.2# cat /etc/sysconfig/network-scripts/route-venet0
192.168.0.0/16 dev venet0 src 192.168.1.55
10.100.0.0/16 dev venet0 src 192.168.1.55

ip r a 10.100.0.0/16 dev venet0 scope link src 192.168.1.55

Add all networks to /var/lib/jelastic/customizations/ipconfig.cfg

CT 308 [jrouter]
iptables -A INPUT -s 10.100.0.0/16 -p tcp -m multiport --dports 80,443,8080,8081 -j ACCEPT
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m multiport --dports 21,6010:6020 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m multiport --dports 21,6010:6020 -j ACCEPT

CT 314 [awakener]
iptables -A HTTP_INTERNAL -s 10.100.1.31/32 -j ACCEPT
iptables -A HTTP_INTERNAL -s 10.100.1.32/32 -j ACCEPT

CT 315 [uploader]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m multiport --dports 80,443,8080,8081 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m multiport --dports 80,443,8080,8081 -j ACCEPT

CT 317 [zabbix]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m tcp --dport 80 -j ACCEPT

CT 318 [webgate]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m multiport --dports 80,443,8080,8743 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m multiport --dports 80,443,8080,8743 -j ACCEPT

5.9. Check the connection between the SLB and database containers via port 3306.
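
For example, you can probe the port from the SLB container with a generic TCP check (the database container IP below is a placeholder):

nc -zv 10.103.1.3 3306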

Tip: If there is no connection, you can temporarily add the following route to the containers:

ip r add {db_network} via {slb_ip} dev venet0

Here, {db_network} is the database container network (e.g. 10.100.0.0/16), and {slb_ip} is the SLB internal address (e.g. 10.103.1.2).

Run service discovery:

jem docker run --ctid $new_resolver_CTID

Check the results in /vz/root/$new_resolver_CTID/var/log/discovery.log and, if everything is OK, disable discovery:

jem docker addenv --env "SKIP_DISCOVERY=MQ==" --ctid $new_resolver_CTID

5.10. If you are adding new shared load balancer(s) to the current region, add the appropriate IPs to the virtual_common.conf file inside the HCore containers:

CT-314# cat /etc/nginx/conf.d/virtual_common.conf | grep resolver
resolver 192.168.0.1 192.168.0.2 new_resolver_IP1 new_resolver_IP2;
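
After editing, it is worth validating and reloading nginx inside the container (a generic nginx workflow, not platform-specific):

nginx -t && nginx -s reload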

6. Provide Let’s Encrypt SSL certificates via JCA.

7. If needed, apply customizations and run J-runner tests for the new region.

8. Synchronize new SLBs in the patcher.

Under the patcher account, you need to run the “Sync infra components” and “Zabbix updater” JPS scripts from the Marketplace. This will automatically add the new SLBs to the Zabbix server.

9. Finally, assign the host group to the appropriate user Groups via the Regions & Pricing tab.

host groups availability

Afterward, your host group will appear in the topology wizard of the Dev dashboard as a new environment region.

Internal Routing between Regions

Tip: This section provides steps to set up internal routing between regions. Skip if already configured.

To connect a new region with the current platform, you need to configure GRE + IPsec tunnels and set up internal routing with the BIRD daemon.

1. Set up GRE tunnels.

You should perform the following configurations between two infrastructure hosts and the first two user hosts in the region (for high availability).

List of the tunnels to be configured:

infra1.platform.domain - user1.platform.domain
infra1.platform.domain - user2.platform.domain
infra2.platform.domain - user1.platform.domain
infra2.platform.domain - user2.platform.domain

1.1. Create a configuration file with corresponding hostnames and IP addresses (on all the hosts participating in GRE tunneling):

cat << CONFEOF > /root/gre.conf.ip.data
infra1.platform.domain            01     ovh     100.127.255.11  Ext_IP_infra1
infra2.platform.domain            02     ovh     100.127.255.12  Ext_IP_infra2
user1.platform.domain             01     usr     100.127.255.21  Ext_IP_user1
user2.platform.domain             02     usr     100.127.255.22  Ext_IP_user2
CONFEOF

Here:

  • 1st column - list of valid hostnames assigned to the hosts that are participating in GRE tunneling
  • 2nd column - index number of the host
  • 3rd column - abbreviated location of the hosts (e.g. OVH = ovh, User = usr); don’t use more than three letters, only lowercase with no special symbols
  • 4th column - inner GRE IP addresses (preferably from the 100.127.255.1-100.127.255.254 range)
  • 5th column - external IPv4 addresses of the hosts (obtain with the "ip r g 1 | awk {'print $NF'} | head -n 1" command)

1.2. Generate configs using the script and start the GRE link interfaces. Execute commands listed below on the required hosts:

  • the first infra host
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh infra1.platform.domain user1.platform.domain
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh infra1.platform.domain user2.platform.domain

Start GRE tunnels.

ifup link01ovh-usr01
ifup link01ovh-usr02

Disable reverse path filtering for the host.

[ -f /etc/sysctl.conf ] || >/etc/sysctl.conf
cat << CONFEOF >> /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.br1.rp_filter = 0
net.ipv4.conf.link01ovh-usr01.rp_filter = 0
net.ipv4.conf.link01ovh-usr02.rp_filter = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
fs.odirect_enable = 1
CONFEOF
sysctl -p
  • the second infra host
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh infra2.platform.domain user1.platform.domain
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh infra2.platform.domain user2.platform.domain

Start GRE tunnels.

ifup link02ovh-usr01
ifup link02ovh-usr02

Disable reverse path filtering for the host.

[ -f /etc/sysctl.conf ] || >/etc/sysctl.conf
cat << CONFEOF >> /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.br1.rp_filter = 0
net.ipv4.conf.link02ovh-usr01.rp_filter = 0
net.ipv4.conf.link02ovh-usr02.rp_filter = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
fs.odirect_enable = 1
CONFEOF
sysctl -p
  • the first user host
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh user1.platform.domain infra1.platform.domain
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh user1.platform.domain infra2.platform.domain

Start GRE tunnels.

ifup link01usr-ovh01
ifup link01usr-ovh02

Disable reverse path filtering for the host.

[ -f /etc/sysctl.conf ] || >/etc/sysctl.conf
cat << CONFEOF >> /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.br1.rp_filter = 0
net.ipv4.conf.link01usr-ovh01.rp_filter = 0
net.ipv4.conf.link01usr-ovh02.rp_filter = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
fs.odirect_enable = 1
CONFEOF
sysctl -p
  • the second user host
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh user2.platform.domain infra1.platform.domain
wget -q http://dot.jelastic.com/download/team.operations/gen_gre_config.sh -O /tmp/gen_gre_config.sh; bash /tmp/gen_gre_config.sh user2.platform.domain infra2.platform.domain

Start GRE tunnels.

ifup link02usr-ovh01
ifup link02usr-ovh02

Disable reverse path filtering for the host.

[ -f /etc/sysctl.conf ] || >/etc/sysctl.conf
cat << CONFEOF >> /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.br1.rp_filter = 0
net.ipv4.conf.link02usr-ovh01.rp_filter = 0
net.ipv4.conf.link02usr-ovh02.rp_filter = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
fs.odirect_enable = 1
CONFEOF
sysctl -p

The hosts at both ends should successfully ping each other (use the IP addresses from the 4th column) to ensure that all links are set up correctly.
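
For example, from infra1 (the peer addresses are the 4th-column values of the sample gre.conf.ip.data above):

ping -c 3 100.127.255.21
ping -c 3 100.127.255.22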

2. Set up IPsec tunnels.

2.1. Install the required libreswan software.

yum -y install libreswan

2.2. Put the following config template into the /etc/ipsec.conf file on each host.

cat << EOF > /etc/ipsec.conf
# basic configuration
config setup
    # Debug-logging controls:  "none" for (almost) none, "all" for lots.
    # klipsdebug=none
    # plutodebug="control parsing"
    # For Virtuozzo Linux leave protostack=netkey
    protostack=netkey
    nat_traversal=yes
    virtual_private=
    oe=off
    # Enable this if you see "failed to find any available worker"
    # nhelpers=0
#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf
EOF

2.3. Generate IPsec key for all hosts participating in tunneling.

rm -rf /etc/ipsec.d/*.db
ipsec initnss --nssdir /etc/ipsec.d
>/etc/ipsec.d/ipsec.secrets
service ipsec restart

ipsec showhostkey --left --ckaid $(ipsec newhostkey --configdir /etc/ipsec.d --output /etc/ipsec.d/ipsec.secrets --bits 4096 2>&1 | grep CKAID | awk -F 'CKAID' {'print $NF'} | awk {'print $1'}) | grep leftrsasigkey | awk -F '=' {'print $NF'}

As a result of these commands, you’ll get a key. In this guide, we’ll refer to it as $(infranode key).

Repeat this step on the user hosts of the new region to get $(usernode key).
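
To double-check that a host key is present in the NSS database on a particular host, you can list the stored keys (a standard libreswan command):

ipsec showhostkey --list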

2.4. Create the following configs:

  • /etc/ipsec.d/default.conf on user hosts
conn dpd
    dpddelay = 15
    dpdtimeout = 30
    dpdaction = restart

conn self
    also = dpd
    left = $(usernode IP)
    leftid = @$(usernode hostname)
    leftrsasigkey = $(usernode key)
    authby = rsasig
    type = tunnel
    compress = no
    ike = aes128-sha1;modp1024
    esp = aes128-sha1;modp1024

conn gre
    leftprotoport = gre
    rightprotoport = gre
  • /etc/ipsec.d/$(infranode hostname).conf on user hosts
conn $(infranode hostname)
    also = self
    also = gre
    right = $(infranode IP)
    rightid = @$(infranode hostname)
    rightrsasigkey = $(infranode key)
    auto = start
  • /etc/ipsec.d/default.conf on infra hosts
conn dpd
    dpddelay = 15
    dpdtimeout = 30
    dpdaction = restart

conn self
    also = dpd
    left = $(infranode IP)
    leftid = @$(infranode hostname)
    leftrsasigkey = $(infranode key)
    authby = rsasig
    type = tunnel
    compress = no
    ike = aes128-sha1;modp1024
    esp = aes128-sha1;modp1024

conn gre
    leftprotoport = gre
    rightprotoport = gre
  • /etc/ipsec.d/$(usernode hostname).conf on infra hosts
conn $(usernode hostname)
    also = self
    also = gre
    right = $(usernode IP)
    rightid = @$(usernode hostname)
    rightrsasigkey = $(usernode key)
    auto = start

2.5. Restart IPsec and enable it on each host.

systemctl restart ipsec
systemctl enable ipsec

2.6. Create IPsec connections on:

  • user host
ipsec auto --add $(infranode hostname)
ipsec auto --up $(infranode hostname)
  • infra host
ipsec auto --add $(usernode hostname)
ipsec auto --up $(usernode hostname)

Note: Before proceeding to the next step, run the following command to ensure that IPsec is set up:

ipsec status | grep active

You should see active tunnels in the output, for example:

Total IPsec connections: loaded 2, active 2

2.7. Set up the firewall rules on the new hosts and add the corresponding rules on the infra hosts.

Note: This step assumes the firewalls have been configured according to the official recommendations and have the LAN_SERVICES and WAN_SERVICES chains.
ipset create IPSEC_TUN hash:net family inet hashsize 1024 maxelem 65536
ipset add IPSEC_TUN IP_link_infra1
ipset add IPSEC_TUN IP_link_infra2
ipset add IPSEC_TUN IP_link_user1
ipset add IPSEC_TUN IP_link_user2
iptables -N VPN_TUN
iptables -A INPUT -d $Internal_IP_address_host/32 -i link+ -j LAN_SERVICES
iptables -A INPUT -m set --match-set IPSEC_TUN src -j VPN_TUN
iptables -A INPUT -i br1 -p ospf -j ACCEPT
iptables -A INPUT -i link+ -p ospf -j ACCEPT
iptables -A VPN_TUN -p udp -m udp --dport 500 -j ACCEPT
iptables -A VPN_TUN -p esp -j ACCEPT
iptables -A VPN_TUN -p gre -m policy --dir in --pol ipsec -j ACCEPT
iptables -A VPN_TUN -m set --match-set IPSEC_TUN src -j ACCEPT
iptables -A WAN_SERVICES -p gre -m policy --dir in --pol ipsec -j ACCEPT

Here:

  • Internal_IP_address_host - internal IP address of the host
  • br1 - a name of the internal network interface for local connections
  • IP_link_infra1, IP_link_infra2, IP_link_user1, IP_link_user2 - external IP addresses of hosts that participate in tunneling

2.8. Add the following rules to other user hosts that are in the cluster but did not participate in configuring IPsec tunnels:

iptables -A INPUT -d $Internal_IP_address_host/32 -i link+ -j LAN_SERVICES
iptables -A INPUT -i br1 -p ospf -j ACCEPT
iptables -A INPUT -i link+ -p ospf -j ACCEPT

2.9. Check if IPsec tunnels are working. Execute the command below on any host and ensure that the packets go between the hosts.

tcpdump -n -i any esp or udp port 500 or udp port 4500

If everything works fine, please save the firewall rules on each node.

service iptables save
service ipset save

3. Set up BIRD.

You can automatically set up internal routing with the BIRD daemon.

3.1. Install BIRD on the infra and user hosts:

yum install bird
systemctl enable bird

3.2. Set up BIRD via the /etc/bird.conf configuration file.

vim /etc/bird.conf

File content (example for infra1):
router id $Internal_IP_address_host;

protocol kernel {
    persist;          # Don't remove routes on bird shutdown
    scan time 20;     # Scan kernel routing table every 20 seconds
    export all;       # Default is export none
    import all;
    learn;
}

protocol device {
    scan time 10;     # Scan interfaces every 10 seconds
}

protocol static {
}

protocol ospf {
    tick 2;
    rfc1583compat yes;
    ecmp yes;
    merge external yes;

    import filter {
        krt_prefsrc = $Internal_IP_address_host;
        accept;
    };

    area 1 {
        interface "$br1" {
            stub no;
            cost 10;
            dead 15;
            hello 10;
            type broadcast;
            authentication cryptographic;
            password "$anypassword";
        };

        interface "link*" {
            hello 10;
            retransmit 6;
            cost 15;
            transmit delay 5;
            dead count 5;
            wait 50;
            type pointopoint;
            authentication cryptographic;
            password "$anypassword";
        };
    };
}

Here:

  • Internal_IP_address_host - the internal IP address of the host
  • anypassword - generate any password for your BIRD connection
  • br1 - internal network interface for local connections
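
Before restarting the service, you can ask BIRD to parse the configuration without starting the daemon (a standard BIRD option):

bird -p -c /etc/bird.conf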

3.3. If you do not use IPsec + GRE, remove the following code block from the BIRD configurations:

interface "link*" {
                           hello 10;
                           retransmit 6;
                           cost 15;
                           transmit delay 5;
                           dead count 5;
                           wait 50;
                           type pointopoint;
              authentication cryptographic;
                           password "$anypassword";
            };

3.4. Restart the BIRD service and ensure that other hosts are present in the Full/PtP state. For example:

# birdc
BIRD 1.6.8 ready.
bird> show ospf neighbors
ospf1:
Router ID       Pri        State         DTime    Interface  Router IP
172.16.0.1      1    Full/PtP      00:46    link01ovh-usr01 100.127.255.11
172.16.0.2      1    Full/PtP      00:49    link01ovh-usr02 100.127.255.12

3.5. Manually test the connection from the infra to user hosts via the internal IP (in both directions).
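
For example (both IPs are placeholders taken from the sample networks in the next step):

ping -c 3 10.30.0.11     # from an infra host to a user host
ping -c 3 172.16.0.1     # from the user host back to the infra host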

3.6. Fill in the jelastic.net.subnetworks system setting in JCA and verify that all internal networks are present for all platform regions.

For example: 172.16.0.0/16;10.30.0.0/16.

That’s all. You can proceed with the host group configuration (step 4 of the Add New Host Group instructions above).

Edit Region/Host Group

You can adjust the existing regions and host groups by simply double-clicking on the required item or using the Edit button at the top of the Regions panel.

edit host group

Within the corresponding region/host group Edit dialog, you can adjust everything on both tabs (same as for the addition) except the Unique Name value.

Apply changes with the Save button at the bottom-right corner of the frame.

SSL Certificates for Regions

Using the SSL column within the Regions section, you can configure SSL certificates of the primary domain (go to the dedicated section to manage all domains):

  • Add Certificates – sets up SSL for the region
  • Edit - allows switching between the Let’s Encrypt and custom SSL certificates
  • Update - provides a new LE certificate for the hardware region (this option is hidden for custom SSL)
  • Remove - detaches certificate from the region

SSL for hardware regions

1. While adding or editing your certificate, you can choose between two options:

  • Use Let’s Encrypt - automatically fetch and apply certificates from the Let’s Encrypt free and open Certificate Authority
  • Upload Custom Certificates - upload valid RSA-based Server Key, Intermediate Certificate (CA), and Domain Certificate files to automatically apply them. Self-signed certificates can be used as well, e.g. for testing purposes

add hardware region SSL

Click Save to confirm changes.

2. If needed, you can configure the Let’s Encrypt certificates provisioning via the certain System Settings:

  • jelastic.letsencrypt.renewal.days - displays an alert in JCA if any of the SSL certificates is valid for fewer days than the provided value (21, by default)
  • qjob.ssl_checker.cron_schedule - checks the status of the Let’s Encrypt SSL certificates for hardware regions and automatically renews those that are valid for fewer days than specified in the jelastic.letsencrypt.renewal.days setting; the default value is 0 0 15 * * ?, i.e. this job runs daily at 15:00
  • hcore.platform.admin.username - sets the platform admin email address, which receives notifications from Let’s Encrypt if any issue occurs

To update or remove a certificate, select the appropriate option from the list, and confirm the action via the pop-up window.

Remove Region/Host Group

Regions and host groups that are no longer needed can be deleted with the help of the Remove button at the top tools panel.

remove host group

Note: Hardware regions and host groups with at least one user container inside cannot be deleted. You need to migrate all the instances to another host before initiating the removal.

Confirm your decision via the pop-up window.

What’s next?