
 

 


 

Release 17.0.1.dev31

 

OpenStack Foundation

 

Jan 06, 2024

 

 

Kolla's mission is to provide production-ready containers and deployment tools for operating OpenStack clouds.

 

Kolla Ansible is highly opinionated out of the box, but allows for complete customization. This permits operators with minimal experience to deploy OpenStack quickly and, as experience grows, to modify the OpenStack configuration to suit the operator's exact requirements.

 

CHAPTER
ONE

RELATED PROJECTS

This documentation is for Kolla Ansible.

For information on building container images for use with Kolla Ansible, please refer to the Kolla image documentation.

Kayobe is a subproject of Kolla that uses Kolla Ansible and Bifrost to deploy an OpenStack control plane to bare metal.

 

 

CHAPTER
TWO

SITE NOTES

 

This documentation is continually updated and may not represent the state of the project at any specific prior release. To access documentation for a previous release of Kolla Ansible, append the OpenStack release name to the URL. For example, to access documentation for the Stein release: https://docs.openstack.org/kolla-ansible/stein

 

CHAPTER
THREE

 

RELEASE NOTES

The release notes for the project can be found here: https://docs.openstack.org/releasenotes/kolla-ansible/

 

 

 

 

CHAPTER
FOUR

 

ADMINISTRATOR GUIDE

 

4.1 Admin Guides

 

4.1.1 Advanced Configuration

 

Endpoint Network Configuration

 

When an OpenStack cloud is deployed, the REST API of each service is presented as a series of endpoints. These endpoints are the admin URL, the internal URL, and the external URL.

 

Kolla offers two options for assigning these endpoints to network addresses:

  • Combined - Where all three endpoints share the same IP address
  • Separate - Where the external URL is assigned to an IP address that is different from the IP address shared by the internal and admin URLs

 

The configuration parameters related to these options are:

  • kolla_internal_vip_address
  • network_interface
  • kolla_external_vip_address
  • kolla_external_vip_interface

 

For the combined option, set the two variables below, while allowing the other two to accept their default values. In this configuration all REST API requests, internal and external, will flow over the same network.

kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"

 

For the separate option, set these four variables. In this configuration the internal and external REST API requests can flow over separate networks.

kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"
kolla_external_vip_address: "10.10.20.254"
kolla_external_vip_interface: "eth1"

 

Fully Qualified Domain Name Configuration

 

When addressing a server on the internet, it is more common to use a name, like www.example.net, instead of an address like 10.10.10.254. If you prefer to use names to address the endpoints in your kolla deployment, use the variables:

  • kolla_internal_fqdn
  • kolla_external_fqdn

kolla_internal_fqdn: inside.mykolla.example.net
kolla_external_fqdn: mykolla.example.net

 

Provisions must be taken outside of kolla for these names to map to the configured IP addresses. Using a DNS server or the /etc/hosts file are two ways to create this mapping.
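For example, a sketch of /etc/hosts entries on a client host, mapping the example names above to the VIP addresses used earlier in this guide:

```
10.10.10.254 inside.mykolla.example.net
10.10.20.254 mykolla.example.net
```

With a combined endpoint configuration, both names would point at the single internal VIP instead.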

 

RabbitMQ Hostname Resolution

 

RabbitMQ does not work with IP addresses, so the IP address of api_interface must be resolvable by hostname. Make sure that all RabbitMQ cluster hosts can resolve each other's hostnames beforehand.
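One way to satisfy this without DNS is /etc/hosts entries on every cluster host; the hostnames and api_interface addresses below are illustrative:

```
# api_interface addresses of the RabbitMQ cluster hosts (illustrative)
10.10.10.11 controller-0001
10.10.10.12 controller-0002
10.10.10.13 controller-0003
```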

 

TLS Configuration

 

Configuration of TLS is covered in the TLS guide (see Section 4.1.2).

 

OpenStack Service Configuration in Kolla

 

An operator can change the location where custom config files are read from by editing /etc/kolla/globals.yml and adding the following line.

# The directory to merge custom config files with Kolla's config files
node_custom_config: "/etc/kolla/config"

 

Kolla allows the operator to override configuration of services. Kolla will generally look for a file in /etc/kolla/config/<< config file >>, /etc/kolla/config/<< service name >>/<< config file >> or /etc/kolla/config/<< service name >>/<< hostname >>/<< config file >>, but these locations sometimes vary and you should check the config task in the appropriate Ansible role for a full list of supported locations. For example, in the case of nova.conf the following locations are supported, assuming that you have services using nova.conf running on hosts called controller-0001, controller-0002 and controller-0003:

  • /etc/kolla/config/nova.conf 
  • /etc/kolla/config/nova/controller-0001/nova.conf
  • /etc/kolla/config/nova/controller-0002/nova.conf
  • /etc/kolla/config/nova/controller-0003/nova.conf
  • /etc/kolla/config/nova/nova-scheduler.conf

Using this mechanism, overrides can be configured per-project, per-project-service or per-project-service-on-specified-host.

 

Overriding an option is as simple as setting the option under the relevant section. For example, to override scheduler_max_attempts in the Nova scheduler, the operator could create /etc/kolla/config/nova/nova-scheduler.conf with content:

[DEFAULT]
scheduler_max_attempts = 100

 

If the operator wants to configure the compute node CPU and RAM allocation ratios on host myhost, the operator needs to create the file /etc/kolla/config/nova/myhost/nova.conf with content:

[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 5.0

 

This method of merging configuration sections is supported for all services using Oslo Config, which includes the vast majority of OpenStack services, and in some cases for services using YAML configuration. Since the INI format is an informal standard, not all INI files can be merged in this way. In these cases Kolla supports overriding the entire config file.

 

Additional flexibility can be introduced by using Jinja conditionals in the config files. For example, you may create Nova cells which are homogeneous with respect to the hypervisor model. In each cell, you may wish to configure the hypervisors differently, for example the following override shows one way of setting the bandwidth_poll_interval variable as a function of the cell:

[DEFAULT]
{% if 'cell0001' in group_names %}
bandwidth_poll_interval = 100
{% elif 'cell0002' in group_names %}
bandwidth_poll_interval = -1
{% else %}
bandwidth_poll_interval = 300
{% endif %}

 

An alternative to Jinja conditionals would be to define a variable for the bandwidth_poll_interval and set it according to your requirements in the inventory group or host vars:

[DEFAULT]
bandwidth_poll_interval = {{ bandwidth_poll_interval }}
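The variable itself would then be set per group in the inventory. A sketch, assuming hypothetical group_vars files for each cell group:

```yaml
# group_vars/cell0001.yml (hypothetical)
bandwidth_poll_interval: 100
```

```yaml
# group_vars/cell0002.yml (hypothetical)
bandwidth_poll_interval: -1
```

Hosts in other groups would need a default, for example in group_vars/all.yml.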

 

Kolla allows the operator to override configuration globally for all services. It will look for a file called /etc/kolla/config/global.conf.

For example, to modify the database connection pool size for all services, the operator needs to create /etc/kolla/config/global.conf with content:

[database]
max_pool_size = 100

 

OpenStack policy customisation


OpenStack services allow customisation of policy. Since the Queens release, default policy configuration is defined within the source code for each service, meaning that operators only need to override rules they wish to change. Projects typically provide documentation on their default policy configuration, for example, Keystone.


Policy can be customised via JSON or YAML files. As of the Wallaby release, the JSON format is deprecated in favour of YAML. One major benefit of YAML is that it allows for the use of comments.


For example, to customise the Neutron policy in YAML format, the operator should add the customised rules in /etc/kolla/config/neutron/policy.yaml.
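A minimal sketch of such a file; the rule shown is illustrative, not a Neutron default:

```yaml
# /etc/kolla/config/neutron/policy.yaml
# Illustrative override: restrict network creation to admins.
"create_network": "rule:admin_only"
```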


The operator can make these changes after services have been deployed by using the following command:

kolla-ansible deploy

 

In order to present a user with the correct interface, Horizon includes policy for other services. Customisations made to those services may need to be replicated in Horizon. For example, to customise the Neutron policy in YAML format for Horizon, the operator should add the customised rules in /etc/kolla/config/horizon/neutron_policy.yaml.

 

IP Address Constrained Environments

If a development environment doesn't have a free IP address available for VIP configuration, the host's IP address may be used by disabling HAProxy, adding the following:

enable_haproxy: "no"

 

Note this method is not recommended and generally not tested by the Kolla community, but is included since sometimes a free IP address is not available in a testing environment.


In this mode it is still necessary to configure kolla_internal_vip_address, and it should take the IP address of the api_interface interface.

 


External Elasticsearch/Kibana environment


It is possible to use an external Elasticsearch/Kibana environment. To do this, first disable the deployment of central logging:

enable_central_logging: "no"

 

Now you can use the parameter elasticsearch_address to configure the address of the external Elasticsearch environment.

 

Non-default port

 

It is sometimes required to use a port other than the default for one or more services in Kolla. This is possible by setting <service>_port in the globals.yml file. For example:

database_port: 3307

As the <service>_port value is saved in the configuration of multiple services, it is advised to make the above change before deploying.

 

Use an external Syslog server

 

By default, Fluentd is used as a syslog server to collect Swift and HAProxy logs. When Fluentd is disabled, or if you want to use an external syslog server, you can set syslog parameters in the globals.yml file. For example:

syslog_server: "172.29.9.145"
syslog_udp_port: "514"

 

You can also set syslog facility names for Swift and HAProxy logs. By default, Swift and HAProxy use local0 and local1, respectively.

syslog_swift_facility: "local0"
syslog_haproxy_facility: "local1"

If Glance TLS backend is enabled (glance_enable_tls_backend), the syslog facility for the glance_tls_proxy service uses local2 by default. This can be set via syslog_glance_tls_proxy_facility.


If Neutron TLS backend is enabled (neutron_enable_tls_backend), the syslog facility for the neutron_tls_proxy service uses local4 by default. This can be set via syslog_neutron_tls_proxy_facility.

 

Mount additional Docker volumes in containers

 

It is sometimes useful to be able to mount additional Docker volumes into one or more containers. This may be to integrate 3rd party components into OpenStack, or to provide access to site-specific data such as x.509 certificate bundles.

Additional volumes may be specified at three levels:

 

  • globally
  • per-service (e.g. nova)
  • per-container (e.g. nova-api)

To specify additional volumes globally for all containers, set default_extra_volumes in globals.yml. For example:

default_extra_volumes:
- "/etc/foo:/etc/foo"

 

To specify additional volumes for all containers in a service, set <service>_extra_volumes in globals.yml. For example:

nova_extra_volumes:
- "/etc/foo:/etc/foo"

 

To specify additional volumes for a single container, set <container>_extra_volumes in globals.yml. For example:

nova_libvirt_extra_volumes:
- "/etc/foo:/etc/foo"

 

 

4.1.2 TLS

 

This guide describes how to configure Kolla Ansible to deploy OpenStack with TLS enabled. Enabling TLS on the provided internal and/or external VIP address allows OpenStack clients to authenticate and encrypt network communication with OpenStack services.

 

When an OpenStack service exposes an API endpoint, Kolla Ansible will configure HAProxy for that service to listen on the internal and/or external VIP address. The HAProxy container load-balances requests on the VIPs to the nodes running the service container.

 

There are two different layers of TLS configuration for OpenStack APIs:

 

1. Enabling TLS on the internal and/or external VIP, so communication between an OpenStack client and the HAProxy listening on the VIP is secure.

2. Enabling TLS on the backend network, so communication between HAProxy and the backend API services is secure.

Note: The certificates generated by Kolla Ansible use a simple Certificate Authority setup and are not suitable for a production deployment. Only certificates signed by a trusted Certificate Authority should be used in a production deployment.

 

To deploy OpenStack with TLS enabled for the external, internal and backend APIs, configure the following in globals.yml:

kolla_enable_tls_internal: "yes"
kolla_enable_tls_external: "yes"
kolla_enable_tls_backend: "yes"
kolla_copy_ca_into_containers: "yes"

 

If deploying on Debian or Ubuntu:

openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"

 

If on CentOS or Rocky:

openstack_cacert: "/etc/pki/tls/certs/ca-bundle.crt"

 

The Kolla Ansible certificates command generates a private test Certificate Authority, and then uses the CA to sign the generated certificates for the enabled VIP(s) to test TLS in your OpenStack deployment. Assuming you are using the multinode inventory:

kolla-ansible -i ~/multinode certificates

 

TLS Configuration for internal/external VIP

 

The configuration variables that control TLS for the internal and/or external VIP are:

  • kolla_enable_tls_external
  • kolla_enable_tls_internal
  • kolla_internal_fqdn_cert
  • kolla_external_fqdn_cert

Note: If TLS is enabled only on the internal or external network, then kolla_internal_vip_address and kolla_external_vip_address must be different.

If there is only a single network configured in your topology (as opposed to separate internal and external networks), TLS can only be enabled using the internal network configuration variables.

 

The default state for TLS networking is disabled. To enable external TLS encryption:

kolla_enable_tls_external: "yes"

 

To enable internal TLS encryption:

kolla_enable_tls_internal: "yes"

 

Two certificate files are required to use TLS securely with authentication, which will be provided by your Certificate Authority:

  • server certificate with private key
  • CA certificate with any intermediate certificates

The combined server certificate and private key needs to be provided to Kolla Ansible, with the path configured via kolla_external_fqdn_cert or kolla_internal_fqdn_cert. These paths default to {{ kolla_certificates_dir }}/haproxy.pem and {{ kolla_certificates_dir }}/haproxy-internal.pem respectively, where kolla_certificates_dir is /etc/kolla/certificates by default.
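A sketch of producing the combined PEM file, with stand-in filenames and contents; in practice the certificate and key come from your Certificate Authority:

```shell
# Scratch directory standing in for /etc/kolla/certificates (illustrative).
certdir=$(mktemp -d)

# Stand-in certificate and key files; replace with the real ones from your CA.
printf -- '-----BEGIN CERTIFICATE-----\n...server certificate...\n-----END CERTIFICATE-----\n' > "$certdir/external.crt"
printf -- '-----BEGIN PRIVATE KEY-----\n...private key...\n-----END PRIVATE KEY-----\n' > "$certdir/external.key"

# HAProxy expects the server certificate followed by its private key
# concatenated into a single PEM file.
cat "$certdir/external.crt" "$certdir/external.key" > "$certdir/haproxy.pem"
```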

 

If the server certificate provided is not already trusted by clients, then the CA certificate file will need to be distributed to the clients. This is discussed in more detail in Configuring the OpenStack Client for TLS and Adding CA Certificates to the Service Containers.

 

Configuring the OpenStack Client for TLS

 

The location for the CA certificate for the admin-openrc.sh file is configured with the kolla_admin_openrc_cacert variable, which is not set by default. This must be a valid path on all hosts where admin-openrc.sh is used.

 

When TLS is enabled on a VIP, and kolla_admin_openrc_cacert is set to /etc/pki/tls/certs/ca-bundle.crt, an OpenStack client will have settings similar to this configured by admin-openrc.sh:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=demoPassword
export OS_AUTH_URL=https://mykolla.example.net:5000
export OS_INTERFACE=internal
export OS_ENDPOINT_TYPE=internalURL
export OS_MISTRAL_ENDPOINT_TYPE=internalURL
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
export OS_AUTH_PLUGIN=password
# os_cacert is optional for trusted certificates
export OS_CACERT=/etc/pki/tls/certs/ca-bundle.crt

 

Adding CA Certificates to the Service Containers

 

To copy CA certificate files to the service containers:

kolla_copy_ca_into_containers: "yes"

 

When kolla_copy_ca_into_containers is configured to yes, the CA certificate files in /etc/kolla/certificates/ca will be copied into service containers to enable trust for those CA certificates. This is required for any certificates that are either self-signed or signed by a private CA, and are not already present in the service image trust store. Kolla will install these certificates in the container's system-wide trust store when the container starts.

 

All certificate file names will have the kolla-customca- prefix prepended to them when they are copied into the containers. For example, if a certificate file is named internal.crt, it will be named kolla-customca-internal.crt in the containers.

 

For Debian and Ubuntu containers, the certificate files will be copied to the /usr/local/share/ca-certificates/ directory.

 

For CentOS and Rocky containers, the certificate files will be copied to the /etc/pki/ca-trust/source/anchors/ directory.

 

In both cases, valid certificates will be added to the system trust store: /etc/ssl/certs/ca-certificates.crt on Debian and Ubuntu, and /etc/pki/tls/certs/ca-bundle.crt on CentOS and Rocky.

 

Configuring a CA bundle

 

OpenStack services do not always trust CA certificates from the system trust store by default. To resolve this, the openstack_cacert variable should be configured with the path to the CA Certificate in the container.

 

To use the system trust store on Debian or Ubuntu:

openstack_cacert: /etc/ssl/certs/ca-certificates.crt

 

For CentOS or Rocky:

openstack_cacert: /etc/pki/tls/certs/ca-bundle.crt

 

Back-end TLS Configuration

 

Enabling TLS on the backend services secures communication between the HAProxy listening on the internal/external VIP and the OpenStack services. It also enables secure end-to-end communication between OpenStack services that support TLS termination. The OpenStack services that support backend TLS termination in Victoria are: Nova, Ironic, Neutron, Keystone, Glance, Heat, Placement, Horizon, Barbican, and Cinder.

 

The configuration variables that control back-end TLS for service endpoints are:

  • kolla_enable_tls_backend
  • kolla_tls_backend_cert
  • kolla_tls_backend_key
  • haproxy_backend_cacert
  • haproxy_backend_cacert_dir

The default state for back-end TLS is disabled. To enable TLS for the back-end communication:

kolla_enable_tls_backend: "yes"

 

It is also possible to enable back-end TLS on a per-service basis. For example, to enable back-end TLS for Keystone, set keystone_enable_tls_backend to yes.
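In globals.yml this would look like:

```yaml
keystone_enable_tls_backend: "yes"
```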

 

The default values for haproxy_backend_cacert and haproxy_backend_cacert_dir should suffice if the certificate is in the system trust store. Otherwise, they should be configured to a location of the CA certificate installed in the service containers.

 

Each backend service requires a certificate and private key. In many cases it is necessary to use a separate certificate and key for each host, or even per-service. The following precedence is used for the certificate:

  • {{ kolla_certificates_dir }}/{{ inventory_hostname }}/{{ project_name }}-cert.pem
  • {{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem
  • {{ kolla_certificates_dir }}/{{ project_name }}-cert.pem
  • {{ kolla_tls_backend_cert }}

And for the private key:

  • {{ kolla_certificates_dir }}/{{ inventory_hostname }}/{{ project_name }}-key.pem
  • {{ kolla_certificates_dir }}/{{ inventory_hostname }}-key.pem
  • {{ kolla_certificates_dir }}/{{ project_name }}-key.pem
  • {{ kolla_tls_backend_key }}

The default for kolla_certificates_dir is /etc/kolla/certificates.
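Putting the precedence rules together, a sketch of a per-host layout under the default directory; hostnames and the keystone service are illustrative:

```
/etc/kolla/certificates/
    controller-0001/
        keystone-cert.pem    # used by keystone on controller-0001 only
        keystone-key.pem
    controller-0002/
        keystone-cert.pem
        keystone-key.pem
    backend-cert.pem         # fallback for other services and hosts
    backend-key.pem
```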

 

kolla_tls_backend_cert and kolla_tls_backend_key default to {{ kolla_certificates_dir }}/backend-cert.pem and {{ kolla_certificates_dir }}/backend-key.pem respectively.

 

project_name is the name of the OpenStack service, e.g. keystone or cinder.

 

Note: The back-end TLS cert/key can be the same certificate that is used for the VIP, as long as those certificates are configured to allow requests from both the VIP and internal networks.

 

 

 

 
