https://docs.openstack.org/install-guide/
# Original PDF
OpenStack contributors
Jan 04, 2024
CONTENTS
CHAPTER
ONE
CONVENTIONS
The OpenStack documentation uses several typesetting conventions.
1.1 Notices
Notices take these forms:
Note: A comment with additional information that explains a part of the text.
Important: Something you must be aware of before proceeding.
Tip: An extra but helpful piece of practical advice.
Caution: Helpful information that prevents the user from making mistakes.
Warning: Critical information about the risk of data loss or security issues.
1.2 Command prompts
$ command
Any user, including the root user, can run commands that are prefixed with the $ prompt.
# command
The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.
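For example, a package installation used later in this guide for Ubuntu must run with root privileges; assuming the sudo utility is configured for your account, the following two forms are equivalent (chrony is used here purely as an illustration):
# apt install chrony
$ sudo apt install chrony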
CHAPTER
TWO
2.1 Abstract
The OpenStack system consists of several key services that are separately installed. These services work together depending on your cloud needs and include the Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry, Orchestration, and Database services. You can install any of these projects separately and configure them stand-alone or as connected entities. Explanations of configuration options and sample configuration files are included. This guide documents the installation of OpenStack starting with the Pike release. It covers multiple releases.
Warning: This guide is a work in progress and is updated frequently. Pre-release packages have been used for testing, and some instructions may not work with final versions. Please help us make this guide better by reporting any errors you encounter.
2.2 Operating systems
Currently, this guide describes OpenStack installation for the following Linux distributions:
openSUSE and SUSE Linux Enterprise Server
You can install OpenStack by using packages on openSUSE Leap 42.3, openSUSE Leap 15, SUSE Linux Enterprise Server 12 SP4, and SUSE Linux Enterprise Server 15 through the Open Build Service Cloud repository.
Red Hat Enterprise Linux and CentOS
You can install OpenStack by using packages available on both Red Hat Enterprise Linux 7 and 8 and their derivatives through the RDO repository.
Note: OpenStack Wallaby is available for CentOS Stream 8. OpenStack Ussuri and Victoria are available for both CentOS 8 and RHEL 8. OpenStack Train and earlier are available on both CentOS 7 and RHEL 7.
Ubuntu
You can walk through an installation by using packages available through Canonical's Ubuntu Cloud Archive repository for Ubuntu 16.04+ (LTS).
Note: The Ubuntu Cloud Archive pockets for Pike and Queens provide OpenStack packages for Ubuntu 16.04 LTS; OpenStack Queens is installable directly on Ubuntu 18.04 LTS; the Ubuntu Cloud Archive pockets for Rocky and Stein provide OpenStack packages for Ubuntu 18.04 LTS; and the Ubuntu Cloud Archive pocket for Victoria provides OpenStack packages for Ubuntu 20.04 LTS.
CHAPTER
THREE
GET STARTED WITH OPENSTACK
The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services. Each service offers an Application Programming Interface (API) that facilitates this integration. Depending on your needs, you can install some or all services.
3.1 The OpenStack services
The OpenStack project navigator lets you browse the OpenStack services that make up the OpenStack architecture. The services are categorized per the service type and release series.
3.2 The OpenStack architecture
The following sections describe the OpenStack architecture in more detail:
3.2.1 Conceptual architecture
The following diagram shows the relationships among the OpenStack services:
3.2.2 Logical architecture
To design, deploy, and configure OpenStack, administrators must understand the logical architecture.
As shown in Conceptual architecture, OpenStack consists of several independent parts, named the OpenStack services. All services authenticate through a common Identity service. Individual services interact with each other through public APIs, except where privileged administrator commands are necessary.
Internally, OpenStack services are composed of several processes. All services have at least one API process, which listens for API requests, preprocesses them and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes.
For communication between the processes of one service, an AMQP message broker is used. A service's state is stored in a database. When deploying and configuring your OpenStack cloud, you can choose among several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.
Users can access OpenStack via the web-based user interface implemented by the Horizon Dashboard, via command-line clients and by issuing API requests through tools like browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.
The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:
CHAPTER
FOUR
OVERVIEW
The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration.
This guide covers step-by-step deployment of the major OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience. This guide is not intended to be used for production system installations, but to create a minimum proof-of-concept for the purpose of learning about OpenStack.
After becoming familiar with basic installation, configuration, operation, and troubleshooting of these OpenStack services, you should consider the following steps toward deployment using a production architecture:
- Determine and implement the necessary core and optional services to meet performance and redundancy requirements.
- Increase security using methods such as firewalls, encryption, and service policies.
- Use a deployment tool such as Ansible, Chef, Puppet, or Salt to automate deployment and management of the production environment. The OpenStack project has a couple of deployment projects with specific guides per version:
– 2023.2 (Bobcat) release
– 2023.1 (Antelope) release
– Zed release
– Yoga release
– Xena release
– Wallaby release
– Victoria release
– Ussuri release
– Train release
– Stein release
4.1 Example architecture
The example architecture requires at least two nodes (hosts) to launch a basic virtual machine or instance. Optional services such as Block Storage and Object Storage require additional nodes.
Important: The example architecture used in this guide is a minimum configuration, and is not intended for production system installations. It is designed to provide a minimum proof-of-concept for the purpose of learning about OpenStack. For information on creating architectures for specific use cases, or how to determine which architecture is required, see the Architecture Design Guide.
This example architecture differs from a minimal production architecture as follows:
- Networking agents reside on the controller node instead of one or more dedicated network nodes.
- Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network.
For more information on production architectures for Pike, see the Architecture Design Guide, OpenStack Networking Guide for Pike, and OpenStack Administrator Guides for Pike.
For more information on production architectures for Queens, see the Architecture Design Guide, OpenStack Networking Guide for Queens, and OpenStack Administrator Guides for Queens.
For more information on production architectures for Rocky, see the Architecture Design Guide, OpenStack Networking Guide for Rocky, and OpenStack Administrator Guides for Rocky.
4.1.1 Controller
The controller node runs the Identity service, Image service, Placement service, management portions of Compute, management portion of Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.
Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services.
The controller node requires a minimum of two network interfaces.
4.1.2 Compute
The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups. You can deploy more than one compute node. Each node requires a minimum of two network interfaces.
4.1.3 Block Storage
The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. You can deploy more than one block storage node. Each node requires a minimum of one network interface.
4.1.4 Object Storage
The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. This service requires two nodes. Each node requires a minimum of one network interface. You can deploy more than two object storage nodes.
4.2 Networking
Choose one of the following virtual networking options.
4.2.1 Networking Option 1: Provider networks
The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.
The OpenStack user requires more information about the underlying network infrastructure to create a virtual network to exactly match the infrastructure.
Warning: This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option below if you desire these features.
4.2.2 Networking Option 2: Self-service networks
The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT. Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS.
The OpenStack user can create virtual networks without the knowledge of underlying infrastructure on the data network. This can also include VLAN networks if the layer-2 plug-in is configured accordingly.
CHAPTER
FIVE
ENVIRONMENT
This section explains how to configure the controller node and one compute node using the example architecture.
Although most environments include Identity, Image service, Compute, at least one networking service, and the Dashboard, the Object Storage service can operate independently. If your use case only involves Object Storage, you can skip to
- Object Storage Installation Guide for 2023.2 (Bobcat)
- Object Storage Installation Guide for 2023.1 (Antelope)
- Object Storage Installation Guide for Zed
- Object Storage Installation Guide for Yoga
- Object Storage Installation Guide for Stein
after configuring the appropriate nodes for it.
You must use an account with administrative privileges to configure each node. Either run the commands as the root user or configure the sudo utility.
Note: The systemctl enable call on openSUSE outputs a warning message when the service uses SysV Init scripts instead of native systemd files. This warning can be ignored.
For best performance, we recommend that your environment meets or exceeds the hardware requirements in Hardware requirements.
The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:
- Controller Node: 1 processor, 4 GB memory, and 5 GB storage
- Compute Node: 1 processor, 2 GB memory, and 10 GB storage
As the number of OpenStack services and virtual machines increase, so do the hardware requirements for the best performance. If performance degrades after enabling additional services or virtual machines, consider adding hardware resources to your environment.
To minimize clutter and provide more resources for OpenStack, we recommend a minimal installation of your Linux distribution. Also, you must install a 64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations. However, you should consider Logical Volume Manager (LVM) for installations with optional services such as Block Storage.
For first-time installation and testing purposes, many users choose to build each host as a virtual machine (VM). The primary benefits of VMs include the following:
- One physical server can support multiple nodes, each with almost any number of network interfaces.
- Ability to take periodic snapshots throughout the installation process and roll back to a working configuration in the event of a problem.
However, VMs will reduce performance of your instances, particularly if your hypervisor and/or processor lacks support for hardware acceleration of nested VMs.
Note: If you choose to install on VMs, make sure your hypervisor provides a way to disable MAC address filtering on the provider network interface.
For more information about system requirements, see the OpenStack 2023.2 (Bobcat) Administrator Guides, the OpenStack 2023.1 (Antelope) Administrator Guides, the OpenStack Zed Administrator Guides, the OpenStack Yoga Administrator Guides, or the OpenStack Stein Administrator Guides.
5.1 Security
OpenStack services support various security methods including password, policy, and encryption. Additionally, supporting services including the database server and message broker support password security.
To ease the installation process, this guide only covers password security where applicable. You can create secure passwords manually, but the database connection string in a service's configuration file cannot accept special characters like @. We recommend you generate them using a tool such as pwgen, or by running the following command:
$ openssl rand -hex 10
For OpenStack services, this guide uses SERVICE_PASS to reference service account passwords and SERVICE_DBPASS to reference database passwords.
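As a minimal sketch only, you could generate a value for one of these placeholders and keep it in a shell variable for the duration of the session (RABBIT_PASS is one of the placeholders used later in this guide; record all generated passwords somewhere persistent before you continue):
$ RABBIT_PASS=$(openssl rand -hex 10)
$ echo $RABBIT_PASS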
The following table provides a list of services that require passwords and their associated references in the guide.
OpenStack and supporting services require administrative privileges during installation and operation. In some cases, services perform modifications to the host that can interfere with deployment automation tools such as Ansible, Chef, and Puppet. For example, some OpenStack services add a root wrapper to sudo that can interfere with security policies. See the Compute service documentation for Pike, the Compute service documentation for Queens, or the Compute service documentation for Rocky for more information.
The Networking service assumes default values for kernel network parameters and modifies firewall rules. To avoid most issues during your initial installation, we recommend using a stock deployment of a supported distribution on your hosts. However, if you choose to automate deployment of your hosts, review the configuration and policies applied to them before proceeding further.
5.2 Host networking
After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking on your distribution, see the documentation.
See also:
- Ubuntu Network Configuration
- RHEL 7 or RHEL 8 Network Configuration
- SLES 12 or SLES 15 or openSUSE Network Configuration
All nodes require Internet access for administrative purposes such as package installation, security updates, DNS, and NTP. In most cases, nodes should obtain Internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that the physical network infrastructure provides Internet access via NAT or other methods. The example architectures use routable IP address space for the provider (external) network and assume that the physical network infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly to the provider network. In the self-service (private) networks architecture, instances can attach to a self-service or provider network. Self-service networks can reside entirely within OpenStack or provide some level of external network access using NAT through the provider network.
The example architectures assume use of the following networks:
- Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.
- Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular network infrastructure. Network interface names vary by distribution. Traditionally, interfaces use eth followed by a sequential number. To cover all variations, this guide refers to the first interface as the interface with the lowest number and the second interface as the interface with the highest number.
Note: Ubuntu has changed its network interface naming scheme. Refer to Changing Network Interfaces name Ubuntu 16.04.
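To identify the interface names present on a node before editing any configuration files, you can list them with the ip utility, for example:
# ip link show
# ip addr show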
Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node.
Warning: Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures.
Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. Ubuntu does not. For more information about securing your environment, refer to the OpenStack Security Guide.
5.2.1 Controller node
Configure network interfaces
1. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
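The exact file and syntax for a static address depend on your distribution and release. As a sketch only, on an Ubuntu system that still uses /etc/network/interfaces (newer Ubuntu releases use Netplan instead, and RHEL/CentOS and SUSE use their respective ifcfg files), the management interface on the controller node might be configured as follows, where INTERFACE_NAME is the first interface:
# The management network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet static
address 10.0.0.11
netmask 255.255.255.0
gateway 10.0.0.1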
2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.
For Ubuntu:
• Edit the /etc/network/interfaces file to contain the following:
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
For RHEL or CentOS:
- Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:
Do not change the HWADDR and UUID keys.
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
For SUSE:
- Edit the /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to contain the following:
STARTMODE='auto'
BOOTPROTO='static'
3. Reboot the system to activate the changes.
Configure name resolution
1. Set the hostname of the node to controller.
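On distributions that use systemd, a sketch of one way to do this:
# hostnamectl set-hostname controller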
2. Edit the /etc/hosts file to contain the following:
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
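For illustration only, such an entry would be left commented out like this (the exact hostname and loopback address vary by distribution):
#127.0.1.1 controller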
Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.
5.2.2 Compute node
Configure network interfaces
1. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.
For Ubuntu:
- Edit the /etc/network/interfaces file to contain the following:
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
For RHEL or CentOS:
- Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:
Do not change the HWADDR and UUID keys.
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
For SUSE:
- Edit the /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to contain the following:
STARTMODE='auto'
BOOTPROTO='static'
3. Reboot the system to activate the changes.
Configure name resolution
1. Set the hostname of the node to compute1.
2. Edit the /etc/hosts file to contain the following:
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.
5.2.3 Block storage node (Optional)
If you want to deploy the Block Storage service, configure one additional storage node.
Configure network interfaces
- Configure the management interface:
– IP address: 10.0.0.41
– Network mask: 255.255.255.0 (or /24)
– Default gateway: 10.0.0.1
Configure name resolution
1. Set the hostname of the node to block1.
2. Edit the /etc/hosts file to contain the following:
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.
3. Reboot the system to activate the changes.
5.2.4 Verify connectivity
We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.
1. From the controller node, test access to the Internet:
# ping -c 4 docs.openstack.org
PING files02.openstack.org (23.253.125.17) 56(84) bytes of data.
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=1 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=2 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=3 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=4 ttl=43 time=125 ms
--- files02.openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 125.192/125.282/125.399/0.441 ms
2. From the controller node, test access to the management interface on the compute node:
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
3. From the compute node, test access to the Internet:
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
4. From the compute node, test access to the management interface on the controller node:
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. For more information about securing your environment, refer to the OpenStack Security Guide.
Ubuntu does not enable a restrictive firewall by default. For more information about securing your environment, refer to the OpenStack Security Guide.
5.3 Network Time Protocol (NTP)
To properly synchronize services among nodes, you can install Chrony, an implementation of NTP. We recommend that you configure the controller node to reference more accurate (lower stratum) servers and other nodes to reference the controller node.
5.3.1 Controller node
Perform these steps on the controller node.
Install and configure components
1. Install the packages:
For Ubuntu:
# apt install chrony
For RHEL or CentOS:
# yum install chrony
For SUSE:
# zypper install chrony
2. Edit the chrony.conf file and add, change, or remove the following keys as necessary for your environment.
For RHEL, CentOS, or SUSE, edit the /etc/chrony.conf file:
server NTP_SERVER iburst
For Ubuntu, edit the /etc/chrony/chrony.conf file:
server NTP_SERVER iburst
Replace NTP_SERVER with the hostname or IP address of a suitable, more accurate (lower stratum) NTP server. The configuration supports multiple server keys.
Note: By default, the controller node synchronizes the time via a pool of public servers. However, you can optionally configure alternative servers such as those provided by your organization.
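For illustration only, the relevant lines in chrony.conf might reference a public pool such as pool.ntp.org; any NTP servers appropriate for your site will work equally well:
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst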
3. To enable other nodes to connect to the chrony daemon on the controller node, add this key to the same chrony.conf file mentioned above:
allow 10.0.0.0/24
If necessary, replace 10.0.0.0/24 with a description of your subnet.
4. Restart the NTP service:
For Ubuntu:
# service chrony restart
For RHEL, CentOS, or SUSE:
# systemctl enable chronyd.service
# systemctl start chronyd.service
5.3.2 Other nodes
Other nodes reference the controller node for clock synchronization. Perform these steps on all other nodes.
Install and configure components
1. Install the packages.
For Ubuntu:
# apt install chrony
For RHEL or CentOS:
# yum install chrony
For SUSE:
# zypper install chrony
2. Configure the chrony.conf file and comment out or remove all but one server key. Change it to reference the controller node.
For RHEL, CentOS, or SUSE, edit the /etc/chrony.conf file:
server controller iburst
For Ubuntu, edit the /etc/chrony/chrony.conf file:
server controller iburst
3. Comment out the pool 2.debian.pool.ntp.org offline iburst line.
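After these edits, the relevant portion of the chrony.conf file on a compute or storage node might look like the following sketch (the commented pool line shown is the Ubuntu default; other distributions ship different default server or pool lines, which should likewise be commented out or removed):
#pool 2.debian.pool.ntp.org offline iburst
server controller iburst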
4. Restart the NTP service.
For Ubuntu:
# service chrony restart
For RHEL, CentOS, or SUSE:
# systemctl enable chronyd.service
# systemctl start chronyd.service
5.3.3 Verify operation
We recommend that you verify NTP synchronization before proceeding further. Some nodes, particularly those that reference the controller node, can take several minutes to synchronize.
1. Run this command on the controller node:
# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2   7    12   137   -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2   6   177    46     +17us[  -23us] +/-   68ms
Contents in the Name/IP address column should indicate the hostname or IP address of one or more NTP servers. Contents in the MS column should indicate * for the server to which the NTP service is currently synchronized.
2. Run the same command on all other nodes:
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   9   377   421     +15us[  -87us] +/-   15ms
Contents in the Name/IP address column should indicate the hostname of the controller node.
5.4 OpenStack packages
Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.
Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.
Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.
Note: Disable or remove any automatic update services because they can impact your OpenStack environment.
5.4.1 OpenStack packages for SUSE
Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.
Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.
Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.
Note: Disable or remove any automatic update services because they can impact your OpenStack environment.
Enable the OpenStack repository
- Enable the Open Build Service repositories based on your openSUSE or SLES version, and on the version of OpenStack you want to install:
On openSUSE for OpenStack Ussuri:
# zypper addrepo -f obs://Cloud:OpenStack:Ussuri/openSUSE_Leap_15.1 Ussuri
On openSUSE for OpenStack Train:
# zypper addrepo -f obs://Cloud:OpenStack:Train/openSUSE_Leap_15.0 Train
On openSUSE for OpenStack Stein:
# zypper addrepo -f obs://Cloud:OpenStack:Stein/openSUSE_Leap_15.0 Stein
On openSUSE for OpenStack Rocky:
# zypper addrepo -f obs://Cloud:OpenStack:Rocky/openSUSE_Leap_15.0 Rocky
On openSUSE for OpenStack Queens:
# zypper addrepo -f obs://Cloud:OpenStack:Queens/openSUSE_Leap_42.3 Queens
On openSUSE for OpenStack Pike:
# zypper addrepo -f obs://Cloud:OpenStack:Pike/openSUSE_Leap_42.3 Pike
Note: The openSUSE distribution uses the concept of patterns to represent collections of packages. If you selected Minimal Server Selection (Text Mode) during the initial installation, you may be presented with a dependency conflict when you attempt to install the OpenStack packages. To avoid this, remove the minimal_base-conflicts package:
# zypper rm patterns-openSUSE-minimal_base-conflicts
On SLES for OpenStack Ussuri:
# zypper addrepo -f obs://Cloud:OpenStack:Ussuri/SLE_15_SP2 Ussuri
On SLES for OpenStack Train:
# zypper addrepo -f obs://Cloud:OpenStack:Train/SLE_15_SP1 Train
On SLES for OpenStack Stein:
# zypper addrepo -f obs://Cloud:OpenStack:Stein/SLE_15 Stein
On SLES for OpenStack Rocky:
# zypper addrepo -f obs://Cloud:OpenStack:Rocky/SLE_12_SP4 Rocky
On SLES for OpenStack Queens:
# zypper addrepo -f obs://Cloud:OpenStack:Queens/SLE_12_SP3 Queens
On SLES for OpenStack Pike:
# zypper addrepo -f obs://Cloud:OpenStack:Pike/SLE_12_SP3 Pike
Note: The packages are signed by GPG key D85F9316. You should verify the fingerprint of the imported GPG key before using it.
Key Name: Cloud:OpenStack OBS Project <Cloud:OpenStack@build.opensuse.org>
Key Fingerprint: 35B34E18 ABC1076D 66D5A86B 893A90DA D85F9316
Key Created: 2015-12-16T16:48:37 CET
Key Expires: 2018-02-23T16:48:37 CET
Finalize the installation
1. Upgrade the packages on all nodes:
# zypper refresh && zypper dist-upgrade
Note: If the upgrade process includes a new kernel, reboot your host to activate it.
2. Install the OpenStack client:
# zypper install python-openstackclient
5.4.2 OpenStack packages for RHEL and CentOS
Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.
Warning: Starting with the Ussuri release, you will need to use either CentOS 8 or RHEL 8. Previous OpenStack releases will need to use either CentOS 7 or RHEL 7. Instructions are included for both distributions and versions where different.
Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.
Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.
Note: Disable or remove any automatic update services because they can impact your OpenStack environment.
Prerequisites
Warning: We recommend disabling EPEL when using RDO packages due to updates in EPEL breaking backwards compatibility. Or, preferably pin package versions using the yum-versionlock plugin.
Note: The following steps apply to RHEL only. CentOS does not require these steps.
1. When using RHEL, it is assumed that you have registered your system using Red Hat Subscription Management and that you have the rhel-7-server-rpms or rhel-8-for-x86_64-baseos-rpms repository enabled by default depending on your version.
For more information on registering a RHEL 7 system, see the Red Hat Enterprise Linux 7 System Administrators Guide.
2. In addition to rhel-7-server-rpms on a RHEL 7 system, you also need to have the rhel-7-server-optional-rpms, rhel-7-server-extras-rpms, and rhel-7-server-rh-common-rpms repositories enabled:
# subscription-manager repos --enable=rhel-7-server-optional-rpms \
--enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms
For more information on registering a RHEL 8 system, see the Red Hat Enterprise Linux 8 Installation Guide.
In addition to rhel-8-for-x86_64-baseos-rpms on a RHEL 8 system, you also need to have the rhel-8-for-x86_64-appstream-rpms, rhel-8-for-x86_64-supplementary-rpms, and codeready-builder-for-rhel-8-x86_64-rpms repositories enabled:
# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \
--enable=rhel-8-for-x86_64-supplementary-rpms --enable=codeready-builder-for-rhel-8-x86_64-rpms
Enable the OpenStack repository
- On CentOS, the extras repository provides the RPM that enables the OpenStack repository. CentOS includes the extras repository by default, so you can simply install the package to enable the OpenStack repository. For CentOS 8, you will also need to enable the PowerTools repository.
When installing the Victoria release, run:
# yum install centos-release-openstack-victoria
# yum config-manager --set-enabled powertools
When installing the Ussuri release, run:
# yum install centos-release-openstack-ussuri
# yum config-manager --set-enabled powertools
When installing the Train release, run:
# yum install centos-release-openstack-train
When installing the Stein release, run:
# yum install centos-release-openstack-stein
When installing the Rocky release, run:
# yum install centos-release-openstack-rocky
When installing the Queens release, run:
# yum install centos-release-openstack-queens
When installing the Pike release, run:
# yum install centos-release-openstack-pike
- On RHEL, download and install the RDO repository RPM to enable the OpenStack repository.
On RHEL 7:
The RDO repository RPM installs the latest available OpenStack release.
On RHEL 8:
# dnf install https://www.rdoproject.org/repos/rdo-release.el8.rpm
The RDO repository RPM installs the latest available OpenStack release.
Finalize the installation
5.4.3 OpenStack packages for Ubuntu
Ubuntu releases OpenStack with each Ubuntu release. Ubuntu LTS releases are provided every two years. OpenStack packages from interim releases of Ubuntu are made available to the prior Ubuntu LTS via the Ubuntu Cloud Archive.
Note: The archive enablement described here needs to be done on all nodes that run OpenStack services.
Archive Enablement
OpenStack 2023.2 Bobcat for Ubuntu 22.04 LTS:
# add-apt-repository cloud-archive:bobcat
OpenStack 2023.1 Antelope for Ubuntu 22.04 LTS:
# add-apt-repository cloud-archive:antelope
OpenStack Zed for Ubuntu 22.04 LTS:
# add-apt-repository cloud-archive:zed
OpenStack Yoga for Ubuntu 22.04 LTS:
OpenStack Yoga is available by default using Ubuntu 22.04 LTS.
OpenStack Yoga for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:yoga
OpenStack Xena for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:xena
OpenStack Wallaby for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:wallaby
OpenStack Victoria for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:victoria
OpenStack Ussuri for Ubuntu 20.04 LTS:
OpenStack Ussuri is available by default using Ubuntu 20.04 LTS.
OpenStack Ussuri for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:ussuri
OpenStack Train for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:train
OpenStack Stein for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:stein
OpenStack Rocky for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:rocky
OpenStack Queens for Ubuntu 18.04 LTS:
OpenStack Queens is available by default using Ubuntu 18.04 LTS.
Note: For a full list of supported Ubuntu OpenStack releases, see Ubuntu OpenStack release cycle at https://www.ubuntu.com/about/release-cycle.
Sample Installation
# apt install nova-compute
Client Installation
# apt install python3-openstackclient
5.5 SQL database
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
Note: If you see Too many connections or Too many open files error messages in OpenStack service logs, verify that the maximum number of connections setting is applied correctly in your environment. In MariaDB, you may also need to change the open_files_limit configuration.
5.5.1 SQL database for SUSE
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
Install and configure components
1. Install the packages:
# zypper install mariadb-client mariadb python-PyMySQL
2. Create and edit the /etc/my.cnf.d/openstack.cnf file and complete the following actions:
- Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Finalize installation
1. Start the database service and configure it to start when the system boots:
# systemctl enable mysql.service
# systemctl start mysql.service
2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
# mysql_secure_installation
5.5.2 SQL database for RHEL and CentOS
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
Install and configure components
1. Install the packages:
# yum install mariadb mariadb-server python2-PyMySQL
2. Create and edit the /etc/my.cnf.d/openstack.cnf file (backup existing configuration files in /etc/my.cnf.d/ if needed) and complete the following actions:
- Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Finalize installation
1. Start the database service and configure it to start when the system boots:
# systemctl enable mariadb.service
# systemctl start mariadb.service
2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
# mysql_secure_installation
5.5.3 SQL database for Ubuntu
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
Note: As of Ubuntu 16.04, MariaDB was changed to use the unix_socket Authentication Plugin. Local authentication is now performed using the user credentials (UID), and password authentication is no longer used by default. This means that the root user no longer uses a password for local access to the server.
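In practice this means that a root shell, or sudo, can open a local database session without supplying a password, assuming the default unix_socket configuration described above, for example:
$ sudo mysql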
Note: As of Ubuntu 18.04, the mariadb-server package is no longer available from the default repository. To install successfully, enable the Universe repository on Ubuntu.
Install and configure components
1. Install the packages:
- As of Ubuntu 20.04, install the packages:
# apt install mariadb-server python3-pymysql
- For Ubuntu 18.04 or 16.04, install the packages:
# apt install mariadb-server python-pymysql
2. Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file and complete the following actions:
- Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Finalize installation
1. Restart the database service:
# service mysql restart
2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
# mysql_secure_installation
5.6 Message queue
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.
The message queue runs on the controller node.
5.6.1 Message queue for SUSE
1. Install the package:
# zypper install rabbitmq-server
2. Start the message queue service and configure it to start when the system boots:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
3. Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.
4. Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
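As an optional check, you can confirm that the account and its permissions exist by listing them on the controller node:
# rabbitmqctl list_users
# rabbitmqctl list_permissions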
5.6.2 Message queue for RHEL and CentOS
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.
The message queue runs on the controller node.
Install and configure components
1. Install the package:
# yum install rabbitmq-server
2. Start the message queue service and configure it to start when the system boots:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
3. Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.
4. Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
5.6.3 Message queue for Ubuntu
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.
The message queue runs on the controller node.
Install and configure components
1. Install the package:
# apt install rabbitmq-server
2. Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.
3. Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
5.7 Memcached
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
5.7.1 Memcached for SUSE
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
Install and configure components
1. Install the packages:
# zypper install memcached python-python-memcached
2. Edit the /etc/sysconfig/memcached file and complete the following actions:
- Configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
MEMCACHED_PARAMS="-l 10.0.0.11"
Note: Change the existing line MEMCACHED_PARAMS="-l 127.0.0.1".
Finalize installation
- Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service
5.7.2 Memcached for RHEL and CentOS
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
Install and configure components
1. Install the packages:
For CentOS 7 and RHEL 7:
# yum install memcached python-memcached
For CentOS 8 and RHEL 8:
# yum install memcached python3-memcached
2. Edit the /etc/sysconfig/memcached file and complete the following actions:
- Configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
OPTIONS="-l 127.0.0.1,::1,controller"
Note: Change the existing line OPTIONS="-l 127.0.0.1,::1".
Finalize installation
- Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service
5.7.3 Memcached for Ubuntu
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
Install and configure components
1. Install the packages:
For Ubuntu versions prior to 18.04 use:
# apt install memcached python-memcache
For Ubuntu 18.04 and newer versions use:
# apt install memcached python3-memcache
2. Edit the /etc/memcached.conf file and configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
-l 10.0.0.11
Note: Change the existing line that had -l 127.0.0.1.
Finalize installation
- Restart the Memcached service:
# service memcached restart
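Regardless of distribution, one way to confirm that memcached now listens on the management address rather than only on localhost is to inspect the listening sockets; a sketch using iproute2 (11211 is the default memcached port):
# ss -tnlp | grep 11211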
5.8 Etcd
OpenStack services may use Etcd, a distributed reliable key-value store, for distributed key locking, storing configuration, keeping track of service liveness, and other scenarios.
5.8.1 Etcd for SUSE
Right now, there is no distro package available for etcd3. This guide uses the tarball installation as a workaround until proper distro packages are available.
The etcd service runs on the controller node.
Install and configure components
1. Install etcd:
- Create etcd user:
# groupadd --system etcd
# useradd --home-dir "/var/lib/etcd" \
--system \
--shell /bin/false \
-g etcd \
etcd
- Create the necessary directories:
# mkdir -p /etc/etcd
# chown etcd:etcd /etc/etcd
# mkdir -p /var/lib/etcd
# chown etcd:etcd /var/lib/etcd
- Determine your system architecture:
- Download and install the etcd tarball for x86_64/amd64:
# ETCD_VER=v3.2.7
# rm -rf /tmp/etcd && mkdir -p /tmp/etcd
# curl -L \
https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz \
-o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
# tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz \
-C /tmp/etcd --strip-components=1
# cp /tmp/etcd/etcd /usr/bin/etcd
# cp /tmp/etcd/etcdctl /usr/bin/etcdctl
Or download and install the etcd tarball for arm64:
# ETCD_VER=v3.2.7
# rm -rf /tmp/etcd && mkdir -p /tmp/etcd
# curl -L \
https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-arm64.tar.gz \
-o /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
# tar xzvf /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz \
-C /tmp/etcd --strip-components=1
# cp /tmp/etcd/etcd /usr/bin/etcd
# cp /tmp/etcd/etcdctl /usr/bin/etcdctl
2. Create and edit the /etc/etcd/etcd.conf.yml file and set the initial-cluster, initial-advertise-peer-urls, advertise-client-urls, listen-client-urls to the management IP address of the controller node to enable access by other nodes via the management network:
name: controller
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster-01'
initial-cluster: controller=http://10.0.0.11:2380
initial-advertise-peer-urls: http://10.0.0.11:2380
advertise-client-urls: http://10.0.0.11:2379
listen-peer-urls: http://0.0.0.0:2380
listen-client-urls: http://10.0.0.11:2379
3. Create and edit the /usr/lib/systemd/system/etcd.service file:
[Unit]
After=network.target
Description=etcd - highly-available key value store
[Service]
# Uncomment this on ARM64.
# Environment="ETCD_UNSUPPORTED_ARCH=arm64"
LimitNOFILE=65536
Restart=on-failure
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yml
User=etcd
[Install]
WantedBy=multi-user.target
Reload systemd service files with:
# systemctl daemon-reload
Finalize installation
1. Enable and start the etcd service:
# systemctl enable etcd
# systemctl start etcd
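As an optional check, you can query the health of the endpoint with etcdctl; the ETCDCTL_API variable selects the v3 client in this etcd release (a sketch only):
# ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.11:2379 endpoint health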