# vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "clincha/proxmox-ve-8"
end
Oracle VM environment setup
Install Proxmox
# Create the vagrantfile
# C:\Users\shim>type vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "clincha/proxmox-ve-8"
end
# Run vagrant up
# C:\Users\shim>vagrant up
# Connect with vagrant ssh
# C:\Users\shim>vagrant ssh
Linux pve 6.8.4-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.4-3 (2024-05-02T11:55Z) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jun 3 17:17:39 2024 from 10.0.2.2
# Log in as root with su; the initial password is vagrant
# vagrant@pve:~$ su - root
Password:
# The network interface enp0s3 is configured by default
# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:ee:b0:b6 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
valid_lft 84923sec preferred_lft 84923sec
inet6 fe80::f98b:8517:b612:9da8/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# Set enp0s8 to an IP in the Oracle VM host-only network range (this server uses 192.168.56.22)
# ip addr show enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1a:4c:96 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.22/24 brd 192.168.56.255 scope global noprefixroute enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::9a38:2f27:239e:c4c8/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# The Proxmox web UI daemon (pveproxy) listens on port 8006
# netstat -ntpa |grep LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/init
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1437/sshd: /usr/sbi
tcp 0 0 127.0.0.1:85 0.0.0.0:* LISTEN 1672/pvedaemon
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1613/master
tcp6 0 0 ::1:25 :::* LISTEN 1613/master
tcp6 0 0 :::3128 :::* LISTEN 1692/spiceproxy
tcp6 0 0 :::111 :::* LISTEN 1/init
tcp6 0 0 :::22 :::* LISTEN 1437/sshd: /usr/sbi
tcp6 0 0 :::8006 :::* LISTEN 1685/pveproxy
root@pve:/etc#
Adding/modifying the enp0s8 network IP
# Install nmtui (part of network-manager) with apt-get
# apt-get install network-manager
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
network-manager is already the newest version (1.42.4-1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
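# As an alternative to the nmtui UI, the same static address could be applied with nmcli.
# A sketch, assuming the connection profile is named enp0s8:
# nmcli connection modify enp0s8 ipv4.method manual ipv4.addresses 192.168.56.22/24
# nmcli connection up enp0s8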
# mkdir /tmp/img/
# cd /tmp/img/
# wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
or
# wget https://cloud.centos.org/centos/8/vagrant/x86_64/images/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2
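# Optionally inspect the downloaded image before customizing it (a quick check; qemu-img ships with qemu-utils):
# qemu-img info focal-server-cloudimg-amd64.img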
Changing the image password
After downloading the image to the server:
Ubuntu cloud images have no default username/password, so before creating an instance from the image you must configure one using the command below.
To get the virt-customize command, install the package below.
# sudo apt install libguestfs-tools
# virt-customize -a focal-server-cloudimg-amd64.img --root-password password:openstack
[ 0.0] Examining the guest ...
[ 83.3] Setting a random seed
virt-customize: warning: random seed could not be set for this type of guest
[ 83.6] Setting the machine ID in /etc/machine-id
[ 83.7] Setting passwords
[ 93.7] Finishing off
or
# virt-customize -a CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 --root-password password:openstack
[ 0.0] Examining the guest ...
[ 18.3] Setting a random seed
[ 18.5] Setting the machine ID in /etc/machine-id
[ 18.6] Setting passwords
[ 26.2] Finishing off
The OpenStack documentation uses several typesetting conventions.
1.1 Notices
Notices take these forms:
Note: A comment with additional information that explains a part of the text.
Important: Something you must be aware of before proceeding.
Tip: An extra but helpful piece of practical advice.
Caution: Helpful information that prevents the user from making mistakes.
Warning: Critical information about the risk of data loss or security issues.
1.2 Command prompts
$ command
Any user, including the root user, can run commands that are prefixed with the $ prompt.
# command
The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.
CHAPTER TWO
2.1 Abstract
The OpenStack system consists of several key services that are separately installed. These services work together depending on your cloud needs and include the Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry, Orchestration, and Database services. You can install any of these projects separately and configure them stand-alone or as connected entities. Explanations of configuration options and sample configuration files are included. This guide documents the installation of OpenStack starting with the Pike release. It covers multiple releases.
Warning: This guide is a work-in-progress and is subject to frequent updates. Pre-release packages have been used for testing, and some instructions may not work with final versions. Please help us make this guide better by reporting any errors you encounter.
2.2 Operating systems
Currently, this guide describes OpenStack installation for the following Linux distributions:
openSUSE and SUSE Linux Enterprise Server
You can install OpenStack by using packages on openSUSE Leap 42.3, openSUSE Leap 15, SUSE Linux Enterprise Server 12 SP4, SUSE Linux Enterprise Server 15 through the Open Build Service Cloud repository.
Red Hat Enterprise Linux and CentOS
You can install OpenStack by using packages available on both Red Hat Enterprise Linux 7 and 8 and their derivatives through the RDO repository.
Note: OpenStack Wallaby is available for CentOS Stream 8. OpenStack Ussuri and Victoria are available for both CentOS 8 and RHEL 8. OpenStack Train and earlier are available on both CentOS 7 and RHEL 7.
Ubuntu
You can walk through an installation by using packages available through Canonical's Ubuntu Cloud Archive repository for Ubuntu 16.04+ (LTS).
Note: The Ubuntu Cloud Archive pockets for Pike and Queens provide OpenStack packages for Ubuntu 16.04 LTS; OpenStack Queens is installable directly on Ubuntu 18.04 LTS; the Ubuntu Cloud Archive pockets for Rocky and Stein provide OpenStack packages for Ubuntu 18.04 LTS; the Ubuntu Cloud Archive pocket for Victoria provides OpenStack packages for Ubuntu 20.04 LTS.
CHAPTER THREE
GET STARTED WITH OPENSTACK
The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services. Each service offers an Application Programming Interface (API) that facilitates this integration. Depending on your needs, you can install some or all services.
3.1 The OpenStack services
The OpenStack project navigator lets you browse the OpenStack services that make up the OpenStack architecture. The services are categorized per the service type and release series.
3.2 The OpenStack architecture
The following sections describe the OpenStack architecture in more detail:
3.2.1 Conceptual architecture
The following diagram shows the relationships among the OpenStack services:
3.2.2 Logical architecture
To design, deploy, and configure OpenStack, administrators must understand the logical architecture.
As shown in Conceptual architecture, OpenStack consists of several independent parts, named the OpenStack services. All services authenticate through a common Identity service. Individual services interact with each other through public APIs, except where privileged administrator commands are necessary.
Internally, OpenStack services are composed of several processes. All services have at least one API process, which listens for API requests, preprocesses them and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes.
For communication between the processes of one service, an AMQP message broker is used. The services' state is stored in a database. When deploying and configuring your OpenStack cloud, you can choose among several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.
Users can access OpenStack via the web-based user interface implemented by the Horizon Dashboard, via command-line clients and by issuing API requests through tools like browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.
The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:
CHAPTER FOUR
OVERVIEW
The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration.
This guide covers step-by-step deployment of the major OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience. This guide is not intended to be used for production system installations, but to create a minimum proof-of-concept for the purpose of learning about OpenStack.
After becoming familiar with basic installation, configuration, operation, and troubleshooting of these OpenStack services, you should consider the following steps toward deployment using a production architecture:
Determine and implement the necessary core and optional services to meet performance and redundancy requirements.
Increase security using methods such as firewalls, encryption, and service policies.
Use a deployment tool such as Ansible, Chef, Puppet, or Salt to automate deployment and management of the production environment. The OpenStack project has a couple of deployment projects with specific guides per version:
- 2023.2 (Bobcat) release
- 2023.1 (Antelope) release
- Zed release
- Yoga release
- Xena release
- Wallaby release
- Victoria release
- Ussuri release
- Train release
- Stein release
4.1 Example architecture
The example architecture requires at least two nodes (hosts) to launch a basic virtual machine or instance. Optional services such as Block Storage and Object Storage require additional nodes.
Important: The example architecture used in this guide is a minimum configuration, and is not intended for production system installations. It is designed to provide a minimum proof-of-concept for the purpose of learning about OpenStack. For information on creating architectures for specific use cases, or how to determine which architecture is required, see the Architecture Design Guide.
This example architecture differs from a minimal production architecture as follows:
Networking agents reside on the controller node instead of one or more dedicated network nodes.
Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network.
For more information on production architectures for Pike, see the Architecture Design Guide, OpenStack Networking Guide for Pike, and OpenStack Administrator Guides for Pike.
For more information on production architectures for Queens, see the Architecture Design Guide, OpenStack Networking Guide for Queens, and OpenStack Administrator Guides for Queens.
For more information on production architectures for Rocky, see the Architecture Design Guide, OpenStack Networking Guide for Rocky, and OpenStack Administrator Guides for Rocky.
4.1.1 Controller
The controller node runs the Identity service, Image service, Placement service, management portions of Compute, management portion of Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.
Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services.
The controller node requires a minimum of two network interfaces.
4.1.2 Compute
The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups. You can deploy more than one compute node. Each node requires a minimum of two network interfaces.
4.1.3 Block Storage
The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. You can deploy more than one block storage node. Each node requires a minimum of one network interface.
4.1.4 Object Storage
The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. This service requires two nodes. Each node requires a minimum of one network interface. You can deploy more than two object storage nodes.
4.2 Networking
Choose one of the following virtual networking options.
4.2.1 Networking Option 1: Provider networks
The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.
The OpenStack user requires more information about the underlying network infrastructure to create a virtual network to exactly match the infrastructure.
Warning: This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option below if you desire these features.
4.2.2 Networking Option 2: Self-service networks
The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT. Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS.
The OpenStack user can create virtual networks without the knowledge of underlying infrastructure on the data network. This can also include VLAN networks if the layer-2 plug-in is configured accordingly.
CHAPTER FIVE
ENVIRONMENT
This section explains how to configure the controller node and one compute node using the example architecture.
Although most environments include Identity, Image service, Compute, at least one networking service, and the Dashboard, the Object Storage service can operate independently. If your use case only involves Object Storage, you can skip to
Object Storage Installation Guide for 2023.2 (Bobcat)
Object Storage Installation Guide for 2023.1 (Antelope)
Object Storage Installation Guide for Zed
Object Storage Installation Guide for Yoga
Object Storage Installation Guide for Stein
after configuring the appropriate nodes for it.
You must use an account with administrative privileges to configure each node. Either run the commands as the root user or configure the sudo utility.
Note: The systemctl enable call on openSUSE outputs a warning message when the service uses SysV Init scripts instead of native systemd files. This warning can be ignored.
For best performance, we recommend that your environment meets or exceeds the hardware requirements in Hardware requirements.
The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:
As the number of OpenStack services and virtual machines increase, so do the hardware requirements for the best performance. If performance degrades after enabling additional services or virtual machines, consider adding hardware resources to your environment.
To minimize clutter and provide more resources for OpenStack, we recommend a minimal installation of your Linux distribution. Also, you must install a 64-bit version of your distribution on each node.
A single disk partition on each node works for most basic installations. However, you should consider Logical Volume Manager (LVM) for installations with optional services such as Block Storage.
For first-time installation and testing purposes, many users select to build each host as a virtual machine (VM). The primary benefits of VMs include the following:
One physical server can support multiple nodes, each with almost any number of network interfaces.
Ability to take periodic snapshots throughout the installation process and roll back to a working configuration in the event of a problem.
However, VMs will reduce performance of your instances, particularly if your hypervisor and/or processor lacks support for hardware acceleration of nested VMs.
Note: If you choose to install on VMs, make sure your hypervisor provides a way to disable MAC address filtering on the provider network interface.
For more information about system requirements, see the OpenStack 2023.2 (Bobcat) Administrator Guides, the OpenStack 2023.1 (Antelope) Administrator Guides, the OpenStack Zed Administrator Guides, the OpenStack Yoga Administrator Guides, or the OpenStack Stein Administrator Guides.
5.1 Security
OpenStack services support various security methods including password, policy, and encryption. Additionally, supporting services including the database server and message broker support password security.
To ease the installation process, this guide only covers password security where applicable. You can create secure passwords manually, but the database connection string in a service's configuration file cannot accept special characters like @. We recommend you generate them using a tool such as pwgen, or by running the following command:
$ openssl rand -hex 10
For OpenStack services, this guide uses SERVICE_PASS to reference service account passwords and SERVICE_DBPASS to reference database passwords.
The following table provides a list of services that require passwords and their associated references in the guide.
OpenStack and supporting services require administrative privileges during installation and operation. In some cases, services perform modifications to the host that can interfere with deployment automation tools such as Ansible, Chef, and Puppet. For example, some OpenStack services add a root wrapper to sudo that can interfere with security policies. See the Compute service documentation for Pike, the Compute service documentation for Queens, or the Compute service documentation for Rocky for more information.
The Networking service assumes default values for kernel network parameters and modifies firewall rules. To avoid most issues during your initial installation, we recommend using a stock deployment of a supported distribution on your hosts. However, if you choose to automate deployment of your hosts, review the configuration and policies applied to them before proceeding further.
5.2 Host networking
After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking on your distribution, see the documentation.
See also:
Ubuntu Network Configuration
RHEL 7 or RHEL 8 Network Configuration
SLES 12 or SLES 15 or openSUSE Network Configuration
All nodes require Internet access for administrative purposes such as package installation, security updates, DNS, and NTP. In most cases, nodes should obtain Internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that the physical network infrastructure provides Internet access via NAT or other methods. The example architectures use routable IP address space for the provider (external) network and assume that the physical network infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly to the provider network. In the self-service (private) networks architecture, instances can attach to a self-service or provider network. Self-service networks can reside entirely within OpenStack or provide some level of external network access using NAT through the provider network.
The example architectures assume use of the following networks:
Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.
Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular network infrastructure. Network interface names vary by distribution. Traditionally, interfaces use eth followed by a sequential number. To cover all variations, this guide refers to the first interface as the interface with the lowest number and the second interface as the interface with the highest number.
Note: Ubuntu has changed the network interface naming concept. Refer to Changing Network Interfaces name in Ubuntu 16.04.
Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node.
Warning:Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures.
Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. Ubuntu does not. For more information about securing your environment, refer to the OpenStack Security Guide.
5.2.1 Controller node
Configure network interfaces
1. Configure the first interface as the management interface:
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.
For Ubuntu:
• Edit the /etc/network/interfaces file to contain the following:
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
For RHEL or CentOS:
Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:
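The snippet itself is not reproduced here; presumably, following the upstream install guide, it is essentially:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
The same snippet applies to the compute node's provider interface below.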
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.
5.2.2 Compute node
Configure network interfaces
1. Configure the first interface as the management interface:
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.
For Ubuntu:
Edit the /etc/network/interfaces file to contain the following:
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
For RHEL or CentOS:
Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.
5.2.3 Block storage node (Optional)
If you want to deploy the Block Storage service, configure one additional storage node.
Configure network interfaces
Configure the management interface:
– IP address: 10.0.0.41
– Network mask: 255.255.255.0 (or /24)
– Default gateway: 10.0.0.1
Configure name resolution
1. Set the hostname of the node to block1.
2. Edit the /etc/hosts file to contain the following:
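The entries themselves are omitted above; a sketch using the example architecture's addresses (controller 10.0.0.11, compute1 10.0.0.31, block1 10.0.0.41):
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1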
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.
3. Reboot the system to activate the changes.
5.2.4 Verify connectivity
We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.
1. From the controller node, test access to the Internet:
# ping -c 4 docs.openstack.org
PING files02.openstack.org (23.253.125.17) 56(84) bytes of data.
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=1 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=2 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=3 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=4 ttl=43 time=125 ms
--- files02.openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 125.192/125.282/125.399/0.441 ms
2. From the controller node, test access to the management interface on the compute node:
# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
3. From the compute node, test access to the Internet:
# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
4. From the compute node, test access to the management interface on the controller node:
# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. Ubuntu does not enable a restrictive firewall by default. For more information about securing your environment, refer to the OpenStack Security Guide.
5.3 Network Time Protocol (NTP)
To properly synchronize services among nodes, you can install Chrony, an implementation of NTP. We recommend that you configure the controller node to reference more accurate (lower stratum) servers and other nodes to reference the controller node.
5.3.1 Controller node
Perform these steps on the controller node.
Install and configure components
1. Install the packages:
For Ubuntu:
# apt install chrony
For RHEL or CentOS:
# yum install chrony
For SUSE:
# zypper install chrony
2. Edit the chrony.conf file and add, change, or remove the following keys as necessary for your environment.
For RHEL, CentOS, or SUSE, edit the /etc/chrony.conf file:
server NTP_SERVER iburst
For Ubuntu, edit the /etc/chrony/chrony.conf file:
server NTP_SERVER iburst
Replace NTP_SERVER with the hostname or IP address of a suitable more accurate (lower stratum) NTP server. The configuration supports multiple server keys.
Note: By default, the controller node synchronizes the time via a pool of public servers. However, you can optionally configure alternative servers such as those provided by your organization.
3. To enable other nodes to connect to the chrony daemon on the controller node, add this key to the same chrony.conf file mentioned above:
allow 10.0.0.0/24
If necessary, replace 10.0.0.0/24 with a description of your subnet.
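After editing chrony.conf, restart the NTP service so the change takes effect (a sketch; the service name differs by distribution):
# systemctl restart chronyd    # RHEL, CentOS, SUSE
# systemctl restart chrony     # Ubuntu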
We recommend that you verify NTP synchronization before proceeding further. Some nodes, particularly those that reference the controller node, can take several minutes to synchronize.
1. Run this command on the controller node:
# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2   7    12    137  -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2   6   177     46    +17us[  -23us] +/-   68ms
Contents in the Name/IP address column should indicate the hostname or IP address of one or more NTP servers. Contents in the MS column should indicate * for the server to which the NTP service is currently synchronized.
2. Run the same command on all other nodes:
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   9   377    421    +15us[  -87us] +/-   15ms
Contents in the Name/IP address column should indicate the hostname of the controller node.
5.4 OpenStack packages
Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.
Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.
Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.
Note: Disable or remove any automatic update services because they can impact your OpenStack environment.
5.4.1 OpenStack packages for SUSE
Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.
Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.
Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.
Note: Disable or remove any automatic update services because they can impact your OpenStack environment.
Enable the OpenStack repository
Enable the Open Build Service repositories based on your openSUSE or SLES version, and on the version of OpenStack you want to install:
Note: The openSUSE distribution uses the concept of patterns to represent collections of packages. If you selected Minimal Server Selection (Text Mode) during the initial installation, you may be presented with a dependency conflict when you attempt to install the OpenStack packages. To avoid this, remove the minimal_base-conflicts package:
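The removal command is omitted above; presumably, as in the upstream guide:
# zypper rm patterns-openSUSE-minimal_base-conflicts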
Note: If the upgrade process includes a new kernel, reboot your host to activate it.
2. Install the OpenStack client:
# zypper install python-openstackclient
5.4.2 OpenStack packages for RHEL and CentOS
Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.
Warning: Starting with the Ussuri release, you will need to use either CentOS 8 or RHEL 8. Previous OpenStack releases will need to use either CentOS 7 or RHEL 7. Instructions are included for both distributions and versions where different.
Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.
Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.
Note: Disable or remove any automatic update services because they can impact your OpenStack environment.
Prerequisites
Warning: We recommend disabling EPEL when using RDO packages due to updates in EPEL breaking backwards compatibility. Or, preferably pin package versions using the yum-versionlock plugin.
Note: The following steps apply to RHEL only. CentOS does not require these steps.
1. When using RHEL, it is assumed that you have registered your system using Red Hat Subscription Management and that you have the rhel-7-server-rpms or rhel-8-for-x86_64-baseos-rpms repository enabled by default depending on your version.
For more information on registering a RHEL 7 system, see the Red Hat Enterprise Linux 7 System Administrator's Guide.
2. In addition to rhel-7-server-rpms on a RHEL 7 system, you also need to have the rhel-7-server-optional-rpms, rhel-7-server-extras-rpms, and rhel-7-server-rh-common-rpms repositories enabled:
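The enablement command is omitted above; presumably, as in the upstream guide:
# subscription-manager repos --enable=rhel-7-server-optional-rpms \
  --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms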
For more information on registering a RHEL 8 system, see the Red Hat Enterprise Linux 8 Installation Guide.
In addition to rhel-8-for-x86_64-baseos-rpms on a RHEL 8 system, you also need to have the rhel-8-for-x86_64-appstream-rpms, rhel-8-for-x86_64-supplementary-rpms, and codeready-builder-for-rhel-8-x86_64-rpms repositories enabled:
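Again the command itself is omitted; presumably:
# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \
  --enable=rhel-8-for-x86_64-supplementary-rpms \
  --enable=codeready-builder-for-rhel-8-x86_64-rpms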
On CentOS, the extras repository provides the RPM that enables the OpenStack repository. CentOS includes the extras repository by default, so you can simply install the package to enable the OpenStack repository. For CentOS 8, you will also need to enable the PowerTools repository.
The RDO repository RPM installs the latest available OpenStack release.
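As a sketch (the release name in the package varies; Ussuri is shown here as an example):
# yum install centos-release-openstack-ussuri
# yum config-manager --set-enabled powertools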
Finalize the installation
5.4.3 OpenStack packages for Ubuntu
Ubuntu releases OpenStack with each Ubuntu release. Ubuntu LTS releases are provided every two years. OpenStack packages from interim releases of Ubuntu are made available to the prior Ubuntu LTS via the Ubuntu Cloud Archive.
Note: The archive enablement described here needs to be done on all nodes that run OpenStack services.
Archive Enablement
OpenStack 2023.2 Bobcat for Ubuntu 22.04 LTS:
# add-apt-repository cloud-archive:bobcat
OpenStack 2023.1 Antelope for Ubuntu 22.04 LTS:
# add-apt-repository cloud-archive:antelope
OpenStack Zed for Ubuntu 22.04 LTS:
# add-apt-repository cloud-archive:zed
OpenStack Yoga for Ubuntu 22.04 LTS:
OpenStack Yoga is available by default using Ubuntu 22.04 LTS.
OpenStack Yoga for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:yoga
OpenStack Xena for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:xena
OpenStack Wallaby for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:wallaby
OpenStack Victoria for Ubuntu 20.04 LTS:
# add-apt-repository cloud-archive:victoria
OpenStack Ussuri for Ubuntu 20.04 LTS:
OpenStack Ussuri is available by default using Ubuntu 20.04 LTS.
OpenStack Ussuri for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:ussuri
OpenStack Train for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:train
OpenStack Stein for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:stein
OpenStack Rocky for Ubuntu 18.04 LTS:
# add-apt-repository cloud-archive:rocky
OpenStack Queens for Ubuntu 18.04 LTS:
OpenStack Queens is available by default using Ubuntu 18.04 LTS.
5.5 SQL database
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
Note: If you see Too many connections or Too many open files error log messages on OpenStack services, verify that the maximum number of connections settings are properly applied to your environment. In MariaDB, you may also need to change the open_files_limit configuration.
5.5.1 SQL database for SUSE
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
2. Create and edit the /etc/my.cnf.d/openstack.cnf file and complete the following actions:
Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
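The configuration block itself is not reproduced above; the upstream guide's snippet is essentially the following (10.0.0.11 is the example architecture's management IP; substitute your own, e.g. 192.168.56.30 in this document's environment). The same block applies to the RHEL/CentOS and Ubuntu subsections below, in the file paths given there:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8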
2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
# mysql_secure_installation
5.5.2 SQL database for RHEL and CentOS
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
2. Create and edit the /etc/my.cnf.d/openstack.cnf file (backup existing configuration files in /etc/my.cnf.d/ if needed) and complete the following actions:
Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
# mysql_secure_installation
5.5.3 SQL database for Ubuntu
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
Note: As of Ubuntu 16.04, MariaDB was changed to use the unix_socket Authentication Plugin. Local authentication is now performed using the user credentials (UID), and password authentication is no longer used by default. This means that the root user no longer uses a password for local access to the server.
Note: As of Ubuntu 18.04, the mariadb-server package is no longer available from the default repository. To install successfully, enable the Universe repository on Ubuntu.
Install and configure components
1. Install the packages:
As of Ubuntu 20.04, install the packages:
# apt install mariadb-server python3-pymysql
As of Ubuntu 18.04 or 16.04, install the packages:
# apt install mariadb-server python-pymysql
2. Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file and complete the following actions:
Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
# mysql_secure_installation
5.6 Message queue
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.
The message queue runs on the controller node.
5.6.1 Message queue for SUSE
1. Install the package:
# zypper install rabbitmq-server
2. Start the message queue service and configure it to start when the system boots:
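The start commands are omitted above; presumably, as in the upstream guide:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
3. Add the openstack user: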
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.
4. Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
5.6.2 Message queue for RHEL and CentOS
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.
The message queue runs on the controller node.
Install and configure components
1. Install the package:
# yum install rabbitmq-server
2. Start the message queue service and configure it to start when the system boots:
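The start commands are omitted above; presumably, as in the upstream guide:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
3. Add the openstack user: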
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.
4. Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
5.6.3 Message queue for Ubuntu
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.
The message queue runs on the controller node.
Install and configure components
1. Install the package:
# apt install rabbitmq-server
2. Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.
3. Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
5.7 Memcached
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
5.7.1 Memcached for SUSE
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
5.7.2 Memcached for RHEL and CentOS
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
Install and configure components
1. Install the packages:
For CentOS 7 and RHEL 7:
# yum install memcached python-memcached
For CentOS 8 and RHEL 8:
# yum install memcached python3-memcached
2. Edit the /etc/sysconfig/memcached file and complete the following actions:
Configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
OPTIONS="-l 127.0.0.1,::1,controller"
Note: Change the existing line OPTIONS="-l 127.0.0.1,::1".
Finalize installation
Start the Memcached service and configure it to start when the system boots:
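The commands are omitted above; presumably:
# systemctl enable memcached.service
# systemctl start memcached.service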
5.7.3 Memcached for Ubuntu
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
Install and configure components
1. Install the packages:
For Ubuntu versions prior to 18.04 use:
# apt install memcached python-memcache
For Ubuntu 18.04 and newer versions use:
# apt install memcached python3-memcache
2. Edit the /etc/memcached.conf file and configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
-l 10.0.0.11
Note: Change the existing line that had -l 127.0.0.1.
Finalize installation
Restart the Memcached service:
# service memcached restart
5.8 Etcd
OpenStack services may use Etcd, a distributed reliable key-value store, for distributed key locking, storing configuration, keeping track of service liveness, and other scenarios.
5.8.1 Etcd for SUSE
Right now, there is no distro package available for etcd3. This guide uses the tarball installation as a workaround until proper distro packages are available.
2. Create and edit the /etc/etcd/etcd.conf.yml file and set the initial-cluster, initial-advertise-peer-urls, advertise-client-urls, listen-client-urls to the management IP address of the controller node to enable access by other nodes via the management network:
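The file contents are not shown above; a sketch of just the keys named in that sentence, using the example architecture's management IP (the exact file layout may differ):
initial-cluster: controller=http://10.0.0.11:2380
initial-advertise-peer-urls: http://10.0.0.11:2380
advertise-client-urls: http://10.0.0.11:2379
listen-client-urls: http://10.0.0.11:2379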
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 441, in get
    return self._queues[msg_id].get(block=True, timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/eventlet/queue.py", line 322, in get
    return waiter.wait()
  File "/usr/local/lib/python3.10/dist-packages/eventlet/queue.py", line 141, in wait
    return get_hub().switch()
  File "/usr/local/lib/python3.10/dist-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/nova/nova/conductor/manager.py", line 1654, in schedule_and_build_instances
    host_lists = self._schedule_instances(context, request_specs[0],
  File "/opt/stack/nova/nova/conductor/manager.py", line 942, in _schedule_instances
    host_lists = self.query_client.select_destinations(
  File "/opt/stack/nova/nova/scheduler/client/query.py", line 41, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj,
  File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 160, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/rpc/client.py", line 190, in call
    result = self.transport._send(
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/transport.py", line 123, in _send
    return self._driver.send(target, ctxt, message,
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
    return self._send(target, ctxt, message, wait_for_reply, timeout,
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 678, in _send
    result = self._waiter.wait(msg_id, timeout,
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 567, in wait
    message = self.waiters.get(msg_id, timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 443, in get
    raise oslo_messaging.MessagingTimeout(
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID b6a61e8d51914d4db1f834e190f146ca
== What is the problem?
$ nova-status upgrade check
Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
+-------------------------------------------+
| Upgrade Check Results |
+-------------------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Older than N-1 computes |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: hw_machine_type unset |
| Result: Success |
| Details: None |
+-------------------------------------------+
| Check: Service User Token Configuration |
| Result: Success |
| Details: None |
+-------------------------------------------+
# mysql -u root -popenstack -h 192.168.56.30
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE USER 'keystone'@'localhost' IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.13 sec)
mysql> CREATE USER 'keystone'@'%' IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';
ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'keystone'
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
The error message "ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'keystone'" indicates that the 'root' user does not have the privileges required to grant access on the 'keystone' database to the specified user ('keystone'@'%').
To resolve this, make sure the 'root' user has the privileges needed to issue grants. Proceed as follows:
1. Log in to MySQL as the 'root' user:
# mysql -u root -p
2. Once logged in, grant the 'root' user the necessary privileges on the 'keystone' database:
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%'; FLUSH PRIVILEGES;
3. After granting the privileges, retry the original command:
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';
If the problem persists, check whether the 'root' user itself has the GRANT privilege. If it does not, you may need to connect to the MySQL server as a user that is allowed to grant privileges. In general, it is better practice to create a dedicated MySQL user for managing database access rather than relying on the 'root' user for day-to-day work.
If you do not use the 'root' user for database administration, remember to replace 'root' with your actual username.
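To confirm the grants took effect, the privileges can be listed (a quick check, not part of the transcript above):
mysql> SHOW GRANTS FOR 'keystone'@'%';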
# vi /etc/memcached.conf
# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.
# Default connection port is 11211
-p 11211
# Run the daemon as root. The start-memcached will default to running as root if no
# -u command is present in this config file
-u memcache
#-l 127.0.0.1
-l 192.168.56.30 <--- modified
# Check the service and restart it
# service --status-all
[ + ] memcached
# service memcached stop
# service memcached start
# service memcached status
● memcached.service - memcached daemon
Loaded: loaded (/lib/systemd/system/memcached.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2024-01-06 16:42:47 KST; 3s ago
Docs: man:memcached(1)
Main PID: 29408 (memcached)
Tasks: 10 (limit: 4537)
Memory: 4.5M
CGroup: /system.slice/memcached.service
└─29408 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 192.168.56.30 -P /var/run/memcached/memcached.pid
Jan 06 16:42:47 ubuntu.localdomain systemd[1]: Started memcached daemon.
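To confirm memcached is now bound to the management address rather than loopback (a quick check):
# netstat -ntlp | grep 11211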
RabbitMQ is open-source message broker software that facilitates communication between different software systems. It is part of the messaging middleware family and implements AMQP (Advanced Message Queuing Protocol). RabbitMQ lets applications exchange data and information in a distributed, scalable way.
The key features and concepts of RabbitMQ are:
Message broker: RabbitMQ acts as an intermediary, or message broker, between the various components of a distributed system. It receives messages from producers and delivers them to consumers.
Message queues: Messages are placed in queues, which act as buffers holding messages until a receiver consumes them. This decouples the producing and consuming components and allows asynchronous communication.
Exchanges: Producers send messages to exchanges, which route them to the appropriate queues according to routing rules defined by the exchange type. Common exchange types include direct, topic, fanout, and headers.
Bindings: Connections between exchanges and queues are established through bindings. A binding defines the routing key or criteria that determine how messages are routed from an exchange to a queue.
Publish/subscribe: RabbitMQ supports the publish/subscribe pattern, so multiple consumers can receive the same message. This is achieved with a fanout exchange, which broadcasts messages to all bound queues.
Message acknowledgments: RabbitMQ provides a mechanism for acknowledging receipt of messages. This ensures messages are processed reliably and prevents message loss.
Durability: RabbitMQ supports durable messages and queues, meaning messages and queues can survive broker restarts. This is important for maintaining message integrity and availability.
Clustering: RabbitMQ can be set up in a cluster configuration to improve availability and fault tolerance. Clustering lets multiple RabbitMQ nodes work together as a single logical broker.
Plugins and extensions: RabbitMQ can be extended with a variety of plugins that add capabilities such as message transformation and authentication/authorization mechanisms.
RabbitMQ is widely used across many industries to build scalable, robust distributed systems. It provides a reliable messaging infrastructure for applications with complex communication requirements, including microservice architectures and enterprise integration solutions.
# Check whether rabbitmq-server is already installed
# apt search rabbitmq-server
Sorting... Done
Full Text Search... Done
rabbitmq-server/jammy-updates,jammy-security,now 3.9.13-1ubuntu0.22.04.2 all [installed]
AMQP server written in Erlang
root@ubuntu:/#
or
# apt install rabbitmq-server
# Create the RabbitMQ account openstack (password openstack) and grant it permissions
# rabbitmqctl add_user openstack openstack
Adding user "openstack" ...
Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
# Enable the RabbitMQ management dashboard and restart the service
# rabbitmq-plugins enable rabbitmq_management
Enabling plugins on node rabbit@ubuntu:
rabbitmq_management
The following plugins have been configured:
rabbitmq_management
rabbitmq_management_agent
rabbitmq_web_dispatch
Applying plugin configuration to rabbit@ubuntu...
The following plugins have been enabled:
rabbitmq_management
rabbitmq_management_agent
rabbitmq_web_dispatch
started 3 plugins.
# service rabbitmq-server stop
# service rabbitmq-server start
# service rabbitmq-server status
● rabbitmq-server.service - RabbitMQ Messaging Server
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2024-01-06 14:30:57 KST; 2min 33s ago
Main PID: 18157 (beam.smp)
Tasks: 23 (limit: 4537)
Memory: 125.0M
CGroup: /system.slice/rabbitmq-server.service
├─18157 /usr/lib/erlang/erts-12.2.1/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30>
├─18168 erl_child_setup 65536
├─18214 inet_gethost 4
└─18215 inet_gethost 4
Jan 06 14:30:53 ubuntu.localdomain systemd[1]: Starting RabbitMQ Messaging Server...
Jan 06 14:30:57 ubuntu.localdomain systemd[1]: Started RabbitMQ Messaging Server.
# rabbitmqctl add_user test test
Adding user "test" ...
Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.
# rabbitmqctl set_user_tags test administrator
Setting tags for user "test" to [administrator] ...
# rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
Setting permissions for user "test" in vhost "/" ...
# su - stack
$ openstack user list
Missing value auth-url required for auth plugin password
# Log in to the OpenStack admin page (Horizon) and download the OpenStack RC file
- Click admin at the top right and choose the OpenStack RC File
- This downloads the admin-openrc.sh file
# Recreate the downloaded admin-openrc.sh script on the server as follows
# su - stack
$ cd /opt/stack/
$ vi admin-openrc.sh
--- edit the file as follows ---
#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://192.168.56.30/identity
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=249f6e9566fb44bbba10844ed6b7ca15
export OS_PROJECT_NAME="admin"
export OS_USER_DOMAIN_NAME="Default"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="default"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="admin"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# export OS_PASSWORD=openstack
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
- If the read/export prompt in the middle of the script does not take effect, comment those lines out and simply hard-code export OS_PASSWORD=openstack (see the sketch below)
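The RC file must be sourced into the current shell (running it as a child process would not export the variables); assuming the password openstack used throughout this install:
$ source admin-openrc.sh
Please enter your OpenStack Password for project admin as user admin:
$ openstack user list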
# mysql_upgrade -u root -p
The mysql_upgrade client is now deprecated. The actions executed by the upgrade client are now done by the server.
To upgrade, please start the new MySQL binary with the older data directory. Repairing user tables is done automatically. Restart is not required after upgrade.
The upgrade process automatically starts on running a new MySQL binary with an older data directory. To avoid accidental upgrades, please use the --upgrade=NONE option with the MySQL binary. The option --upgrade=FORCE is also provided to run the server upgrade sequence on demand.
It may be possible that the server upgrade fails due to a number of reasons. In that case, the upgrade sequence will run again during the next MySQL server start. If the server upgrade fails repeatedly, the server can be started with the --upgrade=MINIMAL option to start the server without executing the upgrade sequence, thus allowing users to manually rectify the problem.
root@ubuntu:~#
# mysql -u root -popenstack -h 192.168.56.30
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
# mysql -uroot -popenstack
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql>
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
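Note that on MySQL 8 the target account must already exist before GRANT will succeed (earlier versions created it implicitly); if the keystone database or user is missing, a sketch of the preliminary steps (the password openstack is an assumption matching this install):
mysql> CREATE DATABASE IF NOT EXISTS keystone;
mysql> CREATE USER IF NOT EXISTS 'keystone'@'%' IDENTIFIED BY 'openstack';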
# Alternatively, pass the password via the MYSQL_PWD environment variable (avoids the command-line warning)
# MYSQL_PWD="openstack" mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
# mysql_config_editor set --login-path=root --host=localhost --user=root --password --port=3306
Enter password:
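The credentials stored by mysql_config_editor can then be used without typing a password; a minimal sketch:
# mysql --login-path=root
# mysql_config_editor print --all <--- inspect the stored (obfuscated) entries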
# mysql -u root -popenstack -h 192.168.56.30
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
# MySQL configuration file: /etc/mysql/my.cnf
# cat /etc/mysql/my.cnf
#
# The MySQL database server configuration file.
#
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
[mysqld]
max_connections = 1024
default-storage-engine = InnoDB
sql_mode = TRADITIONAL
#bind-address = 0.0.0.0
bind-address = 192.168.56.30
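bind-address only takes effect after a restart; restart MySQL and confirm it is listening on the new address:
# service mysql restart
# ss -ntlp | grep 3306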
# Connect to MySQL by specifying the server IP
$ mysql -u root -popenstack -h 192.168.56.30
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
# Check MySQL status
mysql> status
--------------
mysql Ver 8.0.35-0ubuntu0.22.04.1 for Linux on x86_64 ((Ubuntu))
Connection id: 16
Current database:
Current user: root@192.168.56.30
SSL: Cipher in use is TLS_AES_256_GCM_SHA384
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Protocol version: 10
Connection: 192.168.56.30 via TCP/IP
Server characterset: utf8mb4
Db characterset: utf8mb4
Client characterset: utf8mb4
Conn. characterset: utf8mb4
TCP port: 3306
Binary data as: Hexadecimal
Uptime: 10 min 47 sec
Threads: 2 Questions: 16 Slow queries: 0 Opens: 122 Flush tables: 3 Open tables: 41 Queries per second avg: 0.024
--------------
# service --status-all
[ + ] mysql
[ + ] postgresql
# service mysql status
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2024-01-06 14:00:34 KST; 3h 12min ago
Process: 12768 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 12777 (mysqld)
Status: "Server is operational"
Tasks: 38 (limit: 4537)
Memory: 366.1M
CGroup: /system.slice/mysql.service
└─12777 /usr/sbin/mysqld
Vagrant.configure("2") do |config|
config.vm.box = "alvistack/ubuntu-22.04"
end
# For config.vm.box = "OOO" in the Vagrantfile, browse https://app.vagrantup.com and pick the OS you want to install - I used the Ubuntu 22.04 (jammy) image, the version recommended on the OpenStack homepage
# Selecting the alvistack/ubuntu-22.04 image shows you the Vagrantfile settings above
# After editing the file, run vagrant up (from a Windows CMD.exe prompt)
C:\HashiCorp>vagrant up
# Open Oracle VM VirtualBox and the new VM appears; once the install finishes, a VM named HashCorp_defaul...... is created
# Stop that VM and edit its settings (Settings - General - Basic - Name: alvistack-ubuntu-22.04; rename it to whatever is easiest for you, it has no effect on the system)
# System - Motherboard - Base Memory: change to 4096 MB or less to match your PC's specs (the default is 8012 MB)
# Edit the network adapters (see the earlier post on bridged vs NAT adapter modes and VM DHCP settings)
 o Adapter 1: Bridged Adapter (Promiscuous Mode: Allow All)
 o Adapter 2: Host-only Adapter (Promiscuous Mode: Allow All)
# Extended Vagrantfile usage (example)
# You can do the Oracle VM configuration in the Vagrantfile itself, but I found it easier to create the VM and adjust it by hand; an example of an extended Vagrantfile follows
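A minimal sketch of what such an extended Vagrantfile could look like (the name, memory, and host-only IP mirror the manual settings above and should be adapted to your environment):
Vagrant.configure("2") do |config|
  config.vm.box = "alvistack/ubuntu-22.04"
  # host-only network, matching the 192.168.56.x range used throughout
  config.vm.network "private_network", ip: "192.168.56.30"
  config.vm.provider "virtualbox" do |vb|
    vb.name = "alvistack-ubuntu-22.04"   # VM name shown in VirtualBox
    vb.memory = 4096                     # base memory in MB
    vb.cpus = 2
  end
end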
- Log in with root / vagrant (the default password of a Vagrant box image is vagrant)
- If root login is refused, log in as vagrant / vagrant and run $ sudo passwd root to set a root password
# A few things need to be fixed after logging in as root:
1. Configure the network IPs
2. SSH access (edit the sshd_config file so you can connect with putty.exe later)
3. Set the date (time configuration)
4. apt update
5. Disable the firewall (stop it for now, for the installation)
1. Network configuration
# vi /etc/netplan/00-installer-config.yaml (edit the file, then)
# netplan apply
# If the two commands above fail, skip to the next step (the network may already be configured)
- eth0: DHCP from your home (or cafe) router's address range (192.168.219.16 is assigned automatically by the router)
- eth1: the IP of the server inside Oracle VM (192.168.56.30, set by editing the netplan file; a sketch follows)
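A minimal sketch of the netplan file under these assumptions (the interface names eth0/eth1 and the addresses must match your own environment):
# /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true                       # router-assigned address
    eth1:
      dhcp4: false
      addresses: [192.168.56.30/24]     # host-only network IP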
2. SSH access (edit the sshd_config file so you can connect with putty.exe later)
# Edit the following entries in /etc/ssh/sshd_config
Change PermitRootLogin to yes
Change PasswordAuthentication to yes
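The changes only apply after the SSH daemon is restarted; on Ubuntu the service is named ssh:
# service ssh restart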
=================
Async summary
=================
Time spent in the background minus waits: 547 sec
Elapsed time: 2092 sec
Time if we did everything serially: 2639 sec
Speedup: 1.26147
Post-stack database query stats:
+------------+-----------+-------+
| db | op | count |
+------------+-----------+-------+
| keystone | SELECT | 46213 |
| keystone | INSERT | 93 |
| neutron | SELECT | 3917 |
| neutron | CREATE | 1 |
| neutron | SHOW | 4 |
| neutron | INSERT | 4111 |
| neutron | DELETE | 28 |
| neutron | UPDATE | 116 |
| placement | SELECT | 46 |
| placement | INSERT | 55 |
| placement | SET | 1 |
| nova_api | SELECT | 114 |
| nova_cell0 | SELECT | 75 |
| nova_cell1 | SELECT | 178 |
| nova_cell0 | INSERT | 5 |
| nova_cell0 | UPDATE | 6 |
| nova_cell1 | UPDATE | 42 |
| nova_cell1 | INSERT | 4 |
| cinder | SELECT | 121 |
| cinder | INSERT | 5 |
| placement | UPDATE | 3 |
| cinder | UPDATE | 3 |
| nova_api | INSERT | 20 |
| glance | SELECT | 47 |
| glance | INSERT | 6 |
| glance | UPDATE | 2 |
| cinder | DELETE | 1 |
| nova_api | SAVEPOINT | 10 |
| nova_api | RELEASE | 10 |
+------------+-----------+-------+
This is your host IP address: 192.168.56.30
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.56.30/dashboard
Keystone is serving at http://192.168.56.35/identity/
The default users are: admin and demo
The password: openstack
Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html
DevStack Version: 2023.2
Change: b082d3fed3fe05228dabaab31bff592dbbaccbd9 Make multiple attempts to download image 2023-12-12 08:07:39 +0000
OS Version: Ubuntu 22.04 jammy
# Installation log file attached
- The install failed several times, so I must have reinstalled about 20 times, changing the IP each time (192.168.56.30, 192.168.56.35, 192.168.56.36, 192.168.56.41, and so on); as a result the server IP and the IPs in the logs differ from place to place. (Compare them against your own IP settings.)