What is Proxmox?
A Debian-based, open-source Type 1 hypervisor OS. What sets it apart is that, in addition to QEMU/KVM-based VMs, it supports LXC containers (the technology Docker was originally built on).

* As of June 2024, with VMware acquired by Broadcom and its perpetual licenses replaced by subscriptions, many companies are evaluating alternatives: commercial options such as Nutanix (or, in Korea, Piolink's Popcorn), and open-source options such as OpenStack and Proxmox.

* So here we install and test the open-source Proxmox.

 

Test environment
On Windows, a pre-built Proxmox image from the Vagrant box catalog is installed on top of Oracle VM (VirtualBox).

 

# vagrantfile

Vagrant.configure("2") do |config|
  config.vm.box = "clincha/proxmox-ve-8"
end

 

  • Oracle VM configuration (screenshot)

 

Installing Proxmox

# Create the vagrantfile
# C:\Users\shim>type vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "clincha/proxmox-ve-8"
end
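The box boots with only a NAT interface. As an alternative to changing the enp0s8 address by hand later (see below), the host-only IP and VM sizing can be pinned in the Vagrantfile itself; a minimal sketch (the 192.168.56.22 address, memory, and CPU values are assumptions matching this test setup):

Vagrant.configure("2") do |config|
  config.vm.box = "clincha/proxmox-ve-8"
  # Host-only network; shows up as enp0s8 inside the guest (assumed name)
  config.vm.network "private_network", ip: "192.168.56.22"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096  # Proxmox needs a few GB of RAM to be usable
    vb.cpus   = 2
  end
end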

# Run vagrant up
# C:\Users\shim>vagrant up

# Connect with vagrant ssh
# C:\Users\shim>vagrant ssh
Linux pve 6.8.4-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.4-3 (2024-05-02T11:55Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jun  3 17:17:39 2024 from 10.0.2.2

# Switch to root with su; the initial password is vagrant
vagrant@pve:~$ su - root
Password:


# The enp0s3 interface is configured out of the box
# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ee:b0:b6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
       valid_lft 84923sec preferred_lft 84923sec
    inet6 fe80::f98b:8517:b612:9da8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

# Give enp0s8 an IP on the Oracle VM host-only network (this server uses 192.168.56.22)
# ip addr show enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:1a:4c:96 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.22/24 brd 192.168.56.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::9a38:2f27:239e:c4c8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
 
# The Proxmox web daemon (pveproxy) listens on port 8006
# netstat -ntpa | grep LISTEN
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/init
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1437/sshd: /usr/sbi
tcp        0      0 127.0.0.1:85            0.0.0.0:*               LISTEN      1672/pvedaemon
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1613/master
tcp6       0      0 ::1:25                  :::*                    LISTEN      1613/master
tcp6       0      0 :::3128                 :::*                    LISTEN      1692/spiceproxy
tcp6       0      0 :::111                  :::*                    LISTEN      1/init
tcp6       0      0 :::22                   :::*                    LISTEN      1437/sshd: /usr/sbi
tcp6       0      0 :::8006                 :::*                    LISTEN      1685/pveproxy
root@pve:/etc#
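Before opening a browser, the web UI can be checked from the shell (on the VM or from the host); -k skips the self-signed certificate check, and a 200 status code means pveproxy is answering:

# curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.56.22:8006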

 

Adding or changing the enp0s8 IP
# Install nmtui via apt-get (it is part of network-manager)
# apt-get install network-manager
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
network-manager is already the newest version (1.42.4-1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
  • Run nmtui and change the enp0s8 IP (or edit the interfaces file directly; see the sketch below)
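Proxmox VE is Debian-based and normally manages networking through /etc/network/interfaces (with ifupdown2), so the same change can also be made without NetworkManager; a sketch assuming the interface name and address from this setup:

# cat >> /etc/network/interfaces <<'EOF'
auto enp0s8
iface enp0s8 inet static
        address 192.168.56.22/24
EOF
# ifreload -a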

 

Accessing the Proxmox web UI
  • Connect to https://192.168.56.22:8006
  • Initial login: root / vagrant

 

  • Screen after login (screenshot)

 

Installation complete.

 

Proxmox manuals
  • Proxmox 7 (English)

pve-admin-guide-7.pdf
4.57MB

  • Proxmox 8 (English)

pve-admin-guide-8.2.pdf
5.23MB

 

 

 

Next up: creating and testing a VM (in preparation)


 

# OpenStack releases

 

https://releases.openstack.org/#release-series

 

OpenStack Releases (releases.openstack.org): OpenStack is developed and released in roughly 6-month cycles. After the initial release, additional stable point releases are published in each release series; see the series pages for details.

 

# OpenStack trivia

 

 - A new release ships every 6 months; (fun fact) release names proceed alphabetically: A, B, C, D, E, and so on.


 

 

 

 

Image download site

 

 

https://cloud-images.ubuntu.com/focal/

 


 

 focal-server-cloudimg-amd64.img    (QCOW2 file)

 

 

 

Downloading the image

 

# mkdir /tmp/img/
# cd /tmp/img/
# wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img

or

# wget https://cloud.centos.org/centos/8/vagrant/x86_64/images/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2
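Before customizing it, the download can be sanity-checked with qemu-img (from the qemu-utils package), which should report the file format as qcow2:

# qemu-img info focal-server-cloudimg-amd64.img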

 

Changing the image password

 

After downloading the image to the server:

 

Ubuntu cloud images ship with no default username/password, so before creating an instance from the image you must configure one using the command below.

 

The virt-customize command comes from the package below, which must be installed first.

# sudo apt install libguestfs-tools

 

# virt-customize -a focal-server-cloudimg-amd64.img --root-password password:openstack
[   0.0] Examining the guest ...
[  83.3] Setting a random seed
virt-customize: warning: random seed could not be set for this type of guest
[  83.6] Setting the machine ID in /etc/machine-id
[  83.7] Setting passwords
[  93.7] Finishing off

or 

# virt-customize -a CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 --root-password password:openstack
[   0.0] Examining the guest ...
[  18.3] Setting a random seed
[  18.5] Setting the machine ID in /etc/machine-id
[  18.6] Setting passwords
[  26.2] Finishing off
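virt-customize can apply several changes in one pass; for example, an SSH public key and a timezone can be injected alongside the password (the key path and timezone below are illustrative assumptions):

# virt-customize -a focal-server-cloudimg-amd64.img \
    --timezone Asia/Seoul \
    --ssh-inject root:file:/root/.ssh/id_rsa.pub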

 

# Creating the image (run as the stack user)

  - If the openstack command does not work, see https://hwpform.tistory.com/90

$ openstack image create "ubuntu" --file /tmp/focal-server-cloudimg-amd64.img --disk-format qcow2 --container-format bare

$ -- output
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                       |
| created_at       | 2024-01-13T08:26:02Z                                                                                                                       |
| disk_format      | qcow2                                                                                                                                      |
| file             | /v2/images/9a95f850-fc58-44f4-bbb7-719338ea6dd9/file                                                                                       |
| id               | 9a95f850-fc58-44f4-bbb7-719338ea6dd9                                                                                                       |
| min_disk         | 0                                                                                                                                          |
| min_ram          | 0                                                                                                                                          |
| name             | ubuntu                                                                                                                                     |
| owner            | 9ff989aca2474d0c8a484165b77ac4d3                                                                                                           |
| properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/ubuntu', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                          |
| status           | queued                                                                                                                                     |
| tags             |                                                                                                                                            |
| updated_at       | 2024-01-13T08:26:02Z                                                                                                                       |
| visibility       | shared                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+

or

stack@ubuntu:/tmp$ openstack image create "centos8" --file /tmp/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 --disk-format qcow2 --container-format bare
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                       |
| created_at       | 2024-01-21T06:33:42Z                                                                                                                       |
| disk_format      | qcow2                                                                                                                                      |
| file             | /v2/images/3f049468-10b7-41bf-b5b1-14476d546d52/file                                                                                       |
| id               | 3f049468-10b7-41bf-b5b1-14476d546d52                                                                                                       |
| min_disk         | 0                                                                                                                                          |
| min_ram          | 0                                                                                                                                          |
| name             | ubuntu                                                                                                                                     |
| owner            | 8d5e4ccbae274d74b0ba81a1598a0921                                                                                                           |
| properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/ubuntu', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                          |
| status           | queued                                                                                                                                     |
| tags             |                                                                                                                                            |
| updated_at       | 2024-01-21T06:33:42Z                                                                                                                       |
| visibility       | shared                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
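The upload can also be confirmed from the CLI; once Glance has stored the file, the status field changes from queued to active:

$ openstack image list
$ openstack image show ubuntu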

 

- The result looks like the following (screenshot)

 

 

 

 

- In the web UI, the newly created image appears as follows (screenshot)

 


https://docs.openstack.org/install-guide/

 

 


 

# Original PDF

 

InstallGuide.pdf
1.50MB

 

OpenStack contributors

 

Jan 04, 2024

 

 


 

 

 

CHAPTER
ONE

 

CONVENTIONS

 

The OpenStack documentation uses several typesetting conventions.

 

1.1 Notices

 

Notices take these forms:

 

Note: A comment with additional information that explains a part of the text.

 

Important: Something you must be aware of before proceeding.

 

Tip: An extra but helpful piece of practical advice.

 

Caution: Helpful information that prevents the user from making mistakes.

 

Warning: Critical information about the risk of data loss or security issues.

 

1.2 Command prompts

$ command

 

Any user, including the root user, can run commands that are prefixed with the $ prompt.

# command

 

The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.

 

 

 

CHAPTER
TWO

 

2.1 Abstract

 

The OpenStack system consists of several key services that are separately installed. These services work together depending on your cloud needs and include the Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry, Orchestration, and Database services. You can install any of these projects separately and configure them stand-alone or as connected entities. Explanations of configuration options and sample configuration files are included. This guide documents the installation of OpenStack starting with the Pike release. It covers multiple releases.

 

Warning: This guide is a work-in-progress and is subject to updates frequently. Pre-release packages have been used for testing, and some instructions may not work with final versions. Please help us make this guide better by reporting any errors you encounter.

 

 

2.2 Operating systems

 

Currently, this guide describes OpenStack installation for the following Linux distributions:

 

openSUSE and SUSE Linux Enterprise Server

You can install OpenStack by using packages on openSUSE Leap 42.3, openSUSE Leap 15, SUSE Linux Enterprise Server 12 SP4, SUSE Linux Enterprise Server 15 through the Open Build Service Cloud repository.

 

Red Hat Enterprise Linux and CentOS

You can install OpenStack by using packages available on both Red Hat Enterprise Linux 7 and 8 and their derivatives through the RDO repository.

Note: OpenStack Wallaby is available for CentOS Stream 8. OpenStack Ussuri and Victoria are available for both CentOS 8 and RHEL 8. OpenStack Train and earlier are available on both CentOS 7 and RHEL 7.

 

Ubuntu

You can walk through an installation by using packages available through Canonical's Ubuntu Cloud Archive repository for Ubuntu 16.04+ (LTS).

Note: The Ubuntu Cloud Archive pockets for Pike and Queens provide OpenStack packages for Ubuntu 16.04 LTS; OpenStack Queens is installable directly on Ubuntu 18.04 LTS; the Ubuntu Cloud Archive pockets for Rocky and Stein provide OpenStack packages for Ubuntu 18.04 LTS; the Ubuntu Cloud Archive pocket for Victoria provides OpenStack packages for Ubuntu 20.04 LTS.

 

 

CHAPTER
THREE

 

GET STARTED WITH OPENSTACK

 

The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project.

 

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services. Each service offers an Application Programming Interface (API) that facilitates this integration. Depending on your needs, you can install some or all services.

 

3.1 The OpenStack services

 

The OpenStack project navigator lets you browse the OpenStack services that make up the OpenStack architecture. The services are categorized per the service type and release series.

 

3.2 The OpenStack architecture

 

The following sections describe the OpenStack architecture in more detail:

 

 

3.2.1 Conceptual architecture

 

The following diagram shows the relationships among the OpenStack services:

 

3.2.2 Logical architecture

 

To design, deploy, and configure OpenStack, administrators must understand the logical architecture.

 

As shown in Conceptual architecture, OpenStack consists of several independent parts, named the OpenStack services. All services authenticate through a common Identity service. Individual services interact with each other through public APIs, except where privileged administrator commands are necessary.

 

Internally, OpenStack services are composed of several processes. All services have at least one API process, which listens for API requests, preprocesses them and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes.

 

For communication between the processes of one service, an AMQP message broker is used. The service's state is stored in a database. When deploying and configuring your OpenStack cloud, you can choose among several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.

 

Users can access OpenStack via the web-based user interface implemented by the Horizon Dashboard, via command-line clients and by issuing API requests through tools like browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.

 

The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:

 

 

CHAPTER
FOUR

 

OVERVIEW

 

The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project.

 

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration.

 

This guide covers step-by-step deployment of the major OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience. This guide is not intended to be used for production system installations, but to create a minimum proof-of-concept for the purpose of learning about OpenStack.

 

After becoming familiar with basic installation, configuration, operation, and troubleshooting of these OpenStack services, you should consider the following steps toward deployment using a production architecture:

 

  • Determine and implement the necessary core and optional services to meet performance and redundancy requirements.
  • Increase security using methods such as firewalls, encryption, and service policies.
  • Use a deployment tool such as Ansible, Chef, Puppet, or Salt to automate deployment and management of the production environment. The OpenStack project has a couple of deployment projects with specific guides per version: the 2023.2 (Bobcat), 2023.1 (Antelope), Zed, Yoga, Xena, Wallaby, Victoria, Ussuri, Train, and Stein releases.

 

4.1 Example architecture

 

The example architecture requires at least two nodes (hosts) to launch a basic virtual machine or instance. Optional services such as Block Storage and Object Storage require additional nodes.

Important: The example architecture used in this guide is a minimum configuration, and is not intended for production system installations. It is designed to provide a minimum proof-of-concept for the purpose of learning about OpenStack. For information on creating architectures for specific use cases, or how to determine which architecture is required, see the Architecture Design Guide.

 

This example architecture differs from a minimal production architecture as follows:

  • Networking agents reside on the controller node instead of one or more dedicated network nodes.
  • Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network.

For more information on production architectures for Pike, see the Architecture Design Guide, OpenStack Networking Guide for Pike, and OpenStack Administrator Guides for Pike.

 

For more information on production architectures for Queens, see the Architecture Design Guide, OpenStack Networking Guide for Queens, and OpenStack Administrator Guides for Queens.

 

For more information on production architectures for Rocky, see the Architecture Design Guide, OpenStack Networking Guide for Rocky, and OpenStack Administrator Guides for Rocky.

 

 

4.1.1 Controller

 

The controller node runs the Identity service, Image service, Placement service, management portions of Compute, management portion of Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.

 

Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services.

 

The controller node requires a minimum of two network interfaces.

 

4.1.2 Compute

The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups. You can deploy more than one compute node. Each node requires a minimum of two network interfaces.

 

4.1.3 Block Storage

The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. You can deploy more than one block storage node. Each node requires a minimum of one network interface.

 

4.1.4 Object Storage

The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. This service requires two nodes. Each node requires a minimum of one network interface. You can deploy more than two object storage nodes.

 

4.2 Networking

 

Choose one of the following virtual networking options.

 

4.2.1 Networking Option 1: Provider networks

The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.

 

The OpenStack user requires more information about the underlying network infrastructure to create a virtual network to exactly match the infrastructure.

Warning: This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option below if you desire these features.

 

 

 

4.2.2 Networking Option 2: Self-service networks

The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT. Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS.

 

The OpenStack user can create virtual networks without the knowledge of underlying infrastructure on the data network. This can also include VLAN networks if the layer-2 plug-in is configured accordingly.

 

 

 

CHAPTER
FIVE

ENVIRONMENT

 

This section explains how to configure the controller node and one compute node using the example architecture.

 

Although most environments include Identity, Image service, Compute, at least one networking service, and the Dashboard, the Object Storage service can operate independently. If your use case only involves Object Storage, you can skip to

  •  Object Storage Installation Guide for 2023.2 (Bobcat)
  •  Object Storage Installation Guide for 2023.1 (Antelope)
  •  Object Storage Installation Guide for Zed
  •  Object Storage Installation Guide for Yoga
  •  Object Storage Installation Guide for Stein

after configuring the appropriate nodes for it.

 

You must use an account with administrative privileges to configure each node. Either run the commands as the root user or configure the sudo utility.

Note: The systemctl enable call on openSUSE outputs a warning message when the service uses SysV Init scripts instead of native systemd files. This warning can be ignored.

 

For best performance, we recommend that your environment meets or exceeds the hardware requirements in Hardware requirements.

 

The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:

  • Controller Node: 1 processor, 4 GB memory, and 5 GB storage
  • Compute Node: 1 processor, 2 GB memory, and 10 GB storage

 

As the number of OpenStack services and virtual machines increase, so do the hardware requirements for the best performance. If performance degrades after enabling additional services or virtual machines, consider adding hardware resources to your environment.

 

To minimize clutter and provide more resources for OpenStack, we recommend a minimal installation of your Linux distribution. Also, you must install a 64-bit version of your distribution on each node.

 

A single disk partition on each node works for most basic installations. However, you should consider Logical Volume Manager (LVM) for installations with optional services such as Block Storage.

 

For first-time installation and testing purposes, many users select to build each host as a virtual machine (VM). The primary benefits of VMs include the following:

  • One physical server can support multiple nodes, each with almost any number of network interfaces. 
  • Ability to take periodic snapshots throughout the installation process and roll back to a working configuration in the event of a problem.

However, VMs will reduce performance of your instances, particularly if your hypervisor and/or processor lacks support for hardware acceleration of nested VMs.

 

Note: If you choose to install on VMs, make sure your hypervisor provides a way to disable MAC address filtering on the provider network interface.

 

For more information about system requirements, see the OpenStack 2023.2 (Bobcat) Administrator Guides, the OpenStack 2023.1 (Antelope) Administrator Guides, the OpenStack Zed Administrator Guides, the OpenStack Yoga Administrator Guides, or the OpenStack Stein Administrator Guides.

 

5.1 Security

 

OpenStack services support various security methods including password, policy, and encryption. Additionally, supporting services including the database server and message broker support password security.

 

To ease the installation process, this guide only covers password security where applicable. You can create secure passwords manually, but the database connection string in a service's configuration file cannot accept special characters like @. We recommend you generate them using a tool such as pwgen, or by running the following command:

$ openssl rand -hex 10

 

For OpenStack services, this guide uses SERVICE_PASS to reference service account passwords and SERVICE_DBPASS to reference database passwords.
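For example, the passwords can be generated up front and kept in shell variables while working through the guide (the variable names simply mirror the guide's placeholders):

$ ADMIN_PASS=$(openssl rand -hex 10)
$ RABBIT_PASS=$(openssl rand -hex 10)
$ echo "$RABBIT_PASS"    # note it down; it is needed again later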

 

The following table provides a list of services that require passwords and their associated references in the guide.

 

OpenStack and supporting services require administrative privileges during installation and operation. In some cases, services perform modifications to the host that can interfere with deployment automation tools such as Ansible, Chef, and Puppet. For example, some OpenStack services add a root wrapper to sudo that can interfere with security policies. See the Compute service documentation for Pike, the Compute service documentation for Queens, or the Compute service documentation for Rocky for more information.

 

The Networking service assumes default values for kernel network parameters and modifies firewall rules. To avoid most issues during your initial installation, we recommend using a stock deployment of a supported distribution on your hosts. However, if you choose to automate deployment of your hosts, review the configuration and policies applied to them before proceeding further.

 

5.2 Host networking

 

After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking on your distribution, see the documentation.

 

See also:

  • Ubuntu Network Configuration
  • RHEL 7 or RHEL 8 Network Configuration
  • SLES 12 or SLES 15 or openSUSE Network Configuration

All nodes require Internet access for administrative purposes such as package installation, security updates, DNS, and NTP. In most cases, nodes should obtain Internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that the physical network infrastructure provides Internet access via NAT or other methods. The example architectures use routable IP address space for the provider (external) network and assume that the physical network infrastructure provides direct Internet access.

 

In the provider networks architecture, all instances attach directly to the provider network. In the self-service (private) networks architecture, instances can attach to a self-service or provider network. Self-service networks can reside entirely within OpenStack or provide some level of external network access using NAT through the provider network.

 

 

The example architectures assume use of the following networks:

 

  • Management on 10.0.0.0/24 with gateway 10.0.0.1

This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

 

  • Provider on 203.0.113.0/24 with gateway 203.0.113.1

This network requires a gateway to provide Internet access to instances in your OpenStack environment.

 

You can modify these ranges and gateways to work with your particular network infrastructure. Network interface names vary by distribution. Traditionally, interfaces use eth followed by a sequential number. To cover all variations, this guide refers to the first interface as the interface with the lowest number and the second interface as the interface with the highest number.

Note: Ubuntu has changed the network interface naming concept. Refer to Changing Network Interfaces name Ubuntu 16.04.

 

Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node.

Warning: Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures.

 

Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. Ubuntu does not. For more information about securing your environment, refer to the OpenStack Security Guide.

 

 

5.2.1 Controller node

 

Configure network interfaces

 

1. Configure the first interface as the management interface:

 

IP address: 10.0.0.11

Network mask: 255.255.255.0 (or /24)

Default gateway: 10.0.0.1

 

2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:

 

Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

 

For Ubuntu:

 

• Edit the /etc/network/interfaces file to contain the following:

# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

 

For RHEL or CentOS:

  • Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:

Do not change the HWADDR and UUID keys.

DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

 

For SUSE:

  • Edit the /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to contain the following:
STARTMODE='auto'
BOOTPROTO='static'

 

 

3. Reboot the system to activate the changes.

 

Configure name resolution

 

1. Set the hostname of the node to controller.

2. Edit the /etc/hosts file to contain the following:

# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2

 

Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

 

Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.

 

 

5.2.2 Compute node

 

Configure network interfaces

 

1. Configure the first interface as the management interface:

IP address: 10.0.0.31

Network mask: 255.255.255.0 (or /24)

Default gateway: 10.0.0.1

Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

 

2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:

Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

 

For Ubuntu:

  •  Edit the /etc/network/interfaces file to contain the following:
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

 

For RHEL or CentOS:

  • Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:

Do not change the HWADDR and UUID keys.

DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

 

For SUSE:

  • Edit the /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to contain the following:
STARTMODE='auto'
BOOTPROTO='static'

 

3. Reboot the system to activate the changes.

 

Configure name resolution

 

1. Set the hostname of the node to compute1.

2. Edit the /etc/hosts file to contain the following:

# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2

Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

 

Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.

 

5.2.3 Block storage node (Optional)

 

If you want to deploy the Block Storage service, configure one additional storage node.

 

Configure network interfaces

  • Configure the management interface:

– IP address: 10.0.0.41

– Network mask: 255.255.255.0 (or /24)

– Default gateway: 10.0.0.1

 

Configure name resolution

1. Set the hostname of the node to block1.

2. Edit the /etc/hosts file to contain the following:

# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2

 

Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

 

Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.

 

3. Reboot the system to activate the changes.

 

 

5.2.4 Verify connectivity

 

We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.

 

1. From the controller node, test access to the Internet:

# ping -c 4 docs.openstack.org
PING files02.openstack.org (23.253.125.17) 56(84) bytes of data.
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=1 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=2 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=3 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=4 ttl=43 time=125 ms
--- files02.openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 125.192/125.282/125.399/0.441 ms

 

2. From the controller node, test access to the management interface on the compute node:

# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

 

3. From the compute node, test access to the Internet:

# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

 

4. From the compute node, test access to the management interface on the controller node:

# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. For more information about securing your environment, refer to the OpenStack Security Guide.

Ubuntu does not enable a restrictive firewall by default. For more information about securing your environment, refer to the OpenStack Security Guide.

 

 

5.3 Network Time Protocol (NTP)

 

To properly synchronize services among nodes, you can install Chrony, an implementation of NTP. We recommend that you configure the controller node to reference more accurate (lower stratum) servers and other nodes to reference the controller node.

 

5.3.1 Controller node

 

Perform these steps on the controller node.

 

Install and configure components

 

1. Install the packages:

 

For Ubuntu:

# apt install chrony

 

For RHEL or CentOS:

# yum install chrony

 

For SUSE

# zypper install chrony

 

2. Edit the chrony.conf file and add, change, or remove the following keys as necessary for your environment.

 

For RHEL, CentOS, or SUSE, edit the /etc/chrony.conf file:

server NTP_SERVER iburst

 

For Ubuntu, edit the /etc/chrony/chrony.conf file:

server NTP_SERVER iburst

 

Replace NTP_SERVER with the hostname or IP address of a suitable, more accurate (lower stratum) NTP server. The configuration supports multiple server keys.

Note: By default, the controller node synchronizes the time via a pool of public servers. However, you can optionally configure alternative servers such as those provided by your organization.

 

3. To enable other nodes to connect to the chrony daemon on the controller node, add this key to the same chrony.conf file mentioned above:

allow 10.0.0.0/24

 

If necessary, replace 10.0.0.0/24 with a description of your subnet.
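Putting steps 2 and 3 together, the controller's chrony.conf ends up containing something like the following (NTP_SERVER is the placeholder from above):

server NTP_SERVER iburst
allow 10.0.0.0/24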

 

4. Restart the NTP service:

 

For Ubuntu:

# service chrony restart

 

For RHEL, CentOS, or SUSE:

# systemctl enable chronyd.service
# systemctl start chronyd.service

 

 

5.3.2 Other nodes

 

Other nodes reference the controller node for clock synchronization. Perform these steps on all other nodes.

 

Install and configure components

 

1. Install the packages.

 

For Ubuntu:

# apt install chrony

 

For RHEL or CentOS:

# yum install chrony

 

For SUSE:

# zypper install chrony

 

 

2. Configure the chrony.conf file and comment out or remove all but one server key. Change it to reference the controller node.

 

For RHEL, CentOS, or SUSE, edit the /etc/chrony.conf file:

server controller iburst

 

For Ubuntu, edit the /etc/chrony/chrony.conf file:

server controller iburst

 

3. Comment out the pool 2.debian.pool.ntp.org offline iburst line.

 

4. Restart the NTP service.

 

For Ubuntu:

# service chrony restart

 

For RHEL, CentOS, or SUSE:

# systemctl enable chronyd.service
# systemctl start chronyd.service

 

 

5.3.3 Verify operation

 

We recommend that you verify NTP synchronization before proceeding further. Some nodes, particularly those that reference the controller node, can take several minutes to synchronize.

 

1. Run this command on the controller node:

# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2   7    12    137  -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2   6   177     46    +17us[  -23us] +/-   68ms

 

Contents in the Name/IP address column should indicate the hostname or IP address of one or more NTP servers. Contents in the MS column should indicate * for the server to which the NTP service is currently synchronized.

 

2. Run the same command on all other nodes:

# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   9   377    421    +15us[  -87us] +/-   15ms

 

Contents in the Name/IP address column should indicate the hostname of the controller node.
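If a node lists the controller as a source but has not yet selected it with *, chronyc tracking shows the current offset, stratum, and reference ID, which helps distinguish "still converging" from "misconfigured":

# chronyc tracking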

 

 

5.4 OpenStack packages

 

Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.

Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.

 

Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.

 

Note: Disable or remove any automatic update services because they can impact your OpenStack environment.

 

5.4.1 OpenStack packages for SUSE

 

Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.

 

Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.

 

Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.

 

Note: Disable or remove any automatic update services because they can impact your OpenStack environment.

 

Enable the OpenStack repository

 

  •  Enable the Open Build Service repositories based on your openSUSE or SLES version, and on the version of OpenStack you want to install:

On openSUSE for OpenStack Ussuri:

# zypper addrepo -f obs://Cloud:OpenStack:Ussuri/openSUSE_Leap_15.1 Ussuri

 

On openSUSE for OpenStack Train:

# zypper addrepo -f obs://Cloud:OpenStack:Train/openSUSE_Leap_15.0 Train

 

On openSUSE for OpenStack Stein:

# zypper addrepo -f obs://Cloud:OpenStack:Stein/openSUSE_Leap_15.0 Stein

 

On openSUSE for OpenStack Rocky:

# zypper addrepo -f obs://Cloud:OpenStack:Rocky/openSUSE_Leap_15.0 Rocky

 

On openSUSE for OpenStack Queens:

# zypper addrepo -f obs://Cloud:OpenStack:Queens/openSUSE_Leap_42.3 Queens

 

On openSUSE for OpenStack Pike:

# zypper addrepo -f obs://Cloud:OpenStack:Pike/openSUSE_Leap_42.3 Pike

 

Note: The openSUSE distribution uses the concept of patterns to represent collections of packages. If you selected Minimal Server Selection (Text Mode) during the initial installation, you may be presented with a dependency conflict when you attempt to install the OpenStack packages. To avoid this, remove the minimal_base-conflicts package:

# zypper rm patterns-openSUSE-minimal_base-conflicts

 

On SLES for OpenStack Ussuri:

# zypper addrepo -f obs://Cloud:OpenStack:Ussuri/SLE_15_SP2 Ussuri

 

On SLES for OpenStack Train:

# zypper addrepo -f obs://Cloud:OpenStack:Train/SLE_15_SP1 Train

 

On SLES for OpenStack Stein:

# zypper addrepo -f obs://Cloud:OpenStack:Stein/SLE_15 Stein

 

On SLES for OpenStack Rocky:

# zypper addrepo -f obs://Cloud:OpenStack:Rocky/SLE_12_SP4 Rocky

 

On SLES for OpenStack Queens:

# zypper addrepo -f obs://Cloud:OpenStack:Queens/SLE_12_SP3 Queens

 

On SLES for OpenStack Pike:

# zypper addrepo -f obs://Cloud:OpenStack:Pike/SLE_12_SP3 Pike

Note: The packages are signed by GPG key D85F9316. You should verify the fingerprint of the imported GPG key before using it:

Key Name:        Cloud:OpenStack OBS Project <Cloud:OpenStack@build.opensuse.org>
Key Fingerprint: 35B34E18 ABC1076D 66D5A86B 893A90DA D85F9316
Key Created:     2015-12-16T16:48:37 CET
Key Expires:     2018-02-23T16:48:37 CET

 

Finalize the installation

 

1. Upgrade the packages on all nodes:

# zypper refresh && zypper dist-upgrade

Note: If the upgrade process includes a new kernel, reboot your host to activate it.

 

2. Install the OpenStack client:

# zypper install python-openstackclient

 

 

5.4.2 OpenStack packages for RHEL and CentOS

 

Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.

Warning: Starting with the Ussuri release, you will need to use either CentOS 8 or RHEL 8. Previous OpenStack releases will need to use either CentOS 7 or RHEL 7. Instructions are included for both distributions and versions where different.

Note: The setup of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.

Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.

Note: Disable or remove any automatic update services because they can impact your OpenStack environment.

 

Prerequisites

Warning: We recommend disabling EPEL when using RDO packages, due to updates in EPEL breaking backwards compatibility. Or, preferably, pin package versions using the yum-versionlock plugin.

Note: The following steps apply to RHEL only. CentOS does not require these steps.

 

1. When using RHEL, it is assumed that you have registered your system using Red Hat Subscription Management and that you have the rhel-7-server-rpms or rhel-8-for-x86_64-baseos-rpms repository enabled by default depending on your version.

For more information on registering a RHEL 7 system, see the Red Hat Enterprise Linux 7 System Administrator's Guide.

 

2. In addition to rhel-7-server-rpms on a RHEL 7 system, you also need to have the rhel-7-server-optional-rpms, rhel-7-server-extras-rpms, and rhel-7-server-rh-common-rpms repositories enabled:

# subscription-manager repos --enable=rhel-7-server-optional-rpms \
--enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms

 

For more information on registering a RHEL 8 system, see the Red Hat Enterprise Linux 8 Installation Guide.

 

In addition to rhel-8-for-x86_64-baseos-rpms on a RHEL 8 system, you also need to have the rhel-8-for-x86_64-appstream-rpms, rhel-8-for-x86_64-supplementary-rpms, and codeready-builder-for-rhel-8-x86_64-rpms repositories enabled:

# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \
--enable=rhel-8-for-x86_64-supplementary-rpms --enable=codeready-builder-for-rhel-8-x86_64-rpms

 

Enable the OpenStack repository

 

  • On CentOS, the extras repository provides the RPM that enables the OpenStack repository. CentOS includes the extras repository by default, so you can simply install the package to enable the OpenStack repository. For CentOS 8, you will also need to enable the PowerTools repository.

When installing the Victoria release, run:

# yum install centos-release-openstack-victoria
# yum config-manager --set-enabled powertools

 

When installing the Ussuri release, run:

# yum install centos-release-openstack-ussuri
# yum config-manager --set-enabled powertools

 

When installing the Train release, run:

# yum install centos-release-openstack-train

 

When installing the Stein release, run:

# yum install centos-release-openstack-stein

 

When installing the Rocky release, run:

# yum install centos-release-openstack-rocky

 

When installing the Queens release, run:

# yum install centos-release-openstack-queens

 

When installing the Pike release, run:

# yum install centos-release-openstack-pike

 

  • On RHEL, download and install the RDO repository RPM to enable the OpenStack repository.

On RHEL 7:
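Presumably the RHEL 7 equivalent of the RHEL 8 command below (this exact URL is an assumption; the original extract omits the command):

# yum install https://rdoproject.org/repos/rdo-release.rpm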

The RDO repository RPM installs the latest available OpenStack release.

 

On RHEL 8:

# dnf install https://www.rdoproject.org/repos/rdo-release.el8.rpm

 

The RDO repository RPM installs the latest available OpenStack release.

 

Finalize the installation
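
The concrete commands are not reproduced in this extract; by analogy with the SUSE and Ubuntu sections, finalizing presumably means upgrading the packages and installing the client (the package names below are assumptions based on RDO packaging):

# yum upgrade
# yum install python-openstackclient    # python3-openstackclient on CentOS/RHEL 8
# yum install openstack-selinux         # RHEL/CentOS enable SELinux by default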

 

 

 

5.4.3 OpenStack packages for Ubuntu

 

Ubuntu releases OpenStack with each Ubuntu release. Ubuntu LTS releases are provided every two years. OpenStack packages from interim releases of Ubuntu are made available to the prior Ubuntu LTS via the Ubuntu Cloud Archive.

Note: The archive enablement described here needs to be done on all nodes that run OpenStack services.

 

Archive Enablement OpenStack 2023.2 Bobcat for Ubuntu 22.04 LTS:

# add-apt-repository cloud-archive:bobcat

 

OpenStack 2023.1 Antelope for Ubuntu 22.04 LTS:

# add-apt-repository cloud-archive:antelope

 

OpenStack Zed for Ubuntu 22.04 LTS:

# add-apt-repository cloud-archive:zed

 

OpenStack Yoga for Ubuntu 22.04 LTS:

OpenStack Yoga is available by default using Ubuntu 22.04 LTS.

 

OpenStack Yoga for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:yoga

 

OpenStack Xena for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:xena

 

OpenStack Wallaby for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:wallaby

 

OpenStack Victoria for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:victoria

 

OpenStack Ussuri for Ubuntu 20.04 LTS:

OpenStack Ussuri is available by default using Ubuntu 20.04 LTS.

 

OpenStack Ussuri for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:ussuri

 

OpenStack Train for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:train

 

OpenStack Stein for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:stein

 

OpenStack Rocky for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:rocky

 

OpenStack Queens for Ubuntu 18.04 LTS:

OpenStack Queens is available by default using Ubuntu 18.04 LTS.

Note: For a full list of supported Ubuntu OpenStack releases, see Ubuntu OpenStack release cycle at https://www.ubuntu.com/about/release-cycle.

 

Sample Installation

# apt install nova-compute

 

Client Installation

# apt install python3-openstackclient
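A quick sanity check that the client landed (the version string printed varies with the release enabled above):

# openstack --version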

 

 

5.5 SQL database

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

Note: If you see Too many connections or Too many open files error messages in OpenStack service logs, verify that the maximum-connections settings are applied correctly to your environment. In MariaDB, you may also need to change the open_files_limit configuration.

 

 

5.5.1 SQL database for SUSE

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

 

Install and configure components

 

1. Install the packages:

# zypper install mariadb-client mariadb python-PyMySQL

 

2. Create and edit the /etc/my.cnf.d/openstack.cnf file and complete the following actions:

  • Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

 

Finalize installation

 

1. Start the database service and configure it to start when the system boots:

# systemctl enable mysql.service
# systemctl start mysql.service

 

2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

# mysql_secure_installation

 

5.5.2 SQL database for RHEL and CentOS

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

 

Install and configure components

 

1. Install the packages:

# yum install mariadb mariadb-server python2-PyMySQL

 

2. Create and edit the /etc/my.cnf.d/openstack.cnf file (backup existing configuration files in /etc/my.cnf.d/ if needed) and complete the following actions:

  •  Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

 

Finalize installation

 

1. Start the database service and configure it to start when the system boots:

# systemctl enable mariadb.service
# systemctl start mariadb.service

 

2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

# mysql_secure_installation

 

 

5.5.3 SQL database for Ubuntu

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

 

Note: As of Ubuntu 16.04, MariaDB was changed to use the unix_socket Authentication Plugin. Local authentication is now performed using the user credentials (UID), and password authentication is no longer used by default. This means that the root user no longer uses a password for local access to the server.
Note: As of Ubuntu 18.04, the mariadb-server package is no longer available from the default repository. To install successfully, enable the Universe repository on Ubuntu.

 

Install and configure components

 

1. Install the packages:

  •  As of Ubuntu 20.04, install the packages:
# apt install mariadb-server python3-pymysql

 

  •  As of Ubuntu 18.04 or 16.04, install the packages:
# apt install mariadb-server python-pymysql

 

2. Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file and complete the following actions:

  •  Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

 

Finalize installation

 

1. Restart the database service:

# service mysql restart

 

2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

# mysql_secure_installation

 

5.6 Message queue

 

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

 

The message queue runs on the controller node.

 

5.6.1 Message queue for SUSE

 

1. Install the package:

# zypper install rabbitmq-server

 

2. Start the message queue service and configure it to start when the system boots:

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

 

3. Add the openstack user:

# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...

Replace RABBIT_PASS with a suitable password.

 

4. Permit configuration, write, and read access for the openstack user:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...

 

5.6.2 Message queue for RHEL and CentOS

 

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

 

The message queue runs on the controller node.

 

Install and configure components

 

1. Install the package:

# yum install rabbitmq-server

 

2. Start the message queue service and configure it to start when the system boots:

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

 

3. Add the openstack user:

# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...

Replace RABBIT_PASS with a suitable password.

 

4. Permit configuration, write, and read access for the openstack user:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...

 

5.6.3 Message queue for Ubuntu

 

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

 

The message queue runs on the controller node.

 

Install and configure components

 

1. Install the package:

# apt install rabbitmq-server

 

2. Add the openstack user:

# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...

Replace RABBIT_PASS with a suitable password.

 

3. Permit configuration, write, and read access for the openstack user:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...
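
As a quick check (output formatting varies by RabbitMQ version), rabbitmqctl can list the account and its permissions:

# rabbitmqctl list_users
# rabbitmqctl list_permissions -p /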

 

5.7 Memcached

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

5.7.1 Memcached for SUSE

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

Install and configure components

 

1. Install the packages:

# zypper install memcached python-python-memcached

 

2. Edit the /etc/sysconfig/memcached file and complete the following actions:

  • Configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
MEMCACHED_PARAMS="-l 10.0.0.11"

 

Note: Change the existing line MEMCACHED_PARAMS="-l 127.0.0.1".

 

Finalize installation

  • Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service

 

 

5.7.2 Memcached for RHEL and CentOS

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

Install and configure components

 

1. Install the packages:

For CentOS 7 and RHEL 7

# yum install memcached python-memcached

 

For CentOS 8 and RHEL 8

# yum install memcached python3-memcached

 

2. Edit the /etc/sysconfig/memcached file and complete the following actions:

  • Configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
OPTIONS="-l 127.0.0.1,::1,controller"

 

Note: Change the existing line OPTIONS="-l 127.0.0.1,::1".

 

Finalize installation

 

  • Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service

 

5.7.3 Memcached for Ubuntu

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

Install and configure components

 

1. Install the packages:

For Ubuntu versions prior to 18.04 use:

# apt install memcached python-memcache

 

For Ubuntu 18.04 and newer versions use:

# apt install memcached python3-memcache

 

2. Edit the /etc/memcached.conf file and configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:

-l 10.0.0.11
Note: Change the existing line that had -l 127.0.0.1.

 

Finalize installation

  • Restart the Memcached service:
# service memcached restart
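
To verify that memcached is actually answering on the new address, one option is to query its stats over the plain text protocol; this sketch assumes netcat (nc) is installed and uses the management address from this guide:

# echo stats | nc -q 1 10.0.0.11 11211 | head -n 5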

 

 

5.8 Etcd

 

OpenStack services may use Etcd, a distributed reliable key-value store for distributed key locking, storing configuration, keeping track of service live-ness and other scenarios.

 

5.8.1 Etcd for SUSE

 

Right now, there is no distro package available for etcd3. This guide uses the tarball installation as a workaround until proper distro packages are available.

The etcd service runs on the controller node.

 

Install and configure components

 

1. Install etcd:

  • Create etcd user:
# groupadd --system etcd
# useradd --home-dir "/var/lib/etcd" \
--system \
--shell /bin/false \
-g etcd \
etcd

 

  • Create the necessary directories:
# mkdir -p /etc/etcd
# chown etcd:etcd /etc/etcd
# mkdir -p /var/lib/etcd
# chown etcd:etcd /var/lib/etcd

 

  •  Determine your system architecture:
# uname -m

 

  • Download and install the etcd tarball for x86_64/amd64:
# ETCD_VER=v3.2.7
# rm -rf /tmp/etcd && mkdir -p /tmp/etcd
# curl -L \
  https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz \
  -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
# tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz \
-C /tmp/etcd --strip-components=1
# cp /tmp/etcd/etcd /usr/bin/etcd
# cp /tmp/etcd/etcdctl /usr/bin/etcdctl

 

Or download and install the etcd tarball for arm64:

# ETCD_VER=v3.2.7
# rm -rf /tmp/etcd && mkdir -p /tmp/etcd
# curl -L \
  https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-arm64.tar.gz \
  -o /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
# tar xzvf /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz \
-C /tmp/etcd --strip-components=1
# cp /tmp/etcd/etcd /usr/bin/etcd
# cp /tmp/etcd/etcdctl /usr/bin/etcdctl

 

2. Create and edit the /etc/etcd/etcd.conf.yml file and set the initial-cluster, initial-advertise-peer-urls, advertise-client-urls, listen-client-urls to the management IP address of the controller node to enable access by other nodes via the management network:

name: controller
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster-01'
initial-cluster: controller=http://10.0.0.11:2380
initial-advertise-peer-urls: http://10.0.0.11:2380
advertise-client-urls: http://10.0.0.11:2379
listen-peer-urls: http://0.0.0.0:2380
listen-client-urls: http://10.0.0.11:2379

 

3. Create and edit the /usr/lib/systemd/system/etcd.service file:

[Unit]
After=network.target
Description=etcd - highly-available key value store
[Service]
# Uncomment this on ARM64.
# Environment="ETCD_UNSUPPORTED_ARCH=arm64"
LimitNOFILE=65536
Restart=on-failure
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yml
User=etcd
[Install]
WantedBy=multi-user.target

 

Reload systemd service files with:

# systemctl daemon-reload

 

Finalize installation

 

1. Enable and start the etcd service:

# systemctl enable etcd
# systemctl start etcd
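
2. As a quick sanity check (the key and value below are arbitrary examples), write and read a key through the etcd v3 API:

# ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.11:2379 put mykey "hello"
OK
# ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.11:2379 get mykey
mykey
hello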

 


 

# All images (qcow2)
https://docs.openstack.org/image-guide/obtain-images.html

 

# For Ubuntu
https://cloud-images.ubuntu.com/
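
# Example: downloading the CirrOS test image and registering it with Glance (a sketch; the version and image name are examples, so check the sites above for current releases)

$ wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
$ openstack image create "cirros-0.5.2-x86_64-disk" \
    --file cirros-0.5.2-x86_64-disk.img \
    --disk-format qcow2 --container-format bare --public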

 

 


 

# Install chrony

# apt install chrony

 

# Check the chrony process and verify the time

# service --status-all
 [ + ]  chrony

# service chrony status
● chrony.service - chrony, an NTP client/server
     Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-01-07 11:51:26 KST; 7min ago
       Docs: man:chronyd(8)
             man:chronyc(1)
             man:chrony.conf(5)
    Process: 13125 ExecStart=/usr/lib/systemd/scripts/chronyd-starter.sh $DAEMON_OPTS (code=exited, status=0/SUCCESS)
   Main PID: 13135 (chronyd)
      Tasks: 2 (limit: 4537)
     Memory: 1.6M
     CGroup: /system.slice/chrony.service
             ├─13135 /usr/sbin/chronyd -F 1
             └─13136 /usr/sbin/chronyd -F 1

Jan 07 11:51:26 ubuntu.localdomain systemd[1]: Starting chrony, an NTP client/server...
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: chronyd version 4.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: Initial frequency 34.807 ppm
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: Using right/UTC timezone to obtain leap second data
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: Loaded seccomp filter (level 1)
Jan 07 11:51:26 ubuntu.localdomain systemd[1]: Started chrony, an NTP client/server.
Jan 07 11:51:34 ubuntu.localdomain chronyd[13135]: Selected source 193.123.243.2 (0.ubuntu.pool.ntp.org)
Jan 07 11:51:34 ubuntu.localdomain chronyd[13135]: System clock TAI offset set to 37 seconds

# timedatectl
               Local time: Sun 2024-01-07 11:58:54 KST
           Universal time: Sun 2024-01-07 02:58:54 UTC
                 RTC time: Sun 2024-01-07 02:58:53
                Time zone: Asia/Seoul (KST, +0900)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no


# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-5.ntp4.ps5.cano>     2   6   377    26    +11ms[  +11ms] +/-  126ms
^- prod-ntp-3.ntp4.ps5.cano>     2   6   377    26    +12ms[  +12ms] +/-  124ms
^- alphyn.canonical.com          2   6   377    25    -11ms[  -11ms] +/-  132ms
^- prod-ntp-4.ntp4.ps5.cano>     2   6   377    27  +7438us[+7438us] +/-  120ms
^* 193.123.243.2                 2   6   377    31   +616us[ +753us] +/- 5886us
^- 175.193.3.234                 3   6   377    31  +1244us[+1244us] +/-   30ms
^- mail.innotab.com              3   6   377    29   +917us[ +917us] +/-   32ms
^- 106.247.248.106               2   6   377    27   +925us[ +925us] +/-   33ms
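
# To prefer a specific NTP server instead of the default pool entries, a minimal edit to /etc/chrony/chrony.conf might look like this (the server name is an example); restart chrony and re-check the sources afterwards

server 0.kr.pool.ntp.org iburst

# service chrony restart
# chronyc sources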
# Check status and restart
# service apache2 stop
# service apache2 start
# service apache2 reload

 

#  Config
# /var/www/html/index.html


/etc/apache2/
|-- apache2.conf
|       `--  ports.conf
|-- mods-enabled
|       |-- *.load
|       `-- *.conf
|-- conf-enabled
|       `-- *.conf
|-- sites-enabled
|       `-- *.conf
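
# As a sketch of how sites-enabled is used (the file name and ServerName are illustrative), a minimal virtual host in /etc/apache2/sites-available/example.conf looks like this:

<VirtualHost *:80>
    ServerName example.local
    DocumentRoot /var/www/html
</VirtualHost>

# It is then enabled and loaded with:
# a2ensite example
# service apache2 reload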

 


 

# Install
# apt install openstack-dashboard

 

# Location
# pwd
/etc/openstack-dashboard

# ls -al
drwxr-xr-x   2 root root  4096 Jan  7 09:52 .
drwxr-xr-x 141 root root 12288 Jan  7 09:42 ..
-rw-r--r--   1 root root 12789 Jan  7 09:52 local_settings.py

 

 


 

# Instance creation error



- Error when creating an instance (Status column shows Error)

< Creating an instance makes its Status drop to Error >



- Message : MessagingTimeout

- Code : 500

- Details

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 441, in get
    return self._queues[msg_id].get(block=True, timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/eventlet/queue.py", line 322, in get
    return waiter.wait()
  File "/usr/local/lib/python3.10/dist-packages/eventlet/queue.py", line 141, in wait
    return get_hub().switch()
  File "/usr/local/lib/python3.10/dist-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/nova/nova/conductor/manager.py", line 1654, in schedule_and_build_instances
    host_lists = self._schedule_instances(context, request_specs[0],
  File "/opt/stack/nova/nova/conductor/manager.py", line 942, in _schedule_instances
    host_lists = self.query_client.select_destinations(
  File "/opt/stack/nova/nova/scheduler/client/query.py", line 41, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj,
  File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 160, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/rpc/client.py", line 190, in call
    result = self.transport._send(
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/transport.py", line 123, in _send
    return self._driver.send(target, ctxt, message,
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
    return self._send(target, ctxt, message, wait_for_reply, timeout,
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 678, in _send
    result = self._waiter.wait(msg_id, timeout,
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 567, in wait
    message = self.waiters.get(msg_id, timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 443, in get
    raise oslo_messaging.MessagingTimeout(
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID b6a61e8d51914d4db1f834e190f146ca

 

== So what is the problem?

 

#  nova-status upgrade check
$ nova-status upgrade check

Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
+-------------------------------------------+
| Upgrade Check Results                     |
+-------------------------------------------+
| Check: Cells v2                           |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Placement API                      |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Cinder API                         |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Older than N-1 computes            |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: hw_machine_type unset              |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Service User Token Configuration   |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
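
Since every upgrade check passes, a reasonable next step (a sketch; the devstack@* unit names assume a devstack install) is to confirm the scheduler/conductor services are running and that RabbitMQ is responding:

# systemctl status 'devstack@n-*'
# rabbitmqctl list_queues name messages consumers | head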

 

 

 

 


 

 

# keystone 

 

# mysql -u root -popenstack -h 192.168.56.30


mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'keystone'@'localhost' IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.13 sec)

mysql> CREATE USER 'keystone'@'%' IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';

ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'keystone'

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

 

 

 

The error message "ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'keystone'" indicates that the 'root' user does not have the privileges required to grant permissions on the 'keystone' database to the specified user ('keystone'@'%').

To resolve this, make sure the 'root' user has the privileges needed to grant permissions. Proceed as follows:



1. Log in to MySQL as the 'root' user.

# mysql -u root -p



2. Once logged in, grant the 'root' user the necessary privileges on the 'keystone' database.

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%'; FLUSH PRIVILEGES;



3. After granting the privileges, try running the original command again.

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';



If the problem persists, check whether the 'root' user itself has the GRANT privilege. If not, you may need to connect to the MySQL server as a user that is allowed to grant permissions. In general, it is recommended to create a dedicated MySQL user for managing database access rather than relying on the 'root' user for routine work; a sketch of this follows below.

If you are not using the 'root' user for database administration, remember to replace 'root' with the actual user name.
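
A sketch of that advice (the user name and password below are placeholders): create a dedicated administrative account once as root, then use it for day-to-day grants instead of root:

mysql> CREATE USER 'dbadmin'@'%' IDENTIFIED BY 'change_me';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'dbadmin'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;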

The openstack cache service keeps frequently used data in memory so it can be served immediately when needed.
The cache service usually runs on the controller node.


# Check the cache service

# apt search memcached

-- prints package info (omitted here for length)

# netstat -ntpa |grep LISTEN
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      822/memcached

# find . -name memcached.conf
./etc/memcached.conf

 

# Change to the IP the service should listen on

# vi /etc/memcached.conf 

# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.

# Default connection port is 11211
-p 11211

# Run the daemon as root. The start-memcached will default to running as root if no
# -u command is present in this config file
-u memcache

#-l 127.0.0.1
-l 192.168.56.30    <--- changed

 

# Check and restart the service

# service --status-all

 [ + ]  memcached
 
# service memcached stop
# service memcached start
# service memcached status
● memcached.service - memcached daemon
     Loaded: loaded (/lib/systemd/system/memcached.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-01-06 16:42:47 KST; 3s ago
       Docs: man:memcached(1)
   Main PID: 29408 (memcached)
      Tasks: 10 (limit: 4537)
     Memory: 4.5M
     CGroup: /system.slice/memcached.service
             └─29408 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 192.168.56.30 -P /var/run/memcached/memcached.pid

Jan 06 16:42:47 ubuntu.localdomain systemd[1]: Started memcached daemon.

 

 

# Comparison before and after the restart

# netstat -ntpa |grep LISTEN
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      822/memcached

# netstat -an |grep LISTEN
tcp        0      0 192.168.56.30:11211     0.0.0.0:*               LISTEN      29408/memcached
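
# Beyond checking the listener, the daemon can be exercised end-to-end with the memcached text protocol; this sketch (the key demo_key and its value are arbitrary) stores a value for 60 seconds and reads it back, assuming nc is installed

# printf 'set demo_key 0 60 5\r\nhello\r\nget demo_key\r\nquit\r\n' | nc 192.168.56.30 11211
STORED
VALUE demo_key 0 5
hello
END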
RabbitMQ is open-source message broker software that facilitates communication between different software systems. It belongs to the messaging middleware family and implements AMQP (Advanced Message Queuing Protocol). With RabbitMQ, applications can exchange data and information in a distributed, scalable way.



The main features and concepts of RabbitMQ are:

  • Message broker: RabbitMQ acts as an intermediary, or message broker, between the various components of a distributed system. It receives messages from producers and delivers them to consumers.
  • Message queues: messages are placed in queues, which act as buffers holding messages until a receiver consumes them. This decouples the producing and consuming components and allows asynchronous communication.
  • Exchanges: producers send messages to an exchange, which routes them to the appropriate queues according to routing rules defined by the exchange type. Common exchange types include direct, topic, fanout, and headers.
  • Bindings: the link between an exchange and a queue is established through a binding. A binding defines the routing key or criteria that determine how messages are routed from the exchange to the queue.
  • Publish/subscribe: RabbitMQ supports the publish/subscribe pattern, so multiple consumers can receive the same message. This is achieved with a fanout exchange, which broadcasts messages to all bound queues.
  • Message acknowledgements: RabbitMQ provides a mechanism for acknowledging receipt of messages. This ensures messages are processed reliably and prevents message loss.
  • Durability: RabbitMQ supports durable messages and queues, meaning messages and queues can survive a broker restart. This is important for maintaining message integrity and availability.
  • Clustering: RabbitMQ can be set up in a cluster configuration to improve availability and fault tolerance. Clustering lets multiple RabbitMQ nodes work together as a single logical broker.
  • Plugins and extensions: RabbitMQ can be extended through various plugins, adding features such as message transformation and authentication/authorization mechanisms.

RabbitMQ is widely used across many industries to build scalable, robust distributed systems. It provides a reliable messaging infrastructure for applications with complex communication requirements, including microservice architectures and enterprise integration solutions.

 

# Check whether RabbitMQ is already installed
# apt search rabbitmq-server
Sorting... Done
Full Text Search... Done

rabbitmq-server/jammy-updates,jammy-security,now 3.9.13-1ubuntu0.22.04.2 all [installed]
  AMQP server written in Erlang

root@ubuntu:/#

or

# apt install rabbitmq-server

 

# Find where RabbitMQ is installed
# find . -name rabbitmqctl
./usr/sbin/rabbitmqctl
./usr/lib/rabbitmq/bin/rabbitmqctl
./usr/lib/rabbitmq/lib/rabbitmq_server-3.9.13/escript/rabbitmqctl
./usr/lib/rabbitmq/lib/rabbitmq_server-3.9.13/sbin/rabbitmqctl

# Add a RabbitMQ user and set permissions



# Create the RabbitMQ account openstack / openstack and grant it permissions.

# rabbitmqctl add_user openstack openstack
Adding user "openstack" ...
Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

 

 

# Configure the RabbitMQ dashboard and restart the service
# rabbitmq-plugins enable rabbitmq_management

Enabling plugins on node rabbit@ubuntu:
rabbitmq_management
The following plugins have been configured:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@ubuntu...
The following plugins have been enabled:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch

started 3 plugins.


# service rabbitmq-server stop
# service rabbitmq-server start
# service rabbitmq-server status
● rabbitmq-server.service - RabbitMQ Messaging Server
     Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-01-06 14:30:57 KST; 2min 33s ago
   Main PID: 18157 (beam.smp)
      Tasks: 23 (limit: 4537)
     Memory: 125.0M
     CGroup: /system.slice/rabbitmq-server.service
             ├─18157 /usr/lib/erlang/erts-12.2.1/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30>
             ├─18168 erl_child_setup 65536
             ├─18214 inet_gethost 4
             └─18215 inet_gethost 4

Jan 06 14:30:53 ubuntu.localdomain systemd[1]: Starting RabbitMQ Messaging Server...
Jan 06 14:30:57 ubuntu.localdomain systemd[1]: Started RabbitMQ Messaging Server.

 

# Verify the service with the netstat command
# netstat -ntpa |grep LISTEN

tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      19045/beam.smp

 

 

# Create an account for logging in to the RabbitMQ dashboard
# rabbitmqctl add_user test test
Adding user "test" ...
Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.


# rabbitmqctl set_user_tags test administrator
Setting tags for user "test" to [administrator] ...

# rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
Setting permissions for user "test" in vhost "/" ...
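
# With the management plugin enabled and the test account above, the broker can also be exercised through the management HTTP API; this sketch (routing key and payload are arbitrary) publishes one message to the default exchange ("routed":false simply means no queue named demo exists yet)

# curl -s -u test:test -H "content-type:application/json" \
    -X POST -d '{"properties":{},"routing_key":"demo","payload":"hello","payload_encoding":"string"}' \
    http://localhost:15672/api/exchanges/%2F/amq.default/publish
{"routed":false}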

 

# Access the RabbitMQ page (log in with the test/test account created above)


 

 

# Screen after logging in to RabbitMQ

 

 


 

# The openstack user list command does not work
# su - stack
$ openstack user list
Missing value auth-url required for auth plugin password

 

# Log in to the OpenStack admin page and download the OpenStack RC file

 

  - Click admin at the top right to find the OpenStack RC file

 

 

- The downloaded admin-openrc.sh file

 

<saved in the Windows Downloads folder>

 

# Recreate the downloaded admin-openrc.sh script on the server
# su - stack 
$ cd /opt/stack/
$ vi admin-openrc.sh

--- edited as follows ---
#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.

export OS_AUTH_URL=http://192.168.56.30/identity
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=249f6e9566fb44bbba10844ed6b7ca15
export OS_PROJECT_NAME="admin"
export OS_USER_DOMAIN_NAME="Default"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="default"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="admin"
# With Keystone you pass the keystone password.

echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# export OS_PASSWORD=openstack
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3

 

- If the read and export commands in the middle of the script do not work, comment them out and just set export OS_PASSWORD=openstack

  (set it to the OpenStack admin password)

#read -sr OS_PASSWORD_INPUT
#export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_PASSWORD=openstack

 

# Alternatively, you can write the file by hand

- If you can read the source, edit it as follows; values such as OS_PROJECT_ID must be looked up yourself (see the lookup sketch after the script below)

# su - stack 
$ cd /opt/stack/
$ vi admin-openrc.sh


export OS_AUTH_URL=http://192.168.56.30/identity
export OS_PROJECT_ID=249f6e9566fb44bbba10844ed6b7ca15
export OS_PROJECT_NAME="admin"
export OS_USER_DOMAIN_NAME="Default"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="default"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
unset OS_TENANT_ID
unset OS_TENANT_NAME
export OS_USERNAME="admin"
export OS_PASSWORD=openstack
export OS_REGION_NAME="RegionOne"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
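
- One way to look up OS_PROJECT_ID (a sketch; it assumes you can already authenticate, e.g. after sourcing a working RC file, or you can read it from Identity > Projects in Horizon):

$ openstack project show admin -f value -c id
249f6e9566fb44bbba10844ed6b7ca15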

 

 

- Source it with . admin-openrc.sh and verify with the export command

  (the echo above was not commented out, so a meaningless "Please enter..." prompt is printed)

$ . admin-openrc.sh
Please enter your OpenStack Password for project admin as user admin:


$ export
declare -x ANSIBLE_FORCE_COLOR="1"
declare -x HOME="/opt/stack"
declare -x LANG="en_US.UTF-8"
declare -x LESSCLOSE="/usr/bin/lesspipe %s %s"
declare -x LESSOPEN="| /usr/bin/lesspipe %s"
declare -x LIBVIRT_DEFAULT_URI="qemu:///system"
declare -x LOGNAME="stack"
declare -x LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:"
declare -x MAIL="/var/mail/stack"
declare -x OLDPWD
declare -x OS_AUTH_URL="http://192.168.56.30/identity"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_INTERFACE="public"
declare -x OS_PASSWORD="openstack"
declare -x OS_PROJECT_DOMAIN_ID="default"
declare -x OS_PROJECT_ID="249f6e9566fb44bbba10844ed6b7ca15"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_REGION_NAME="RegionOne"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_NAME="Default"
declare -x PATH="/opt/stack/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
declare -x PIPX_BIN_DIR="/usr/local/bin"
declare -x PIPX_HOME="/usr/local/share/pipx"
declare -x PWD="/opt/stack"
declare -x PY_COLORS="1"
declare -x SHELL="/bin/bash"
declare -x SHLVL="1"
declare -x TERM="xterm"
declare -x USER="stack"

 

# Verify that commands now work
$ openstack user list
+----------------------------------+-----------------+
| ID                               | Name            |
+----------------------------------+-----------------+
| b4fcb0e52f97460abe4e29636414cdb7 | admin           |
| a61e57afc80049c8a5cbb1498074d409 | demo            |
| c0352220e7794b32b3f80aa18b2a4b91 | demo_reader     |
| a000fb49071744aab535592dece3ff7f | alt_demo        |
| 0c89f5fc69b049678cb5bd8d830f362b | alt_demo_member |
| eb214183296e4dce864338307f1512be | alt_demo_reader |
| 1832d5d8350c4c5984ade31160b6a9f8 | system_member   |
| 1a86c1a158744e22a2a595bf866646fd | system_reader   |
| 3eda2d4c3c8943dea50dd1c09e4da79c | nova            |
| ba847902da4d49d5b0aa225dc563a05f | glance          |
| ad604d6a01cf4f17a5a55b25b4b97c98 | cinder          |
| adda8aaadebe483b84b2b616fcdf7b6c | neutron         |
| 586cbd552e7b451886b67e8744b5ba11 | placement       |
+----------------------------------+-----------------+

 

 

 

 


 

# Project - Compute - Instances - Launch Instance


# Details


# Source
- From the available items, move cirros-0.5.2-x86_64-disk up into the allocated list


 


# Flavor
 - From the available items (11), move m1.nano up into the allocated list



# Network

 - From the available items, move the shared network up into the allocated list


- The details of the shared network are as follows


# Security groups
 - Select the default security group and click Launch Instance



# Instance creation screens
- while building


- creation complete
- an instance at 192.168.233.38 was created

# Click the instance name demo


o Log in to the instance via the console
- id : cirros
- pw : gocubsgo   (logging in as root does not work)


$ sudo passwd root does not work either
- it throws a segmentation fault

$ /etc/shadow
- even logged in, reading the /etc/shadow file
- gives Permission denied

Still figuring out a way



# Check the network topology
 - There is no connectivity to the public network yet, so external traffic probably will not work. (checking)


# Creating a router and verifying external connectivity

(in preparation)





# Changing the MySQL password

# mysql_upgrade -u root -p
The mysql_upgrade client is now deprecated. The actions executed by the upgrade client are now done by the server.
To upgrade, please start the new MySQL binary with the older data directory. Repairing user tables is done automatically. Restart is not required after upgrade.
The upgrade process automatically starts on running a new MySQL binary with an older data directory. To avoid accidental upgrades, please use the --upgrade=NONE option with the MySQL binary. The option --upgrade=FORCE is also provided to run the server upgrade sequence on demand.
It may be possible that the server upgrade fails due to a number of reasons. In that case, the upgrade sequence will run again during the next MySQL server start. If the server upgrade fails repeatedly, the server can be started with the --upgrade=MINIMAL option to start the server without executing the upgrade sequence, thus allowing users to manually rectify the problem.
root@ubuntu:~#
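
# The password change itself (MySQL 8 syntax; the password here is just the example value used throughout this post) can be done from the mysql prompt:

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'openstack';
mysql> FLUSH PRIVILEGES;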


# mysql -u root -popenstack -h 192.168.56.30
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>




# mysql -uroot -popenstack
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql>
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)


# MYSQL_PWD="openstack" mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>






# mysql_config_editor set --login-path=root --host=localhost --user=root --password --port=3306
Enter password:



# mysql -u root -popenstack -h 192.168.56.30
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

 

 

 

 

 

 

 

 

# MySQL configuration: /etc/mysql/my.cnf

 

# cat /etc/mysql/my.cnf

#
# The MySQL database server configuration file.
#
#

!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

[mysqld]
max_connections = 1024
default-storage-engine = InnoDB
sql_mode = TRADITIONAL
#bind-address = 0.0.0.0
bind-address = 192.168.56.30

 

# Connect to MySQL by specifying the server IP
$ mysql -u root -popenstack -h 192.168.56.30

mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

# Check MySQL status
mysql> status
--------------
mysql  Ver 8.0.35-0ubuntu0.22.04.1 for Linux on x86_64 ((Ubuntu))

Connection id:          16
Current database:
Current user:           root@192.168.56.30
SSL:                    Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         8.0.35-0ubuntu0.22.04.1 (Ubuntu)
Protocol version:       10
Connection:             192.168.56.30 via TCP/IP
Server characterset:    utf8mb4
Db     characterset:    utf8mb4
Client characterset:    utf8mb4
Conn.  characterset:    utf8mb4
TCP port:               3306
Binary data as:         Hexadecimal
Uptime:                 10 min 47 sec

Threads: 2  Questions: 16  Slow queries: 0  Opens: 122  Flush tables: 3  Open tables: 41  Queries per second avg: 0.024
--------------

 

# MySQL restart and status
# netstat -ntpa |grep LISTEN
tcp        0      0 192.168.56.30:3306      0.0.0.0:*               LISTEN      12777/mysqld
# service --status-all
 [ + ]  mysql

 [ + ]  postgresql
# service mysql status
● mysql.service - MySQL Community Server
     Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-01-06 14:00:34 KST; 3h 12min ago
    Process: 12768 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
   Main PID: 12777 (mysqld)
     Status: "Server is operational"
      Tasks: 38 (limit: 4537)
     Memory: 366.1M
     CGroup: /system.slice/mysql.service
             └─12777 /usr/sbin/mysqld

 

 

#  How to connect to the MySQL DB from Windows after installing OpenStack

 

- Install HeidiSQL, a MySQL client

 

https://www.heidisql.com/download.php

 


 

# After launching HeidiSQL, configure it as follows
  • Hostname / IP : server IP
  • Port : 3306
  • User / password

 

 

 


 

# Download a Vagrant OS image under Oracle VM and install OpenStack with devstack as an All-In-One Single Machine.

- CentOS7, CentOS8: install failed (Nov 2023)
- Rocky9: install failed (Dec 2023)

- Ubuntu 20.04: install failed (Dec 2023)
- Ubuntu 22.04 (jammy): OpenStack All-In-One install succeeded (2024-01-01)

# Starting over, step by step

# Prerequisites
- Install Oracle VM on the PC    https://www.virtualbox.org/         VirtualBox-7.0.4-154605-Win.exe
- Install Vagrant on the PC      https://www.vagrantup.com/    vagrant_2.3.4_windows_i686.msi

 

# OpenStack install reference site 1

 

https://docs.openstack.org/devstack/latest/

 


 

# OpenStack install reference site 2  (All-In-One Single Machine)

 

https://docs.openstack.org/devstack/latest/guides/single-machine.html

 


 

# VAGRANT FILE
# After installing Vagrant, create and edit C:\HashiCorp\Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "alvistack/ubuntu-22.04"
end

<vagrant Vagrantfile>

# For the config.vm.box = "OOO" entry in the Vagrantfile, go to https://app.vagrantup.com
   and search for the OS you want to install

  - I downloaded the Ubuntu 22.04 (jammy) ubuntu OS image recommended on the OpenStack homepage


 # If you select the alvistack/ubuntu-22.04 image,
   - it tells you to set up the Vagrantfile as shown below

 

# After editing the file, run vagrant up   (from the Windows CMD.exe prompt)

C:\HashiCorp> vagrant up

<screen showing vagrant up downloading the image>


# When you open Oracle VM, a VM like the following appears in VirtualBox
  - once the install finishes, a VM is created as below ... HashCorp_defaul......


# Stop that image and edit its settings  -- done from Settings
   - General - Basic - Name(N) : alvistack-ubuntu-22.04
    (rename it to something easy to recognize; it has no effect on the system)


 # System - Motherboard - Base Memory : set to 4096MB  (or lower)
    (adjust the memory to your PC's spec; the default is 8012MB)


# Edit the network adapters
    - For the difference between bridged and NAT adapter modes and VM DHCP setup, see the relevant post
o Adapter 1 : Bridged Adapter  (Promiscuous Mode : Allow All)
o Adapter 2 : Host-only Adapter (Promiscuous Mode : Allow All)

 




 

# Extended VAGRANT file usage  (example)
# You can configure the Oracle VM settings from the Vagrantfile, but I found it tedious and just created the VM and edited it manually
# Example of an extended Vagrantfile
(basic)
Vagrant.configure("2") do |config|
  config.vm.box = "alvistack/ubuntu-22.04"
end


(extended)
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
    config.vm.define "OPENSTACK" do |cfg|
      cfg.vm.box = "alvistack/ubuntu-22.04"
      cfg.vm.provider "virtualbox" do |vb|
        vb.name = "alvistack-ubuntu-22.04"
        vb.cpus = 2
        vb.memory = 4096
      end
      cfg.vm.host_name = "openstack_svr"
      cfg.vm.network "private_network", ip: "192.168.56.30"
      cfg.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh"
      cfg.vm.provision "shell", inline: "sudo apt install git -y"
      cfg.vm.provision "shell", inline: "sudo apt install network-manager -y"
      cfg.vm.provision "shell", inline: "sudo apt install net-tools -y"
      cfg.vm.provision "shell", inline: "sudo systemctl disable ufw"
      cfg.vm.provision "shell", inline: "ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime"
    end
end

* Sometimes it takes effect and sometimes it does not

 

# How to connect to the created Oracle VM and basic system setup

 

# Start the Oracle VM and the following console window appears; connect there

- Log in as root / vagrant  (the default password of vagrant box images is vagrant)
- If root login is refused, log in as vagrant/vagrant and run
$ sudo passwd root  to change the root password


# A few things need fixing after logging in as root

1. Network IP configuration
2. SSH access (edit the sshd_config file for later access with putty.exe)
3. Date (time) setting
4. apt update
5. Disable the firewall (stopped for now, for the install)


1. Network configuration
# vi /etc/netplan/00-installer-config.yaml  (edit the file, then)
# netplan apply


# If the two commands above fail, move on to the next step (it may already be configured)

- eth0 : the DHCP range of the home (or cafe) router  (192.168.219.16 is assigned automatically by the router)
- eth1 : the IP of the server inside Oracle VM  (192.168.56.30, set in the netplan file; a sample netplan file follows below)
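
# A minimal 00-installer-config.yaml matching that layout might look like this (a sketch; the interface names and addressing are assumptions, so adjust to your environment)

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      dhcp4: false
      addresses: [192.168.56.30/24]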

2. SSH access (edit the sshd_config file for later access with putty.exe)

# Change the following items in /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes



# Run systemctl restart ssh
From now on you can connect with putty

3. Date (time) setting
- change the system time to the Seoul timezone
# ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
# sudo timedatectl set-timezone Asia/Seoul


4. apt update

# sudo apt update
# sudo apt upgrade -y
# sudo apt-get update
# sudo apt-get upgrade -y


5. Disable the firewall (stopped for the install for now / probably not the right approach for later production use)

# systemctl status ufw
# systemctl stop ufw
# systemctl disable ufw



 

# Preparing to install openstack with devstack

# Create the stack account for the devstack install

# sudo useradd -s /bin/bash -d /opt/stack -m stack
# sudo chmod +x /opt/stack
# echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack​
# sudo -u stack -i

 


# Install git (usually already installed, so this step can be skipped)

# sudo apt install git -y


# Switch to the stack account and install devstack with git.

# su - stack  
$ pwd        
/opt/stack/     
$ git clone https://opendev.org/openstack/devstack       
$ cd /opt/stack/devstack/ 
$ git checkout stable/2023.2

 

$ git checkout stable/<version> : the available versions can be checked at https://opendev.org/openstack/devstack/branches

 


(Important) a plain git clone reportedly gives an incomplete devstack, so the official 2023.1 release was used
- git checkout stable/2023.1
(thanks to Seongsoo Cho of OpenStack Korea (works at NHN) for the help)

 

# Edit local.conf (All-In-One Single Machine)

 

- openstack server ip : 192.168.56.30 (eth1)

- openstack gw : 192.168.56.1

$ cp /opt/stack/devstack/samples/local.conf /opt/stack/devstack
$ vi /opt/stack/devstack/local.conf

 

- Copy the content below into the local.conf file and save

[[local|localrc]]

# eth0 : 192.168.1.0/24   (router public IP)
# eth1 : 192.168.56.30/24 (openstack server IP)

# ===== BEGIN localrc =====
HOST_IP=192.168.56.30
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack
SERVICE_TOKEN=openstack

#PUBLIC_INTERFACE=eth0
#FLOATING_RANGE=192.168.1.0/24
#PUBLIC_NETWORK_GATEWAY=192.168.1.1
#Q_FLOATING_ALLOCATION_POOL=start=192.168.0.100,end=192.168.0.200
#FIXED_RANGE=10.0.0.0/24

GIT_BASE=https://opendev.org
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
#LOGDAYS=2
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
#enable_service rabbit
#enable_plugin neutron $GIT_BASE/openstack/neutron
#enable_service q-qos
#enable_service placement-api placement-client
#enable_plugin octavia $GIT_BASE/openstack/octavia master
#enable_plugin octavia-dashboard $GIT_BASE/openstack/octavia-dashboard
#enable_plugin ovn-octavia-provider $GIT_BASE/openstack/ovn-octavia-provider
#enable_plugin octavia-tempest-plugin $GIT_BASE/openstack/octavia-tempest-plugin
#enable_service octavia o-api o-cw o-hm o-hk o-da
#disable_service c-api c-vol c-sch
#enable_service tempest

# ===== END localrc =====

 

(Note) errors during install

# The lines below caused many errors during install; comment them out for now, and fix them later from the admin dashboard if needed
#PUBLIC_INTERFACE=eth0
#FLOATING_RANGE=192.168.1.0/24
#PUBLIC_NETWORK_GATEWAY=192.168.1.1
#Q_FLOATING_ALLOCATION_POOL=start=192.168.0.100,end=192.168.0.200
#FIXED_RANGE=10.0.0.0/24



(Note) reference site for the All-In-One Single Machine install
https://docs.openstack.org/devstack/latest/guides/single-machine.html


Running DevStack
Now configure stack.sh. DevStack ships a sample at devstack/samples/local.conf. Create local.conf as shown below to:

  • Set FLOATING_RANGE to an unused range on your local network (e.g. 192.168.1.224/27). This configures the IPs ending in 225-254 as floating IPs.
  • Set FIXED_RANGE to configure the internal address space used by instances.
  • Set the administrative password. This password is used for the admin and demo accounts set up as OpenStack users.
  • Set the MySQL administrative password. The default is a random hex string, which is inconvenient if you ever need to look at anything in the database directly.
  • Set the RabbitMQ password.
  • Set the service password. This is used by the OpenStack services (Nova, Glance, etc.) to authenticate with Keystone.


# Run stack.sh

$ cd /opt/stack/devstack
$ ./stack.sh



# Install while ignoring stack.sh errors

$ FORCE=yes ./stack.sh


# Reinstall after a failed run

$ ./unstack.sh
$ ./clean.sh

 

# Install complete

# Install completion log
=================
 Async summary
=================
 Time spent in the background minus waits: 547 sec
 Elapsed time: 2092 sec
 Time if we did everything serially: 2639 sec
 Speedup:  1.26147


Post-stack database query stats:
+------------+-----------+-------+
| db         | op        | count |
+------------+-----------+-------+
| keystone   | SELECT    | 46213 |
| keystone   | INSERT    |    93 |
| neutron    | SELECT    |  3917 |
| neutron    | CREATE    |     1 |
| neutron    | SHOW      |     4 |
| neutron    | INSERT    |  4111 |
| neutron    | DELETE    |    28 |
| neutron    | UPDATE    |   116 |
| placement  | SELECT    |    46 |
| placement  | INSERT    |    55 |
| placement  | SET       |     1 |
| nova_api   | SELECT    |   114 |
| nova_cell0 | SELECT    |    75 |
| nova_cell1 | SELECT    |   178 |
| nova_cell0 | INSERT    |     5 |
| nova_cell0 | UPDATE    |     6 |
| nova_cell1 | UPDATE    |    42 |
| nova_cell1 | INSERT    |     4 |
| cinder     | SELECT    |   121 |
| cinder     | INSERT    |     5 |
| placement  | UPDATE    |     3 |
| cinder     | UPDATE    |     3 |
| nova_api   | INSERT    |    20 |
| glance     | SELECT    |    47 |
| glance     | INSERT    |     6 |
| glance     | UPDATE    |     2 |
| cinder     | DELETE    |     1 |
| nova_api   | SAVEPOINT |    10 |
| nova_api   | RELEASE   |    10 |
+------------+-----------+-------+



This is your host IP address: 192.168.56.30
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.56.30/dashboard
Keystone is serving at http://192.168.56.35/identity/
The default users are: admin and demo
The password: openstack

Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: 2023.2
Change: b082d3fed3fe05228dabaab31bff592dbbaccbd9 Make multiple attempts to download image 2023-12-12 08:07:39 +0000
OS Version: Ubuntu 22.04 jammy

# Install log files attached

  - Because of repeated install failures I must have reinstalled about 20 times, changing the IP each time
    (192.168.56.30, 192.168.56.35, 192.168.56.36, 192.168.56.41, and so on),
    so the server IPs in the logs vary. (Compare against your own IP settings.)



(attached) 20240113_오픈스텍_설치완료2_로그.txt

 

# Success log for the 192.168.56.30 server

(attached) 20240114_오픈스텍_설치완료2 36번 서버_로그.txt

 

 

 

Log of an install completed with the server set to 192.168.56.41

(attached) 오픈스택-192.168.56.41번으로 설치한 로그.txt

 

# Accessing and logging in to OpenStack
# The long-awaited OpenStack login screen finally appeared
Going to http://192.168.56.30

forwards to http://192.168.56.30/dashboard/auth/login/?next=/dashboard/

# Username : admin
# Password : openstack   (the password set in /opt/stack/devstack/local.conf)

 

 

 

# Help received from Seongsoo Cho of OpenStack Korea

 

https://www.facebook.com/groups/openstack.kr?locale=ko_KR

 


 


 

 

 

# OpenStack GitHub site

 

https://opendev.org/openstack/devstack

 


 

 

 

 

 

 

 


 


 

#  Service status info: service --status-all
# service --status-all

 [ - ]  apache-htcacheclean
 [ + ]  apache2
 [ + ]  apparmor
 [ + ]  apport
 [ + ]  binfmt-support
 [ - ]  console-setup.sh
 [ + ]  cron
 [ - ]  cryptdisks
 [ - ]  cryptdisks-early
 [ + ]  dbus
 [ - ]  grub-common
 [ + ]  guestfs-firstboot
 [ + ]  haproxy
 [ - ]  hwclock.sh
 [ + ]  irqbalance
 [ + ]  iscsid
 [ - ]  keyboard-setup.sh
 [ + ]  kmod
 [ - ]  lvm2
 [ - ]  lvm2-lvmpolld
 [ + ]  memcached
 [ + ]  mysql
 [ - ]  open-iscsi
 [ - ]  open-vm-tools
 [ + ]  openvswitch-switch
 [ + ]  pcp
 [ + ]  plymouth
 [ + ]  plymouth-log
 [ + ]  pmcd
 [ + ]  pmie
 [ + ]  pmlogger
 [ + ]  pmproxy
 [ + ]  postgresql
 [ + ]  procps
 [ + ]  rabbitmq-server
 [ - ]  rsync
 [ + ]  rtslib-fb-targetctl
 [ - ]  screen-cleanup
 [ + ]  ssh
 [ + ]  sysfsutils
 [ + ]  udev
 [ - ]  ufw
 [ + ]  unattended-upgrades
 [ + ]  uuidd
 [ + ]  uwsgi
 [ - ]  x11-common

 

 

# netstat -ntpa | grep LISTEN
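
A rough port check for the services in the list above (these are the usual defaults: Apache/Horizon on 80, MySQL on 3306, RabbitMQ on 5672, memcached on 11211):

$ sudo ss -tlnp | grep -E ':(80|3306|5672|11211) '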

 

 

# ifconfig -a
# Interface information
br-ex: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 92:2a:8c:f9:ee:47  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-int: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 66:82:f6:3b:99:80  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.96  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::a00:27ff:fe59:d482  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:59:d4:82  txqueuelen 1000  (Ethernet)
        RX packets 1055  bytes 148968 (148.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 37  bytes 3346 (3.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.30  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe50:df83  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:50:df:83  txqueuelen 1000  (Ethernet)
        RX packets 1156  bytes 159058 (159.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1107  bytes 547039 (547.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 103817  bytes 33887935 (33.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 103817  bytes 33887935 (33.8 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovs-system: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether de:a3:cb:20:cb:fb  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:bc:a0:bc  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
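
The br-int and br-ex devices above are Open vSwitch bridges created by Neutron (the integration bridge and the external bridge, respectively). The OVS topology can be inspected with:

# Show bridges and ports managed by Open vSwitch
$ sudo ovs-vsctl show

# Or just the bridge names
$ sudo ovs-vsctl list-br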

 

 

# Keystone
(configuration)
# /etc/keystone/keystone.conf
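
For reference, the two settings most often edited in keystone.conf are the database connection and the token provider; a minimal sketch in which KEYSTONE_DBPASS and controller are placeholders, not values from this install:

# /etc/keystone/keystone.conf (sketch)
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet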

 

# OpenStack dashboard (Horizon)
(install)
# apt-get install openstack-dashboard

(configuration)
# /etc/openstack-dashboard/local_settings.py
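
The settings most often edited in local_settings.py point the dashboard at Keystone and open up allowed hosts; a sketch in which controller is a placeholder hostname:

# /etc/openstack-dashboard/local_settings.py (sketch)
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
ALLOWED_HOSTS = ['*']        # restrict this outside of a lab setup
TIME_ZONE = "Asia/Seoul"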

 

# httpd (Apache)
(configuration)
# /etc/httpd/conf/httpd.conf

# /etc/httpd/conf.d/wsgi-keystone.conf

# systemctl status httpd.service
# systemctl start httpd.service
# systemctl enable httpd.service

# Note: the /etc/httpd paths and the httpd.service name are RHEL-family conventions;
# on the Ubuntu 22.04 host used in this install, the equivalents are /etc/apache2/
# and apache2.service, as sketched below.
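
Ubuntu equivalents of the httpd commands above (a sketch; DevStack itself already configures Apache for you):

$ systemctl status apache2.service
$ sudo systemctl restart apache2.service

# Apache main config and enabled vhosts on Ubuntu
$ ls /etc/apache2/apache2.conf /etc/apache2/sites-enabled/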

 

 
