
 

Go to the other user's source code repository and click Fork --> + Create a new fork at the top right

 

Click the Create fork button
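The same fork can also be created from the command line with the GitHub CLI; a minimal sketch, assuming gh is installed and authenticated (OWNER/REPO stands for the other user's repository):

# Fork OWNER/REPO into your own account and clone it locally
gh repo fork OWNER/REPO --clone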

 

 

# Run yum update before upgrading Docker

 

# yum update -y
# yum install dnf -y
# dnf update -y

 

#  docker pull error on CentOS 7

  - Caused by the installed Docker version being too old

# docker pull nginx:stable
Trying to pull repository docker.io/library/nginx ...
missing signature key

 

#  Reinstall Docker on CentOS 7 (missing signature key)
# sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux  docker-engine-selinux docker-engine
# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


# Install the latest version
# sudo yum install docker-ce


# List available Docker versions
# yum list docker-ce --showduplicates | sort -r
 yum list docker-ce --showduplicates | sort -r
 * updates: mirror.kakao.com
This system is not registered with an entitlement server. You can use subscription-manager to register.
              : manager
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-
Installed Packages
 * extras: mirror.kakao.com
 * epel: mirror-nrt.yuki.net.uk
docker-ce.x86_64            3:25.0.3-1.el7                     docker-ce-stable
docker-ce.x86_64            3:25.0.2-1.el7                     docker-ce-stable
docker-ce.x86_64            3:25.0.1-1.el7                     docker-ce-stable
docker-ce.x86_64            3:25.0.0-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.9-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.8-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.7-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.6-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.5-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.4-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.3-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.2-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.1-1.el7                     docker-ce-stable
docker-ce.x86_64            3:24.0.0-1.el7                     docker-ce-stable
docker-ce.x86_64            3:23.0.6-1.el7                     docker-ce-stable
docker-ce.x86_64            3:23.0.5-1.el7                     docker-ce-stable
docker-ce.x86_64            3:23.0.4-1.el7                     docker-ce-stable
docker-ce.x86_64            3:23.0.3-1.el7                     docker-ce-stable
docker-ce.x86_64            3:23.0.2-1.el7                     docker-ce-stable
docker-ce.x86_64            3:23.0.1-1.el7                     docker-ce-stable
docker-ce.x86_64            3:23.0.0-1.el7                     docker-ce-stable
docker-ce.x86_64            3:20.10.9-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.8-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.7-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.6-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.5-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.4-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.3-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.24-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.2-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.23-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.22-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.21-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.20-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.19-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.18-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.17-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.16-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.15-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.14-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.1-3.el7                    docker-ce-stable
docker-ce.x86_64            3:20.10.13-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.12-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.11-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.10-3.el7                   docker-ce-stable
docker-ce.x86_64            3:20.10.0-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.9-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.8-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.7-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.6-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.5-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.4-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.3-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.2-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.15-3.el7                   docker-ce-stable
docker-ce.x86_64            3:19.03.14-3.el7                   docker-ce-stable
docker-ce.x86_64            3:19.03.1-3.el7                    docker-ce-stable
docker-ce.x86_64            3:19.03.13-3.el7                   docker-ce-stable
docker-ce.x86_64            3:19.03.12-3.el7                   docker-ce-stable
docker-ce.x86_64            3:19.03.12-3.el7                   @docker-ce-stable
docker-ce.x86_64            3:19.03.11-3.el7                   docker-ce-stable
docker-ce.x86_64            3:19.03.10-3.el7                   docker-ce-stable
docker-ce.x86_64            3:19.03.0-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.9-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.8-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                    docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                   docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable


# Install a specific version
# yum install docker-ce-19.03.12-3.el7



# Check the Docker version
# Before the Docker upgrade
# docker -v
Docker version 1.13.1, build 7d71120/1.13.1

# After the Docker upgrade
# docker -v
Docker version 25.0.2, build 29cf629
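After upgrading, the daemon usually needs to be enabled and restarted before pulls succeed; a minimal verification sketch (assumes systemd):

# sudo systemctl enable --now docker
# docker pull nginx:stable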

# How to find a string in files under a directory on Linux

  • Find the files under the directory that contain the string localhost and replace it with 192.168.56.128
# ls -al
total 52
drwxr-xr-x  4 root root 4096 Mar  9 14:59 .
drwxr-xr-x 14 root root 4096 Mar  9 09:11 ..
-rw-r--r--  1 root root  125 Mar  9 14:59 4-11-prometheus.yml
-rw-r--r--  1 root root  580 Mar  9 14:59 4-12-pushgateway.py
-rw-r--r--  1 root root  166 Mar  9 14:59 4-13-graphite-bridge.py
-rw-r--r--  1 root root  233 Mar  9 14:59 4-14-parse.py
-rw-r--r--  1 root root  393 Mar  9 14:59 4-1-wsgi.py
-rw-r--r--  1 root root  529 Mar  9 14:59 4-2-twisted.py
-rw-r--r--  1 root root  123 Mar  9 14:59 4-3-config.py
-rw-r--r--  1 root root  657 Mar  9 14:59 4-4-app.py
-rw-r--r--  1 root root  619 Mar  9 14:59 4-6-example.go
drwxr-xr-x  3 root root 4096 Mar  9 14:59 4-7-java-httpserver
drwxr-xr-x  3 root root 4096 Mar  9 14:59 4-9-java-servlet

# pwd
/etc/prometheus/ep-examples-master/4

# find . -type f -print | xargs grep -i "localhost" /dev/null
./4-11-prometheus.yml:      - localhost:9091
./4-12-pushgateway.py:    pushadd_to_gateway('localhost:9091', job='batch', registry=registry)

# find . -name "*.*" -exec perl -pi -e 's/localhost/192.168.56.128/g' {} \;
Can't do inplace edit: . is not a regular file.

# find . -type f -print | xargs grep -i "localhost" /dev/null

# find . -type f -print | xargs grep -i "192.168.56.128" /dev/null
./4-11-prometheus.yml:      - 192.168.56.128:9091
./4-12-pushgateway.py:    pushadd_to_gateway('192.168.56.128:9091', job='batch', registry=registry)
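The same replacement can also be done with the more common grep/sed combination; a sketch (sed -i edits the files in place):

# grep -rl "localhost" . | xargs sed -i 's/localhost/192.168.56.128/g'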
# pwd
/etc/prometheus/ep-examples-master

# tree ep-examples-master
ep-examples-master
├── 10
│   ├── 10-10-prometheus.yml
│   ├── 10-2-prometheus.yml
│   ├── 10-3-haproxy.cfg
│   ├── 10-5-prometheus.yml
│   ├── 10-6-grok.yml
│   ├── 10-7-prometheus.yml
│   └── 10-9-prometheus.yml
├── 11
│   └── 11-2-prometheus.yml
├── 12
│   ├── 12-2-consul_metrics.go
│   └── 12-3-consul_metrics.py
├── 17
│   ├── 17-1-prometheus.yml
│   └── 17-2-rules.yml
├── 19
│   ├── 19-1-webhook_receiver.py
│   └── combined-alertmanager.yml
├── 2
│   ├── 2-1-prometheus.yml
│   ├── 2-2-rules.yml
│   └── 2-3-alertmanager.yml
├── 3
│   ├── 3-10-example.py
│   ├── 3-11-example.py
│   ├── 3-12-unitesting.py
│   ├── 3-1-example.py
│   ├── 3-2-prometheus.yml
│   ├── 3-3-example.py
│   ├── 3-4-example.py
│   ├── 3-5-example.py
│   ├── 3-6-example.py
│   ├── 3-7-example.py
│   ├── 3-8-example.py
│   └── 3-9-example.py
├── 4
│   ├── 4-11-prometheus.yml
│   ├── 4-12-pushgateway.py
│   ├── 4-13-graphite-bridge.py
│   ├── 4-14-parse.py
│   ├── 4-1-wsgi.py
│   ├── 4-2-twisted.py
│   ├── 4-3-config.py
│   ├── 4-4-app.py
│   ├── 4-6-example.go
│   ├── 4-7-java-httpserver
│   │   ├── pom.xml
│   │   └── src
│   │       └── main
│   │           └── java
│   │               └── io
│   │                   └── robustperception
│   │                       └── book_examples
│   │                           └── java_httpserver
│   │                               └── Example.java
│   └── 4-9-java-servlet
│       ├── pom.xml
│       └── src
│           └── main
│               └── java
│                   └── io
│                       └── robustperception
│                           └── book_examples
│                               └── java_servlet
│                                   └── Example.java
├── 5
│   └── 5-1-example.py
├── 7
│   └── 7-2-crontab
├── 8
│   ├── 8-10-prometheus.yml
│   ├── 8-11-prometheus.yml
│   ├── 8-12-prometheus.yml
│   ├── 8-13-prometheus.yml
│   ├── 8-14-prometheus.yml
│   ├── 8-15-prometheus.yml
│   ├── 8-16-prometheus.yml
│   ├── 8-17-prometheus.yml
│   ├── 8-18-prometheus.yml
│   ├── 8-19-prometheus.yml
│   ├── 8-1-prometheus.yml
│   ├── 8-20-prometheus.yml
│   ├── 8-21-prometheus.yml
│   ├── 8-22-prometheus.yml
│   ├── 8-23-prometheus.yml
│   ├── 8-24-prometheus.yml
│   ├── 8-25-prometheus.yml
│   ├── 8-3-prometheus.yml
│   ├── 8-4-filesd.json
│   ├── 8-5-prometheus.yml
│   ├── 8-7-prometheus.yml
│   ├── 8-8-prometheus.yml
│   └── 8-9-prometheus.yml
├── 9
│   ├── 9-10-prometheus.yml
│   ├── 9-1-prometheus.yml
│   ├── 9-5-prometheus.yml
│   ├── 9-6-prometheus.yml
│   ├── 9-7-prometheus.yml
│   ├── 9-8-prometheus.yml
│   ├── 9-9-prometheus.yml
│   ├── kube-state-metrics.yml
│   └── prometheus-deployment.yml
├── LICENSE
└── README.md

28 directories, 78 files


# find . -type f -print | xargs grep -i "localhost" /dev/null
./17/17-1-prometheus.yml:      - localhost:9090
./17/17-1-prometheus.yml:      - localhost:9100
./19/combined-alertmanager.yml:  smtp_smarthost: 'localhost:25'
./19/combined-alertmanager.yml:    - url: http://localhost:1234/log
./4/4-11-prometheus.yml:      - localhost:9091
./4/4-12-pushgateway.py:    pushadd_to_gateway('localhost:9091', job='batch', registry=registry)
./3/3-2-prometheus.yml:      - localhost:8000
./3/3-9-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)
./3/3-11-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)
./3/3-10-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)
./3/3-6-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)
./3/3-7-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)
./3/3-5-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)
./3/3-4-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)
./12/12-3-consul_metrics.py:    out = urlopen("http://localhost:8500/v1/agent/metrics").read()
./8/8-23-prometheus.yml:       - localhost:9090
./8/8-25-prometheus.yml:       - localhost:1234
./8/8-22-prometheus.yml:    - server: 'localhost:8500'
./8/8-21-prometheus.yml:    - server: 'localhost:8500'
./8/8-17-prometheus.yml:    - server: 'localhost:8500'
./8/8-18-prometheus.yml:    - server: 'localhost:8500'
./8/8-1-prometheus.yml:      - localhost:9090
./8/8-7-prometheus.yml:    - server: 'localhost:8500'
./8/8-24-prometheus.yml:       - localhost:9090
./8/8-20-prometheus.yml:    - server: 'localhost:8500'
./2/2-1-prometheus.yml:       - localhost:9093
./2/2-1-prometheus.yml:       - localhost:9090
./2/2-1-prometheus.yml:       - localhost:9100
./2/2-3-alertmanager.yml:  smtp_smarthost: 'localhost:25'
./11/11-2-prometheus.yml:      - localhost:9122
./9/9-1-prometheus.yml:       - localhost:9090
./10/10-2-prometheus.yml:      - localhost:9107
./10/10-7-prometheus.yml:       - localhost:9144
./10/10-5-prometheus.yml:      - localhost:9101
./10/10-10-prometheus.yml:    - server: 'localhost:8500'
./5/5-1-example.py:    server = http.server.HTTPServer(('localhost', 8001), MyHandler)

 

 

# How to replace a string in files under a directory on Linux

Example: you want to change strings written as 192.168.1.x to 192.168.56.x

       in each 실습파일.txt under the /root/example folder

# Example

   - /root/example/ch2/2.1.3/실습파일.txt
   - /root/example/ch3/3.1.4/실습파일.txt
   - /root/example/ch5/5.1.3/실습파일.txt
   
/root/example/
  - app
  - ch2
    - 2.1.3
    - 2.1.4
    - 2.1.5
  - ch3
    - 3.1.2
    - 3.1.4
  - ch4
    - 4.1.1
  - ch5
    - 5.1.1
    - 5.1.3

 

- Find 192.168.1. inside the files and change it to 192.168.56.

# cd /root/example/

# Change 192.168.1. to 192.168.56. in all files under this directory
# find . -name "*.*" -exec perl -pi -e 's/192.168.1./192.168.56./g' {} \;

# Change 192.168.1. to 192.168.56. in all files under this directory
# when the files have no extension
# find . -name "*" -exec perl -pi -e 's/192.168.1./192.168.56./g' {} \;

# Change 192.168.1. to 192.168.56. in *.sh files under this directory
# find . -name "*.sh" -exec perl -pi -e 's/192.168.1./192.168.56./g' {} \;

# Change 192.168.1. to 192.168.56. in *.txt files under this directory
# find . -name "*.txt" -exec perl -pi -e 's/192.168.1./192.168.56./g' {} \;

 

 


 

Splitting panes in the Windows 11 Terminal (cmd.exe)

 

  • New pane (auto split): Alt+Shift+D
  • Create a new pane, split horizontally: Alt+Shift+- (Alt, Shift, minus)
  • Create a new pane, split vertically: Alt+Shift++ (Alt, Shift, plus)
  • Move focus between panes: Alt+Left, Alt+Right, Alt+Down, Alt+Up
  • Resize the focused pane: Alt+Shift+Left, Alt+Shift+Right, Alt+Shift+Down, Alt+Shift+Up
  • Close a pane: Ctrl+Shift+W

 


 

Vagrantfile configuration

 

# -*- mode: ruby -*-
# vi: set ft=ruby :
# Prometheus & Grafana monitoring 

Vagrant.configure("2") do |config|
  config.vm.define "Prometheus_Grafana" do |cfg|
    cfg.vm.box = "alvistack/ubuntu-22.04"
    cfg.vm.provider "virtualbox" do |vb|
      vb.name = "ProGra_SVR(Prometheus_Grafana)"
      vb.cpus = 2
      vb.memory = 2048
      vb.customize ["modifyvm", :id, "--groups", "/ProGra(Prometheus_Grafana)"]
    end
    cfg.vm.host_name = "ProGra-SVR"
    cfg.vm.network "private_network", ip: "192.168.56.125"
    cfg.vm.synced_folder "../data", "/vagrant", disabled: true 
    cfg.vm.provision "shell", path: "config.sh"
  end
end
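With this Vagrantfile and config.sh in the same folder, a minimal usage sketch:

C:\Users\shim>vagrant up
C:\Users\shim>vagrant ssh Prometheus_Grafana
C:\Users\shim>vagrant halt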

 

# Vagrantfile explanation

 

 

# -*- mode: ruby -*-
# vi: set ft=ruby :
# Prometheus & Grafana monitoring 

Vagrant.configure("2") do |config|
  config.vm.define "Prometheus_Grafana" do |cfg|    # name shown by vagrant global-status

<Vagrant name : Prometheus_Grafana>

 

    cfg.vm.box = "alvistack/ubuntu-22.04"      # name of the Vagrant box to install, searched on the Vagrant site

< Name of the Vagrant box to install, found by searching https://app.vagrantup.com/boxes/search >


    cfg.vm.provider "virtualbox" do |vb|
      vb.name = "ProGra_SVR(Prometheus_Grafana)"      # name of the virtual server created in Oracle VM (VirtualBox)

<Name of the virtual server created in Oracle VM>

      vb.cpus = 2    # number of CPU cores assigned to the Oracle VM

<Number of CPUs assigned to the Oracle VM>

      vb.memory = 2048       # amount of memory (MB) assigned to the Oracle VM

<Amount of memory assigned to the Oracle VM>

      vb.customize ["modifyvm", :id, "--groups", "/ProGra(Prometheus_Grafana)"]  # VirtualBox group name for the virtual server

※ The group name must start with /; omitting it causes an error

<VirtualBox group name for the virtual server>


    end     # closes the cfg.vm.provider "virtualbox" do |vb| block
    cfg.vm.host_name = "ProGra-SVR"  #   hostname of the virtual server
  

    cfg.vm.network "private_network", ip: "192.168.56.125"  #   IP address of the virtual server

 

    cfg.vm.synced_folder "../data", "/vagrant", disabled: true 

 

    cfg.vm.provision "shell", path: "config.sh" #   provisioning script run when the virtual server is created

 

  end     # closes the config.vm.define "Prometheus_Grafana" do |cfg| block

 

end     # closes the Vagrant.configure("2") do |config| block

 

 

# config.sh (place it in the same folder as the Vagrantfile)
C:\Users\shim>type config.sh
# config.sh
# Set the time zone manually
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime

# Needed on CentOS 8 (change the YUM repo URLs to vault.centos.org)
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*

# Install net-tools
yum install net-tools -y

# Install yum-utils
yum install yum-utils -y 

# Install dnf
yum install dnf -y 

# Install docker compose
dnf install python3 python3-pip -y
pip3 install docker-compose

# docker repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


# config DNS  
cat <<EOF > /etc/resolv.conf
nameserver 1.1.1.1 #cloudflare DNS
nameserver 8.8.8.8 #Google DNS
EOF

 

 

# Additional notes
# Default Vagrantfile configuration from https://app.vagrantup.com/boxes/search

Vagrant.configure("2") do |config|
  config.vm.define "ubuntu/trusty64"
  config.vm.box = "ubuntu/trusty64"
end



# If config.vm.define is not set, the machine shows up as default in vagrant global-status

C:\Users\shim>vagrant global-status
id       name              provider   state   directory
-----------------------------------------------------------------------------
73ef706  default           virtualbox running C:/Users/shim

73ef706  ubuntu/trusty64   virtualbox running C:/Users/shim

 

# Installation log
C:\Users\shim>vagrant up
Bringing machine 'Prometheus_Grafana' up with 'virtualbox' provider...
==> Prometheus_Grafana: Importing base box 'alvistack/ubuntu-22.04'...
==> Prometheus_Grafana: Matching MAC address for NAT networking...
==> Prometheus_Grafana: Checking if box 'alvistack/ubuntu-22.04' version '20240120.1.1' is up to date...
==> Prometheus_Grafana: A newer version of the box 'alvistack/ubuntu-22.04' for provider 'virtualbox' is
==> Prometheus_Grafana: available! You currently have version '20240120.1.1'. The latest is version
==> Prometheus_Grafana: '20240202.1.1'. Run `vagrant box update` to update.
==> Prometheus_Grafana: Setting the name of the VM: ProGra_SVR(Prometheus_Grafana)
==> Prometheus_Grafana: Clearing any previously set network interfaces...
==> Prometheus_Grafana: Preparing network interfaces based on configuration...
    Prometheus_Grafana: Adapter 1: nat
    Prometheus_Grafana: Adapter 2: hostonly
==> Prometheus_Grafana: Forwarding ports...
    Prometheus_Grafana: 22 (guest) => 2222 (host) (adapter 1)
==> Prometheus_Grafana: Running 'pre-boot' VM customizations...
==> Prometheus_Grafana: Booting VM...
==> Prometheus_Grafana: Waiting for machine to boot. This may take a few minutes...
    Prometheus_Grafana: SSH address: 127.0.0.1:2222
    Prometheus_Grafana: SSH username: vagrant
    Prometheus_Grafana: SSH auth method: private key
    Prometheus_Grafana: Warning: Connection reset. Retrying...
    Prometheus_Grafana: Warning: Connection aborted. Retrying...
    Prometheus_Grafana:
    Prometheus_Grafana: Vagrant insecure key detected. Vagrant will automatically replace
    Prometheus_Grafana: this with a newly generated keypair for better security.
    Prometheus_Grafana:
    Prometheus_Grafana: Inserting generated public key within guest...
    Prometheus_Grafana: Removing insecure key from the guest if it's present...
    Prometheus_Grafana: Key inserted! Disconnecting and reconnecting using new SSH key...
==> Prometheus_Grafana: Machine booted and ready!
==> Prometheus_Grafana: Checking for guest additions in VM...
    Prometheus_Grafana: The guest additions on this VM do not match the installed version of
    Prometheus_Grafana: VirtualBox! In most cases this is fine, but in rare cases it can
    Prometheus_Grafana: prevent things such as shared folders from working properly. If you see
    Prometheus_Grafana: shared folder errors, please make sure the guest additions within the
    Prometheus_Grafana: virtual machine match the version of VirtualBox you have installed on
    Prometheus_Grafana: your host and reload your VM.
    Prometheus_Grafana:
    Prometheus_Grafana: Guest Additions Version: 6.0.0 r127566
    Prometheus_Grafana: VirtualBox Version: 7.0
==> Prometheus_Grafana: Setting hostname...
==> Prometheus_Grafana: Configuring and enabling network interfaces...
==> Prometheus_Grafana: Running provisioner: shell...
    Prometheus_Grafana: Running: C:/Users/shim/AppData/Local/Temp/vagrant-shell20240204-16936-1upqxy3.sh

C:\Users\shim>

 


 

This is a Jirisan traverse map.

 

It includes per-segment distance, cumulative distance, cumulative time, and so on.

Difficulty varies depending on your own condition.

An editable PPT version is attached as well.

 

 

 

지리산등산지도.pptx
0.22MB

 

지리산등산지도.pdf
0.15MB


When downloading application software

 

Differences between arm64, x86, x64, etc.

 

Architecture    Definition                  Main use
ARM64           64-bit architecture (ARM)   Mobile, embedded systems, etc.
x86             32-bit Intel architecture   Typical 32-bit desktops
x64 (=amd64)    64-bit architecture         Typical 64-bit desktops
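To check which architecture a machine uses before downloading, a quick sketch (Linux and Windows cmd.exe respectively):

# uname -m                            (prints x86_64 or aarch64 on Linux)
C:\>echo %PROCESSOR_ARCHITECTURE%     (prints AMD64 or ARM64 on Windows)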
When installing virtual machines with Vagrant, I wanted to install several different VMs but did not know how,

e.g. installing a centos8 VM after a centos7 VM,

so I kept removing and reinstalling vagrant_2.3.4_windows_i686.exe (the Vagrant installer).
Below is how to install (and add) virtual machines.

 

 

Frequently used commands

 

Command                        Description
vagrant init                   Creates the base Vagrantfile for provisioning
vagrant up                     Reads the Vagrantfile and runs provisioning
vagrant halt                   Shuts down the virtual machine
vagrant destroy                Deletes the virtual machine
vagrant provision              Applies changed settings to the virtual machine
vagrant box list               Lists the downloaded base box images
vagrant box remove [box name]  Deletes a base box image

 

When the home directory is C:\Users\shim

 

C:\HashiCorp                       # folder where Vagrant is installed

C:\Users\shim\Vagrantfile          # default folder where the Vagrantfile is stored

C:\Users\shim\.vagrant.d\boxes     # folder where the Vagrant base (box) images are stored
                                   # box images end up here after vagrant init / vagrant up

C:\Users\shim\VirtualBox VMs       # folder where the Vagrant virtual machines are stored

 


 

 

 

 

Downloading and installing a box image

 

o Find the Vagrant box you want to install and edit C:\Users\shim\Vagrantfile.

https://app.vagrantup.com/

# Edit C:\Users\shim\Vagrantfile (centos/7)

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
end

 

 

#  Install the centos/7 virtual machine

C:\Users\shim>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'centos/7' version '2004.01' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Waiting for cleanup before exiting...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key

 

 

# To add another box image, delete C:\Users\shim\Vagrantfile and run vagrant init again

   vagrant init sometimes refuses to run

# It fails if C:\Users\shim\Vagrantfile already exists

C:\Users\shim>vagrant init
`Vagrantfile` already exists in this directory. Remove it before
running `vagrant init`.

# After deleting C:\Users\shim\Vagrantfile, run vagrant init again

C:\Users\shim>vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

 

 

#  Edit C:\Users\shim\Vagrantfile again (rockylinux/9)

Vagrant.configure("2") do |config|
 config.vm.box = "rockylinux/9"
end

 

 

Install the rockylinux/9 virtual machine

C:\Users\shim>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'rockylinux/9' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'rockylinux/9'
    default: URL: https://vagrantcloud.com/rockylinux/9
==> default: Adding box 'rockylinux/9' (v3.0.0) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/rockylinux/boxes/9/versions/3.0.0/providers/virtualbox/unknown/vagrant.box
Download redirected to host: dl.rockylinux.org
    default:
    default: Calculating and comparing box checksum...
==> default: Successfully added box 'rockylinux/9' (v3.0.0) for 'virtualbox'!
==> default: Importing base box 'rockylinux/9'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'rockylinux/9' version '3.0.0' is up to date...
==> default: Setting the name of the VM: shim_default_1706426848952_97525
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...

 

 

 

# Managing and removing boxes (vagrant box list / remove)

 

C:\Users\shim\.vagrant.d\boxes>vagrant box list
alvistack/ubuntu-22.04 (virtualbox, 20240120.1.1)
centos/7               (virtualbox, 2004.01)
rockylinux/9           (virtualbox, 3.0.0)

C:\Users\shim\.vagrant.d\boxes>vagrant box remove centos/7
Removing box 'centos/7' (v2004.01) with provider 'virtualbox'...

C:\Users\shim\.vagrant.d\boxes>vagrant box remove rockylinux/9
Removing box 'rockylinux/9' (v3.0.0) with provider 'virtualbox'...

C:\Users\shim\.vagrant.d\boxes>vagrant box list
alvistack/ubuntu-22.04 (virtualbox, 20240120.1.1)
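As a side note, instead of deleting the Vagrantfile for every new box, several machines can be defined in a single Vagrantfile; a minimal sketch using config.vm.define (the machine names centos7 and rocky9 are arbitrary):

Vagrant.configure("2") do |config|
  config.vm.define "centos7" do |c|
    c.vm.box = "centos/7"
  end
  config.vm.define "rocky9" do |r|
    r.vm.box = "rockylinux/9"
  end
end

Each machine is then managed by name, e.g. vagrant up rocky9 or vagrant ssh centos7.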

 

 

 

 

 

 

Sessions New or Edit menu

 

- Login Username : root

 - Extra PuTTY Arguments : -pw vagrant (the root password)

   (-pw is the password option)

 

Tools -> Options -> GUI

 

 - Under Security, check 'Allow plain text passwords on putty command line'
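The same login can also be done from a plain PuTTY command line; a sketch (192.168.56.125 is just the example VM IP used elsewhere in this blog, and -pw passes the password in plain text, so use it only for local lab VMs):

putty.exe -ssh root@192.168.56.125 -pw vagrant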

 

 


 

 


 

# OpenStack releases

 

https://releases.openstack.org/#release-series

 


 

# OpenStack trivia

 

 - A new release comes out every 6 months: (the secret) releases are named in alphabetical order, A, B, C, D, E, and so on


 

 

 

 

Image download site

 

 

https://cloud-images.ubuntu.com/focal/

 


 

 focal-server-cloudimg-amd64.img    (QCow2 file)

 

 

 

Downloading the image

 

# mkdir /tmp/img/
# cd /tmp/img/
# wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img

or

# wget https://cloud.centos.org/centos/8/vagrant/x86_64/images/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2

 

Changing the image password

 

After downloading the image to the server

 

Ubuntu cloud images do not have a default username/password. Before creating an instance from the image, you must configure one using the command below.

 

To get the virt-customize command, install the package below.

# sudo apt install libguestfs-tools

 

# virt-customize -a focal-server-cloudimg-amd64.img --root-password password:openstack
[   0.0] Examining the guest ...
[  83.3] Setting a random seed
virt-customize: warning: random seed could not be set for this type of guest
[  83.6] Setting the machine ID in /etc/machine-id
[  83.7] Setting passwords
[  93.7] Finishing off

or 

# virt-customize -a CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 --root-password password:openstack
[   0.0] Examining the guest ...
[  18.3] Setting a random seed
[  18.5] Setting the machine ID in /etc/machine-id
[  18.6] Setting passwords
[  26.2] Finishing off
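Before uploading, the image can be sanity-checked; a sketch (qemu-img comes from the qemu-utils package):

# qemu-img info focal-server-cloudimg-amd64.img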

 

# Create the image    (created as the stack user)

  - If the openstack command does not work, see https://hwpform.tistory.com/90

$ openstack image create "ubuntu" --file /tmp/focal-server-cloudimg-amd64.img --disk-format qcow2 --container-format bare

$ -- output
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                       |
| created_at       | 2024-01-13T08:26:02Z                                                                                                                       |
| disk_format      | qcow2                                                                                                                                      |
| file             | /v2/images/9a95f850-fc58-44f4-bbb7-719338ea6dd9/file                                                                                       |
| id               | 9a95f850-fc58-44f4-bbb7-719338ea6dd9                                                                                                       |
| min_disk         | 0                                                                                                                                          |
| min_ram          | 0                                                                                                                                          |
| name             | ubuntu                                                                                                                                     |
| owner            | 9ff989aca2474d0c8a484165b77ac4d3                                                                                                           |
| properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/ubuntu', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                          |
| status           | queued                                                                                                                                     |
| tags             |                                                                                                                                            |
| updated_at       | 2024-01-13T08:26:02Z                                                                                                                       |
| visibility       | shared                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+

or

stack@ubuntu:/tmp$ openstack image create "centos8" --file /tmp/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 --disk-format qcow2 --container-format bare
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                       |
| created_at       | 2024-01-21T06:33:42Z                                                                                                                       |
| disk_format      | qcow2                                                                                                                                      |
| file             | /v2/images/3f049468-10b7-41bf-b5b1-14476d546d52/file                                                                                       |
| id               | 3f049468-10b7-41bf-b5b1-14476d546d52                                                                                                       |
| min_disk         | 0                                                                                                                                          |
| min_ram          | 0                                                                                                                                          |
| name             | ubuntu                                                                                                                                     |
| owner            | 8d5e4ccbae274d74b0ba81a1598a0921                                                                                                           |
| properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/ubuntu', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                          |
| status           | queued                                                                                                                                     |
| tags             |                                                                                                                                            |
| updated_at       | 2024-01-21T06:33:42Z                                                                                                                       |
| visibility       | shared                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
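To confirm that the images were registered, a quick check (a sketch):

$ openstack image list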

 

- Same result as shown below

 

- When you open the web page, the image shows up as created

 


https://docs.openstack.org/install-guide/

 

 


 

# Original PDF

 

InstallGuide.pdf
1.50MB

 

OpenStack contributors

 

Jan 04, 2024

 

 

CONTENTS

 

 

 

CHAPTER ONE

 

CONVENTIONS

 

The OpenStack documentation uses several typesetting conventions

 

1.1 Notices

 

Notices take these forms:

 

Note: A comment with additional information that explains a part of the text.

 

Important: Something you must be aware of before proceeding.

 

Tip: An extra but helpful piece of practical advice.

 

Caution: Helpful information that prevents the user from making mistakes.

 

Warning: Critical information about the risk of data loss or security issues.

 

1.2 Command prompts

$ command

 

Any user, including the root user, can run commands that are prefixed with the $ prompt.

# command

 

The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.

 

 

 

CHAPTER TWO

 

2.1 Abstract

 

The OpenStack system consists of several key services that are separately installed. These services work together depending on your cloud needs and include the Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry, Orchestration, and Database services. You can install any of these projects separately and configure them stand-alone or as connected entities. Explanations of configuration options and sample configuration files are included. This guide documents the installation of OpenStack starting with the Pike release. It covers multiple releases.

 

Warning: This guide is a work-in-progress and is subject to updates frequently. Pre-release packages have been used for testing, and some instructions may not work with final versions. Please help us make this guide better by reporting any errors you encounter.

 

 

2.2 Operating systems

 

Currently, this guide describes OpenStack installation for the following Linux distributions:

 

openSUSE and SUSE Linux Enterprise Server

You can install OpenStack by using packages on openSUSE Leap 42.3, openSUSE Leap 15, SUSE Linux Enterprise Server 12 SP4, SUSE Linux Enterprise Server 15 through the Open Build Service Cloud repository.

 

Red Hat Enterprise Linux and CentOS

You can install OpenStack by using packages available on both Red Hat Enterprise Linux 7 and 8 and their derivatives through the RDO repository.

Note: OpenStack Wallaby is available for CentOS Stream 8. OpenStack Ussuri and Victoria are available for both CentOS 8 and RHEL 8. OpenStack Train and earlier are available on both CentOS 7 and RHEL 7.

 

Ubuntu

You can walk through an installation by using packages available through Canonical's Ubuntu Cloud Archive repository for Ubuntu 16.04+ (LTS).

Note: The Ubuntu Cloud Archive pockets for Pike and Queens provide OpenStack packages for Ubuntu 16.04 LTS; OpenStack Queens is installable directly on Ubuntu 18.04 LTS; the Ubuntu Cloud Archive pockets for Rocky and Stein provide OpenStack packages for Ubuntu 18.04 LTS; the Ubuntu Cloud Archive pocket for Victoria provides OpenStack packages for Ubuntu 20.04 LTS.

 

 

CHAPTER THREE

 

GET STARTED WITH OPENSTACK

 

The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project.

 

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services. Each service offers an Application Programming Interface (API) that facilitates this integration. Depending on your needs, you can install some or all services.

 

3.1 The OpenStack services

 

The OpenStack project navigator lets you browse the OpenStack services that make up the OpenStack architecture. The services are categorized per the service type and release series.

 

3.2 The OpenStack architecture

 

The following sections describe the OpenStack architecture in more detail:

 

 

3.2.1 Conceptual architecture

 

The following diagram shows the relationships among the OpenStack services:

 

3.2.2 Logical architecture

 

To design, deploy, and configure OpenStack, administrators must understand the logical architecture.

 

As shown in Conceptual architecture, OpenStack consists of several independent parts, named the OpenStack services. All services authenticate through a common Identity service. Individual services interact with each other through public APIs, except where privileged administrator commands are necessary.

 

Internally, OpenStack services are composed of several processes. All services have at least one API process, which listens for API requests, preprocesses them and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes.

 

For communication between the processes of one service, an AMQP message broker is used. The service's state is stored in a database. When deploying and configuring your OpenStack cloud, you can choose among several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.

 

Users can access OpenStack via the web-based user interface implemented by the Horizon Dashboard, via command-line clients and by issuing API requests through tools like browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.

 

The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:

 

 

CHAPTER FOUR

 

OVERVIEW

 

The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project.

 

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration.

 

This guide covers step-by-step deployment of the major OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience. This guide is not intended to be used for production system installations, but to create a minimum proof-of-concept for the purpose of learning about OpenStack.

 

After becoming familiar with basic installation, configuration, operation, and troubleshooting of these OpenStack services, you should consider the following steps toward deployment using a production architecture:

 

  • Determine and implement the necessary core and optional services to meet performance and redundancy requirements.
  • Increase security using methods such as firewalls, encryption, and service policies.
  • Use a deployment tool such as Ansible, Chef, Puppet, or Salt to automate deployment and management of the production environment. The OpenStack project has a couple of deployment projects with specific guides per version: - 2023.2 (Bobcat) release - 2023.1 (Antelope) release - Zed release - Yoga release - Xena release - Wallaby release - Victoria release - Ussuri release - Train release - Stein release

 

4.1 Example architecture

 

The example architecture requires at least two nodes (hosts) to launch a basic virtual machine or instance. Optional services such as Block Storage and Object Storage require additional nodes.

Important: The example architecture used in this guide is a minimum configuration, and is not intended for production system installations. It is designed to provide a minimum proof-of-concept for the purpose of learning about OpenStack. For information on creating architectures for specific use cases, or how to determine which architecture is required, see the Architecture Design Guide.

 

This example architecture differs from a minimal production architecture as follows:

  • Networking agents reside on the controller node instead of one or more dedicated network nodes.
  • Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network.

For more information on production architectures for Pike, see the Architecture Design Guide, OpenStack Networking Guide for Pike, and OpenStack Administrator Guides for Pike.

 

For more information on production architectures for Queens, see the Architecture Design Guide, OpenStack Networking Guide for Queens, and OpenStack Administrator Guides for Queens.

 

For more information on production architectures for Rocky, see the Architecture Design Guide, OpenStack Networking Guide for Rocky, and OpenStack Administrator Guides for Rocky.

 

 

4.1.1 Controller

 

The controller node runs the Identity service, Image service, Placement service, management portions of Compute, management portion of Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.

 

Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services.

 

The controller node requires a minimum of two network interfaces.

 

4.1.2 Compute

The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups. You can deploy more than one compute node. Each node requires a minimum of two network interfaces.

 

4.1.3 Block Storage

The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. You can deploy more than one block storage node. Each node requires a minimum of one network interface.

 

4.1.4 Object Storage

The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. This service requires two nodes. Each node requires a minimum of one network interface. You can deploy more than two object storage nodes.

 

4.2 Networking

 

Choose one of the following virtual networking options.

 

4.2.1 Networking Option 1: Provider networks

The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.

 

The OpenStack user requires more information about the underlying network infrastructure to create a virtual network to exactly match the infrastructure.

Warning: This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option below if you desire these features.

 

 

 

4.2.2 Networking Option 2: Self-service networks

The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT. Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS.

 

The OpenStack user can create virtual networks without the knowledge of underlying infrastructure on the data network. This can also include VLAN networks if the layer-2 plug-in is configured accordingly.

 

 

 

CHAPTER FIVE

 

This section explains how to configure the controller node and one compute node using the example architecture.

 

Although most environments include Identity, Image service, Compute, at least one networking service, and the Dashboard, the Object Storage service can operate independently. If your use case only involves Object Storage, you can skip to

  •  Object Storage Installation Guide for 2023.2 (Bobcat)
  •  Object Storage Installation Guide for 2023.1 (Antelope)
  •  Object Storage Installation Guide for Zed
  •  Object Storage Installation Guide for Yoga
  •  Object Storage Installation Guide for Stein

after configuring the appropriate nodes for it.

 

You must use an account with administrative privileges to configure each node. Either run the commands as the root user or configure the sudo utility.

Note: The systemctl enable call on openSUSE outputs a warning message when the service uses SysV Init scripts instead of native systemd files. This warning can be ignored.

 

For best performance, we recommend that your environment meets or exceeds the hardware requirements in Hardware requirements.

 

The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:

  • Controller Node: 1 processor, 4 GB memory, and 5 GB storage
  • Compute Node: 1 processor, 2 GB memory, and 10 GB storage

 

As the number of OpenStack services and virtual machines increase, so do the hardware requirements for the best performance. If performance degrades after enabling additional services or virtual machines, consider adding hardware resources to your environment.

 

To minimize clutter and provide more resources for OpenStack, we recommend a minimal installation of your Linux distribution. Also, you must install a 64-bit version of your distribution on each node.

 

A single disk partition on each node works for most basic installations. However, you should consider Logical Volume Manager (LVM) for installations with optional services such as Block Storage.

 

For first-time installation and testing purposes, many users select to build each host as a virtual machine (VM). The primary benefits of VMs include the following:

  • One physical server can support multiple nodes, each with almost any number of network interfaces. 
  • Ability to take periodic snapshots throughout the installation process and roll back to a working configuration in the event of a problem.

However, VMs will reduce performance of your instances, particularly if your hypervisor and/or processor lacks support for hardware acceleration of nested VMs.

 

Note: If you choose to install on VMs, make sure your hypervisor provides a way to disable MAC address filtering on the provider network interface.

 

For more information about system requirements, see the OpenStack 2023.2 (Bobcat) Administrator Guides, the OpenStack 2023.1 (Antelope) Administrator Guides, the OpenStack Zed Administrator Guides, the OpenStack Yoga Administrator Guides, or the OpenStack Stein Administrator Guides.

 

5.1 Security

 

OpenStack services support various security methods including password, policy, and encryption. Additionally, supporting services including the database server and message broker support password security.

 

To ease the installation process, this guide only covers password security where applicable. You can create secure passwords manually, but the database connection string in the service's configuration file cannot accept special characters like @. We recommend you generate them using a tool such as pwgen, or by running the following command:

$ openssl rand -hex 10

 

For OpenStack services, this guide uses SERVICE_PASS to reference service account passwords and SERVICE_DBPASS to reference database passwords.

 

The following table provides a list of services that require passwords and their associated references in the guide.

 

OpenStack and supporting services require administrative privileges during installation and operation. In some cases, services perform modifications to the host that can interfere with deployment automation tools such as Ansible, Chef, and Puppet. For example, some OpenStack services add a root wrapper to sudo that can interfere with security policies. See the Compute service documentation for Pike, the Compute service documentation for Queens, or the Compute service documentation for Rocky for more information.

 

The Networking service assumes default values for kernel network parameters and modifies firewall rules. To avoid most issues during your initial installation, we recommend using a stock deployment of a supported distribution on your hosts. However, if you choose to automate deployment of your hosts, review the configuration and policies applied to them before proceeding further.

 

5.2 Host networking

 

After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking on your distribution, see the documentation.

 

See also:

  • Ubuntu Network Configuration
  • RHEL 7 or RHEL 8 Network Configuration
  • SLES 12 or SLES 15 or openSUSE Network Configuration

All nodes require Internet access for administrative purposes such as package installation, security updates, DNS, and NTP. In most cases, nodes should obtain Internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that the physical network infrastructure provides Internet access via NAT or other methods. The example architectures use routable IP address space for the provider (external) network and assume that the physical network infrastructure provides direct Internet access.

 

In the provider networks architecture, all instances attach directly to the provider network. In the self-service (private) networks architecture, instances can attach to a self-service or provider network. Self-service networks can reside entirely within OpenStack or provide some level of external network access using NAT through the provider network.

 

 

The example architectures assume use of the following networks:

 

  • Management on 10.0.0.0/24 with gateway 10.0.0.1

This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

 

  • Provider on 203.0.113.0/24 with gateway 203.0.113.1

This network requires a gateway to provide Internet access to instances in your OpenStack environment.

 

You can modify these ranges and gateways to work with your particular network infrastructure. Network interface names vary by distribution. Traditionally, interfaces use eth followed by a sequential number. To cover all variations, this guide refers to the first interface as the interface with the lowest number and the second interface as the interface with the highest number.
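To check which interface names your hosts actually use, you can simply list the interfaces, for example:

# ip link show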

Note: Ubuntu has changed its network interface naming scheme. Refer to "Changing Network Interface Names" for Ubuntu 16.04 and later.

 

Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node.

Warning: Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures.

 

Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. Ubuntu does not. For more information about securing your environment, refer to the OpenStack Security Guide.

 

 

5.2.1 Controller node

 

Configure network interfaces

 

1. Configure the first interface as the management interface:

 

IP address: 10.0.0.11

Network mask: 255.255.255.0 (or /24)

Default gateway: 10.0.0.1
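As an illustration only (the exact file depends on your distribution and release), a classic ifupdown configuration for the management interface on Ubuntu could look like the following, where INTERFACE_NAME stands for the first interface; newer Ubuntu releases use netplan instead:

# The management network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet static
    address 10.0.0.11
    netmask 255.255.255.0
    gateway 10.0.0.1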

 

2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:

 

Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

 

For Ubuntu:

 

• Edit the /etc/network/interfaces file to contain the following:

# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

 

For RHEL or CentOS:

  • Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:

Do not change the HWADDR and UUID keys.

DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

 

For SUSE:

  • Edit the /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to contain the following:
STARTMODE='auto'
BOOTPROTO='static'

 

 

3. Reboot the system to activate the changes.

 

Configure name resolution

 

1. Set the hostname of the node to controller.

2. Edit the /etc/hosts file to contain the following:

# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2

 

Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
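For illustration, on such a distribution the top of /etc/hosts on the controller might look like this after editing (the commented line is the extraneous entry):

127.0.0.1   localhost
# 127.0.1.1   controller
10.0.0.11   controller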

 

Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.

 

 

5.2.2 Compute node

 

Configure network interfaces

 

1. Configure the first interface as the management interface:

IP address: 10.0.0.31

Network mask: 255.255.255.0 (or /24)

Default gateway: 10.0.0.1

Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

 

2. The provider interface uses a special configuration without an IP address assigned to it. Configure the second interface as the provider interface:

Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

 

For Ubuntu:

  •  Edit the /etc/network/interfaces file to contain the following:
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

 

For RHEL or CentOS:

  • Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:

Do not change the HWADDR and UUID keys.

DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

 

For SUSE:

  • Edit the /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to contain the following:
STARTMODE='auto'
BOOTPROTO='static'

 

3. Reboot the system to activate the changes.

 

Configure name resolution

 

1. Set the hostname of the node to compute1.

2. Edit the /etc/hosts file to contain the following:

# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

 

Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.

 

5.2.3 Block storage node (Optional)

 

If you want to deploy the Block Storage service, configure one additional storage node.

 

Configure network interfaces

  • Configure the management interface:

– IP address: 10.0.0.41

– Network mask: 255.255.255.0 (or /24)

– Default gateway: 10.0.0.1

 

Configure name resolution

1. Set the hostname of the node to block1.

2. Edit the /etc/hosts file to contain the following:

# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2

 

Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.

 

Note: This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them.

 

3. Reboot the system to activate the changes.

 

 

5.2.4 Verify connectivity

 

We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.

 

1. From the controller node, test access to the Internet:

# ping -c 4 docs.openstack.org
PING files02.openstack.org (23.253.125.17) 56(84) bytes of data.
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=1 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=2 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=3 ttl=43 time=125 ms
64 bytes from files02.openstack.org (23.253.125.17): icmp_seq=4 ttl=43 time=125 ms
--- files02.openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 125.192/125.282/125.399/0.441 ms

 

2. From the controller node, test access to the management interface on the compute node:

# ping -c 4 compute1
PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

 

3. From the compute node, test access to the Internet:

# ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

 

4. From the compute node, test access to the management interface on the controller node:

# ping -c 4 controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
Note: RHEL, CentOS and SUSE distributions enable a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. For more information about securing your environment, refer to the OpenStack Security Guide.
Ubuntu does not enable a restrictive firewall by default. For more information about securing your environment, refer to the OpenStack Security Guide.

 

 

5.3 Network Time Protocol (NTP)

 

To properly synchronize services among nodes, you can install Chrony, an implementation of NTP. We recommend that you configure the controller node to reference more accurate (lower stratum) servers and other nodes to reference the controller node.

 

5.3.1 Controller node

 

Perform these steps on the controller node.

 

Install and configure components

 

1. Install the packages:

 

For Ubuntu:

# apt install chrony

 

For RHEL or CentOS:

# yum install chrony

 

For SUSE

# zypper install chrony

 

2. Edit the chrony.conf file and add, change, or remove the following keys as necessary for your environment.

 

For RHEL, CentOS, or SUSE, edit the /etc/chrony.conf file:

server NTP_SERVER iburst

 

For Ubuntu, edit the /etc/chrony/chrony.conf file:

server NTP_SERVER iburst

 

Replace NTP_SERVER with the hostname or IP address of a suitable more accurate (lower stratum) NTP server. The configuration supports multiple server keys.

Note: By default, the controller node synchronizes the time via a pool of public servers. However, you can optionally configure alternative servers such as those provided by your organization.

 

3. To enable other nodes to connect to the chrony daemon on the controller node, add this key to the same chrony.conf file mentioned above:

allow 10.0.0.0/24

 

If necessary, replace 10.0.0.0/24 with a description of your subnet.
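Putting the two keys together, the relevant part of the controller's chrony.conf might look like this (the pool servers below are only placeholders for NTP_SERVER):

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
allow 10.0.0.0/24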

 

4. Restart the NTP service:

 

For Ubuntu:

# service chrony restart

 

For RHEL, CentOS, or SUSE:

# systemctl enable chronyd.service
# systemctl start chronyd.service

 

 

5.3.2 Other nodes

 

Other nodes reference the controller node for clock synchronization. Perform these steps on all other nodes.

 

Install and configure components

 

1. Install the packages.

 

For Ubuntu:

# apt install chrony

 

For RHEL or CentOS:

# yum install chrony

 

For SUSE:

# zypper install chrony

 

 

2. Configure the chrony.conf file and comment out or remove all but one server key. Change it to reference the controller node.

 

For RHEL, CentOS, or SUSE, edit the /etc/chrony.conf file:

server controller iburst

 

For Ubuntu, edit the /etc/chrony/chrony.conf file:

server controller iburst

 

3. Comment out the pool 2.debian.pool.ntp.org offline iburst line.
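Combining steps 2 and 3, the relevant part of chrony.conf on a compute or storage node would then look roughly like this:

server controller iburst
# pool 2.debian.pool.ntp.org offline iburst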

 

4. Restart the NTP service.

 

For Ubuntu:

# service chrony restart

 

For RHEL, CentOS, or SUSE:

# systemctl enable chronyd.service
# systemctl start chronyd.service

 

 

5.3.3 Verify operation

 

We recommend that you verify NTP synchronization before proceeding further. Some nodes, particularly those that reference the controller node, can take several minutes to synchronize.

 

1. Run this command on the controller node:

# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2    7    12   137  -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2    6   177    46    +17us[  -23us] +/-   68ms

 

Contents in the Name/IP address column should indicate the hostname or IP address of one or more NTP servers. Contents in the MS column should indicate * for the server to which the NTP service is currently synchronized.

 

2. Run the same command on all other nodes:

# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3    9   377   421    +15us[  -87us] +/-   15ms

 

Contents in the Name/IP address column should indicate the hostname of the controller node.

 

 

5.4 OpenStack packages

 

Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.

Note: The set up of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.

 

Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.

 

Note: Disable or remove any automatic update services because they can impact your OpenStack environment.

 

5.4.1 OpenStack packages for SUSE

 

Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.

 

Note: The set up of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.

 

Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.

 

Note: Disable or remove any automatic update services because they can impact your OpenStack environment.

 

Enable the OpenStack repository

 

  •  Enable the Open Build Service repositories based on your openSUSE or SLES version, and on the version of OpenStack you want to install:

On openSUSE for OpenStack Ussuri:

# zypper addrepo -f obs://Cloud:OpenStack:Ussuri/openSUSE_Leap_15.1 Ussuri

 

On openSUSE for OpenStack Train:

# zypper addrepo -f obs://Cloud:OpenStack:Train/openSUSE_Leap_15.0 Train

 

On openSUSE for OpenStack Stein:

# zypper addrepo -f obs://Cloud:OpenStack:Stein/openSUSE_Leap_15.0 Stein

 

On openSUSE for OpenStack Rocky:

# zypper addrepo -f obs://Cloud:OpenStack:Rocky/openSUSE_Leap_15.0 Rocky

 

On openSUSE for OpenStack Queens:

# zypper addrepo -f obs://Cloud:OpenStack:Queens/openSUSE_Leap_42.3 Queens

 

On openSUSE for OpenStack Pike:

# zypper addrepo -f obs://Cloud:OpenStack:Pike/openSUSE_Leap_42.3 Pike

 

Note: The openSUSE distribution uses the concept of patterns to represent collections of packages. If you selected Minimal Server Selection (Text Mode) during the initial installation, you may be presented with a dependency conflict when you attempt to install the OpenStack packages. To avoid this, remove the minimal_base-conflicts package:
# zypper rm patterns-openSUSE-minimal_base-conflicts

 

On SLES for OpenStack Ussuri:

# zypper addrepo -f obs://Cloud:OpenStack:Ussuri/SLE_15_SP2 Ussuri

 

On SLES for OpenStack Train:

# zypper addrepo -f obs://Cloud:OpenStack:Train/SLE_15_SP1 Train

 

On SLES for OpenStack Stein:

# zypper addrepo -f obs://Cloud:OpenStack:Stein/SLE_15 Stein

 

On SLES for OpenStack Rocky:

# zypper addrepo -f obs://Cloud:OpenStack:Rocky/SLE_12_SP4 Rocky

 

On SLES for OpenStack Queens:

# zypper addrepo -f obs://Cloud:OpenStack:Queens/SLE_12_SP3 Queens

 

On SLES for OpenStack Pike:

# zypper addrepo -f obs://Cloud:OpenStack:Pike/SLE_12_SP3 Pike
Note: The packages are signed by GPG key D85F9316. You should verify the fingerprint of the imported GPG key before using it.
Key Name: Cloud:OpenStack OBS Project <Cloud:OpenStack@build.opensuse.org>
Key Fingerprint: 35B34E18 ABC1076D 66D5A86B 893A90DA D85F9316
Key Created: 2015-12-16T16:48:37 CET
Key Expires: 2018-02-23T16:48:37 CET

 

Finalize the installation

 

1. Upgrade the packages on all nodes:

# zypper refresh && zypper dist-upgrade
Note: If the upgrade process includes a new kernel, reboot your host to activate it.

 

2. Install the OpenStack client:

# zypper install python-openstackclient

 

 

5.4.2 OpenStack packages for RHEL and CentOS

 

Distributions release OpenStack packages as part of the distribution or using other methods because of differing release schedules. Perform these procedures on all nodes.

Warning: Starting with the Ussuri release, you will need to use either CentOS 8 or RHEL 8. Previous OpenStack releases will need to use either CentOS 7 or RHEL 7. Instructions are included for both distributions and versions where they differ.
Note: The set up of OpenStack packages described here needs to be done on all nodes: controller, compute, and Block Storage nodes.
Warning: Your hosts must contain the latest versions of base installation packages available for your distribution before proceeding further.
Note: Disable or remove any automatic update services because they can impact your OpenStack environment.

 

Prerequisites

Warning: We recommend disabling EPEL when using RDO packages, due to updates in EPEL breaking backwards compatibility. Or, preferably, pin package versions using the yum-versionlock plugin.
Note: The following steps apply to RHEL only. CentOS does not require these steps.

 

1. When using RHEL, it is assumed that you have registered your system using Red Hat Subscription Management and that you have the rhel-7-server-rpms or rhel-8-for-x86_64-baseos-rpms repository enabled by default depending on your version.

For more information on registering a RHEL 7 system, see the Red Hat Enterprise Linux 7 System Administrators Guide.

 

2. In addition to rhel-7-server-rpms on a RHEL 7 system, you also need to have the rhel-7-server-optional-rpms, rhel-7-server-extras-rpms, and rhel-7-server-rh-common-rpms repositories enabled:

# subscription-manager repos --enable=rhel-7-server-optional-rpms \
--enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms

 

For more information on registering a RHEL 8 system, see the Red Hat Enterprise Linux 8 Installation Guide.

 

In addition to rhel-8-for-x86_64-baseos-rpms on a RHEL 8 system, you also need to have the rhel-8-for-x86_64-appstream-rpms, rhel-8-for-x86_64-supplementary-rpms, and codeready-builder-for-rhel-8-x86_64-rpms repositories enabled:

# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \
--enable=rhel-8-for-x86_64-supplementary-rpms --enable=codeready-builder-for-rhel-8-x86_64-rpms

 

Enable the OpenStack repository

 

  • On CentOS, the extras repository provides the RPM that enables the OpenStack repository. CentOS includes the extras repository by default, so you can simply install the package to enable the OpenStack repository. For CentOS 8, you will also need to enable the PowerTools repository.

When installing the Victoria release, run:

# yum install centos-release-openstack-victoria
# yum config-manager --set-enabled powertools

 

When installing the Ussuri release, run:

# yum install centos-release-openstack-ussuri
# yum config-manager --set-enabled powertools

 

When installing the Train release, run:

# yum install centos-release-openstack-train

 

When installing the Stein release, run:

# yum install centos-release-openstack-stein

 

When installing the Rocky release, run:

# yum install centos-release-openstack-rocky

 

When installing the Queens release, run:

# yum install centos-release-openstack-queens

 

When installing the Pike release, run:

# yum install centos-release-openstack-pike

 

  • On RHEL, download and install the RDO repository RPM to enable the OpenStack repository.

On RHEL 7:

The RDO repository RPM installs the latest available OpenStack release.

 

On RHEL 8:

# dnf install https://www.rdoproject.org/repos/rdo-release.el8.rpm

 

The RDO repository RPM installs the latest available OpenStack release.

 

Finalize the installation
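The concrete commands for this subsection are missing from this copy. By analogy with the SUSE and Ubuntu subsections, they would presumably amount to upgrading the base packages and installing the OpenStack client, roughly:

# yum upgrade
# yum install python3-openstackclient

On CentOS 7 / RHEL 7 the client package would presumably be python-openstackclient instead, and if the upgrade pulls in a new kernel, a reboot is required to activate it.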

 

 

 

5.4.3 OpenStack packages for Ubuntu

 

Ubuntu releases OpenStack with each Ubuntu release. Ubuntu LTS releases are provided every two years. OpenStack packages from interim releases of Ubuntu are made available to the prior Ubuntu LTS via the Ubuntu Cloud Archive.

Note: The archive enablement described here needs to be done on all nodes that run OpenStack services.

 

Archive Enablement

OpenStack 2023.2 Bobcat for Ubuntu 22.04 LTS:

# add-apt-repository cloud-archive:bobcat

 

OpenStack 2023.1 Antelope for Ubuntu 22.04 LTS:

# add-apt-repository cloud-archive:antelope

 

OpenStack Zed for Ubuntu 22.04 LTS:

# add-apt-repository cloud-archive:zed

 

OpenStack Yoga for Ubuntu 22.04 LTS:

OpenStack Yoga is available by default using Ubuntu 22.04 LTS.

 

OpenStack Yoga for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:yoga

 

OpenStack Xena for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:xena

 

OpenStack Wallaby for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:wallaby

 

OpenStack Victoria for Ubuntu 20.04 LTS:

# add-apt-repository cloud-archive:victoria

 

OpenStack Ussuri for Ubuntu 20.04 LTS:

OpenStack Ussuri is available by default using Ubuntu 20.04 LTS.

 

OpenStack Ussuri for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:ussuri

 

OpenStack Train for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:train

 

OpenStack Stein for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:stein

 

OpenStack Rocky for Ubuntu 18.04 LTS:

# add-apt-repository cloud-archive:rocky

 

OpenStack Queens for Ubuntu 18.04 LTS:

OpenStack Queens is available by default using Ubuntu 18.04 LTS.
Note: For a full list of supported Ubuntu OpenStack releases, see Ubuntu OpenStack release cycle at https://www.ubuntu.com/about/release-cycle.

 

Sample Installation

# apt install nova-compute

 

Client Installation

# apt install python3-openstackclient

 

 

5.5 SQL database

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

Note: If you see Too many connections or Too many open files error messages in OpenStack service logs, verify that the maximum-connections settings are applied correctly in your environment. In MariaDB, you may also need to raise the open_files_limit configuration option, as sketched below.
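A minimal sketch of the kind of settings involved, assuming the MariaDB configuration file used later in this guide (the values are illustrative, not tuning advice):

[mysqld]
max_connections = 4096
open_files_limit = 65535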

 

 

5.5.1 SQL database for SUSE

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

 

Install and configure components

 

1. Install the packages:

# zypper install mariadb-client mariadb python-PyMySQL

 

2. Create and edit the /etc/my.cnf.d/openstack.cnf file and complete the following actions:

  • Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

 

Finalize installation

 

1. Start the database service and configure it to start when the system boots:

# systemctl enable mysql.service
# systemctl start mysql.service

 

2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

# mysql_secure_installation

 

5.5.2 SQL database for RHEL and CentOS

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

 

Install and configure components

 

1. Install the packages:

# yum install mariadb mariadb-server python2-PyMySQL

 

2. Create and edit the /etc/my.cnf.d/openstack.cnf file (backup existing configuration files in /etc/my.cnf.d/ if needed) and complete the following actions:

  •  Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

 

Finalize installation

 

1. Start the database service and configure it to start when the system boots:

# systemctl enable mariadb.service
# systemctl start mariadb.service

 

2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

# mysql_secure_installation

 

 

5.5.3 SQL database for Ubuntu

 

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

 

Note: As of Ubuntu 16.04, MariaDB was changed to use the unix_socket Authentication Plugin. Local authentication is now performed using the user credentials (UID), and password authentication is no longer used by default. This means that the root user no longer uses a password for local access to the server.
Note: As of Ubuntu 18.04, the mariadb-server package is no longer available from the default repository. To install it successfully, enable the Universe repository on Ubuntu.

 

Install and configure components

 

1. Install the packages:

  •  As of Ubuntu 20.04, install the packages:
# apt install mariadb-server python3-pymysql

 

  •  As of Ubuntu 18.04 or 16.04, install the packages:
# apt install mariadb-server python-pymysql

 

2. Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file and complete the following actions:

  •  Create a [mysqld] section, and set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

 

Finalize installation

 

1. Restart the database service:

# service mysql restart

 

2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

# mysql_secure_installation

 

5.6 Message queue

 

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

 

The message queue runs on the controller node.

 

5.6.1 Message queue for SUSE

 

1. Install the package:

# zypper install rabbitmq-server

 

2. Start the message queue service and configure it to start when the system boots:

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

 

3. Add the openstack user:

# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...

Replace RABBIT_PASS with a suitable password.

 

4. Permit configuration, write, and read access for the openstack user:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...
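Not part of the guide, but you can usually confirm the new account and its permissions with:

# rabbitmqctl list_users
# rabbitmqctl list_permissions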

 

5.6.2 Message queue for RHEL and CentOS

 

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

 

The message queue runs on the controller node.

 

Install and configure components

 

1. Install the package:

# yum install rabbitmq-server

 

2. Start the message queue service and configure it to start when the system boots:

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

 

3. Add the openstack user:

# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...

Replace RABBIT_PASS with a suitable password.

 

4. Permit configuration, write, and read access for the openstack user:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...

 

5.6.3 Message queue for Ubuntu

 

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

 

The message queue runs on the controller node.

 

Install and configure components

 

1. Install the package:

# apt install rabbitmq-server

 

2. Add the openstack user:

# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...

Replace RABBIT_PASS with a suitable password.

 

3. Permit configuration, write, and read access for the openstack user:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...

 

5.7 Memcached

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

5.7.1 Memcached for SUSE

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

Install and configure components

 

1. Install the packages:

# zypper install memcached python-python-memcached

 

2. Edit the /etc/sysconfig/memcached file and complete the following actions:

  • Configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
MEMCACHED_PARAMS="-l 10.0.0.11"

 

Note: Change the existing line MEMCACHED_PARAMS="-l 127.0.0.1".

 

Finalize installation

  • Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service

 

 

5.7.2 Memcached for RHEL and CentOS

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

Install and configure components

 

1. Install the packages:

For CentOS 7 and RHEL 7

# yum install memcached python-memcached

 

For CentOS 8 and RHEL 8

# yum install memcached python3-memcached

 

2. Edit the /etc/sysconfig/memcached file and complete the following actions:

  • Configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
OPTIONS="-l 127.0.0.1,::1,controller"

 

Note: Change the existing line OPTIONS="-l 127.0.0.1,::1".

 

Finalize installation

 

  • Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service

 

5.7.3 Memcached for Ubuntu

 

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

 

Install and configure components

 

1. Install the packages:

For Ubuntu versions prior to 18.04 use:

# apt install memcached python-memcache

 

For Ubuntu 18.04 and newer versions use:

# apt install memcached python3-memcache

 

2. Edit the /etc/memcached.conf file and configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:

-l 10.0.0.11
Note: Change the existing line that had -l 127.0.0.1.

 

Finalize installation

  • Restart the Memcached service:
# service memcached restart
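Not part of the guide, but memcached-tool (shipped with most memcached packages) gives a quick way to confirm the service answers on the management address:

# memcached-tool 10.0.0.11:11211 stats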

 

 

5.8 Etcd

 

OpenStack services may use Etcd, a distributed, reliable key-value store, for distributed key locking, storing configuration, keeping track of service liveness, and other scenarios.

 

5.8.1 Etcd for SUSE

 

Right now, there is no distro package available for etcd3. This guide uses the tarball installation as a workaround until proper distro packages are available.

The etcd service runs on the controller node.

 

Install and configure components

 

1. Install etcd:

  • Create etcd user:
# groupadd --system etcd
# useradd --home-dir "/var/lib/etcd" \
--system \
--shell /bin/false \
-g etcd \
etcd

 

  • Create the necessary directories:
# mkdir -p /etc/etcd
# chown etcd:etcd /etc/etcd
# mkdir -p /var/lib/etcd
# chown etcd:etcd /var/lib/etcd

 

  •  Determine your system architecture:
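The command for this step is omitted in this copy; on most Linux systems the architecture can be checked with:

# uname -m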

 

  • Download and install the etcd tarball for x86_64/amd64:
# ETCD_VER=v3.2.7
# rm -rf /tmp/etcd && mkdir -p /tmp/etcd
# curl -L \
  https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz \
  -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
# tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz \
-C /tmp/etcd --strip-components=1
# cp /tmp/etcd/etcd /usr/bin/etcd
# cp /tmp/etcd/etcdctl /usr/bin/etcdctl

 

Or download and install the etcd tarball for arm64:

# ETCD_VER=v3.2.7
# rm -rf /tmp/etcd && mkdir -p /tmp/etcd
# curl -L \
  https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-arm64.tar.gz \
  -o /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz
# tar xzvf /tmp/etcd-${ETCD_VER}-linux-arm64.tar.gz \
-C /tmp/etcd --strip-components=1
# cp /tmp/etcd/etcd /usr/bin/etcd
# cp /tmp/etcd/etcdctl /usr/bin/etcdctl

 

2. Create and edit the /etc/etcd/etcd.conf.yml file and set the initial-cluster, initial-advertise-peer-urls, advertise-client-urls, listen-client-urls to the management IP address of the controller node to enable access by other nodes via the management network:

name: controller
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster-01'
initial-cluster: controller=http://10.0.0.11:2380
initial-advertise-peer-urls: http://10.0.0.11:2380
advertise-client-urls: http://10.0.0.11:2379
listen-peer-urls: http://0.0.0.0:2380
listen-client-urls: http://10.0.0.11:2379

 

3. Create and edit the /usr/lib/systemd/system/etcd.service file:

[Unit]
After=network.target
Description=etcd - highly-available key value store
[Service]
# Uncomment this on ARM64.
# Environment="ETCD_UNSUPPORTED_ARCH=arm64"
LimitNOFILE=65536
Restart=on-failure
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yml
User=etcd
[Install]
WantedBy=multi-user.target

 

Reload systemd service files with:

# systemctl daemon-reload

 

Finalize installation

 

1. Enable and start the etcd service:

# systemctl enable etcd
# systemctl start etcd
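Not part of the guide, but as a quick sanity check you can typically write and read a key through the client URL (etcdctl v3 syntax; adjust if your etcdctl version differs):

# ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.11:2379 put mykey "hello"
# ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.11:2379 get mykey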

 


 

# Full cloud images (qcow2)
https://docs.openstack.org/image-guide/obtain-images.html

 

# For Ubuntu
https://cloud-images.ubuntu.com/
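For example (URL and image name are illustrative), a cloud image can be downloaded and registered in Glance roughly like this:

$ wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
$ openstack image create "ubuntu-22.04" \
    --file jammy-server-cloudimg-amd64.img \
    --disk-format qcow2 --container-format bare --public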

 

 


 

# Install chrony

# apt install chrony

 

# Check the chrony process and the system time

# service --status-all
 [ + ]  chrony

# service chrony status
● chrony.service - chrony, an NTP client/server
     Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-01-07 11:51:26 KST; 7min ago
       Docs: man:chronyd(8)
             man:chronyc(1)
             man:chrony.conf(5)
    Process: 13125 ExecStart=/usr/lib/systemd/scripts/chronyd-starter.sh $DAEMON_OPTS (code=exited, status=0/SUCCESS)
   Main PID: 13135 (chronyd)
      Tasks: 2 (limit: 4537)
     Memory: 1.6M
     CGroup: /system.slice/chrony.service
             ├─13135 /usr/sbin/chronyd -F 1
             └─13136 /usr/sbin/chronyd -F 1

Jan 07 11:51:26 ubuntu.localdomain systemd[1]: Starting chrony, an NTP client/server...
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: chronyd version 4.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: Initial frequency 34.807 ppm
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: Using right/UTC timezone to obtain leap second data
Jan 07 11:51:26 ubuntu.localdomain chronyd[13135]: Loaded seccomp filter (level 1)
Jan 07 11:51:26 ubuntu.localdomain systemd[1]: Started chrony, an NTP client/server.
Jan 07 11:51:34 ubuntu.localdomain chronyd[13135]: Selected source 193.123.243.2 (0.ubuntu.pool.ntp.org)
Jan 07 11:51:34 ubuntu.localdomain chronyd[13135]: System clock TAI offset set to 37 seconds

# timedatectl
               Local time: Sun 2024-01-07 11:58:54 KST
           Universal time: Sun 2024-01-07 02:58:54 UTC
                 RTC time: Sun 2024-01-07 02:58:53
                Time zone: Asia/Seoul (KST, +0900)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no


# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-5.ntp4.ps5.cano>     2   6   377    26    +11ms[  +11ms] +/-  126ms
^- prod-ntp-3.ntp4.ps5.cano>     2   6   377    26    +12ms[  +12ms] +/-  124ms
^- alphyn.canonical.com          2   6   377    25    -11ms[  -11ms] +/-  132ms
^- prod-ntp-4.ntp4.ps5.cano>     2   6   377    27  +7438us[+7438us] +/-  120ms
^* 193.123.243.2                 2   6   377    31   +616us[ +753us] +/- 5886us
^- 175.193.3.234                 3   6   377    31  +1244us[+1244us] +/-   30ms
^- mail.innotab.com              3   6   377    29   +917us[ +917us] +/-   32ms
^- 106.247.248.106               2   6   377    27   +925us[ +925us] +/-   33ms
# Service control (stop / start / reload)
# service apache2 stop
# service apache2 start
# service apache2 reload

 

# Config
# /var/www/html/index.html


/etc/apache2/
|-- apache2.conf
|       `--  ports.conf
|-- mods-enabled
|       |-- *.load
|       `-- *.conf
|-- conf-enabled
|       `-- *.conf
|-- sites-enabled
|       `-- *.conf
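Not in the original note, but on Debian/Ubuntu the *-enabled directories are normally managed with the a2en*/a2dis* helper scripts rather than edited by hand, for example (example.conf is a hypothetical site file under /etc/apache2/sites-available/):

# a2enmod rewrite
# a2ensite example.conf
# systemctl reload apache2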

 


 

# Install
# apt install openstack-dashboard

 

# Location
# pwd
/etc/openstack-dashboard

# ls -al
drwxr-xr-x   2 root root  4096 Jan  7 09:52 .
drwxr-xr-x 141 root root 12288 Jan  7 09:42 ..
-rw-r--r--   1 root root 12789 Jan  7 09:52 local_settings.py
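Not in the original note, but the settings most commonly edited in local_settings.py are the controller address, the allowed hosts, and the time zone; an illustrative excerpt (values are examples):

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
TIME_ZONE = "Asia/Seoul"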

 

 


 

# Instance creation error

 

- Error when creating an instance (Status: Error)

< After creating an instance, its Status drops to Error >

 

- Message: MessagingTimeout

- Code: 500

- Details:

Traceback (most recent call last): 
File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 441, in get return self._queues[msg_id].get(block=True, timeout=timeout) File "/usr/local/lib/python3.10/dist-packages/eventlet/queue.py", 
line 322, in get return waiter.wait() File "/usr/local/lib/python3.10/dist-packages/eventlet/queue.py", 
line 141, in wait return get_hub().switch() File "/usr/local/lib/python3.10/dist-packages/eventlet/hubs/hub.py", 
line 313, in switch return self.greenlet.switch() _queue.Empty During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/stack/nova/nova/conductor/manager.py", 
line 1654, in schedule_and_build_instances host_lists = self._schedule_instances(context, request_specs[0], File "/opt/stack/nova/nova/conductor/manager.py", 
line 942, in _schedule_instances host_lists = self.query_client.select_destinations( File "/opt/stack/nova/nova/scheduler/client/query.py", 
line 41, in select_destinations return self.scheduler_rpcapi.select_destinations(context, spec_obj, File "/opt/stack/nova/nova/scheduler/rpcapi.py", 
line 160, in select_destinations return cctxt.call(ctxt, 'select_destinations', **msg_args) File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/rpc/client.py", 
line 190, in call result = self.transport._send( File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/transport.py", 
line 123, in _send return self._driver.send(target, ctxt, message, File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 689, in send return self._send(target, ctxt, message, wait_for_reply, timeout, File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 678, in _send result = self._waiter.wait(msg_id, timeout, File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 567, in wait message = self.waiters.get(msg_id, timeout=timeout) File "/usr/local/lib/python3.10/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 443, in get raise oslo_messaging.MessagingTimeout( oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID b6a61e8d51914d4db1f834e190f146ca

 

== What could the problem be?
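(Not part of the original post.) A MessagingTimeout on the select_destinations RPC call usually means that nova-scheduler (or nova-conductor) is not answering, or that the message queue is unreachable; a first check is typically whether all compute services are up and enabled:

$ openstack compute service list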

 

#  nova-status upgrade check
$ nova-status upgrade check

Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
+-------------------------------------------+
| Upgrade Check Results                     |
+-------------------------------------------+
| Check: Cells v2                           |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Placement API                      |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Cinder API                         |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Older than N-1 computes            |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: hw_machine_type unset              |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Service User Token Configuration   |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+

 

 

 

 


 

 

# keystone 

 

# mysql -u root -popenstack -h 192.168.56.30


mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 8.0.35-0ubuntu0.22.04.1 (Ubuntu)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'keystone'@'localhost' IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.13 sec)

mysql> CREATE USER 'keystone'@'%' IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';

ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'keystone'

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
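(Not shown in the transcript above.) The keystone database itself also has to exist before Keystone can use it; in the standard setup it is created first, for example:

mysql> CREATE DATABASE keystone;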

 

 

 

The error message "ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'keystone'" indicates that the 'root' user does not have the privileges required to grant rights on the 'keystone' database to the specified user ('keystone'@'%').

To resolve this, make sure the 'root' user has the privileges needed to grant permissions. Proceed as follows:

 

1. Log in to MySQL as the 'root' user:

# mysql -u root -p

 

2. Once logged in, grant the 'root' user the necessary privileges on the 'keystone' database:

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'root'@'%';
mysql> FLUSH PRIVILEGES;

 

3. After granting the privileges, try running the original command again:

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';

 

If the problem persists, check whether the 'root' user itself has the GRANT privilege. If it does not, you may need to connect to the MySQL server as a user that does have the privileges required to grant permissions. In general, it is better to create a dedicated MySQL user for managing database access rather than relying on the 'root' user for routine work.

If you are not using the 'root' user for database administration, remember to replace 'root' with the actual username.
