How to find and replace multiple strings in files under subdirectories on Windows

Example: when you want to replace, in the files under the _Book_Ansible-main directory, the string

       192.168.1.    with
       192.168.56.
      
      app
      ch2
          2.1.1
          2.1.1
      ch3 
         3.1.1
         3.1.1
 .. .
 ... 


Example

Install the AcroEdit tool.

 

http://www.acrosoft.pe.kr/board/download

 


 

  • After launching AcroEdit, run "Replace in Files"

 

  • Search string: 192.168.56.1  / Replace string: 192.168.56.  / specify the target directory / enable "search subdirectories", then run the replace (a scripted alternative is sketched below)
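If a scriptable alternative to the GUI tool is preferred, a minimal Python sketch can do the same recursive replacement. The root path, encoding, and strings below are illustrative assumptions; adjust them to your environment.

# replace_in_tree.py - minimal sketch; the root path below is a hypothetical example
from pathlib import Path

ROOT = Path(r"C:\work\_Book_Ansible-main")   # directory to search (adjust to your path)
OLD, NEW = "192.168.1.", "192.168.56."

for path in ROOT.rglob("*"):                 # walks all subdirectories
    if not path.is_file():
        continue
    try:
        text = path.read_text(encoding="utf-8")   # assumes UTF-8 text files
    except UnicodeDecodeError:
        continue                             # skip binary files
    if OLD in text:
        path.write_text(text.replace(OLD, NEW), encoding="utf-8")
        print("updated:", path)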

Installing Nagios Core on Rocky Linux 9
OS: Rocky Linux 9
Installed with a Vagrantfile: generic/rocky9
IP: 192.168.56.135
  • Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "rocky9nagios" do |cfg|
    cfg.vm.box = "generic/rocky9"
    cfg.vm.provider "virtualbox" do |vb|
      vb.name = "rocky9nagios"
      vb.cpus = 4
      vb.memory = 4096
      vb.customize ["modifyvm", :id, "--groups", "/default_group"]
    end
    cfg.vm.host_name = "rocky9nagios"
    cfg.vm.network "private_network", ip: "192.168.56.135"
    cfg.vm.network "forwarded_port", guest: 22, host: 60135, auto_correct: true, id: "ssh"
    cfg.vm.synced_folder "../data", "/vagrant", disabled: true 
#   cfg.vm.provision "shell", path: "config.sh"\
#   cfg.vm.provision "shell", path: "install_pkg.sh", args: [ Ver, "Main" ]
#   cfg.vm.provision "shell", path: "master_node.sh"\
  end
end
Installing the Nagios packages
@ Install the compiler toolchain
# yum group install "development tools"


@ Install the packages required by Nagios
# dnf install httpd 
# dnf install php 
# dnf install php-cli 
# dnf install net-snmp 
# dnf install net-snmp-utils 
# dnf install epel-release postfix

 

Download Nagios
# wget https://github.com/NagiosEnterprises/nagioscore/releases/download/nagios-4.4.6/nagios-4.4.6.tar.gz

 

Compile and install Nagios
@ Extract the source archive
# tar xvzpf nagios-4.4.6.tar.gz

@ Install into /opt/nagios
# cd nagios-4.4.6
# ./configure --prefix=/opt/nagios

@ Run the build
# make all

@ Create the group and user
# groupadd nagios
# useradd -g nagios nagios

@ Install the compiled nagios binaries
# make install

@ Adjust permissions
# make install-commandmode
/usr/bin/install -c -m 775 -o nagios -g nagios -d /opt/nagios/var/rw
chmod g+s /opt/nagios/var/rw

@ Install the sample config files
# make install-config

@ Install the Apache config file
# make install-webconf
/usr/bin/install -c -m 644 sample-config/httpd.conf /etc/httpd/conf.d/nagios.conf
if [ 0 -eq 1 ]; then \
        ln -s /etc/httpd/conf.d/nagios.conf /etc/apache2/sites-enabled/nagios.conf; \
fi

*** Nagios/Apache conf file installed ***

@ Create the unit file so systemd can manage the nagios service, then verify
# make install-daemoninit

@ Check the status
# systemctl status nagios
● nagios.service - Nagios Core 4.4.6
   Loaded: loaded (/usr/lib/systemd/system/nagios.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://www.nagios.org/documentation

@ Create the web UI account
# htpasswd -c /opt/nagios/etc/htpasswd.users nagios
New password:
Re-type new password:
Adding password for user nagios

@ Restart the Apache web server and enable it to start at boot
# systemctl restart httpd
# systemctl enable httpd


@ Install the Nagios plugins
# wget https://nagios-plugins.org/download/nagios-plugins-2.3.3.tar.gz

@ Extract the archive, move into the extracted directory, then build and install
# tar xvzpf nagios-plugins-2.3.3.tar.gz
# cd nagios-plugins-2.3.3
# ./configure --prefix=/opt/nagios
# make
# make install

@ Start the nagios service
# systemctl start nagios

 

Verify the installation (the web UI is typically served at http://192.168.56.135/nagios/ and protected by the htpasswd account created above; a quick scripted check follows)
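As a quick scripted check, the page can be requested with basic auth. This is only a sketch: the /nagios/ alias installed by make install-webconf and the nagios web account created earlier are assumptions; replace PASSWORD with the password you set.

# check_nagios.py - minimal sketch; URL and credentials are assumptions based on the steps above
import urllib.request

url = "http://192.168.56.135/nagios/"
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, "nagios", "PASSWORD")   # password set with htpasswd above
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))

with opener.open(url) as resp:
    print(resp.status)   # 200 means Apache, the nagios conf, and the htpasswd account all work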

 


https://prometheus.io/docs/instrumenting/exporters/

 


 

 

https://github.com/mindprince/nvidia_gpu_prometheus_exporter

 


 

 

# Install the Go language
# apt-get install golang

# go version
go version go1.18.1 linux/amd64

 

 

# Install nvidia-docker
  • Add the GPG key and repository
# distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
  • Install nvidia-docker
# sudo apt-get update
# sudo apt-get install -y nvidia-docker2

# docker restart   (i.e. restart the Docker daemon, e.g. systemctl restart docker)

# nvidia-docker

Usage:  docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Common Commands:
  run         Create and run a new container from an image
  exec        Execute a command in a running container
  ps          List containers
  build       Build an image from a Dockerfile
  pull        Download an image from a registry
  push        Upload an image to a registry
  images      List images
  login       Log in to a registry
  logout      Log out from a registry
  search      Search Docker Hub for images
  version     Show the Docker version information
  info        Display system-wide information

 

 

 


 

What is the Pushgateway?
Batch jobs on a server run on a regular schedule, such as hourly or daily: the job starts, does some work, and exits. Because such jobs do not run continuously, Prometheus cannot reliably scrape their metrics, and that is why the Pushgateway is needed.

The Pushgateway is a metrics cache for service-level batch jobs.

 

https://prometheus.io/download/

 


 

# Download and install the pushgateway
# pwd
/etc/prometheus

# wget https://github.com/prometheus/pushgateway/releases/download/v1.7.0/pushgateway-1.7.0.linux-amd64.tar.gz

# ls -al
-rw-r--r--   1 root       root       10273763 Jan 19 22:30 pushgateway-1.7.0.linux-amd64.tar.gz

# tar -zxvf pushgateway-1.7.0.linux-amd64.tar.gz
pushgateway-1.7.0.linux-amd64/
pushgateway-1.7.0.linux-amd64/LICENSE
pushgateway-1.7.0.linux-amd64/pushgateway
pushgateway-1.7.0.linux-amd64/NOTICE

# ls
alertmanager  console_libraries  consoles  ep-examples-master  prometheus.yml  prometheus.yml.20240225  pushgateway-1.7.0.linux-amd64  pushgateway-1.7.0.linux-amd64.tar.gz

# mv pushgateway-1.7.0.linux-amd64 pushgateway

# rm pushgateway-1.7.0.linux-amd64.tar.gz

 

# Register it as a Linux service
  • Copy the pushgateway binary to /usr/local/bin/
# cd /etc/prometheus/pushgateway

# ls -al
total 17736
drwxr-xr-x 2       1001       1002     4096 Jan 19 22:30 .
drwxr-xr-x 7 prometheus prometheus     4096 Mar  9 14:19 ..
-rw-r--r-- 1       1001       1002    11357 Jan 19 22:29 LICENSE
-rw-r--r-- 1       1001       1002      487 Jan 19 22:29 NOTICE
-rwxr-xr-x 1       1001       1002 18135918 Jan 19 22:29 pushgateway

# cp pushgateway /usr/local/bin/

# cd /usr/local/bin/

# chown prometheus:prometheus pushgateway

# ls -al
total 317708
drwxr-xr-x  2 root         root              4096 Mar  9 14:23 .
drwxr-xr-x 10 root         root              4096 Aug 10  2023 ..
-rwxr-xr-x  1 alertmanager alertmanager  37345962 Mar  3 09:49 alertmanager
-rwxr-xr-x  1 root         root          17031320 Feb 25 14:58 docker-compose
-rwxr-xr-x  1         1001         1002  19925095 Nov 13 08:54 node_exporter
-rwxr-xr-x  1 prometheus   prometheus   119902884 Sep 30 06:13 prometheus
-rwxr-xr-x  1 prometheus   prometheus   112964537 Sep 30 06:15 promtool
-rwxr-xr-x  1 prometheus   prometheus    18135918 Mar  9 14:23 pushgateway
  • Create the pushgateway.service file
# cd /etc/systemd/system

# vi pushgateway.service   # (add the content below)

[Unit]
Description=Push_Gateway
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/pushgateway

[Install]
WantedBy=multi-user.target
  • Verify the service
# systemctl daemon-reload

# systemctl enable pushgateway
Created symlink /etc/systemd/system/multi-user.target.wants/pushgateway.service → /etc/systemd/system/pushgateway.service.

# systemctl start pushgateway

# systemctl status pushgateway
● pushgateway.service - Push_Gateway
     Loaded: loaded (/etc/systemd/system/pushgateway.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-03-09 14:33:08 KST; 5s ago
   Main PID: 2134 (pushgateway)
      Tasks: 6 (limit: 2219)
     Memory: 4.5M
        CPU: 63ms
     CGroup: /system.slice/pushgateway.service
             └─2134 /usr/local/bin/pushgateway

Mar 09 14:33:08 servidor systemd[1]: Started Push_Gateway.
Mar 09 14:33:08 servidor pushgateway[2134]: ts=2024-03-09T05:33:08.625Z caller=main.go:86 level=info msg="starting pushgateway" version="(version=1.7.0, branch=HEAD, revision=109280c17d29059623c6f5dbf1d6babab34166cf)"
Mar 09 14:33:08 servidor pushgateway[2134]: ts=2024-03-09T05:33:08.626Z caller=main.go:87 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@c05cb3457dcb, date=20240119-13:28:37, tags=unknown)"
Mar 09 14:33:08 servidor pushgateway[2134]: ts=2024-03-09T05:33:08.642Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9091
Mar 09 14:33:08 servidor pushgateway[2134]: ts=2024-03-09T05:33:08.642Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9091

# netstat -ntpa |grep LISTEN
tcp6       0      0 :::9091                 :::*                    LISTEN      2134/pushgateway
  • Check web access at 192.168.56.128:9091

 

Register it in prometheus.yml on the Prometheus server

 

# cd /etc/prometheus

# pwd
/etc/prometheus
  • The file below is my sample prometheus.yml; only the following block needs to be added under scrape_configs:
- job_name: pushgateway
  honor_labels: true
  static_configs:
    - targets: ['192.168.56.128:9091']
# vi prometheus.yml

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - 192.168.56.128:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "/etc/prometheus/alertmanager/rules/test_rule.yml"
  - "/etc/prometheus/alertmanager/rules/alert_rules.yml"
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.


scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.

  - job_name: "node_exporter"
    static_configs:
      - targets: ["192.168.56.128:9100"]
      - targets: ["192.168.56.130:9100"]

  - job_name: 'PostgreSQL_exporter'
    static_configs:
      - targets: ['192.168.56.130:9187', '192.168.56.128:9187']
        #      - targets: ['192.168.56.128:9187']

  - job_name: 'jmx_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.56.130:8081']

  - job_name: 'kubernetes_exporter'
    static_configs:
      - targets: ['192.168.56.10:9100']
      - targets: ['192.168.56.101:9100']
      - targets: ['192.168.56.102:9100']
      - targets: ['192.168.56.103:9100']

  - job_name: 'example'
    static_configs:
      - targets: ['192.168.56.128:8000']

  - job_name: pushgateway
    honor_labels: true
    static_configs:
      - targets: ['192.168.56.128:9091']
  • Verify pushgateway operation on the Prometheus server

 

 

# Verify that the pushgateway stores metrics
  • Write and run a Python sample file to test the pushgateway
# cat 4-12-pushgateway.py  (create the Python script with vi)

from prometheus_client import CollectorRegistry, Gauge, pushadd_to_gateway

registry = CollectorRegistry()
duration = Gauge('my_job_duration_seconds',
        'Duration of my batch job in seconds', registry=registry)
try:
    with duration.time():
        # Your code here.
        pass

    # This only runs if there wasn't an exception.
    g = Gauge('my_job_last_success_seconds',
            'Last time my batch job successfully finished', registry=registry)
    g.set_to_current_time()
finally:
    pushadd_to_gateway('192.168.56.128:9091', job='batch', registry=registry)
    
    
# python3 4-12-pushgateway.py  (run it)
  • Verify operation (see the quick check below)
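A quick way to confirm the push landed (a minimal sketch; it assumes the pushgateway above is reachable at 192.168.56.128:9091) is to read the pushed series back from the pushgateway's own /metrics endpoint:

# check_pushgateway.py - minimal sketch; assumes the pushgateway at 192.168.56.128:9091
import urllib.request

with urllib.request.urlopen("http://192.168.56.128:9091/metrics") as resp:
    for line in resp.read().decode().splitlines():
        # my_job_duration_seconds / my_job_last_success_seconds were pushed by 4-12-pushgateway.py
        if line.startswith("my_job_"):
            print(line)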

 

Python file (3-1-example.py)
# cat 3-1-example.py

import http.server
from prometheus_client import start_http_server

class MyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello World")


# the HTTP server serves on port 8001
# Prometheus metrics are exposed on port 8000
if __name__ == "__main__":
    start_http_server(8000)
    server = http.server.HTTPServer(('192.168.56.128', 8001), MyHandler)
    server.serve_forever()

 

# pip3 install and run (error)
  • pip3 and prometheus_client are not installed
# python3 3-1-example.py
Traceback (most recent call last):
  File "/etc/prometheus/ep-examples-master/3/3-1-example.py", line 2, in <module>
    from prometheus_client import start_http_server
ModuleNotFoundError: No module named 'prometheus_client'

# pip3 --version
-bash: pip3: command not found
  • Install pip3 and prometheus_client
# apt update

# apt install python3-pip
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  build-essential cpp cpp-11 dpkg-dev fakeroot g++ g++-11 gcc gcc-11 gcc-11-base javascript-common libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan6 libatomic1 libc-dev-bin libc-devtools libc6 libc6-dev libcc1-0 libcrypt-dev
  libdeflate0 libdpkg-perl libexpat1-dev libfakeroot libfile-fcntllock-perl libgcc-11-dev libgd3 libgomp1 libisl23 libitm1 libjbig0 libjpeg-turbo8 libjpeg8 libjs-jquery libjs-sphinxdoc libjs-underscore liblsan0 libmpc3 libnsl-dev libpython3-dev libpython3.10
  libpython3.10-dev libpython3.10-minimal libpython3.10-stdlib libquadmath0 libstdc++-11-dev libtiff5 libtirpc-dev libtsan0 libubsan1 libwebp7 libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxpm4 linux-libc-dev lto-disabled-list make manpages-dev python3-dev
  python3-wheel python3.10 python3.10-dev python3.10-minimal rpcsvc-proto zlib1g-dev
..
..
..
..
  
  
# pip3 install prometheus_client
Collecting prometheus_client
  Downloading prometheus_client-0.20.0-py3-none-any.whl (54 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.5/54.5 KB 1.2 MB/s eta 0:00:00
Installing collected packages: prometheus_client
Successfully installed prometheus_client-0.20.0

# pip3 -V
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

 

# Run the Python file (3-1-example.py)
# python3 3-1-example.py
192.168.56.1 - - [09/Mar/2024 09:05:08] "GET / HTTP/1.1" 200 -
192.168.56.1 - - [09/Mar/2024 09:05:08] "GET /favicon.ico HTTP/1.1" 200 -

# python3 3-1-example.py &

# netstat -ntpa |grep LISTEN
tcp        0      0 192.168.56.128:8001     0.0.0.0:*               LISTEN      3295/python3
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3299/python3

# ps -ef |grep python3
root        3299    1387  0 09:21 pts/0    00:00:00 python3 3-1-example.py


# Stop or kill 3-1-example.py

# kill -9 3299

 

# Verify operation

 

# python3 3-1-example.py
192.168.56.1 - - [09/Mar/2024 09:05:08] "GET / HTTP/1.1" 200 -
192.168.56.1 - - [09/Mar/2024 09:05:08] "GET /favicon.ico HTTP/1.1" 200 -

 

 

# Adding the client to Prometheus
# pwd
/etc/prometheus

# ls
alertmanager  console_libraries  consoles  ep-examples-master  prometheus.yml  prometheus.yml.20240225

 

  • Register it in prometheus.yml and restart Prometheus (note: register port 8000, the metrics port)
# vi prometheus.yml

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.

  - job_name: 'example'
    static_configs:
      - targets: ['192.168.56.128:8000']
      
# systemctl restart prometheus

# systemctl status prometheus
● prometheus.service - Prometheus
     Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-03-09 09:34:02 KST; 7s ago
   Main PID: 3320 (prometheus)
      Tasks: 8 (limit: 2219)
     Memory: 63.8M
        CPU: 1.698s
     CGroup: /system.slice/prometheus.service
             └─3320 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path />

 

  • Check Prometheus -> Status -> Targets

  • Check the values at 192.168.56.128:8000/metrics

  • Enter python_info in the query box, click Execute, and check the result
python_info{implementation="CPython", instance="192.168.56.128:8000", job="example", major="3", minor="10", patchlevel="12", version="3.10.12"}
1

 


 

Distance measurements for each leg of a 2-night, 3-day route, made in PPT.

 

제주도자전거길.pptx
1.31MB
제주환상자전거길안내도-1029.pdf
3.65MB
제주환상자전거길안내도-10292.pdf
3.65MB

# Modifying the Alertmanager config

 

# pwd
/etc/prometheus/alertmanager


# cat alertmanager.yml
# global: the slack_api_url below is the webhook URL generated on the Slack site (see the explanation below)
global:
  slack_api_url: https://hooks.slack.com/services/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

receivers:

- name: slack-notifier
  slack_configs:
  # channel: the Slack channel created on the Slack site
  - channel: #prometeus-slack
    send_resolved: true
    title: '[{{.Status | toUpper}}] {{ .CommonLabels.alertname }}'
    text: >-
      *Description:* {{ .CommonAnnotations.description }}
      *summary* {{ .CommonAnnotations.instance  }}

route:
  group_wait: 10s
  group_interval: 1m
  repeat_interval: 1m
  receiver: slack-notifier

 

# Slack sign-up and configuration

 

https://slack.com/

 


 

  • After signing up, select Add Channel from the left-hand menu

  • Add a channel (create the name) ---> prometeus-slack already exists here, so the error shown on screen can be ignored

# This is the relevant part of the alertmanager.yml file above

  slack_configs:
  - channel: #prometeus-slack

  • Add an app (Apps -> Manage -> Browse apps)

  • Search for and add the app (Incoming WebHooks)

  • Add Incoming WebHooks to Slack

  • Edit the Incoming Webhook configuration (the prometeus-slack channel created above)

  • Post to Channel (select the #prometeus-slack channel configured above, copy the Webhook URL) and save the settings

  • Re-check the alertmanager.yml file
# cd /etc/prometheus/alertmanager

/etc/prometheus/alertmanager

# vi alertmanager.yml 
global:
# paste the webhook URL copied from the Slack site
  slack_api_url: https://hooks.slack.com/services/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  <--
receivers:
- name: slack-notifier
  slack_configs:
# enter the channel name created on the Slack site (note: in YAML an unquoted # after the colon starts a comment, so quoting '#prometeus-slack' is safer)
  - channel: #prometeus-slack   <--
    send_resolved: true
    title: '[{{.Status | toUpper}}] {{ .CommonLabels.alertname }}'
    text: >-
      *Description:* {{ .CommonAnnotations.description }}
      *summary* {{ .CommonAnnotations.instance  }}
route:
  group_wait: 10s
  group_interval: 1m
  repeat_interval: 1m
  receiver: slack-notifier

 

# Check that the Slack messages are received (receivers:) in the Slack workspace
  • In alertmanager.yml, the {{ ... }} values after Description and summary under text are not being rendered (still investigating; for reference, .CommonAnnotations only carries annotations that are identical across all alerts in a group, and no instance annotation is defined in the rules, so {{ .CommonAnnotations.instance }} renders empty)
    text: >-
      *Description:* {{ .CommonAnnotations.description }}
      *summary* {{ .CommonAnnotations.instance  }}
  • Check the error message received in Slack

  • In practice, the information below, coming from the server, should appear

  • Confirm message reception in the Slack app on a smartphone

 

  • Confirm message reception in the Slack Desktop app on Windows

 

# Still investigating why the Description and Summary details do not appear in the messages received in Slack (2024. 3. 3)
# Situation (based on Ubuntu 22.04)

 

How to register a binary that was downloaded and run as a plain (daemon) file as a Linux service

Example: running the alertmanager binary in /etc/prometheus/alertmanager/

# cd /etc/prometheus/alertmanager

# pwd
/etc/prometheus/alertmanager

# ls
alertmanager  alertmanager.yml  amtool  data  LICENSE  NOTICE  rules

# ./alertmanager &  

# netstat -ntpa |grep LISTEN
tcp6       0      0 :::9093                 :::*                    LISTEN      13856/./alertmanage
tcp6       0      0 :::9094                 :::*                    LISTEN      13856/./alertmanage

# ps -ef |grep alertmanager
root       13856   11451  0 11:03 pts/0    00:00:00 ./alertmanager
root       13876   11451  0 11:04 pts/0    00:00:00 grep --color=auto alertmanager

# kill -9 13856

 

Running the binary works (9093/9094 LISTEN), but after a server reboot it has to be restarted manually.

 

# How to register the process as a Linux service (systemctl)
  • Prepare a service account for alertmanager and copy the binary
# cd /etc/prometheus/alertmanager/

# pwd
/etc/prometheus/alertmanager/

# ls -al
total 65932
drwxr-xr-x 4 prometheus prometheus     4096 Mar  3 10:05 .
drwxr-xr-x 5 prometheus prometheus     4096 Mar  3 07:31 ..
-rwxr-xr-x 1 prometheus prometheus 37345962 Feb 28 20:52 alertmanager     <-- this binary will be turned into a Linux service
-rw-r--r-- 1 prometheus prometheus      356 Feb 28 20:55 alertmanager.yml
-rwxr-xr-x 1 prometheus prometheus 30130103 Feb 28 20:52 amtool
drwxr-xr-x 2 root       root           4096 Mar  3 09:41 data
-rw-r--r-- 1 prometheus prometheus    11357 Feb 28 20:55 LICENSE
-rw-r--r-- 1 prometheus prometheus      457 Feb 28 20:55 NOTICE
drwxr-xr-x 2 prometheus prometheus     4096 Mar  3 09:41 rules

# Add a user
# useradd -M -r -s /bin/false alertmanager

# Confirm the user was added
# cat /etc/passwd 
alertmanager:x:995:994::/home/alertmanager:/bin/false

# Copy the binary to /usr/local/bin
# cp alertmanager /usr/local/bin/

# Set user and group ownership
# cd /usr/local/bin
# chown alertmanager:alertmanager /usr/local/bin/alertmanager
  • Register the Linux service
# cd /etc/systemd/system

# vi alertmanager.service
# add the following content
[Unit]
Description=alertmanager
Wants=network-online.target
After=network-online.target

[Service]
User=alertmanager
Group=alertmanager
Type=simple
ExecStart=/usr/local/bin/alertmanager

[Install]
WantedBy=multi-user.target


# Change the file permissions
# chmod 744 alertmanager.service
  • Check that the service is running
# systemctl daemon-reload

# systemctl stop alertmanager.service

# systemctl enable alertmanager.service

# systemctl start alertmanager.service

# systemctl status alertmanager.service
× alertmanager.service - alertmanager
     Loaded: loaded (/etc/systemd/system/alertmanager.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2024-03-03 10:09:13 KST; 3min 29s ago
   Main PID: 12922 (code=exited, status=1/FAILURE)
        CPU: 66ms

Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.098Z caller=main.go:182 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@22cd11f671e9, date=20240228-11:51:20, tags=netgo)"
Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.104Z caller=cluster.go:186 level=info component=cluster msg="setting advertise address explicitly" addr=10.0.2.15 port=9094
Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.110Z caller=cluster.go:683 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.130Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=alertmanager.yml
Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.130Z caller=coordinator.go:118 level=error component=configuration msg="Loading configuration file failed" file=alertmanager.yml err="open alertmanager.yml: no such file or directory"
Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.130Z caller=cluster.go:692 level=info component=cluster msg="gossip not settled but continuing anyway" polls=0 elapsed=20.021084ms
Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.130Z caller=silence.go:442 level=info component=silences msg="Creating shutdown snapshot failed" err="open data/silences.51ab1e5945c48bff: permission denied"
Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.131Z caller=nflog.go:362 level=error component=nflog msg="Creating shutdown snapshot failed" err="open data/nflog.5acef5dc6432c333: permission denied"
Mar 03 10:09:13 servidor systemd[1]: alertmanager.service: Main process exited, code=exited, status=1/FAILURE
Mar 03 10:09:13 servidor systemd[1]: alertmanager.service: Failed with result 'exit-code'.
  • The service failed (fixed: the config file could not be found)
# The failure log shows that the alertmanager.yml file could not be found

Mar 03 10:09:13 servidor alertmanager[12922]: ts=2024-03-03T01:09:13.130Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=alertmanager.yml


# Edit the service file again
# vi alertmanager.service
[Unit]
Description=alertmanager
Wants=network-online.target
After=network-online.target

[Service]
User=alertmanager
Group=alertmanager
Type=simple
ExecStart=/usr/local/bin/alertmanager \
    --config.file /etc/prometheus/alertmanager/alertmanager.yml          <--- add the yml file path

[Install]
WantedBy=multi-user.target
  • Re-check the service
# systemctl stop alertmanager.service
# systemctl start alertmanager.service
# systemctl enable alertmanager.service
# systemctl status alertmanager.service
● alertmanager.service - alertmanager
     Loaded: loaded (/etc/systemd/system/alertmanager.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-03-03 11:19:33 KST; 16s ago
   Main PID: 14007 (alertmanager)
      Tasks: 7 (limit: 2219)
     Memory: 13.1M
        CPU: 146ms
     CGroup: /system.slice/alertmanager.service
             └─14007 /usr/local/bin/alertmanager --config.file /etc/prometheus/alertmanager/alertmanager.yml

Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.083Z caller=main.go:181 level=info msg="Starting Alertmanager" version="(version=0.27.0, branch=HEAD, revision=0aa3c2aad14cff039931923ab16b26b7481783b5)"
Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.083Z caller=main.go:182 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@22cd11f671e9, date=20240228-11:51:20, tags=netgo)"
Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.091Z caller=cluster.go:186 level=info component=cluster msg="setting advertise address explicitly" addr=10.0.2.15 port=9094
Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.094Z caller=cluster.go:683 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.120Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/prometheus/alertmanager/alertmanager.yml
Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.121Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/prometheus/alertmanager/alertmanager.yml
Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.123Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9093
Mar 03 11:19:33 servidor alertmanager[14007]: ts=2024-03-03T02:19:33.123Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9093
Mar 03 11:19:35 servidor alertmanager[14007]: ts=2024-03-03T02:19:35.096Z caller=cluster.go:708 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.002291107s
Mar 03 11:19:43 servidor alertmanager[14007]: ts=2024-03-03T02:19:43.124Z caller=cluster.go:700 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.029680225s

 

# Situation (based on CentOS 8)

 

How to register a binary that was downloaded and run as a plain (daemon) file as a Linux service

Example: running the node_exporter binary in /root/node_exporter-1.5.0/

# cd /root/node_exporter-1.5.0

# pwd
/root/node_exporter-1.5.0

# ls
LICENSE  node_exporter  NOTICE

# ./node_exporter &  

# netstat -ntpa |grep LISTEN
tcp6       0      0 :::9100                 :::*                    LISTEN      5085/node_exporter

 

Running the binary works (node_exporter LISTENs on 9100), but after a server reboot it has to be restarted manually.

 

# How to register the process as a Linux service (systemctl)
  • Prepare a service account for node_exporter and copy the binary
# cd /root/node_exporter-1.5.0

# pwd
/root/node_exporter-1.5.0

# ls -al
total 19340
-rwxr-xr-x. 1 3434 3434 19779640 Nov 30  2022 node_exporter

# Add a user
# useradd -M -r -s /bin/false node_exporter

# Confirm the user was added
# cat /etc/passwd 
node_exporter:x:993:987::/home/node_exporter:/bin/false

# Copy the binary to /usr/local/bin
# cp node_exporter /usr/local/bin/

# Set user and group ownership
# cd /usr/local/bin
# chown node_exporter:node_exporter /usr/local/bin/node_exporter
  • Register the Linux service
# cd /etc/systemd/system

# vi node_exporter.service
# add the following content
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
#ExecStart=/root/prometheus/node_exporter/node_exporter
#ExecStart=/root/node_exporter-1.5.0/node_exporter
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

# Change the file permissions
# chmod 744 node_exporter.service
  • Check that the service is running
# systemctl daemon-reload

# systemctl stop node_exporter.service

# systemctl enable node_exporter.service

# systemctl start node_exporter.service

# systemctl status node_exporter.service
● node_exporter.service - Node Exporter
   Loaded: loaded (/etc/systemd/system/node_exporter.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2024-03-03 09:15:16 KST; 22min ago
 Main PID: 5085 (node_exporter)
    Tasks: 6 (limit: 24909)
   Memory: 11.9M
   CGroup: /system.slice/node_exporter.service
           └─5085 /usr/local/bin/node_exporter

Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=thermal_zone
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=time
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=timex
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=udp_queues
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=uname
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=vmstat
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=xfs
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.538Z caller=node_exporter.go:117 level=info collector=zfs
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.539Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Mar 03 09:15:16 centos8 node_exporter[5085]: ts=2024-03-03T00:15:16.539Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100

# netstat -ntpa |grep LISTEN
tcp6       0      0 :::9100                 :::*                    LISTEN      5275/node_exporter

 


 

# [CentOS] Install chrony
# dnf install chrony

# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2024-03-02 16:36:03 KST; 15h ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 4034 (chronyd)
    Tasks: 1 (limit: 24909)
   Memory: 1.6M
   CGroup: /system.slice/chronyd.service
           └─4034 /usr/sbin/chronyd

# systemctl enable chronyd

# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#pool 2.centos.pool.ntp.org iburst
server time.bora.net iburst              <----- added
server send.mx.cdnetworks.com iburst     <----- added

# systemctl restart chronyd

# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* time.bora.net                 2   6   377    63   +816us[+1809us] +/-  648ms
^? i0-h0-s333.p28-nrt.cdngp>     0   8     0     -     +0ns[   +0ns] +/-    0ns

 

# [Ubuntu] Install chrony
# apt install chrony

# systemctl status chronyd
● chrony.service - chrony, an NTP client/server
     Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-03-03 08:40:50 KST; 11s ago
       Docs: man:chronyd(8)
             man:chronyc(1)
             man:chrony.conf(5)
    Process: 10627 ExecStart=/usr/lib/systemd/scripts/chronyd-starter.sh $DAEMON_OPTS (code=exited, status=0/SUC>
   Main PID: 10637 (chronyd)
      Tasks: 2 (limit: 2219)
     Memory: 1.3M
        CPU: 97ms
     CGroup: /system.slice/chrony.service
             ├─10637 /usr/sbin/chronyd -F 1
             └─10638 /usr/sbin/chronyd -F 1

# systemctl enable chrony
Synchronizing state of chrony.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable chrony

# vi /etc/chrony/chrony.conf
# See http://www.pool.ntp.org/join.html for more information.
pool ntp.ubuntu.com        iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
server time.bora.net iburst                     <--- added
server send.mx.cdnetworks.com iburst            <--- added

# systemctl restart chronyd

# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-3.ntp4.ps5.cano>     2   6    17     6  -9258us[-9258us] +/-  143ms
^- alphyn.canonical.com          2   6    33     3    -11ms[  -11ms] +/-  133ms
^- prod-ntp-5.ntp1.ps5.cano>     2   6    17     6    -11ms[  -11ms] +/-  145ms
^- prod-ntp-4.ntp1.ps5.cano>     2   6    17     6  -7299us[-7299us] +/-  145ms
^- 121.174.142.81                3   6    17     7   -224us[ -224us] +/-   43ms
^* 106.247.248.106               2   6    17     7  +1773us[+5022us] +/-   28ms
^- ec2-13-209-84-50.ap-nort>     2   6    17     5  +1679us[+1679us] +/- 7298us
^+ time.bora.net                 3   6    17     5  -3772us[-3772us] +/-   56ms
^? i0-h0-s333.p28-nrt.cdngp>     0   7     0     -     +0ns[   +0ns] +/-    0ns
Checking Docker
# docker ps -a
CONTAINER ID   IMAGE                                           COMMAND                  CREATED          STATUS         PORTS                                       NAMES
77d5896ee529   quay.io/prometheus/alertmanager                 "/bin/alertmanager -…"   42 minutes ago   Up 7 minutes   0.0.0.0:9093->9093/tcp, :::9093->9093/tcp   alertmanager
d5e072461359   quay.io/prometheuscommunity/postgres-exporter   "/bin/postgres_expor…"   43 hours ago     Up 2 hours     0.0.0.0:9187->9187/tcp, :::9187->9187/tcp   postgres-exporter
e841da551b71   postgres                                        "docker-entrypoint.s…"   5 days ago       Up 2 hours     0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   postgres

 

Checking Docker container details
# Check the account info inside the container (container id 77d5896ee529 ---> the alertmanager container)

# docker exec 77d5896ee529 cat /etc/passwd
root:x:0:0:root:/root:/bin/sh
daemon:x:1:1:daemon:/usr/sbin:/bin/false
bin:x:2:2:bin:/bin:/bin/false
sys:x:3:3:sys:/dev:/bin/false
sync:x:4:100:sync:/bin:/bin/sync
mail:x:8:8:mail:/var/spool/mail:/bin/false
www-data:x:33:33:www-data:/var/www:/bin/false
operator:x:37:37:Operator:/var:/bin/false
nobody:x:65534:65534:nobody:/home:/bin/false

# Check the container's default working directory
# docker exec 77d5896ee529 pwd
/alertmanager

# Check the container's environment variables
# docker exec 77d5896ee529 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=77d5896ee529
HOME=/home

# docker exec 77d5896ee529 uname -a
Linux 77d5896ee529 5.15.0-83-generic #92-Ubuntu SMP Mon Aug 14 09:30:42 UTC 2023 x86_64 GNU/Linux

# docker exec 77d5896ee529 uname -s
Linux

# docker exec 77d5896ee529 uname -r
5.15.0-83-generic

# docker exec 77d5896ee529 uname -v
92-Ubuntu SMP Mon Aug 14 09:30:42 UTC 2023

# docker exec 77d5896ee529 cat /proc/version
Linux version 5.15.0-83-generic (buildd@lcy02-amd64-027) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #92-Ubuntu SMP Mon Aug 14 09:30:42 UTC 2023

 

Error when attaching to the Docker container
  • If /bin/bash is not available, connect with /bin/sh
# docker exec -it 77d5896ee529 /bin/bash
OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown

# docker exec 77d5896ee529 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=77d5896ee529
HOME=/home

# docker exec 77d5896ee529 ls /bin/bash       # this image has no /bin/bash
ls: /bin/bash: No such file or directory

# docker exec 77d5896ee529 ls /bin/sh         # check that sh exists
/bin//sh

# docker exec -it 77d5896ee529 /bin/sh         # connect with sh
/alertmanager $

 


 

https://prometheus.io/download/

 


 

(Install method 1) # Installing AlertManager (binary install recommended)
# wget https://github.com/prometheus/alertmanager/releases/download/v0.27.0/alertmanager-0.27.0.linux-amd64.tar.gz

# tar xvzf alertmanager-0.27.0.linux-amd64.tar.gz

# mv alertmanager-0.27.0.linux-amd64 /etc/prometheus/alertmanager

# /etc/prometheus/alertmanager/./alertmanager &         

  -- runs alertmanager in the background
  -- you can also create an alertmanager.service unit and manage it with systemctl start, restart, disable

# netstat -ntpa |grep LISTEN
tcp6       0      0 :::9093                 :::*                    LISTEN      6101/./alertmanager
tcp6       0      0 :::9094                 :::*                    LISTEN      6101/./alertmanager

 

# To manage alertmanager with systemctl {restart, start, disable, enable}, see https://hwpform.tistory.com/134

 

Or (install method 2)  # Installing AlertManager with Docker
  • Check Docker
# docker run --name alertmanager -d -p 9093:9093 quay.io/prometheus/alertmanager

# docker ps -a
CONTAINER ID   IMAGE                                           COMMAND                  CREATED         STATUS         PORTS                                       NAMES
77d5896ee529   quay.io/prometheus/alertmanager                 "/bin/alertmanager -…"   5 minutes ago   Up 5 minutes   0.0.0.0:9093->9093/tcp, :::9093->9093/tcp   alertmanager

# docker images
REPOSITORY                                      TAG       IMAGE ID       CREATED        SIZE
quay.io/prometheus/alertmanager                 latest    11f11916f8cd   2 days ago     70.3MB
  • Check the web page at http://192.168.56.128:9093

 

# Configure alertmanager on the Prometheus server
# cat /etc/prometheus/prometheus.yml

# my global config
global:
  scrape_interval: 15s 
  evaluation_interval: 15s 

# Alertmanager configuration 
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - 192.168.56.128:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "/etc/prometheus/alertmanager/rules/test_rule.yml"
  - "/etc/prometheus/alertmanager/rules/alert_rules.yml"

 

 

Reference: Alerting rules | Prometheus (prometheus.io)

# cat /etc/prometheus/alertmanager/rules/test_rule.yml

groups:
- name: example
  rules:

  # Alert for any instance that is unreachable for >5 minutes.
  - alert: InstanceDown
    expr: up == 0
    for: 5m
    labels:
      severity: page
    annotations:
      summary: "Instance {{ $labels.instance }} down"
      description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."

  # Alert for any instance that has a median request latency >1s.
  - alert: APIHighRequestLatency
    expr: api_http_request_latencies_second{quantile="0.5"} > 1
    for: 10m
    annotations:
      summary: "High request latency on {{ $labels.instance }}"
      description: "{{ $labels.instance }} has a median request latency above 1s (current value: {{ $value }}s)"
# cat alert_rules.yml

groups:
- name: alert.rules
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 1m
    labels:
      severity: "critical"
    annotations:
      summary: "Endpoint {{ $labels.instance }} down"
      description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minutes."

  - alert: HostOutOfMemory
    expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: "Host out of memory (instance {{ $labels.instance }})"
      description: "Node memory is filling up (< 10% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"

  - alert: HostMemoryUnderMemoryPressure
    expr: rate(node_vmstat_pgmajfault[1m]) > 1000
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: "Host memory under memory pressure (instance {{ $labels.instance }})"
      description: "The node is under heavy memory pressure. High rate of major page faults\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"
  # Please add ignored mountpoints in node_exporter parameters like
  # "--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/)".
  # Same rule using "node_filesystem_free_bytes" will fire when disk fills for non-root users.
  - alert: HostOutOfDiskSpace
    expr: (node_filesystem_avail_bytes * 100) / node_filesystem_size_bytes < 10 and ON (instance, device, mountpoint) node_filesystem_readonly == 0
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: "Host out of disk space (instance {{ $labels.instance }})"
      description: "Disk is almost full (< 10% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"

  - alert: HostHighCpuLoad
    expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 80
    for: 0m
    labels:
      severity: warning
    annotations:
      summary: "Host high CPU load (instance {{ $labels.instance }})"
      description: "CPU load is > 80%\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"

 

  • Restart the Prometheus server
# systemctl restart prometheus

 

Verifying Prometheus and Alertmanager operation
  • Check the basic installation info
# netstat -ntpa |grep LISTEN
tcp6       0      0 :::9090                 :::*                    LISTEN      8085/prometheus
tcp6       0      0 :::9093                 :::*                    LISTEN      6101/./alertmanager
tcp6       0      0 :::9094                 :::*                    LISTEN      6101/./alertmanager
Port         Process        Install location / files                          Web access
9090         prometheus     /etc/prometheus/prometheus.yml                    http://192.168.56.128:9090
9093, 9094   alertmanager   /etc/prometheus/alertmanager/alertmanager         http://192.168.56.128:9093
                            /etc/prometheus/alertmanager/alertmanager/rules
  • Check Prometheus at http://192.168.56.128:9090

 

  • Check AlertManager operation (see the API check below)
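For an additional command-line check (a minimal sketch; it assumes Prometheus is reachable at 192.168.56.128:9090), the alerts loaded from the rule files above can be listed through the Prometheus HTTP API:

# list_alerts.py - minimal sketch; assumes the Prometheus server at 192.168.56.128:9090
import json, urllib.request

with urllib.request.urlopen("http://192.168.56.128:9090/api/v1/alerts") as resp:
    data = json.load(resp)

# each entry carries the labels and annotations defined in test_rule.yml / alert_rules.yml
for alert in data["data"]["alerts"]:
    print(alert["labels"].get("alertname"), alert["state"])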

Installing Tomcat
  • Download site

https://tomcat.apache.org/download-10.cgi

 


# wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.19/bin/apache-tomcat-10.1.19.tar.gz

# tar -xvf apache-tomcat-10.1.19.tar.gz

# mv /root/apache-tomcat-10.1.19 /root/tomcat

# /root/tomcat/bin/./startup.sh   --> check that port 8080 is LISTENing

# netstat -ntpa |grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      9743/java

 

  • Check access to the web page (default page)

  • The Server Status, Manager App, and Host Manager functions are blocked by default for security reasons; fix the 403 Access Denied returned when accessing them
  • The Tomcat Setup, JDBC DataSources and other doc/sample links in the middle of the page also return 403 Access Denied when clicked

 

# 1. Change the tomcat-users.xml settings in the conf directory

# vi /root/tomcat/conf/tomcat-users.xml

  - at the very bottom, the username / password were also changed to admin/admin



<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">

<role rolename="admin"/>
<role rolename="admin-gui"/>
<role rolename="admin-script"/>
<role rolename="manager"/>
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<role rolename="manager-jmx"/>
<role rolename="manager-status"/>
<user username="admin" password="admin" roles="admin,manager,admin-gui,admin-script,manager-gui,manager-script,manager-jmx,manager-status" />

 

# 2. Change the context.xml settings under /root/tomcat/webapps/

# ls /root/tomcat/webapps/
context.xml  docs  examples  host-manager  manager  ROOT  SampleWebApp  SampleWebApp.war

# Every context.xml under /root/tomcat/webapps/ must be found and changed

# find . -name context.xml
./docs/META-INF/context.xml
./examples/META-INF/context.xml
./host-manager/META-INF/context.xml
./manager/META-INF/context.xml
./SampleWebApp/META-INF/context.xml
./context.xml


# tree docs
docs
└── META-INF
    └── context.xml
    
examples
└── META-INF
    └── context.xml
    
host-manager
└── META-INF
    └── context.xml
    
manager
└── META-INF
    └── context.xml

SampleWebApp
└── META-INF
    └── context.xml
  • Example change
# vi /root/tomcat/webapps/manager/META-INF/context.xml

<?xml version="1.0" encoding="UTF-8"?>

<Context antiResourceLocking="false" privileged="true" >
  <CookieProcessor className="org.apache.tomcat.util.http.Rfc6265CookieProcessor"
                   sameSiteCookies="strict" />
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
#------>   allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
  <Manager sessionAttributeValueClassNameFilter="java\.lang\.(?:Boolean|Integer|Long|Number|String)|org\.apache\.catalina\.filters\.CsrfPreventionFilter\$LruCache(?:\$1)?|java\.util\.(?:Linked)?HashMap"/>
</Context>

# change allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
# to allow=".*" />  (note: this opens the manager apps to all addresses; restrict the pattern in production)

 

# 3. Restart Tomcat and check access (confirm that port 8080 came back up)

# /root/tomcat/bin/./shutdown.sh
Using CATALINA_BASE:   /root/tomcat
Using CATALINA_HOME:   /root/tomcat
Using CATALINA_TMPDIR: /root/tomcat/temp
Using JRE_HOME:        /usr/lib/jvm/java-17-openjdk-17.0.1.0.12-2.el8_5.x86_64
Using CLASSPATH:       /root/tomcat/bin/bootstrap.jar:/root/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:    -javaagent:/root/tomcat/jmx_exporter/jmx_prometheus_javaagent-0.17.0.jar=8081:/root/tomcat/jmx_exporter/config.yaml

# /root/tomcat/bin/./startup.sh
Using CATALINA_BASE:   /root/tomcat
Using CATALINA_HOME:   /root/tomcat
Using CATALINA_TMPDIR: /root/tomcat/temp
Using JRE_HOME:        /usr/lib/jvm/java-17-openjdk-17.0.1.0.12-2.el8_5.x86_64
Using CLASSPATH:       /root/tomcat/bin/bootstrap.jar:/root/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:    -javaagent:/root/tomcat/jmx_exporter/jmx_prometheus_javaagent-0.17.0.jar=8081:/root/tomcat/jmx_exporter/config.yaml
Tomcat started.

# netstat -ntpa |grep LISTEN
tcp6       0      0 127.0.0.1:8005          :::*                    LISTEN      12226/java
tcp6       0      0 :::8080                 :::*                    LISTEN      12226/java
tcp6       0      0 :::8081                 :::*                    LISTEN      12226/java

 

  • Server Status

  • Manager App

  • Host Manager 

Set up two virtual servers (one Prometheus/Grafana server and one client server for integration), feed the collected data into Prometheus, and then build Grafana dashboards from that data.
- node_exporter: collects server data (servers, Kubernetes, Docker, etc.)
- postgresql_exporter: collects PostgreSQL data
- jmx_exporter: collects Tomcat data

 

Basic terminology
  • Exporter? A Prometheus exporter is software that collects metrics from the target system being monitored (a server, agent, daemon, etc.) and exposes them on an HTTP endpoint (default: /metrics)
  • Common exporters

      - node-exporter

      - mysql-exporter

      - wmi-exporter (Windows Server)

      - postgres-exporter

      - redis-exporter

      - kafka-exporter

      - jmx-exporter

      ※ Prometheus and exporter download site:  https://prometheus.io/download/

 


 

    Here we only cover (integrate) node-exporter, postgres-exporter, and jmx-exporter.

 

  • What is a metric? A metric is a measurement that tells you the current state of a system. In an infrastructure environment metrics are roughly split into two kinds: system metrics (System Metric) such as CPU and memory usage, and service metrics (Service Metric) such as HTTP status codes that indicate the state of a service.
  • What is a time-series database? A time-series database is optimized for storing data generated over time, keyed on time. Examples include network packets, sensor values received from IoT devices, and event logs. (A small query sketch follows below.)
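To make the time-series idea concrete, here is a minimal sketch (an assumption: it uses the Prometheus server set up below at 192.168.56.128:9090) that asks the Prometheus HTTP API for a range of samples of the built-in up metric; each result is one label set plus timestamped values along the time axis:

# query_range.py - minimal sketch; assumes Prometheus is reachable at 192.168.56.128:9090
import json, time, urllib.parse, urllib.request

end = int(time.time())
params = urllib.parse.urlencode({
    "query": "up",          # 1 if a scrape target is reachable, 0 if not
    "start": end - 600,     # last 10 minutes
    "end": end,
    "step": "15s",
})

with urllib.request.urlopen(f"http://192.168.56.128:9090/api/v1/query_range?{params}") as resp:
    data = json.load(resp)

# each result is one time series: a label set plus (timestamp, value) samples
for series in data["data"]["result"]:
    print(series["metric"].get("instance"), series["values"][:3], "...")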

 

Target system architecture diagram

 

20240229_프로메테우스-그라파타목표시스템_ver0.1.pptx
0.69MB

 

Installing the Prometheus - Grafana server (with Vagrant)
  • Oracle VM VirtualBox and Vagrant must already be installed (the servers are installed with Vagrant)

      - For installing VMs with Vagrant, see https://hwpform.tistory.com/111

Role                                       IP                               Installed packages               Port
Prometheus, Grafana server                 192.168.56.128 (Ubuntu 22.04)    Prometheus                       9090
                                                                            Grafana                          3000
                                                                            Node_exporter                    9100
                                                                            PostgreSQL (Docker)              5432
                                                                            PostgreSQL_Exporter (Docker)     9187
Prometheus, Grafana client test server     192.168.56.130 (CentOS8)         Node_exporter                    9100
                                                                            PostgreSQL (Docker)              5432
                                                                            PostgreSQL_Exporter (Docker)     9187
                                                                            Tomcat                           8080
                                                                            jmx_exporter (Tomcat)            8081

 

Box info for the servers installed with Vagrant: https://app.vagrantup.com/boxes/search
  • Prometheus, Grafana server

  • Prometheus, Grafana client test server

  • Vagrantfile and install method
# Ubuntu 22.04
Vagrant.configure("2") do |config|
  config.vm.box = "davidurbano/prometheus-grafana"
end

# CentOS 8
Vagrant.configure("2") do |config|
  config.vm.box = "centos/8"
end

 

  • Oracle VM servers installed with Vagrant

< 192.168.56.128 >
< 192.168.56.130 >

Installed Prometheus info

< 192.168.56.128:9090 >

 

Installed Grafana example: Node_exporter (server monitoring)

< 192.168.56.128:3000 >

Installed Grafana example: Postgresql_exporter (DB monitoring)

< 192.168.56.128:3000 >

 

Installed Grafana example: jmx_exporter (Tomcat monitoring)

< 192.168.56.128:3000 >

Prometheus, Grafana server (192.168.56.128)
# The server installed with Vagrant (192.168.56.128, config.vm.box = "davidurbano/prometheus-grafana")
  already has Prometheus (9090), Grafana (3000), and Node_exporter (9100) installed
  
# netstat -ntpa |grep LISTEN
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      2632/systemd-resolv
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      735/sshd: /usr/sbin
tcp6       0      0 :::22                   :::*                    LISTEN      735/sshd: /usr/sbin
tcp6       0      0 :::9090                 :::*                    LISTEN      4369/prometheus
tcp6       0      0 :::9100                 :::*                    LISTEN      670/node_exporter
tcp6       0      0 :::3000                 :::*                    LISTEN      666/grafana
  • Prometheus server: http://192.168.56.128:9090
# ls -al
total 24
drwxr-xr-x   4 prometheus prometheus 4096 Feb 29 13:47 .
drwxr-xr-x 102 root       root       4096 Feb 29 11:29 ..
drwxr-xr-x   2 prometheus prometheus 4096 Sep 29 21:42 console_libraries
drwxr-xr-x   2 prometheus prometheus 4096 Sep 29 21:42 consoles
-rw-r--r--   1 vagrant    vagrant    1385 Feb 29 13:47 prometheus.yml
-rw-r--r--   1 root       root        934 Feb 25 00:00 prometheus.yml.20240225

# pwd
/etc/prometheus
  • Prometheus config file (just specify the target server:port pairs you want to scrape)
# cat /etc/prometheus/prometheus.yml

scrape_configs:

# Node_exporter (server monitoring)

  - job_name: "node_exporter"
    static_configs:
      - targets: ["192.168.56.128:9100"]
      - targets: ["192.168.56.130:9100"]

# PostgreSQL_exporter (DB monitoring)

  - job_name: 'PostgreSQL_exporter'
    static_configs:
      - targets: ['192.168.56.130:9187']
      - targets: ['192.168.56.128:9187']

# jmx_exporter (Tomcat monitoring)

  - job_name: 'jmx_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.56.130:8081']

# kubernetes_exporter (Kubernetes server monitoring)

  - job_name: 'kubernetes_exporter'
    static_configs:
      - targets: ['192.168.56.10:9100']
      - targets: ['192.168.56.101:9100']
      - targets: ['192.168.56.102:9100']
      - targets: ['192.168.56.103:9100']

 

  • Prometheus server: http://192.168.56.128:9090

 

  • Clicking Prometheus -> Status -> Targets shows the same values as configured in /etc/prometheus/prometheus.yml above

 

  • Collected metrics info (values that can be displayed on dashboards)

node_exporter.txt
0.07MB
PostgreSQL_exporter.txt
0.08MB
jmx_exporter.txt
0.24MB

 

 

  • Grafana server: http://192.168.56.128:3000

 

  • Node Exporter : http://192.168.56.128:9100

 

Part 1 ends here .. continued in Part 2


 

Changing the Java version from 11 to 17 on CentOS 8

 

1. Check the current Java version

# java -version
openjdk version "11.0.13" 2021-10-19 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.13+8-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.13+8-LTS, mixed mode, sharing)

 

2. Check which Java versions are available to install

# yum list java*jdk-devel
Last metadata expiration check: 1:45:54 ago on Thu 29 Feb 2024 05:22:45 PM KST.
Installed Packages
java-11-openjdk-devel.x86_64                                                            1:11.0.13.0.8-4.el8_5                                                           @appstream
Available Packages
java-1.8.0-openjdk-devel.x86_64                                                         1:1.8.0.312.b07-2.el8_5                                                         appstream
java-17-openjdk-devel.x86_64                                                            1:17.0.1.0.12-2.el8_5                                                           appstream

 

3. Install the desired version

# yum install -y java-17-openjdk-devel.x86_64
Last metadata expiration check: 1:47:33 ago on Thu 29 Feb 2024 05:22:45 PM KST.
Dependencies resolved.
==================================================================================================================================================================================
 Package                                            Architecture                     Version                                            Repository                           Size
==================================================================================================================================================================================
Installing:
 java-17-openjdk-devel                              x86_64                           1:17.0.1.0.12-2.el8_5                              appstream                           5.1 M
Installing dependencies:
 java-17-openjdk                                    x86_64                           1:17.0.1.0.12-2.el8_5                              appstream                           244 k
 java-17-openjdk-headless                           x86_64                           1:17.0.1.0.12-2.el8_5                              appstream                            41 M

Transaction Summary
==================================================================================================================================================================================
Install  3 Packages

..
..
..
Installed:
  java-17-openjdk-1:17.0.1.0.12-2.el8_5.x86_64          java-17-openjdk-devel-1:17.0.1.0.12-2.el8_5.x86_64          java-17-openjdk-headless-1:17.0.1.0.12-2.el8_5.x86_64

Complete!

 

4. Change the default Java

# /usr/sbin/alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.13.0.8-4.el8_5.x86_64/bin/java)
   2           java-17-openjdk.x86_64 (/usr/lib/jvm/java-17-openjdk-17.0.1.0.12-2.el8_5.x86_64/bin/java)

Enter to keep the current selection[+], or type selection number: 2

 

5. Reset the environment variables

# java -version
openjdk version "11.0.13" 2021-10-19 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.13+8-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.13+8-LTS, mixed mode, sharing)

# echo $JAVA_HOME
/usr/bin/javac

# vi /etc/profile
--> remove the old line: JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.275.b01-0.el7_9.i386
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-17.0.1.0.12-2.el8_5.x86_64
export JAVA_HOME
PATH=$PATH:$JAVA_HOME/bin
export PATH

 

6. Verify the Java version (after logging in again)

# su - root
Last login: Thu Feb 29 19:44:10 KST 2024 on pts/0

# echo $JAVA_HOME
/usr/lib/jvm/java-17-openjdk-17.0.1.0.12-2.el8_5.x86_64

# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/lib/jvm/java-17-openjdk-17.0.1.0.12-2.el8_5.x86_64/bin:/root/bin

# java -version
openjdk version "17.0.1" 2021-10-19 LTS
OpenJDK Runtime Environment 21.9 (build 17.0.1+12-LTS)
OpenJDK 64-Bit Server VM 21.9 (build 17.0.1+12-LTS, mixed mode, sharing)
  • NodePort
  • Ingress
# Creating a NodePort service
# Master node and worker node information

# kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
m-k8s    Ready    master   3d5h   v1.18.4   192.168.56.10    <none>        CentOS Linux 7 (Core)   3.10.0-1160.90.1.el7.x86_64   docker://18.9.9
w1-k8s   Ready    <none>   3d4h   v1.18.4   192.168.56.101   <none>        CentOS Linux 7 (Core)   3.10.0-1160.90.1.el7.x86_64   docker://18.9.9
w2-k8s   Ready    <none>   3d4h   v1.18.4   192.168.56.102   <none>        CentOS Linux 7 (Core)   3.10.0-1160.90.1.el7.x86_64   docker://18.9.9
w3-k8s   Ready    <none>   3d4h   v1.18.4   192.168.56.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.90.1.el7.x86_64   docker://18.9.9

 

# Pod object spec: NodePort service

# cat nodeport.yaml

apiVersion: v1
kind: Service               # kind: Service
metadata:                   # metadata: name of the service
  name: np-svc
spec:                       # spec: label selector for the target pods
  selector:
    app: np-pods
  ports:                    # protocol and ports to use
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30000
  type: NodePort            # service type

 

# Create the pod

# kubectl create deployment np-pods --image=sysnet4admin/echo-hname
deployment.apps/np-pods created

# Create the NodePort service for the pod
# kubectl create -f nodeport.yaml

# kubectl get pods -o wide
NAME                                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
np-pods-5767d54d4b-txwm4                         1/1     Running   0          10m     172.16.103.169   w2-k8s   <none>           <none>

# kubectl get services
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
np-svc                          NodePort       10.104.110.226   <none>          80:30000/TCP   7m32s
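
Because a NodePort opens the same port on every node, the service can be tested with curl against any node IP from the listing above; echo-hname returns the pod's host name, so each request should print something like np-pods-...:

# curl http://192.168.56.101:30000
# curl http://192.168.56.102:30000
# curl http://192.168.56.103:30000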

 

 

 

# Creating an Ingress

 

# cat ingress-config.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path:
        backend:
          serviceName: hname-svc-default
          servicePort: 80
      - path: /ip
        backend:
          serviceName: ip-svc
          servicePort: 80
      - path: /your-directory
        backend:
          serviceName: your-svc
          servicePort: 80

 

# cat ingress-nginx.yaml
# All of sources From https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
# clone from above to sysnet4admin

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container

 

# cat ingress.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30100
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
    nodePort: 30101
  selector:
    app.kubernetes.io/name: ingress-nginx
  type: NodePort

 

 

# kubectl create deployment in-hname-pod --image=sysnet4admin/echo-hname
deployment.apps/in-hname-pod created

# kubectl create deployment in-ip-pod --image=sysnet4admin/echo-ip
deployment.apps/in-ip-pod created

# kubectl apply -f ingress-nginx.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

# kubectl apply -f ingress-config.yaml
ingress.networking.k8s.io/ingress-nginx configured

# kubectl apply -f ingress.yaml
service/nginx-ingress-controller created

# kubectl expose deployment in-hname-pod --name=hname-svc-default --port=80,443
service/hname-svc-default exposed

# kubectl expose deployment in-ip-pod --name=ip-svc --port=80,443
service/ip-svc exposed
# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-5bb8fb4bb6-mnx88   1/1     Running   0          23s

# kubectl get ingress
NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
ingress-nginx   <none>   *                 80      40m

# kubectl get services -n ingress-nginx
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-controller   NodePort   10.101.20.235   <none>        80:30100/TCP,443:30101/TCP   20s

# kubectl get services
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
hname-svc-default               ClusterIP      10.97.228.75     <none>          80/TCP,443/TCP   18s
ip-svc                          ClusterIP      10.108.49.235    <none>          80/TCP,443/TCP   9s

# kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
in-hname-pod-8565c86448-d8q9h                    1/1     Running   0          2m40s
in-ip-pod-76bf6989d-j7pdk                        1/1     Running   0          2m30s
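
With the controller exposed on NodePorts 30100/30101, each ingress path can be checked with curl; a sketch assuming the master node IP 192.168.56.10 (any node IP should work):

# Default path -> hname-svc-default (echo-hname prints the pod host name)
# curl http://192.168.56.10:30100/

# /ip path -> ip-svc (echo-ip prints IP information)
# curl http://192.168.56.10:30100/ip

# The HTTPS listener on 30101 uses the controller's self-signed certificate, so skip verification
# curl -k https://192.168.56.10:30101/ip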

 

 

 

# Understanding Kubernetes

 

  • Kubernetes is a container orchestration platform: it manages the complex steps of operating containers and predefines the relationships between components so that services are easy to use. It organically connects, runs, and terminates many containers, and also tracks and preserves their state, making containers reliable to operate.

 

# kubectl get nodes

# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
m-k8s    Ready    master   2d4h   v1.18.4
w1-k8s   Ready    <none>   2d4h   v1.18.4
w2-k8s   Ready    <none>   2d3h   v1.18.4
w3-k8s   Ready    <none>   2d3h   v1.18.4

 

# kubectl get pods --all-namespaces

# kubectl get pods --all-namespaces
NAMESPACE              NAME                                             READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-99c9b6f64-f2hgs          1/1     Running   7          2d4h
kube-system            calico-node-5j69x                                1/1     Running   12         2d4h
kube-system            calico-node-c4grb                                1/1     Running   11         2d3h
kube-system            calico-node-dmlgz                                1/1     Running   10         2d3h
kube-system            calico-node-r5w6f                                1/1     Running   7          2d4h
kube-system            coredns-66bff467f8-26xcv                         1/1     Running   7          2d4h
kube-system            coredns-66bff467f8-b5v8m                         1/1     Running   7          2d4h
kube-system            etcd-m-k8s                                       1/1     Running   7          2d4h
kube-system            kube-apiserver-m-k8s                             1/1     Running   8          2d4h
kube-system            kube-controller-manager-m-k8s                    1/1     Running   14         2d4h
kube-system            kube-proxy-b48gp                                 1/1     Running   11         2d3h
kube-system            kube-proxy-b8sl2                                 1/1     Running   11         2d4h
kube-system            kube-proxy-c4frz                                 1/1     Running   7          2d4h
kube-system            kube-proxy-jdsdn                                 1/1     Running   10         2d3h
kube-system            kube-scheduler-m-k8s                             1/1     Running   17         2d4h
kubernetes-dashboard   dashboard-metrics-scraper-68fc77645b-lxhkc       1/1     Running   5          24h
kubernetes-dashboard   kubernetes-dashboard-7f9d757bdb-k4h6l            1/1     Running   5          24h

 

# Kubernetes components

<Source: https://kubernetes.io/ko/docs/concepts/overview/components/>
<Source: https://arisu1000.tistory.com/27827>

 

# Master Node

  • kubectl : the command-line tool used to issue commands to the Kubernetes cluster.
  • API server : the central gateway through which the Kubernetes cluster communicates.
  • etcd : where the state of every component is stored.
  • Controller manager (c-m) : manages the state of objects in the cluster (for example, when a worker node stops responding, health checks and recovery are handled by the node controller inside the controller manager).
  • Scheduler (sched) : decides which worker node a pod will be created on, considering node state, resources, labels, and other requirements.

# Worker Node

  • kubelet : receives the pod spec (PodSpec), hands it to the container runtime, and monitors whether the containers in the pod are running properly (manages pod state; started with `systemctl start kubelet`).
  • Container runtime (CRI, Container Runtime Interface) : responsible for running the containers that make up a pod.
  • Pod : one or more containers grouped together to do a single job (a bundle of containers).

# Pod state management and pod networking

  • kubelet : creates pods and manages and recovers their state.
  • kube-proxy : handles pod networking.

 

# Creating and managing pods
# docker images nginx
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              b690f5f0a2d5        3 months ago        187MB
nginx               stable              3a8963c304a2        10 months ago       142MB

# (create as a plain pod)
# kubectl run nginx-pod --image=nginx  

# (create via a Deployment)
# kubectl create deployment dpy-nginx --image=nginx

# kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
nginx-pod                                        1/1     Running   0          5m30s
dpy-nginx-c8d778df-4tsmz                         1/1     Running   0          4m59s


# kubectl scale pod nginx-pod --replicas=3
Error from server (NotFound): the server could not find the requested resource
(scaling fails because nginx-pod was created as a plain pod, not a Deployment)


# kubectl scale deployment dpy-nginx --replicas=3
(scaling works because dpy-nginx was created as a Deployment)

# kubectl get pods 
NAME                                             READY   STATUS    RESTARTS   AGE
dpy-nginx-c8d778df-4tsmz                         1/1     Running   0          16m
dpy-nginx-c8d778df-6qzjj                         1/1     Running   0          77s
dpy-nginx-c8d778df-mtwsz                         1/1     Running   0          77s
nginx-pod                                        1/1     Running   0          17m

 

# Creating objects from a spec (YAML)
# cat echo-hname.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-hname
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echo-hname
        image: sysnet4admin/echo-hname


# Apply the YAML file
# kubectl create -f echo-hname.yaml
deployment.apps/echo-hname created


# kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
echo-hname-7894b67f-cndkd                        1/1     Running   0          44s
echo-hname-7894b67f-gbxk5                        1/1     Running   0          44s
echo-hname-7894b67f-tv7rs                        1/1     Running   0          44s
nginx-pod                                        1/1     Running   0          30m

 

- The image sysnet4admin/echo-hname in the YAML is pulled from Docker Hub.

< spec image : https://hub.docker.com/r/sysnet4admin/echo-hname >

 

# Changing an object's spec
# Change the number of pods to 6 and re-create
(after changing spec: replicas from 3 to 6 in the file below)

# cat echo-hname.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-hname
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echo-hname
        image: sysnet4admin/echo-hname


# kubectl create -f echo-hname.yaml
Error from server (AlreadyExists): error when creating "echo-hname.yaml": deployments.apps "echo-hname" already exists
# The pod count is not changed from 3 to 6 with create

# kubectl apply -f echo-hname.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/echo-hname configured


# kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
echo-hname-7894b67f-cndkd                        1/1     Running   0          7m4s
echo-hname-7894b67f-frrrl                        1/1     Running   0          38s
echo-hname-7894b67f-gbxk5                        1/1     Running   0          7m4s
echo-hname-7894b67f-s9pv4                        1/1     Running   0          38s
echo-hname-7894b67f-trwcb                        1/1     Running   0          38s
echo-hname-7894b67f-tv7rs                        1/1     Running   0          7m4s
nginx-pod                                        1/1     Running   0          37m
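
Before re-applying an edited manifest, the pending change can also be previewed; a small sketch using kubectl diff (available in this kubectl version; the exact output format is illustrative):

# Show what apply would change without touching the cluster
# kubectl diff -f echo-hname.yaml
# (exit code 0: no difference, 1: differences found, >1: error)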

 

Comparison of object-creation commands

Category           Run          Create       Apply
Run a command      Limited      Yes          No
Run from a file    No           Yes          Yes
Can be changed     No           No           Yes
Ease of use        Very good    Very good    Good
Feature support    Limited      Supported    Extensive

 

Accessing the inside of a pod
# kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
echo-hname-7894b67f-cndkd                        1/1     Running   0          14m
echo-hname-7894b67f-frrrl                        1/1     Running   0          7m41s
echo-hname-7894b67f-gbxk5                        1/1     Running   0          14m
echo-hname-7894b67f-s9pv4                        1/1     Running   0          7m41s
echo-hname-7894b67f-trwcb                        1/1     Running   0          7m41s
echo-hname-7894b67f-tv7rs                        1/1     Running   0          14m
nginx-pod                                        1/1     Running   0          44m

# kubectl exec -it echo-hname-7894b67f-cndkd -- /bin/bash
root@echo-hname-7894b67f-cndkd:/# exit
exit

# kubectl exec -it nginx-pod -- /bin/bash
root@nginx-pod:/# exit
exit
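
exec can also run a single command without opening an interactive shell; for example, against the pods above:

# Run one-off commands inside a pod
# kubectl exec nginx-pod -- nginx -v
# kubectl exec echo-hname-7894b67f-cndkd -- cat /etc/hostname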

 

# Deleting pods
# nginx-pod is deleted for good; the echo-hname-xx pod is deleted and then re-created by its Deployment

# kubectl delete pods nginx-pod

# kubectl delete pods echo-hname-7894b67f-tv7rs

# kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
echo-hname-7894b67f-c4pxc                        1/1     Running   0          36s
echo-hname-7894b67f-cndkd                        1/1     Running   1          15h
echo-hname-7894b67f-frrrl                        1/1     Running   1          14h
echo-hname-7894b67f-gbxk5                        1/1     Running   1          15h
echo-hname-7894b67f-s9pv4                        1/1     Running   1          14h
echo-hname-7894b67f-trwcb                        1/1     Running   1          14h

# Delete echo-hname completely
# kubectl delete deployments.apps echo-hname

 

 


 

# Terminology

 

  • Metric : a measured value that shows the current state of the system. In a container infrastructure environment, metrics fall into roughly two kinds: system metrics, such as CPU and memory usage measured on objects like pods, and service metrics, such as HTTP status codes that indicate the state of a service.
  • Time-series database : a database optimized for storing data that accumulates over time, using time as the key (axis).

 

# Prometheus components
# kubectl get services
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
grafana                         LoadBalancer   10.105.83.88     192.168.56.13   80:31702/TCP   40h
jenkins                         LoadBalancer   10.110.209.109   192.168.56.11   80:31590/TCP   40h
jenkins-agent                   ClusterIP      10.103.100.52    <none>          50000/TCP      40h
kubernetes                      ClusterIP      10.96.0.1        <none>          443/TCP        45h
prometheus-kube-state-metrics   ClusterIP      10.102.2.36      <none>          8080/TCP       40h
prometheus-node-exporter        ClusterIP      None             <none>          9100/TCP       40h
prometheus-server               LoadBalancer   10.109.71.59     192.168.56.12   80:32365/TCP   40h
[root@m-k8s ~]#
# Node information

Node      IP                Host
======  =============      ========
Master  192.168.56.10      m-k8s
Work#1  192.168.56.101     w1-k8s
Work#2  192.168.56.102     w2-k8s
Work#3  192.168.56.103     w3-k8s


# kubectl get pods -o wide
NAME                                             READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
grafana-86b96cd9c6-brs7f                         1/1     Running   4          40h   172.16.221.138   w1-k8s   <none>           <none>
prometheus-kube-state-metrics-7bc49db5c5-wv7kh   1/1     Running   2          40h   172.16.221.139   w1-k8s   <none>           <none>
prometheus-node-exporter-mdjgp                   1/1     Running   3          40h   192.168.56.10    m-k8s    <none>           <none>
prometheus-node-exporter-nbprf                   1/1     Running   3          40h   192.168.56.101   w1-k8s   <none>           <none>
prometheus-node-exporter-qjtk8                   1/1     Running   3          17h   192.168.56.103   w3-k8s   <none>           <none>
prometheus-node-exporter-zk6zq                   1/1     Running   3          40h   192.168.56.102   w2-k8s   <none>           <none>
prometheus-server-6d77896bb4-zpmqv               2/2     Running   4          13h   172.16.132.26    w3-k8s   <none>           <none>

 

  • prometheus-server : the collector that scrapes metrics from the nodes, the time-series database that stores the collected metrics, and a web UI for querying the stored data and checking the state of scrape targets (currently running on w3-k8s).
  • prometheus-node-exporter : exposes a node's system metrics over HTTP. It reads specific files on the node it runs on, converts them into metrics the Prometheus server can scrape, and serves them over HTTP; the Prometheus server then collects them (currently installed on m-k8s, w1-k8s, w2-k8s, and w3-k8s).
  • prometheus-kube-state-metrics : collects various metrics about the Kubernetes cluster from the API server and converts them into metrics the Prometheus server can scrape. kube-state-metrics is what lets Prometheus easily obtain cluster-level information (currently installed on w1-k8s).
  • Alertmanager : lets you define Prometheus alert rules and, when an alert event fires, delivers the configured alert message to its targets.
  • Pushgateway : stores and aggregates the status of one-off batch and scheduled jobs and exposes them so Prometheus can scrape them periodically.
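
Once these components are running, metrics can be pulled straight from an exporter or queried through the Prometheus HTTP API; a minimal sketch assuming the addresses above (192.168.56.12 is the prometheus-server LoadBalancer IP from the services listing):

# Raw system metrics exposed by a node exporter (port 9100 on each node)
# curl -s http://192.168.56.101:9100/metrics | grep '^node_load1'

# PromQL query via the Prometheus server HTTP API (the up metric shows which targets are being scraped)
# curl -s 'http://192.168.56.12/api/v1/query?query=up'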

 

 

 

 


https://github.com/

 


 

# Create a repository on GitHub.

 

 

#  On the GitHub site, click your account icon at the top right, then click Settings.

 

 

 

 

#  At the very bottom of the Profile settings, click Developer settings.

 

#  Click Personal access tokens (classic)

 

 

 

#  Fill in the token details and click generate.

 

 - Note : a general description of the token

 - Expiration : No expiration (permanent is fine)

 - Check at least the repo scope, then click Generate token

 

#  Check the token (copy the string starting with ghp.....)

 

 

 

#  The important part starts here

# My Git repository URL
- https://github.com/aeroshim/GitOps.git

# My token
- 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

# Run git clone in root's home directory (or any directory) on the Linux server

# git clone https://(username):(token)@github.com/(username)/(repository).git
--> git clone https://aeroshim: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@github.com/aeroshim/GitOps.git

# Create three files.

# vi aaa.txt
# vi bbb.txt
# vi ccc.txt


# git add .
# git commit -m "init commit"
# git branch -M main
# git push -u origin main
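
Embedding the token in the clone URL works, but it also leaves the token in .git/config and in the shell history. As an alternative sketch, the token can be kept in a credential store instead (the git config output below already shows credential.helper=store --file /root/.git-cred; the exact file path is just an example):

# Clone without the token in the URL and cache the credential in a file
# git config --global credential.helper 'store --file /root/.git-cred'
# git clone https://github.com/aeroshim/GitOps.git
# (on the first push/pull, enter the GitHub user name and the ghp_... token as the password; it is then saved and reused)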

 

 

# Git log (on the Linux server)
# Copy the repository with git clone

[root@m-k8s ~]# ls
anaconda-ks.cfg  _Book_k8sInfra
[root@m-k8s ~]#
[root@m-k8s ~]# git clone https://aeroshim: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@github.com/aeroshim/GitOps.git
Cloning into 'GitOps'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), done.
[root@m-k8s ~]# ls
anaconda-ks.cfg  _Book_k8sInfra  GitOps



# Check the cloned repository and initialize it

[root@m-k8s ~]# cd GitOps/
[root@m-k8s GitOps]# ls
test
[root@m-k8s GitOps]#
[root@m-k8s GitOps]# git init
Reinitialized existing Git repository in /root/GitOps/.git/
[root@m-k8s GitOps]# git config --list
user.name=aeroshim
user.email=aeroshim@gmail.com
credential.helper=store  --file /root/.git-cred
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
remote.origin.url=https://aeroshim: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@github.com/aeroshim/GitOps.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
branch.main.remote=origin
branch.main.merge=refs/heads/main
[root@m-k8s GitOps]#


# Create files and test the integration

[root@m-k8s GitOps]# git add .
[root@m-k8s GitOps]# git commit -m "git pull test"
[main 04fe006] git pull test
 3 files changed, 3 insertions(+)
 create mode 100644 aaa.txt
 create mode 100644 bbb.txt
 create mode 100644 ccc.txt
[root@m-k8s GitOps]# git branch -M main
[root@m-k8s GitOps]# git push -u origin main
Counting objects: 6, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (5/5), 493 bytes | 0 bytes/s, done.
Total 5 (delta 0), reused 0 (delta 0)
remote: To https://aeroshim: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@github.com/aeroshim/GitOps.git
   65f15cd..04fe006  main -> main
Branch main set up to track remote branch main from origin.

 

 

 

# Check on GitHub that the files were created


 

# Docker Hub repository site

 

https://hub.docker.com 

 


 

# Docker concepts

 

  • Containers are created from Docker images (an image is like a mold: you need a Docker image to stamp out containers).
  • A container made from an image can in turn be made into a new image (a modified version).
# Installing Docker (Ubuntu 22.04)
1. Update the Ubuntu package index
# sudo apt-get update

2. Install the required packages
# sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

3. Add Docker's official GPG key
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

4. Add Docker's official apt repository
# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

5. Update the package index again
# sudo apt-get update

6. Install Docker
# sudo apt-get install docker-ce docker-ce-cli containerd.io

# sudo systemctl status docker


7. Install Docker Compose

# curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
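
Regardless of the distribution, a quick way to confirm the installation is to start the daemon and run a throwaway container; this is only a sanity check, not part of the original steps:

# sudo systemctl enable --now docker
# sudo docker run --rm hello-world      # should print "Hello from Docker!"
# sudo docker version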

 

# Installing Docker (CentOS)
# Add the Docker repo

# yum install yum-utils -y 
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker
# yum install docker-ce docker-ce-cli containerd.io-1.2.6-3.3.el7 -y
# systemctl enable --now docker



7. Install Docker Compose

# curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose

 

# Starting/stopping the Docker engine
# Start the Docker engine
# systemctl start docker

# Stop the Docker engine
# systemctl stop docker

# Start Docker automatically at boot
# systemctl enable docker

 

# Search for Docker images (docker search nginx)
# docker search nginx
NAME                               DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
nginx                              Official build of Nginx.                        19595               [OK]
bitnami/nginx                      Bitnami nginx Docker Image                      181                                     [OK]
nginxinc/nginx-unprivileged        Unprivileged NGINX Dockerfiles                  141
nginxproxy/nginx-proxy             Automated nginx proxy for Docker containers …   131
nginxproxy/acme-companion          Automated ACME SSL certificate generation fo…   130
ubuntu/nginx                       Nginx, a high-performance reverse proxy & we…   112
nginx/nginx-ingress                NGINX and  NGINX Plus Ingress Controllers fo…   88
nginx/unit                         This repository is retired, use the Docker o…   64
nginx/nginx-prometheus-exporter    NGINX Prometheus Exporter for NGINX and NGIN…   36
bitnami/nginx-ingress-controller   Bitnami Docker Image for NGINX Ingress Contr…   32                                      [OK]
unit                               Official build of NGINX Unit: Universal Web …   21                  [OK]

 

  • NAME : the name of the image found
  • DESCRIPTION : a description of the image
  • STARS : the number of stars (likes) the image has received
  • OFFICIAL : [OK] means the image is officially provided by the vendor that develops the software
  • AUTOMATED : [OK] means the image was built with Docker Hub's automated build feature
# Pull an image (docker pull nginx)
# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
c57ee5000d61: Pull complete
9b0163235c08: Pull complete
f24a6f652778: Pull complete
9f3589a5fc50: Pull complete
f0bd99a47d4a: Pull complete
398157bc5c51: Pull complete
1ef1c1a36ec2: Pull complete
Digest: sha256:84c52dfd55c467e12ef85cad6a252c0990564f03c4850799bf41dd738738691f
Status: Downloaded newer image for nginx:latest

  

# Basic usage
# Download a WordPress container and a MySQL container and connect the two.

# Create a Docker network connecting the two containers

# docker network create wordpress000net1


# Pull and run the two containers

# docker run --name wordpress000ex12 -dit --net=wordpress000net1 -p 8085:80 -e WORDPRESS_DB_HOST=mysql000ex11 -e WORDPRESS_DB_NAME=wordpress000db -e WORDPRESS_DB_USER=wordpress000kun -e WORDPRESS_DB_PASSWORD=wkunpass wordpress
# docker run --name mysql000ex11 -dit --net=wordpress000net1 -e MYSQL_ROOT_PASSWORD=myrootpass -e MYSQL_DATABASE=wordpress000db -e MYSQL_USER=wordpress000kun -e MYSQL_PASSWORD=wkunpass mysql --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --default-authentication-plugin=mysql_native_password

# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                   NAMES
0cf1e7f5cf51   wordpress   "docker-entrypoint.s…"   58 seconds ago   Up 49 seconds   0.0.0.0:8085->80/tcp, :::8085->80/tcp   wordpress000ex12
ad739fca0f25   mysql       "docker-entrypoint.s…"   6 minutes ago    Up 6 minutes    3306/tcp, 33060/tcp                     mysql000ex11

# From a PC, browse to 192.168.56.130:8085 and the site opens

# Stop the containers

# docker stop wordpress000ex12
# docker stop mysql000ex11

# Remove the containers
# docker rm wordpress000ex12
# docker rm mysql000ex11

# Remove the Docker network
# docker network ls
NETWORK ID     NAME               DRIVER    SCOPE
280c498270b5   bridge             bridge    local
895ffb8cd3c4   host               host      local
4533e20c8f87   none               null      local
35468fd66207   wordpress000net1   bridge    local

# docker network rm wordpress000net1

# Remove Docker images

# docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
wordpress    latest    2fc2a7b04129   2 weeks ago   739MB
mysql        latest    a88c3e85e887   4 weeks ago   632MB
httpd        latest    2776f4da9d55   4 weeks ago   167MB

# docker image rm wordpress
# docker image rm httpd

 

# Check downloaded Docker images (docker images nginx)

 

# docker images nginx
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              b690f5f0a2d5        3 months ago        187MB
nginx               stable              3a8963c304a2        10 months ago       142MB

 

# Install docker-compose (via pip)
# dnf install python3 python3-pip
# pip3 install docker-compose

 

# docker-compose start, stop, and remove commands
# Start (-f: compose file, -d: run in the background)
# docker-compose -f docker-compose.yml up -d

# Stop
# docker-compose -f docker-compose.yml stop

# Remove
# docker-compose -f docker-compose.yml rm
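
The docker-compose.yml referenced by these commands is not shown in the post; as a rough sketch, the earlier WordPress + MySQL example could be written as a compose file like this (service names and passwords are carried over from that example, the file version is an assumption):

# Write a hypothetical docker-compose.yml equivalent to the earlier docker run commands
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  mysql000ex11:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: myrootpass
      MYSQL_DATABASE: wordpress000db
      MYSQL_USER: wordpress000kun
      MYSQL_PASSWORD: wkunpass
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --default-authentication-plugin=mysql_native_password
  wordpress000ex12:
    image: wordpress
    depends_on:
      - mysql000ex11
    ports:
      - "8085:80"
    environment:
      WORDPRESS_DB_HOST: mysql000ex11
      WORDPRESS_DB_NAME: wordpress000db
      WORDPRESS_DB_USER: wordpress000kun
      WORDPRESS_DB_PASSWORD: wkunpass
EOF

# docker-compose -f docker-compose.yml up -d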

 

# Saving and loading Docker images
# Save an image to a tar archive
# docker save -o image.tar image_name

# Load an image from a tar archive
# docker load -i image.tar
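
A common use of save/load is moving an image to a host without registry access; a sketch assuming the nginx image from earlier and a reachable host named otherhost:

# On the source host: export the image and copy it over
# docker save -o nginx.tar nginx:latest
# scp nginx.tar otherhost:/tmp/

# On the target host: import the image
# docker load -i /tmp/nginx.tar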

 

# Enter a running Docker container
# docker exec -it (container id) /bin/bash

 

 

# Stop and remove all containers at once
# Stop all containers at once
# docker stop $(docker ps -qa)

# Remove all containers at once
# docker rm -f $(docker ps -qa)