An Easy Way to Configure an OpenShift OKD (3.11) Cluster on CentOS
In my demo environment, I am going to build an OpenShift cluster -- one master node (master/infra/compute node) and two compute nodes. Any comments are welcome!
System requirements to configure the environment:
Master node: 4 vCPU / 32 GB memory
Compute node: 2 vCPU / 16 GB memory
The following steps (1-4) will be done on all nodes as root
1> Create a user on all NODES (any user account you prefer)
useradd origin
passwd origin
2> Grant passwordless sudo access to this user on all NODES
echo -e 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/openshift
chmod 440 /etc/sudoers.d/openshift
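To confirm that passwordless sudo works before going further, a quick check like this (still as root) should print OK without prompting for a password:
su - origin -c 'sudo -n true && echo OK'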
3> Open the firewall for SSH on all NODES
firewall-cmd --add-service=ssh --permanent
firewall-cmd --reload
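You can verify the rule was saved to the permanent configuration with:
firewall-cmd --permanent --list-services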
4> Install the OpenShift Origin 3.11 repository, EPEL, Docker, Git, and pyOpenSSL on all NODES
yum -y install centos-release-openshift-origin311 epel-release docker git pyOpenSSL
systemctl start docker
systemctl enable docker
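Since the deploy playbook runs a Docker storage check later (see step 8), it is worth confirming that Docker is running and noting which storage driver it chose, for example:
systemctl is-active docker
docker info | grep -i 'storage driver'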
Please make a note: the following steps are done only on the master node, as the user account you created.
5> Configure an SSH key pair and enable passwordless SSH to each node as the user
ssh-keygen -q -N ""
vi ~/.ssh/config
# create new (define each node)
Host master
    Hostname master
    User origin
Host node01
    Hostname node01
    User origin
Host node02
    Hostname node02
    User origin
(Please note: master/node01/node02 must be able to resolve and reach each other, either through DNS or through entries in /etc/hosts on every node.)
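For example, if DNS is not available, entries like the following could be added to /etc/hosts on every node (the IP addresses are placeholders from my lab network; replace them with your own):
192.168.1.10  master.eteck.com  master    # example IP, adjust to your environment
192.168.1.11  node01.eteck.com  node01    # example IP, adjust to your environment
192.168.1.12  node02.eteck.com  node02    # example IP, adjust to your environment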
chmod 600 ~/.ssh/config
ssh-copy-id node01
ssh-copy-id node02
ssh-copy-id master
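At this point it is worth confirming that passwordless SSH works to every host; a quick loop like this should print each hostname without asking for a password:
for h in master node01 node02; do ssh $h hostname; done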
6> Install and configure the Ansible playbooks for the OpenShift cluster as the user
sudo yum -y install openshift-ansible
sudo vi /etc/ansible/hosts
# add the following to the end
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
# admin user created in previous section
ansible_ssh_user=origin
ansible_become=true
openshift_deployment_type=origin
# use HTPasswd for authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# define default sub-domain for Master node
openshift_master_default_subdomain=apps.eteck.com
# allow unencrypted connection within cluster
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
master.eteck.com openshift_schedulable=true containerized=false

[etcd]
master.eteck.com

[nodes]
# defined values for [openshift_node_group_name] in the file below
# [/usr/share/ansible/openshift-ansible/roles/openshift_facts/defaults/main.yml]
master.eteck.com openshift_node_group_name='node-config-master-infra'
node01.eteck.com openshift_node_group_name='node-config-compute'
node02.eteck.com openshift_node_group_name='node-config-compute'
# if you'd like to separate Master node feature and Infra node feature, set like follows
# master.eteck.com openshift_node_group_name='node-config-master'
# node01.eteck.com openshift_node_group_name='node-config-compute'
# node02.eteck.com openshift_node_group_name='node-config-infra'
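Before running the playbooks, you can sanity-check the inventory and connectivity with Ansible's ping module (the prompts mirror the playbook commands below):
ansible OSEv3 -m ping --ask-pass --ask-become-pass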
Please make a note: apps.eteck.com is the wildcard sub-domain for applications, and all hostnames under this sub-domain must resolve to the IP address of the master node.
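If you run your own DNS, a wildcard record pointing at the master covers this; as a sketch, with dnsmasq it could be a single line like the following:
# example IP below, replace with the master's real address
address=/apps.eteck.com/192.168.1.10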
7> Run the Prerequisites Playbook
ansible-playbook --ask-pass --ask-sudo-pass /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
8> Run the Deploy Cluster Playbook
ansible-playbook --ask-pass --ask-sudo-pass /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
(Please note: I did not use the default filesystem; I used ext4, which did not pass the Docker storage check, so I added "-e openshift_disable_check=docker_storage" to skip that check.) Here is the command:
ansible-playbook --ask-pass --ask-sudo-pass /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml -e openshift_disable_check=docker_storage
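If other pre-flight checks fail in a small lab (for example the memory or disk checks, given the sizing above), openshift_disable_check accepts a comma-separated list of check names, e.g.:
ansible-playbook --ask-pass --ask-sudo-pass /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml -e openshift_disable_check=docker_storage,memory_availability,disk_availability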
Verify after the deployment is done:
[origin@osmaster ~]$ oc get nodes
NAME       STATUS    ROLES          AGE       VERSION
osmaster   Ready     infra,master   3h        v1.11.0+d4cacc0
osnode01   Ready     compute        3h        v1.11.0+d4cacc0
osnode02   Ready     compute        3h        v1.11.0+d4cacc0
[origin@osmaster ~]$ oc get nodes --show-labels=true
NAME       STATUS    ROLES          AGE       VERSION           LABELS
osmaster   Ready     infra,master   3h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=osmaster,node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true
osnode01   Ready     compute        3h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=osnode01,node-role.kubernetes.io/compute=true
osnode02   Ready     compute        3h        v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=osnode02,node-role.kubernetes.io/compute=true
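Because the inventory uses the HTPasswd identity provider, a first user also has to be created before anyone can log in through the console or oc. Assuming the default htpasswd file location (/etc/origin/master/htpasswd; adjust if you set openshift_master_htpasswd_file) and an example user named admin, it could look like this on the master:
sudo htpasswd -c /etc/origin/master/htpasswd admin   # "admin" is only an example user name
oc login -u admin https://master.eteck.com:8443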