So I create and destroy Kubernetes clusters on vSphere on a pretty regular basis. Some I build with Terraform and Ansible; for others I use PKS. I have a plumbing test for Pure Service Orchestrator that mounts a single volume to a pod on each node.
Every once in a while I get an error like this, on just one node:
Failed to log in to any iSCSI targets! Will not be able to attach volume
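In my experience this message lands in the events of the pod that's stuck, which also tells you which node to log in to. A quick way to narrow it down (the pod name below is a placeholder; use whichever pod is stuck in ContainerCreating):

kubectl get pods -o wide        # the stuck pod sits in ContainerCreating; the NODE column names the host
kubectl describe pod <stuck-pod>    # the Events section at the bottom shows the attach failure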
To make sure the error isn't coming from PSO itself (and it shouldn't be, since the other nodes are working), run this command on the node that failed:
iscsiadm -m discovery -t st -p 192.168.230.24
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.24,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.25,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.26,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.27,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
192.168.230.24:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.25:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.26:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.27:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
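Those "Could not stat /etc/iscsi/nodes//,3260,-1/default" lines are the real clue: open-iscsi keeps its record of discovered targets as a directory tree under /etc/iscsi/nodes (one directory per target IQN, with a subdirectory per portal), and the empty IQN in that path says the records on this node got mangled. You can eyeball the database directly:

ls -lR /etc/iscsi/nodes    # one directory per target IQN, one subdirectory per portal

On a healthy node you would expect to see the flasharray IQN with a subdirectory for each of the four portal IPs above.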
Now, that isn't the expected result. My first thought was to restart iSCSI, and that didn't help.
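(By restarting iSCSI I mean something along these lines; the exact service names vary by distro, and these are what I'd use on an Ubuntu node.)

systemctl restart iscsid open-iscsi

Then I thought, well, this is a lab, so let's just…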
#cd /etc/iscsi
#rm -r nodes
Do not try this if the node has iSCSI targets for other storage; you will not be happy with the results. At first I thought I would need to stop the iSCSI services before doing this, but it doesn't seem to make any difference. With the stale records gone, every node was able to log in, mount the volume, and start its pod. Pure Service Orchestrator keeps retrying the mount, so it didn't take long to see everything show up the way I wanted:
NAME                                         READY   STATUS    RESTARTS   AGE
pure-flex-4zlcq                              1/1     Running   0          12m
pure-flex-7stfb                              1/1     Running   0          12m
pure-flex-g2kt2                              1/1     Running   0          12m
pure-flex-jg5cz                              1/1     Running   0          12m
pure-flex-n8wkw                              1/1     Running   0          6m34s
pure-flex-rtsv7                              1/1     Running   0          12m
pure-flex-vtph2                              1/1     Running   0          12m
pure-flex-w8x22                              1/1     Running   0          12m
pure-flex-wqr9k                              1/1     Running   0          12m
pure-flex-xwbww                              1/1     Running   0          12m
pure-provisioner-9c8dc9f79-xrq6d             1/1     Running   1          12m
redis-master-demolocal-1-779f74876c-9k24t    1/1     Running   0          12m
redis-master-demolocal-10-6695b56f47-zgqc7   1/1     Running   0          12m
redis-master-demolocal-2-778666b57-5xdh8     1/1     Running   0          6m3s
redis-master-demolocal-3-84848dfb87-fhj6n    1/1     Running   0          12m
redis-master-demolocal-4-7c9dfdffb9-6cjv5    1/1     Running   0          12m
redis-master-demolocal-5-65b555fc79-jjdkl    1/1     Running   0          12m
redis-master-demolocal-6-6d495bfdf-cb5r2     1/1     Running   0          12m
redis-master-demolocal-7-5c5db655-fx2qd      1/1     Running   0          12m
redis-master-demolocal-8-74bc65b8d9-2bt8h    1/1     Running   0          12m
redis-master-demolocal-9-65dd54c587-zb9p2    1/1     Running   0          12m
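One closing thought: if your nodes also carry iSCSI sessions to other storage, blowing away the whole nodes directory is heavier-handed than you want. A gentler sketch, same idea but reversible, is to move the records aside and let discovery rebuild them (the portal IP is the same one from the discovery command earlier):

cd /etc/iscsi
mv nodes nodes.bak                               # keep the old records instead of deleting them
iscsiadm -m discovery -t st -p 192.168.230.24    # rediscover targets; this recreates /etc/iscsi/nodes
iscsiadm -m node --login                         # log back in to the rediscovered targets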