Configure iSCSI target via targetcli on RHEL 7.
Software
Software used in this article:
- Red Hat Enterprise Linux 7.0
- targetcli 2.1.fb34
- iscsi-initiator-utils 6.2.0
Before We Begin
We have 3 VMs available, named ipa, srv1 and srv2. The ipa server, which we set up before, will be configured as an iSCSI target, and srv1 and srv2 will be iSCSI clients.
- iSCSI target provides remote block storage and is called the server,
- iSCSI initiator uses that storage and is called the client.
iSCSI Target Installation
On the IPA server, that is going to act as an iSCSI target, create a volume group with a 100MB logical volume to use for iSCSI:
# vgcreate vg_san /dev/sdb
# lvcreate --name lv_block1 --size 100M vg_san
Install targetcli package and enable the target service to start on boot:
# yum install -y targetcli
# systemctl enable target
Configure firewalld to allow incoming iSCSI traffic on TCP port 3260:
# firewall-cmd --add-port=3260/tcp --permanent
# firewall-cmd --reload
Configure iSCSI Target
Run targetcli to configure iSCSI target:
# targetcli
Our plan for configuring the target is as follows:
- backstore –> block,
- backstore –> fileio,
- iscsi (IQN name),
- iscsi –> tpg1 –> portals,
- iscsi –> tpg1 –> luns,
- iscsi –> tpg1 –> acls.
Create a couple of backstores, block and fileio, with the local file system cache disabled to reduce the risk of data loss:
/> backstores/block create block1 /dev/vg_san/lv_block1 write_back=false
/> backstores/fileio create file1 /root/file1.img size=100M sparse=true write_back=false
Create an IQN (iSCSI Qualified Name).
/> iscsi/ create iqn.2003-01.local.rhce.ipa:target
Created target iqn.2003-01.local.rhce.ipa:target.
Created TPG 1.
On RHEL 7.0 we need to create a portal manually; on RHEL 7.2 the portal configuration is created automatically.
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/portals create 0.0.0.0 ip_port=3260
Create a lun for the fileio backstore:
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/luns create /backstores/fileio/file1
Create two acls for our iSCSI clients (srv1 and srv2), but don't add the previously created lun to srv1, as that lun should only be available to srv2:
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv1 add_mapped_luns=false
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv2
Create a lun for the block backstore; this lun will be available to both servers:
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/luns create /backstores/block/block1
Disable authentication (should be disabled by default anyway):
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1 set attribute authentication=0
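To double-check the current value before moving on, the attribute can also be queried the same way it is set (a quick sanity check; get attribute mirrors set attribute in targetcli):
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1 get attribute authentication
authentication=0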
Optionally, set a userid and a password for CHAP authentication. Navigate to the relevant acl of our target and set them:
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls/iqn.1994-05.com.redhat:srv1/ set auth userid=client password=client
Save the configuration and exit.
/> saveconfig
List the configuration:
/> ls
o- / ......................................................................... [...]
  o- backstores .............................................................. [...]
  | o- block .................................................. [Storage Objects: 1]
  | | o- block1 ............ [/dev/vg_san/lv_block1 (100.0MiB) write-thru activated]
  | o- fileio ................................................. [Storage Objects: 1]
  | | o- file1 ................... [/root/file1.img (100.0MiB) write-thru activated]
  | o- pscsi .................................................. [Storage Objects: 0]
  | o- ramdisk ................................................ [Storage Objects: 0]
  o- iscsi ............................................................ [Targets: 1]
  | o- iqn.2003-01.local.rhce.ipa:target ................................. [TPGs: 1]
  |   o- tpg1 ............................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................... [ACLs: 2]
  |     | o- iqn.1994-05.com.redhat:srv1 .......................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ................................ [lun1 block/block1 (rw)]
  |     | o- iqn.1994-05.com.redhat:srv2 .......................... [Mapped LUNs: 2]
  |     |   o- mapped_lun0 ................................ [lun0 fileio/file1 (rw)]
  |     |   o- mapped_lun1 ................................ [lun1 block/block1 (rw)]
  |     o- luns .......................................................... [LUNs: 2]
  |     | o- lun0 ................................. [fileio/file1 (/root/file1.img)]
  |     | o- lun1 ........................... [block/block1 (/dev/vg_san/lv_block1)]
  |     o- portals .................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................... [OK]
  o- loopback ......................................................... [Targets: 0]
Restart the target and check its status:
# systemctl restart target
# systemctl status target
Configure iSCSI Client (Initiator)
Configuration of an iSCSI initiator requires installation of the iscsi-initiator-utils package, which includes the iscsi and iscsid services and the /etc/iscsi/iscsid.conf and /etc/iscsi/initiatorname.iscsi configuration files.
On the iSCSI clients srv1 and srv2, install the package:
# yum install -y iscsi-initiator-utils
Note that both services are needed on the iSCSI initiator: iscsid is the main service that accesses all configuration files involved, and iscsi is the service that establishes the iSCSI connections.
# systemctl enable iscsi iscsid
Our plan for configuring the client is as follows:
- Configure iSCSI initiatorname,
- Discover targets,
- Log into targets.
Open the file /etc/iscsi/initiatorname.iscsi for editing, and set the initiator's name to iqn.1994-05.com.redhat:srv1.
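On srv1 the file should then contain a single line similar to the one below (use the srv2 name on the second client):
InitiatorName=iqn.1994-05.com.redhat:srv1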
If a username and password were configured on the target, put them into /etc/iscsi/iscsid.conf:
node.session.auth.authmethod = CHAP
node.session.auth.username = client
node.session.auth.password = client
Be advised that CHAP authentication does not use strong encryption for the passing of credentials. If security of iSCSI data is a requirement, controlling the network side of the protocol is a better way to assure it. For example, using isolated VLANs to pass the iSCSI traffic is a better implementation from a security point of view.
Discover targets (the ipa server is on 10.8.8.70):
# iscsiadm -m discovery -t sendtargets -p 10.8.8.70:3260
10.8.8.70:3260,1 iqn.2003-01.local.rhce.ipa:target
# iscsiadm -m discovery -P1
SENDTARGETS:
DiscoveryAddress: 10.8.8.70,3260
Target: iqn.2003-01.local.rhce.ipa:target
    Portal: 10.8.8.70:3260,1
        Iface Name: default
iSNS:
No targets found.
STATIC:
No targets found.
FIRMWARE:
No targets found.
Log into the discovered target:
# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 --login
Check the session:
# iscsiadm -m session -P3 | less
An iSCSI disk should be available at this point. Note that the server srv2 will see both the iSCSI block disk block1 and the fileio disk file1, since both are mapped to it. This will not be the case for the server srv1.
[srv1]# lsblk --scsi|grep LIO
sdb  3:0:0:0  disk LIO-ORG  block1  4.0  iscsi
Create a filesystem:
[srv1]# mkfs.ext4 -m0 /dev/sdb
Create a mount point and get UUID:
[srv1]# mkdir /mnt/block1
[srv1]# blkid | grep sdb
/dev/sdb: UUID="6a1c44d0-3e2f-49fc-85ba-ced3e44bb5b0" TYPE="ext4"
Add the following to /etc/fstab:
UUID=6a1c44d0-3e2f-49fc-85ba-ced3e44bb5b0 /mnt/block1 ext4 _netdev 0 0
Mount the iSCSI drive:
[srv1]# mount /mnt/block1
We can log out or delete the session this way:
# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 --logout
# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 -o delete
If things go wrong, we can stop the iscsi service and remove all files under /var/lib/iscsi/nodes to clean up the current configuration. After doing that, we need to start the iscsi service again and repeat the discovery and login.
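A rough sketch of that cleanup sequence on a client, using the target and portal from this article:
# systemctl stop iscsi
# rm -rf /var/lib/iscsi/nodes/*
# systemctl start iscsi
# iscsiadm -m discovery -t sendtargets -p 10.8.8.70:3260
# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 --login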
# systemctl stop target
# lvextend -L +200M -r /dev/vgsan/lvsan1
# systemctl start target
It keeps the old size even after rebooting the server.
Remounting it on the initiator or re-logging in after clearing /var/lib/iscsi/nodes makes the device disappear entirely (yes, the UUID was updated in /etc/fstab after the re-formatting).
I think that you should perform rediscovery
/dev/sdb will be already attached to the server, right?
If you are asking about my test lab, then yes.
May not be the case for the real exam….
The client takes a very long time to unmount the iSCSI disks, is that normal? Am I supposed to do something about it?
I am thinking of adding a script to stop the iscsi service before shutdown. Is that the right thing to do for the exam?
It takes a second for iSCSI disks to be unmounted in my test lab. Therefore I believe it should not take much time.
In terms of scripting, I’d only write one if there is an exam task to do so.
Hi Tomas,
I ran this command and my screen showed this error. Should I remove "write_back=false"?
/> backstores/block create block1 /dev/vg_san/lv_block1 write_back=false
Unexpected keyword parameter ‘write_back’.
Hi Tomas,
I couldn't log into the discovered target on srv2, but on srv1 I logged into the discovered target successfully. When I tried to log in on srv2, I got this error:
[root@sqllinux2 ~]# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.0.0.130:3260 --login
Logging in to [iface: default, target: iqn.2003-01.local.rhce.ipa:target, portal: 10.0.0.130,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2003-01.local.rhce.ipa:target, portal: 10.0.0.130,3260].
iscsiadm: initiator reported error (24 – iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
[root@sqllinux2 ~]# iscsiadm -m session -P3 | less
iscsiadm: No active sessions.
Can I log into the discovered target on srv2? If yes, please show me the way.
Thanks and regards!
Looks like an authentication issue to me. Ensure that the iSCSI target is configured to allow srv2 to log in, verify iSCSI initiator name, make sure it matches.
Check what options are available for block creation on the system that you use; if there is no write_back, then remove it.
Everything matches, I've checked it 4-5 times. srv1 can log in, but srv2 cannot. Can I create an active-active cluster with those servers as in your post?
https://www.lisenet.com/2016/activeactive-high-availability-pacemaker-cluster-with-gfs2-and-iscsi-shared-storage-on-centos-7/#comment-973
Any chance you had a different initiator name on the srv2 that tried to access the target? It may be cached, try clearing it.
You should be able to create it, yes, assuming the iSCSI target is configured and working properly.
Oh God, I made it work properly. My last question in this post is: when I ran the commands mkfs.ext4 -m0 /dev/sdb -> mkdir /mnt/block1 -> … -> mount /mnt/block1 on srv1, the mounted /mnt/block1 can only be seen on srv1, not on srv2. Why? By definition, iSCSI is shared storage.
Thank you so much for this topic
Have a nice day !
You’re welcome. How did you make it work? What was the problem after all?
This is problem :(
[root@linux1 ~]# mkdir /mnt/block1
[root@linux1 ~]# blkid | grep sdb
/dev/sdb: UUID="fa121996-eb2e-4f56-a0a1-91c97e4cef0f" TYPE="ext4"
[root@linux1 ~]# vi /etc/fstab
[root@linux1 ~]# mount /mnt/block1
[root@linux1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 17G 1.8G 16G 11% /
devtmpfs 902M 0 902M 0% /dev
tmpfs 912M 0 912M 0% /dev/shm
tmpfs 912M 8.7M 904M 1% /run
tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda1 1014M 167M 848M 17% /boot
tmpfs 183M 0 183M 0% /run/user/0
/dev/sdb 35G 49M 35G 1% /mnt/block1
[root@linux1 ~]# cat /proc/partitions
major minor #blocks name
8 0 20971520 sda
8 1 1048576 sda1
8 2 19921920 sda2
11 0 1048575 sr0
253 0 17821696 dm-0
253 1 2097152 dm-1
8 16 36700160 sdb
[root@linux2 ~]# lsblk --scsi|grep LIO
sdb 3:0:0:0 disk LIO-ORG file1 4.0 iscsi
sdc 3:0:0:1 disk LIO-ORG block1 4.0 iscsi
[root@linux2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 17G 1.8G 16G 11% /
devtmpfs 902M 0 902M 0% /dev
tmpfs 912M 0 912M 0% /dev/shm
tmpfs 912M 8.6M 904M 1% /run
tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda1 1014M 167M 848M 17% /boot
tmpfs 183M 0 183M 0% /run/user/0
[root@linux2 ~]# cat /proc/partitions
major minor #blocks name
8 0 20971520 sda
8 1 1048576 sda1
8 2 19921920 sda2
11 0 1048575 sr0
253 0 17821696 dm-0
253 1 2097152 dm-1
8 16 36700160 sdb
8 32 36700160 sdc
Why can the mount point of /dev/sdb be seen on linux1 but not on linux2? When I run "cat /proc/partitions", I still see the sdb disk on both nodes.
The backstore file1 is mapped to one server only. Check the target configuration.
OK, how can I see /dev/sdb on both nodes?
As per my previous reply, the backstore file1 is mapped to one server only. Go to the target, map the lun to both servers (srv1 and srv2), and you will be able to see /dev/sdb on both clients, and both clients will be able to mount it. I tested it, and all worked fine.
[root@linux1 block1]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 17G 1.7G 16G 10% /
devtmpfs 902M 0 902M 0% /dev
tmpfs 912M 0 912M 0% /dev/shm
tmpfs 912M 8.6M 904M 1% /run
tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda1 1014M 184M 831M 19% /boot
tmpfs 183M 0 183M 0% /run/user/0
/dev/sdb 35G 49M 35G 1% /mnt/block1
[root@linux2 block1]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 17G 1.7G 16G 10% /
devtmpfs 902M 0 902M 0% /dev
tmpfs 912M 0 912M 0% /dev/shm
tmpfs 912M 8.7M 904M 1% /run
tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda1 1014M 184M 831M 19% /boot
tmpfs 183M 0 183M 0% /run/user/0
/dev/sdb 35G 326M 34G 1% /mnt/block1
I can mount /dev/sdb, but the used size of /dev/sdb on srv1 is different than on srv2; they're not the same. Why? I thought they use that /dev/sdb together, so the used size of /dev/sdb should be similar on both nodes?
They aren't the same (well, they should be the same when you mount them initially) because you formatted /dev/sdb as ext4. As soon as you make changes to the disk /dev/sdb on the server srv1, the other server srv2 has no awareness of that, and you will eventually lose data if you also make changes from srv2. Ext4 is not a shared-disk filesystem. Use GFS2 if you want to have both servers writing to the same iSCSI disk.
Tomas, what does "sparse=true" do?
Hi Alex, this parameter creates a thin provisioned LUN. Thin provisioning is called “sparse volumes” sometimes.
The idea here is to allocate blocks of data on-demand rather than allocating all the blocks in advance.
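For example, on the target you can compare the backing file's apparent size with the space it actually consumes on disk (using the file1.img path from this article):
# ls -lh /root/file1.img
# du -h /root/file1.img
The first command reports the full 100M apparent size, while the second reports only the blocks that have actually been allocated so far.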
Yesterday in the RHCE exam I forgot to log out of the iSCSI session, and before I could check with a reboot my time was over and the system automatically powered off. Will the iscsiadm login create any problem, and will I pass or fail? Please help me friends, I am very nervous.
You should be fine, don’t worry too much.
Dears,
I do not understand …
Why should I logout the iscsiadm??!!!
You don’t have to.
It’s useful to know though, basic knowledge of the tool, might come in handy one day.
Tomas,
Should we really enable both iscsi and iscsid on client ?
I had an issue on my client VM after a reboot: it failed to connect to the target when I had iscsid enabled, but it worked fine after I disabled iscsid on startup.
— Read more iscsi vs iscsid services at
https://unix.stackexchange.com/questions/216239/iscsi-vs-iscsid-services
It says only enable/start iscsi and not iscsid
I enable both, but feel free to enable the iscsi only if that’s the way it works on your system.
You are right,
You do not need to enable iscsid; you will only touch (restart) this service if you change the IQN in the initiator config file.
Thanks, I’m glad you figured that out. I appreciate your input.
Hi Tomas,
Great guide, but apparently CentOS7/RHEL7 has made some pretty big changes when it comes to LVM filtering of PVs/VGs, which can cause some issues when trying to do a reboot on the target.
You can basically recreate this issue on a CentOS7/RHEL7 machine. Go through the process as normal, then reboot the target (it should be noted that I properly unmounted all drives from the initiator, and was able to reboot the client/initiator multiple times and mount everything from fstab; it's only when the target gets rebooted that everything gets weird):
You’ll go from this:
o- iscsi …………………………………………………. [Targets: 1]
o- iqn.2017-05.com.mylabserver:t1 ……………………………. [TPGs: 1]
o- tpg1 ……………………………………… [no-gen-acls, no-auth]
o- acls ……………………………………………….. [ACLs: 1]
| o- iqn.2017-05.com.mylabserver:client …………….. [Mapped LUNs: 1]
| o- mapped_lun0 ………………………….. [lun0 block/test (rw)]
o- luns ……………………………………………….. [LUNs: 1]
| o- lun0 ………………………………… [block/test (/dev/xvdg)]
o- portals ………………………………………….. [Portals: 1]
o- 0.0.0.0:3260 …………………………………………… [OK]
To, after a reboot, this:
o- backstores ……………………………………………………………………………………………….. […]
| o- block …………………………………………………………………………………….. [Storage Objects: 0]
| o- fileio ……………………………………………………………………………………. [Storage Objects: 0]
| o- pscsi …………………………………………………………………………………….. [Storage Objects: 0]
| o- ramdisk …………………………………………………………………………………… [Storage Objects: 0]
o- iscsi ……………………………………………………………………………………………… [Targets: 1]
| o- iqn.2017-05.com.mylabserver:t1 ………………………………………………………………………… [TPGs: 1]
| o- tpg1 ………………………………………………………………………………….. [no-gen-acls, no-auth]
| o- acls ……………………………………………………………………………………………. [ACLs: 1]
| | o- iqn.2017-05.com.mylabserver:client …………………………………………………………. [Mapped LUNs: 0]
| o- luns ……………………………………………………………………………………………. [LUNs: 0]
| o- portals ………………………………………………………………………………………. [Portals: 1]
| o- 0.0.0.0:3260 ……………………………………………………………………………………….. [OK]
o- loopback …………………………………………………………………………………………… [Targets: 0]
Doing a quick systemctl check shows:
[root@iscsi-target]# systemctl status target -l
● target.service – Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-05-03 23:06:05 EDT; 47min ago
Process: 900 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: 900 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/target.service
May 03 23:06:05 isci-target systemd[1]: Starting Restore LIO kernel target configuration…
May 03 23:06:05 isci-target target[900]: Could not create StorageObject test: Cannot configure StorageObject because device /dev/xvdg is already in use, skipped
May 03 23:06:05 isci-target target[900]: Could not find matching StorageObject for LUN 0, skipped
May 03 23:06:05 isci-target target[900]: Could not find matching TPG LUN 0 for MappedLUN 0, skipped
The LUNs, backstores, EVERYTHING related to the drives is gone from the configuration. Any changes that the client made before the reboot (e.g. If he/she made LVMs, partitions, swaps, etc) will be available on the target; however, unreachable by the initiator (As the client, I see no drives being served from the ISCSI target — only what’s locally available to the initiator/client system).
Here’s some others with this issue, but no one really has a solution: https://serverfault.com/questions/837827/centos7-targetcli-configuration-lost-after-server-reboot
There’s a comment on CertDepot’s ISCSI guide about LVM filtering (https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/#comment-41214), but my own Googling only came up with this man page
centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html
I’ll see if I have to do this BEFORE the reboot; however, trying this afterwards, hasn’t really helped.
This has me worried, especially for the test. If the person grading the test does a reboot, then essentially I would fail this section because he/she would see no drives from the initiator perspective.
Any thoughts?
Thanks for all the details, much appreciated! Unfortunately I cannot advise much I’m afraid as I never encountered this problem. Which dot version of RHEL 7 do you have the issue on?
LVM filtering does look like a sensible approach; I tend to exclude /dev/cdrom by default.
Hi,
Got the same problem. It is known issue, but closed as NOTABUG https://bugzilla.redhat.com/show_bug.cgi?id=1139441
It is required to set global filter on the Target host to exclude iSCSI disks from opening by LVM. See https://bugzilla.redhat.com/show_bug.cgi?id=1139441#c8 for details.
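For reference, a minimal sketch of such a filter on the target host, assuming the backing logical volume used in this article (adjust the pattern to whatever device backs your LUN; lvm.conf changes may also need an initramfs rebuild to take effect at boot):
# /etc/lvm/lvm.conf (devices section) on the iSCSI target
global_filter = [ "r|^/dev/vg_san/lv_block1$|" ]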
Did anyone reach a consensus on how to share nested logical volumes with iSCSI? I see it asked in a few practice exams, including the one here. I get the same results as the above. I don't want to fail if this question is asked.
Adding global_filter = ["r|^/dev/vg0|"] to /etc/lvm/lvm.conf didn't resolve it for me. And even so, I don't think I would remember that easily for the exam.
I’m on CentOS Linux release 7.6.1810
Hi, great tutorial. My devices are not mounted on boot. After boot, I have to manually execute the iscsiadm ……. --login and then mount -all for them to mount. The services iscsi and iscsid are set to start at boot. Any ideas as to why this is happening?
Anything in the system log after a boot? If the services tried to mount devices, there should be logs available to help you troubleshoot.
Hello, everyone.
Is it normal that if I start the initiator before the target, iscsi fails and I have to restart the iscsi service after the target is loaded? Or is that an abnormal situation, and iscsi should automatically see the target and mount the remote disk?
Thanks.
We have 3 servers
server1 ( main server )
server2 ( client )
server3 ( client )
How do I make the target available only to server2 and not to server3?
You can use ACLs for your iSCSI clients, or you can use firewall rules.
The blog post contains an example of using ACLs.
By the way, don't forget to add the '_netdev' option in fstab, otherwise the system may not boot.
Does creation order matter?
I have followed this scheme:
iscsi –> tpg1 –> acls.
iscsi –> tpg1 –> luns,
iscsi –> tpg1 –> portals,
With this scheme, I created the ACLs first:
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv1 add_mapped_luns=false
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv2
Both the LUNs are mapped to both ACLs once they are created, and it does not seem to restrict access to one server as described in this post.
also:
Does naming convention matter ??
iqn.1994-05.com.redhat:srv2
com.redhat:srv2 this has to match with the client/server name ???
or it can be any arbitrary name following convention
e.g.
iqn.1994-05.test.example:testserver
note: example.test does not exist
Instructions written in the article do work for the software versions mentioned at the top of the post, and as can be seen from the output of the iSCSI configuration, the ACL for iqn.1994-05.com.redhat:srv1 has one LUN mapped whereas iqn.1994-05.com.redhat:srv2 has two LUNs.
With regards to naming convention, when you create an ACL, you must provide the initiator name of your client (whatever you defined in the file /etc/iscsi/initiatorname.iscsi). You don't have to use com.redhat and can use test.example if you like.
This is for information purposes only.
I have configured iSCSI on targetcli-2.1.fb46-1.el7.noarch.
With this version, if I follow this method:
iscsi –> tpg1 –> acls.
iscsi –> tpg1 –> luns,
iscsi –> tpg1 –> portals,
It does not work well.
But this sequence works fine:
iscsi –> tpg1 –> portals,
iscsi –> tpg1 –> luns,
iscsi –> tpg1 –> acls.
The default behaviour is to automatically add existing LUNs to new ACLs, and new LUNs to existing ACLs. You can use auto_add_mapped_luns to change this.
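For reference, that global preference can be changed from within targetcli, something along these lines (a sketch; check what set global supports in your targetcli version):
/> set global auto_add_mapped_luns=false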
Is there any way within targetcli to restrict access for a certain ACL to a certain host?
acl1 should not be accessible by host1.
acl2 should only be accessible by host2.
Or do we need to use firewalld for the restriction?
Thanks
There is, you do that by specifying a client when creating an ACL, e.g.:
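/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1/acls create iqn.1994-05.com.redhat:srv2
That is the same ACL command used earlier in the article; only the client whose initiator name matches the one given in the ACL will be able to access the LUNs mapped to it.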
this is what i asked earlier :)
“”
Does naming convention matter ??
iqn.1994-05.com.redhat:srv2
com.redhat:srv2 this has to match with the client/server name ???
or it can be any arbitrary name following convention
e.g.
iqn.1994-05.test.example:testserver
note: example.test does not exist “””
so to summarize.
if i use
acls> create iqn.1994-05.com.redhat:srv2
this target will be accessible to srv2.redhat.com, right??
Yes, and I answered that you can use any name, and not just com.redhat. The key thing is to use the client's initiator name when creating an ACL. As long as you do that, you can use any name. The target will be accessible to the client that has the initiator name of iqn.1994-05.com.redhat:srv2. You however still need to export some LUNs, as otherwise the client won't be able to use any. Having said that, you can create multiple ACLs to control the resources each client (initiator) has access to.
system1: system1.mydomain.com
system2: system2.mydomain.com
I created this target on system1.mydomain.com and client is system2.mydomain.com.
ideally iqn.2015-12.com.mydomain.district10:acl1 should not have access on system2, but i am able to access it.
Can you please guide me on what's wrong with my configuration?
I am trying to create a target that should only be accessible on system2.
I’m looking at the configuration that you’ve posted, and it shows that both initiators iqn.2015-12.com.mydomain.district10:acl1 and iqn.2015-12.com.mydomain:system2 have access to the same LUN.
What do you want to achieve here? You said that “ideally iqn.2015-12.com.mydomain.district10:acl1 should not have access on system2, but i am able to access it”, which is confusing since both are clients and neither is the target.
If you want to create a target that should only be accessible by system2, then simply remove access for iqn.2015-12.com.mydomain.district10:acl1.
This does not work for me. I can't guess the problem so far.
Server:
Client other than system2: { this system should not have access to this target }
This target "iqn.2015-12.com.mydomain.district10:system1" should only be allowed on system2???
Can the client1 see any LUNs? It does not have an ACL created therefore no LUNs should be available to the client1.
[root@client1 ~]# lsblk --scsi|grep LIO
sdb 3:0:0:0 disk LIO-ORG file1 4.0 iscsi
Please post the following from the client1:
I will recreate this setup and check for errors .
[root@client1 ~]# cat /etc/iscsi/initiatorname.iscsi
#InitiatorName=iqn.1994-05.com.redhat:c7d55594d829
InitiatorName=iqn.2015-12.com.mydomain:system2
This initiator is allowed to access the target, that’s why you see the LUN.
So what changes will be required to achieve this task?
This target "iqn.2015-12.com.mydomain.district10:system1" should only be allowed on system2???
When you create an ACL on the target, add the initiator name of the client you want to allow access from (in your case system2), and make sure that the client has the initiator name which you configured on the target ACL, as otherwise it won’t work.
Hi Tomas,
I would really appreciate it if you could share your phone number. I really would like to talk with you regarding teaming and IPv6, which crashed my whole networking setup and made me fail the RHCE last week. I'm scared to rebook the test even though I know almost all the content.
Thanks!!
Hi Santosh, feel free to post any questions/issues that you have on topic-related articles so that other people can also benefit.
Hi Tomas,
Do you have any experience with an iscsi-initiator issue where, after a server (initiator) reboot, the iSCSI partition (LVM) is simply gone… no errors, nothing :) … and even when you try --login again it says successful, but there are no partitions from the iSCSI server.
here is the all info:
[root@server2 ~]# iscsiadm -m node -T iqn.2017-12.com.example:disk1 -p 50.0.0.60:3260 --login
Logging in to [iface: default, target: iqn.2017-12.com.example:disk1, portal: 50.0.0.60,3260] (multiple)
Login to [iface: default, target: iqn.2017-12.com.example:disk1, portal: 50.0.0.60,3260] successful.
[root@server2 ~]# iscsiadm -m session -P 0
tcp: [3] 50.0.0.60:3260,1 iqn.2017-12.com.example:disk1 (non-flash)
[root@server2 ~]# iscsiadm -m session -P 1
Target: iqn.2017-12.com.example:disk1 (non-flash)
Current Portal: 50.0.0.60:3260,1
Persistent Portal: 50.0.0.60:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:2f1c2ee011be
Iface IPaddress: 50.0.0.61
Iface HWaddress:
Iface Netdev:
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
[root@server2 ~]#
FSTAB output
# Accessible filesystems, by reference, are maintained under ‘/dev/disk’
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=a78bb152-e525-4f0e-961a-bf6147ac7d3e / xfs defaults 1 1
/dev/mapper/iscsi-iscsilv /san ext4 _netdev 0 0
[root@server2 ~]# mount -a
mount: special device /dev/mapper/iscsi-iscsilv does not exist
[root@server2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 40G 0 disk
└─vda1 252:1 0 40G 0 part /
any idea?
Thanks in advance
I didn’t encounter this problem I’m afraid.
It would help if you posted the iSCSI target configuration from server1 as well as the content of the file /etc/iscsi/initiatorname.iscsi from server2.
I'm having exactly the same issue on my client server after reboot, but my scenario is a little different here. The mount -a command doesn't complain and mounts successfully, and df -h shows the correct mount point. After seeing the correct mount point, I logged out of the session and rebooted the client server, then rebooted the target server, then ran df -h, which shows the disk is not mounted. What I'm wondering about is that I can still go into that directory and create files. Is this expected on the exam? Please see below and advise. Other things went smoothly except this issue.
df -h
mount /mnt/iscsi
mount: special device /dev/sdd1 does not exist
cd /mnt/iscsi
[root@server1 iscsi]# touch yoyo
[root@server1 iscsi]# ls
yoyo
In your case, check if sdd1 is reported in /proc/mounts. If it isn't, then you're writing to a local folder and not the mountpoint.
I made a silly mistake in fstab. /proc/mounts didn't show any /dev/sdd1; I saw only /dev/sdc1 in the end, so I mounted it under /mnt/iscsi, then rebooted and waited for luck…….and… and…..it was there….. Thanks a million!!!
No worries.
One of the good hacks at the beginning of the exam is to do ls /usr/lib/systemd/system > ~/diff1 and, after installing some packages that bring services with them, execute ls /usr/lib/systemd/system > ~/diff2; after that we can do diff ~/diff1 ~/diff2 and see what was installed. This way we do not have to memorize all the service names :) cheers
Thanks, this is helpful. Although if you’re setting up an iscsi target, it’s not difficult to remember services “iscsi”, “iscsid” and “target”.
Hi Tomas,
How are you? Hope you are good and doing fine. I need some assistance; I configured the target server with the following configuration.
From the client initiators I discovered and logged in successfully and added the entries in fstab:
"/dev/sdb1 /mnt xfs rw,seclabel,relatime,attr2,inode64,noquota,_netdev 0 0", and then to check persistence I restarted the clients serv1 and serv2. Both hung with the following message:
https://imgur.com/3Q8ZvTI
I have to manually restart the server, and on reboot the initiator doesn't automatically log in, hence the device also doesn't mount.
Target is running on Centos 7.2 and Clients are running on Centos 7.0
@Tomas I'm getting this error: https://i.imgur.com/dp6uoFK.png . Any ideas? I heard it's a bug in RHEL 7.0, and I'm sure the exam will be on RHEL 7.0, which means I will have the same issue in the exam. Any ideas how to solve this? I searched on the internet but found no good results.
What's /oldroot? It's not clear what the actual problem is.
Even I don't know what /oldroot is. I don't even have any such mount point. When I try to restart the client it just hangs and this message comes up, so I have to manually restart the machine. And after the system starts, the new LUN doesn't mount.
I got the same issue during the exam
It's a bug; the solution/workaround is to add the following line to /usr/lib/systemd/system/iscsid.service:
[Service]
Type=forking
PIDFile=/var/run/iscsid.pid
ExecStart=/usr/sbin/iscsid
ExecStartPost=/usr/bin/sleep 2 <----- add this line
ExecStop=/sbin/iscsiadm -k 0 2
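After editing the unit file, reload systemd and restart the service so the change takes effect:
# systemctl daemon-reload
# systemctl restart iscsid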
Hello Tomas and every one,
I have an iSCSI auto-login issue after reboot on RHEL 7 and need to log into the session manually to get my data from /iscsi. How can I make iSCSI log in automatically? Please help me with this. Thanks!
1. My fstab entry was: /dev/sdc1 /iscsi xfs nofail,_netdev 0 0 and I changed it to
UUID-XXXXXXXX-XXXX-XXXX-XXXXXXXX /iscsi xfs nofail,_netdev 0 0, but it didn't work and I got a device-not-found issue. lsblk --scsi shows sdc as iscsi, not sdc1.
2. I checked /etc/iscsi/iscsid.conf and verified node.startup is as: node.startup = automatic
It should start automatically if you have node.startup set to automatic.
Is the iscsi service running? What error do you get? Failure on logging in? Can you see iscsi node and session details?
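If a node record happens to be set to manual startup, it can also be switched per node with iscsiadm, for example (using the target and portal from this article):
# iscsiadm -m node -T iqn.2003-01.local.rhce.ipa:target -p 10.8.8.70:3260 -o update -n node.startup -v automatic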
Hello Tomas and Santosh and everyone,
I took the RHCE exam today and faced a similar iSCSI issue to the one Santosh described.
Was that issue related to the network or not? I know nothing… So if you have any ideas or suggestions, please share them with me. Thanks in advance.
P.S. Tomas many thanks for your brilliant job on the site. I and everyone who prepares to take the RHCE exam appreciate it.
You’re welcome, but please refrain from discussing exam content on this platform.
Hi, Tomas
By default the authentication attribute is set to 0 [ you have already mentioned that]
/> iscsi/iqn.2003-01.local.rhce.ipa:target/tpg1 set attribute authentication=0
If setting up CHAP, do we need to set it to 1?
I have tested both 1 and 0, cleared the connection data, and it seems it doesn't make a difference.
Regards
Lingjie
Just a summary of what I found, in case anyone else is also curious about CHAP.
When setting attribute authentication=0, the target will work with both the CHAP and None methods, so if iscsid doesn't provide authentication, the login will also succeed.
/iscsi/iqn.20…6832068c/tpg1> get parameter AuthMethod
AuthMethod=CHAP,None
set attribute authentication=1 will enforce CHAP; when the client does not provide auth, the login will fail with an authentication error.
Hi,
I had a problem in the RHCE exam yesterday with the iSCSI target task, in which I was asked to create a logical volume for the iSCSI target, but I wasn't able to find any devices in the list using the lsblk command on the server. As far as I know, a block device is needed to create an iSCSI logical volume. Thus, I failed the iSCSI task. Was the exam server configured correctly with a block device, or did I have to use fileio in the root VG?
Have you checked for free space in a volume group? You can create a new logical volume to accomplish the task.
I didn't check. But when I tried to create a new 3GB logical volume iscsi_data in VHG (that was the volume group's name), there was an error that there is not enough space in the volume group. I don't remember the message exactly, but that was the sense of it.
Before creating a logical volume, check the amount of free space that is available in the volume group. The error message suggests that you were trying to create a logical volume that exceeded the amount of free space available in the volume group.
Would it be an exam error if I created a file image store on an existing logical volume (for example the root LV) as a file device (fileio) instead of creating a logical volume, in case I can't find free space?
I don’t know, it depends on what you are asked to do in particular.
Exactly what I got, but I messed up by trying lvresize, sigh…..
Hi Tomas,
I have an issue on the initiator: after the reboot it doesn't come up and shows the error:
(776.98876 connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 429324234, last ping 42954231, now 42965434232)
I'm sure about the target config; on the initiator I start iscsi, then I'm able to discover, log in, and mount sdb correctly using mount -a.
in fstab : UUID”” /iscsi xfs _netdev 0 0
So everything is working fine until I reboot… I realized that this happens when I'm using a team interface (active backup).
So I tried with a single interface and there are no issues.
any idea ??
thanks
What virtualisation platform do you use? There might be a problem with a MAC address.
I'm using VirtualBox, and I created the team interface using the same steps from the teaming page of this site, which are all correct.
Thanks, have you enabled promiscuous mode?
hi, im using Virtualbox. thanks
im not sure how to do that, could you please clarify ?
thanks for the help
Hi again, actually I did that too now, but still the same ping timeout error.
I did it this way to make it boot persistent:
vi /etc/systemd/system/promisc.service
[Unit]
Description=Bring up an interface in promiscuous mode during boot
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set enp0s8 promisc on
ExecStart=/usr/sbin/ip link set enp0s9 promisc on
TimeoutStartSec=0
RemainAfterExit=yes
[Install]
WantedBy=default.target
systemctl enable promisc.service
I’d suggest setting up tcpdump/wireshark for packet capture to see what’s causing the problem.
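Something along these lines on the initiator would be a starting point (the interface name is taken from the unit file above and is just an example):
# tcpdump -i enp0s8 -w /tmp/iscsi.pcap port 3260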