ICU ideas regarding Feilong / Cloud Connector
Johan Schelling
Hello all,
Just sharing here the story I already shared with Ji Chen and Mike about our findings when using the Feilong / Cloud Connector tool and the changes that we would like to propose / build.
A customer of ICU has decided to buy 2 LinuxONE Rockhoppers for consolidation of their Oracle databases. ICU is involved in setting up the system, defining the LPARs, installing and configuring zVM and, together with the customer's Linux engineers, provisioning the Linux guests in zVM. For provisioning we have set up the zVM Cloud Connector, and the Linux engineers have completely automated the provisioning and configuration of the Linux guests using the Cloud Connector APIs and SALT maintenance processes.
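To give an idea of what that automation looks like, below is a rough sketch of the kind of guest create / deploy calls the Linux engineers script against the Cloud Connector REST API. The host name, disk pool, image name and user ID are placeholders and token handling is left out; the endpoint and payload layout are as we understand the Feilong REST API, so treat it as an illustration rather than a reference.

import requests

SDK_URL = "http://cloud-connector.example.com:8080"   # placeholder host/port

# 1) Define the guest: user ID, vCPUs, memory and one boot disk taken from a
#    zVM disk pool (pool name is a placeholder; ours is EDEV/FBA backed).
guest = {
    "guest": {
        "userid": "LNX0001",
        "vcpus": 2,
        "memory": 4096,               # MB
        "disk_list": [
            {"size": "10g", "is_boot_disk": True, "disk_pool": "FBA:LNXPOOL"},
        ],
    }
}
resp = requests.post(f"{SDK_URL}/guests", json=guest)
resp.raise_for_status()

# 2) Deploy a captured image onto the boot disk, then start the guest.
for action in ({"action": "deploy", "image": "rhel7-base-image"},
               {"action": "start"}):
    requests.post(f"{SDK_URL}/guests/LNX0001/action", json=action).raise_for_status()

SALT then takes over for the in-guest configuration once the guest is up.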
In the initial setup of zVM we wanted to use zVM storage pools to hand out the needed disk storage for the Linux guests in a software-defined-storage way. Within zVM we want to manage the available disk storage, and Linux guests should just request the number of GB they need. As this customer doesn't have any ECKD DASD available, the only way to set this up was using FCP SCSI disks through zVM EDEVs. We started out with a limited number of large EDEVs (with 1 TB LUNs) to hand out the disk storage through the zVM storage pool, so every Linux guest would end up with a set of minidisks on one of the available EDEVs. This worked fairly well at first: provisioning of Linux guests went OK, but when the customer started to run SALT maintenance processes on multiple Linux guests at the same time we ran into serious performance problems. Careful investigation of these problems, together with some IBM zVM specialists, showed that we ran into I/O problems due to the use of EDEVs: with EDEVs, zVM only gets a single I/O thread per EDEV to the physical LUN. So when multiple Linux guests are installed on the same EDEV / SCSI disk and they all do a lot of I/O work at the same time, the throughput drops dramatically. We have seen I/O speeds down to 2 Mb/s where we expected around 120 Mb/s.
As this customer is FCP-only, we can't use any ECKD DASD to solve this problem with Parallel Access Volumes (unless IBM provides us with PAV for EDEVs ;-) ).
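A quick way to make the contention visible is a simple sequential-write check inside each guest, along the lines of the sketch below (file name and size are arbitrary; for proper measurements a direct-I/O benchmark tool is more appropriate, since this goes through the page cache).

import os, time

PATH = "/tmp/io_probe.bin"     # put this on the filesystem under test
SIZE_MB = 512
CHUNK = b"\0" * (1024 * 1024)  # 1 MiB writes

start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())       # make sure the data really hit the disk
elapsed = time.monotonic() - start

print(f"sequential write: {SIZE_MB / elapsed:.1f} MiB/s")
os.remove(PATH)

Run this in parallel on several guests that share one EDEV and the drop in per-guest throughput shows up immediately.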
Currently we have changed the setup within zVM and now have a storage pool with a large number of small EDEVs (the size of a Linux root filesystem). Every time we provision a new Linux guest we use a new EDEV, so every Linux system ends up on its own EDEV. As we have around 250 Linux environments to manage on the LinuxONE, we ended up with the same number of EDEVs and LUNs on the HDS storage system.
I/O speeds are now stable at between 70 and 120 Mb/s, which is fast enough for now. We have tested with the SALT maintenance process and performance there is steady as well. But the customer still feels the I/O speeds are a bit low: they feel they have bought an "I/O monster" with LinuxONE but are not getting the speed they expected. And working with a great number of small EDEVs increases the management and maintenance effort for storage, much to the displeasure of the storage managers.
For the Oracle databases we use SCSI disks that are directly attached to the Linux guests (Linux multipath, so no zVM EDEVs involved), and in our tests we see I/O speeds going up to around 1 Gb/s. We haven't seen any I/O issues yet when migrating Oracle databases onto the new Linux guests on the LinuxONE. We have discussed these I/O speeds with the customer and they have asked us to investigate what it would take to use direct SCSI disks for the Linux root filesystem as well.
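For completeness, this is roughly the mechanism by which such a directly attached LUN shows up inside a guest over zfcp. Normally the distro tooling and multipathd take care of all of this; the device number, WWPN and LUN below are just placeholders to show the moving parts.

from pathlib import Path

FCP_DEV = "0.0.1f00"                      # virtual FCP device given to the guest
WWPN    = "0x500507680b214a91"            # storage port (placeholder)
LUN     = "0x0001000000000000"            # LUN on the HDS box (placeholder)

ccw = Path(f"/sys/bus/ccw/devices/{FCP_DEV}")
ccw.joinpath("online").write_text("1")    # bring the FCP subchannel online

# With NPIV and automatic LUN scanning this step is usually not needed,
# but it is the explicit way to attach one LUN to the zfcp driver:
port = Path(f"/sys/bus/ccw/drivers/zfcp/{FCP_DEV}/{WWPN}")
if port.joinpath("unit_add").exists():
    port.joinpath("unit_add").write_text(LUN)

# multipathd then groups the paths; 'multipath -ll' shows the resulting device.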
It should be possible to provision a Linux guest directly on a SCSI disk (without any zVM EDEV definition) and IPL it from there, but currently the Cloud Connector doesn't support this way of working. In discussing the entire situation with Mike, we felt that it should be possible to add some functionality to the zVM Cloud Connector to support this way of working.
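Purely as an illustration of the idea (the field names below are invented, this is not an existing Cloud Connector API), a guest create request for such a setup could look something like this:

# Hypothetical request body: root filesystem on a directly attached SCSI LUN
# instead of a minidisk on an EDEV.  All field names below are made up.
guest = {
    "guest": {
        "userid": "LNX0002",
        "vcpus": 2,
        "memory": 4096,
        "disk_list": [
            {
                "is_boot_disk": True,
                "disk_type": "scsi",                  # hypothetical: no EDEV / minidisk
                "fcp_devices": ["1f00", "1f40"],      # FCP subchannels for multipath
                "wwpns": ["0x500507680b214a91"],
                "lun": "0x0001000000000000",
            },
        ],
    }
}
# The connector would then have to put the right LOADDEV information in the
# user directory and IPL the FCP device instead of a minidisk address.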
What we at ICU are planning to do is to investigate how to implement the use of direct SCSI in the zVM Cloud Connector. On Monday September 2nd an intern, Nick Snel, will start with the assignment to check whether this is possible and what needs to be done. Nick has a lot of knowledge of Linux and Python but not that much of zVM and SMAPI; in those areas he will be supported by our zVM staff. If we come up with some ideas we will, of course, align with you and your team to see whether it is useful to continue. The basic idea we discussed with the ICU team yesterday is summarized in the picture below:
We see a lot of potential in the use of Feilong / Cloud Connector for setting up cloud(-like) infrastructures (as we did at our customer) and OpenStack environments with zVM as a basis (DBaaS with Oracle).
We would really like to hear what you think.
Regards,
Johan Schelling
Infrastructure Solution Architect
ICU IT Services BV
Transistorstraat 55b | 1322 CK ALMERE
jichenjc@...
Thanks, Johan
This is what OpenStack calls 'boot from volume': you carve a disk out of a storage backend and boot from it instead of booting from a local disk. This is technically doable (we are actually working on FCP management as well right now), including the following points that might need consideration:
1) The original IPL is from a disk number; if we need to support boot from FCP, we need to modify the IPL method (see the sketch after this list)
2) The PARM and LOADPARM need to be considered
3) FCP management, including multipath / NPIV management
4) unpackdiskimage, which does the real image copy, needs to be changed to handle FCP/SCSI
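As a rough illustration of point 1): the device numbers, WWPN and LUN below are placeholders, and in the Cloud Connector this would probably become LOADDEV statements in the user directory rather than live CP commands, but it shows how the two IPL flavours differ at the CP level.

import subprocess

def cp(cmd: str) -> str:
    """Issue a CP command from inside a Linux guest via vmcp (s390-tools)."""
    return subprocess.run(["vmcp", cmd], check=True,
                          capture_output=True, text=True).stdout

# Today (minidisk / EDEV): boot is simply "IPL <virtual device number>".
ipl_minidisk = "IPL 0100"

# Boot-from-SCSI: first point the machine loader at the right port and LUN,
# then IPL the FCP device.  Note that issuing the IPL re-boots the guest.
set_loaddev = "SET LOADDEV PORTNAME 50050768 0B214A91 LUN 00010000 00000000"
ipl_scsi    = "IPL 1F00"

print(cp("QUERY LOADDEV"))        # harmless: show the current LOADDEV setting
# cp(set_loaddev); cp(ipl_scsi)   # the actual re-IPL, left commented out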
I suggest you open an issue (copying the info below) at https://github.com/openmainframeproject/python-zvm-sdk/pulls
and we can talk there. I am happy to cooperate with you on how to move this ahead and make FCP/SCSI support better in the Feilong project.