That is a weird name for IT stuff, or "You are an insensitive clod!"
Back in the day, before kids, I worked for IT at Swift and Company, a corporation that transformed cows, pigs, sheep, and occasionally goats into meat. Swift and Company had a large project to overhaul the finance (PeopleSoft, I think), HR (also PeopleSoft), and warehouse management systems (mostly homegrown, and varying from plant to plant). This involved a lot of then-neat servers and interesting new technologies. One of these was a decent-for-its-day set of HP (no 'E') Fibre Channel attached disk arrays, the EVA, or Enterprise Virtual Array, for storage. I was overall fairly happy with these arrays as hardware goes. They were pretty easy to get along with. I got to the point where I could do an install at least as well as the local HP CE (field engineer) assigned to our account. The first one of these arrays to be installed, an EVA 8000, was dubbed "MAD COW" by the UNIX sysadmin team. That was probably my idea, TBH. But it seemed fitting. And all was OK with the world. Until...
So Swift and Company was eventually acquired by a Brazilian outfit called JBS. There was some confusion on the naming front: they didn't get jbs.com registered, so they called themselves JBS USA instead. Some Googling in 2025 says they are now called JBS Foods Group. Sometime after the acquisition was finalized, the company CEO decided he wanted to approve all IT change tickets. When one came along that said something like "add additional disk capacity to the MAD COW disk array", the reaction from the corner office (actually in the center of the building) was extremely irate, and a demand was passed down that the name must be changed. After the company decided it didn't want me on the payroll any more, they engaged HP services to come along and rename the array. I hope they were billed outrageous amounts for that engagement. But really, all it required was logging into the web management tool (called "Command View EVA") and changing a simple text field.
Anyway, in memory of my days dealing with Fibre Channel attached storage, HP EVAs, Brocade switches, HP-UX on Itanium, and fun teammates, this little excursion into nostalgia (and maybe something useful) is being called Mad Cow.
Overview
Software defined storage. Fibre Channel. Linux. Ceph. All the cool things...
This is not a general treatise on Fibre Channel technology. It is more about providing Ceph-backed disk storage (and all its benefits) over Fibre Channel to the Fnord home datacenter. There are a few machines here with Fibre Channel HBAs. There exists one functional Ceph cluster (and equipment for 2 more). And there are also some Fibre Channel switches and cabling to make it all work.
Desired end state
- One or more Fibre Channel equipped servers able to access storage over their HBAs
- Storage provided by Ceph clusters
Equipment involved in making this happen
- tanstaafl, an HP (no 'E') 9000 rp3440, a 2U PA-RISC server running Debian (the unreleased port for this architecture) -- this will be the FC client machine
- fnord-201802 Ceph cluster (currently 3 HP (also no 'E') DL380e servers running Debian 12 for amd64 with 10Gbits/sec Ethernet connectivity) -- backend storage
- zarathud, a Dell PowerEdge R620 server running Debian 11 with plenty of connectivity to the Ceph cluster (10GbE and 40/56GbE both) and plenty of connectivity to the Fibre Channel fabrics -- the gateway between the Ceph backend storage and the client system
- Brocade 5100 Fibre Channel switch(es) -- these run a locked-down Linux CLI for configuration and management
Software involved in making this happen
- Debian GNU/Linux (yes, I am a fanboy)
- Linux kernel (also a fanboy)
- Ceph (ditto)
- Linux SCSI target subsystem (not yet sure about fanboi status on this part)
How to do it
- Install Fibre Channel HBA(s) in initiator (client) machines.
- Install Qlogic HBA(s) in the gateway machine. The Linux kernel SCSI target subsystem does support Fibre Channel target mode, but only on Qlogic HBAs: Emulex and Qlogic cards are both fine as initiators, but Qlogic is the only supported target hardware.
- Install Fibre Channel switch(es). I have had a pair of Brocade 5100 FC switches sitting in the racks for some time, mostly collecting dust, but they are being put to use in this project.
- Get enough of Ceph installed and configured on the gateway machine that an RBD can be mapped.
- Install the Linux SCSI target management CLI, targetcli or, preferably, targetcli-fb.
- Collect HBA world wide port names (WWPNs) from the gateway (target) and server (initiator) systems (a sketch of pulling these out of sysfs follows this list).
- Get the Fibre Channel switch's/switches' zoning configuration set so that the initiators (the client servers) and the target (gateway) ports on the fabric can see each other and communicate. See the Brocade Fibre Channel zoning article; a rough CLI sketch also follows this list.
- Create an RBD in the Ceph cluster and map it on the gateway machine, with something like sudo rbd --cluster fnord-201802 --id zarathud-rbd map zarathud-fc-target-lun-1 (a fuller create-and-map sketch follows this list).
- Use the targetcli management tool to create a mapping between the RBD and the Fibre Channel initiators attached to the fabric (see the article on targetcli mappings; a sketch also follows this list).
- Discover the storage on the initiator machine(s) (a rescan sketch follows this list).
- Proceed to read and write to the storage on the initiator machine(s).
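
Collecting WWPNs: a minimal sketch of reading the port names out of sysfs on any of the Linux machines with an FC HBA. The host numbers and the values printed are whatever the kernel hands out; nothing here is specific to tanstaafl or zarathud.

 # Each FC HBA port shows up under /sys/class/fc_host; host numbering varies per machine.
 cat /sys/class/fc_host/host*/port_name
 # One 0x-prefixed hex WWPN per line; note down the gateway (target) and client (initiator) values.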
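Zoning: a rough sketch of what the Brocade FOS CLI side can look like, assuming a single fabric with an already-enabled zone configuration named fabric_cfg. The alias names, WWPNs, and configuration name below are all placeholders; the Brocade Fibre Channel zoning article is the real reference.

 alicreate "zarathud_tgt", "21:00:00:24:ff:00:00:01"
 alicreate "tanstaafl_ini", "50:06:0b:00:00:00:00:02"
 zonecreate "madcow_zone", "zarathud_tgt; tanstaafl_ini"
 cfgadd "fabric_cfg", "madcow_zone"
 cfgsave
 cfgenable "fabric_cfg"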
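Creating and mapping the RBD: a hedged sketch using the cluster and client names from above. The pool name (the default rbd pool) and the image size are placeholders; the map command itself is the one from the list.

 # Create the image (pool name and size here are placeholders)
 sudo rbd --cluster fnord-201802 --id zarathud-rbd create --size 100G rbd/zarathud-fc-target-lun-1
 # Map it on the gateway; this should produce a /dev/rbdN block device
 sudo rbd --cluster fnord-201802 --id zarathud-rbd map zarathud-fc-target-lun-1
 # Confirm which /dev/rbdN the image landed on
 sudo rbd --cluster fnord-201802 --id zarathud-rbd showmapped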
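Exporting the RBD over Fibre Channel: a sketch of the targetcli-fb side, assuming the RBD landed on /dev/rbd0 and that the qla2xxx target fabric (tcm_qla2xxx) is available. The WWPNs are placeholders (the 21:00... value stands in for the gateway HBA's own WWPN, the 50:06... value for the initiator's), the backstore name madcow-lun1 is made up for illustration, and the exact WWPN format targetcli expects can vary by version. The targetcli mappings article covers the details.

 # On Debian the tool ships as the targetcli-fb package
 sudo apt install targetcli-fb
 sudo targetcli
 # Everything below runs inside the targetcli shell:
 /backstores/block create name=madcow-lun1 dev=/dev/rbd0
 /qla2xxx create 21:00:00:24:ff:00:00:01
 /qla2xxx/21:00:00:24:ff:00:00:01/luns create /backstores/block/madcow-lun1
 /qla2xxx/21:00:00:24:ff:00:00:01/acls create 50:06:0b:00:00:00:00:02
 saveconfig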
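Discovering the storage on the initiator: a small sketch of forcing a SCSI rescan on the client's host adapters. With the zoning and LUN mapping in place, the new LUN should then show up as an ordinary sd device.

 # Ask every SCSI host adapter to rescan ("- - -" means all channels, targets, and LUNs)
 for h in /sys/class/scsi_host/host*; do echo "- - -" | sudo tee "$h/scan" > /dev/null; done
 # The new LUN should appear as an additional disk
 lsblk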