Mad Cow


I hereby declare this "Project Mad Cow." For reasons.

This is not a general treatise on Fibre Channel technology. It is about providing Ceph-backed disk storage (and all its benefits) over Fibre Channel to the Fnord home datacenter. There are a few machines here with Fibre Channel HBAs, one functional Ceph cluster (plus equipment for two more), and some Fibre Channel switches and cabling to make it all work.

That is a weird name for IT stuff, or "You are an insensitive clod!"

Back in the day, before kids, I worked for corporate IT at Swift and Company. Swift and Company had a large project to overhaul the finance system (PeopleSoft, I think), HR (also PeopleSoft), and warehouse management systems (mostly homegrown, and varying from plant to plant). This involved a lot of then-neat servers and interesting new technologies. One of these was a set of HP (no 'E') Fibre Channel attached disk arrays that were decent for their day: the EVA, or Enterprise Virtual Array. I was overall fairly happy with these arrays as hardware goes. They were pretty easy to get along with. I got to the point where I could do an install at least as well as the local HP CE (field engineer) assigned to our account. The first one of these arrays to be installed, an EVA 8000, was dubbed "MAD COW" by the UNIX sysadmin team. That was probably my idea, TBH. But it seemed fitting. And all was OK with the world. Until...

So Swift and Company was eventually acquired by a Brazilian outfit called JBS. There was some confusion on the naming front: they didn't get jbs.com registered, so they called themselves JBS USA instead. Some Googling in 2025 says they are now called JBS Foods Group. Sometime after the acquisition was finalized, the company CEO decided he wanted to approve all IT change tickets. When one came along that said something like "add additional disk capacity to the MAD COW disk array," the reaction from the corner office (actually in the center of the building) was extremely irate, and a demand was passed down that the name must be changed. After the company decided it didn't want me on the payroll any more, they engaged HP services to come along and rename the array. I hope they were billed outrageous amounts for that engagement. But really, all it required was logging into the web management tool (called "Command View EVA") and changing a simple text field.

Desired end state

  • One or more Fibre Channel-equipped servers able to access storage over their HBAs
  • Storage provided by Ceph clusters

Equipment involved in making this happen

  • tanstaafl, an HP (no 'E') 9000 rp3440, a 2U PA-RISC server running the unreleased Debian port for this architecture -- this will be the FC client machine
  • fnord-201802, a Ceph cluster (currently three HP (also no 'E') DL380e servers running Debian 12 for amd64, with 10 Gbit/s Ethernet connectivity) -- the backend storage (see the sketch after this list)
  • zarathud, a Dell PowerEdge R620 server running Debian 11 with 10 Gbit/s Ethernet connectivity -- the gateway between the Ceph backend storage and the FC client system
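
To make the backend role concrete, here is a minimal sketch of carving a block device out of the Ceph cluster: creating an RBD image that zarathud will later map and export over Fibre Channel. It uses the official Ceph Python bindings (python3-rados and python3-rbd on Debian); the pool name, image name, and size are placeholders made up for illustration.

  #!/usr/bin/env python3
  # Create an RBD image on the fnord-201802 cluster to serve as the backing
  # store for the Fibre Channel LUN. Pool/image names are hypothetical.
  import rados
  import rbd

  POOL = 'rbd'              # assumed pool name
  IMAGE = 'madcow-lun0'     # assumed image name
  SIZE = 100 * 1024 ** 3    # 100 GiB

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  try:
      ioctx = cluster.open_ioctx(POOL)
      try:
          rbd.RBD().create(ioctx, IMAGE, SIZE)
          print(f"created rbd image {POOL}/{IMAGE} ({SIZE} bytes)")
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()

On zarathud, running "rbd map" against that image would then make it show up as a local block device for the SCSI target subsystem to export.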

Software involved in making this happen

  • Debian GNU/Linux (yes, I am a fanboy)
  • Linux kernel (also a fanboy)
  • Ceph (ditto)
  • Linux SCSI target subsystem (not yet sure about fanboi status on this part)
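
That last bullet is where the gateway actually happens, so here is a rough sketch of what the zarathud side might look like using rtslib-fb, the Python library underneath targetcli. This is an assumption-heavy sketch, not a tested recipe: it assumes the RBD image from the earlier sketch has already been mapped (appearing here as /dev/rbd/rbd/madcow-lun0), that zarathud's QLogic HBA has target mode enabled via the qla2xxx driver, and that the WWNs are invented placeholders. The exact rtslib-fb calls should be checked against its documentation before trusting any of this.

  #!/usr/bin/env python3
  # Hedged sketch: export a mapped RBD device as a Fibre Channel LUN via the
  # Linux SCSI target (LIO) using rtslib-fb. WWNs, device path, and names are
  # hypothetical; verify the API details against python3-rtslib-fb.
  from rtslib_fb import (BlockStorageObject, FabricModule, Target, TPG,
                         LUN, NodeACL, MappedLUN)

  DEV = '/dev/rbd/rbd/madcow-lun0'        # RBD image mapped on zarathud (assumed path)
  TARGET_WWN = 'naa.21000024ff000001'     # zarathud's HBA port WWN (placeholder)
  INITIATOR_WWN = 'naa.21000024ff000002'  # tanstaafl's HBA port WWN (placeholder)

  # Backstore: wrap the mapped RBD block device as a LIO storage object.
  so = BlockStorageObject('madcow_lun0', dev=DEV)

  # Fabric: qla2xxx is the QLogic FC target-mode fabric module.
  fabric = FabricModule('qla2xxx')
  target = Target(fabric, TARGET_WWN)
  tpg = TPG(target, 1)

  # Export the backstore as LUN 0 and allow the PA-RISC client's initiator port.
  lun = LUN(tpg, 0, so)
  acl = NodeACL(tpg, INITIATOR_WWN)
  MappedLUN(acl, 0, lun)

  # Note: rtslib changes the live configfs state; persisting this across
  # reboots would need the configuration saved (as targetcli's saveconfig does).
  print('LUN 0 exported over qla2xxx to', INITIATOR_WWN)

If all of that holds up, tanstaafl should see the LUN as an ordinary FC-attached disk after a rescan of its HBA.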