I am building my personal private cloud. I am considering using second-hand Dell OptiPlexes as worker nodes, but they only have one NIC, and I’d need a contraption like this for my redundant network.

Then this wish came to mind: theoretically, such a one-box solution could be faster than gigabit, too.

  • SheeEttin@lemmy.world · 1 year ago

    If you have a bunch of nodes, what do you need redundant NICs for? The other nodes should pick up the slack.

    It’s unlikely for the NIC or cable to suddenly go bad. If you only have one switch, you’re not protected against its failure, either.

    • akash_rawal@lemmy.world (OP) · 1 year ago

      I plan to have 2 switches.

      Of course, if a switch fails, client devices connected only to that switch would drop out, but any computer connected to both switches should retain link redundancy.
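
      For reference, here is a minimal sketch of what that dual-homed setup could look like on a Linux node, using iproute2; the interface names (eno1 cabled to switch A, enx1 to switch B) and the address are placeholders:

      ```sh
      # Create an active-backup bond: one link carries traffic and the
      # other takes over if the active link (or its switch) fails.
      # miimon 100 = check link state every 100 ms.
      ip link add bond0 type bond mode active-backup miimon 100

      # Interfaces must be down before they can be enslaved to the bond.
      ip link set eno1 down
      ip link set enx1 down
      ip link set eno1 master bond0
      ip link set enx1 master bond0

      # Bring the bond up; the node's IP address lives on bond0.
      ip link set bond0 up
      ip addr add 192.168.1.10/24 dev bond0
      ```

      Active-backup needs no cooperation from the switches, which is what makes it safe across two independent switches. Going faster than gigabit would instead need an aggregating mode like 802.3ad (LACP), which only works across two switches if they support MLAG or stacking.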

    • computergeek125@lemmy.world · 1 year ago (edited)

      There are still tons of reasons to have redundant data paths down to the switch level.

      At the enterprise level, we assume even the switch can fail. As an additional note, only some smart/managed switches (typically chassis models with removable modules, costing five to six figures USD per chassis) can run a firmware upgrade without interrupting network traffic.

      So for both the failure case and staying online during an upgrade procedure, you absolutely want two switches if that’s your jam.

      On my home system, I actually have four core switches: a Catalyst 3750X stack of two nodes for L3 and 1Gb/s switching, and then all my “fast stuff” is connected to a pair of ES-16-XG switches, each of which has a port channel of two 10G DACs back to the Catalyst stack, with one leg to each stack member.
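
      For illustration, the Catalyst side of one of those cross-stack port channels might look roughly like this (a sketch with hypothetical interface numbers, not my exact config):

      ```
      ! Cross-stack EtherChannel: one 10G member on each stack switch,
      ! so the uplink survives the loss of either stack member.
      ! "mode active" = LACP, which the 3750-X supports across stack members.
      interface Port-channel1
       description Uplink to ES-16-XG
       switchport trunk encapsulation dot1q
       switchport mode trunk
      !
      interface TenGigabitEthernet1/1/1
       channel-group 1 mode active
      !
      interface TenGigabitEthernet2/1/1
       channel-group 1 mode active
      ```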

      To the point about NICs going bad - you’re right, it’s infrequent, but it can happen, especially with consumer hardware rather than enterprise hardware. Also, at the 10G fiber level, though failures are infrequent, you still see SFPs and DACs go bad at a higher rate than NICs.