
This seems like an odd issue to be having in 2017.

[dan@x8dtu:/iocage/jails/x8dtu-ingress01/root/var/db/freshports/message-queues/archive/.zfs] $ ls -l snapshot
ls: snapshot-for-backup: File name too long
total 0
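
My guess at the cause (unconfirmed): FreeBSD's statfs(2) historically limits a mount point name to MNAMELEN (88 bytes), and the snapshot automount path here is well past that. A quick length check:

printf '%s' /iocage/jails/x8dtu-ingress01/root/var/db/freshports/message-queues/archive/.zfs/snapshot/snapshot-for-backup | wc -c
# 109 characters, comfortably over the 88-byte limit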

I had plans to rig up the R710 tonight, but the SSDs & Icy Dock adaptors have not shown up.

What's a man to do on a Friday night if not bugger about with computers?

I plan to swap the X5660s out for L5640s.

I'm swapping computing power for lower power consumption & less heat.

FYI, this was two new CPUs for the R710.

That word does not mean what you think it means.

Did I miss something here? mastodon.social/media/tIKAhlen

3TB Toshiba drives, soon to be sold off. I have 7 from 2015, 6 from 2013, and 9 from 2012.

imgur.com/gallery/hoxMH

Dan Langille boosted

This gist compares 3Ware RAID status from 2010 (good: dan.langille.org/2010/08/26/mo) and today (bad). Do you agree the issue is only with the spares? I think I lost a spare. Don't know why u0 is RAID-10: gist.github.com/dlangille/1fcd
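
To pull that status yourself: something like this with 3ware's tw_cli, assuming the controller shows up as c0:

tw_cli /c0 show       # summary of units (u0, u1, ...), drives, and spares
tw_cli /c0/u0 show    # unit detail, including RAID level and rebuild state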

These were the CPUs I was considering for this Dell R710 I have recently added to my rack.

I'm sticking with the two X5660 CPUs it came with. It has 60GB of RAM, so for now I'll use that.

I'll be removing the SAS HDDs and adding in SATA SSDs with the help of a few 3.5"->2.5" adaptors.

ark.intel.com/compare/47921,48

What issues exist with 3D NAND SSDs? Use them like any other?

Dan Langille boosted

@dvl I'm not really a network expert, but I always thought that ICMP must never be restricted, whatever the ICMP type. Am I wrong?

I had an issue with IPv6 pings. I found content.pivotal.io/blog/a-bare and went with that.

I now have the following pf rules on FreeBSD:

# 2 = packet too big (needed for PMTUD), 128 = echo request
icmp6_types="{ 2, 128 }"
# echo request plus NDP: 133/134 router sol/adv, 135/136 neighbor sol/adv, 137 redirect
icmp6_types_ext_if="{ 128, 133, 134, 135, 136, 137 }"

pass in quick on $EXT_IF inet6 proto ipv6-icmp icmp6-type $icmp6_types keep state

pass in quick on $EXT_IF inet6 proto ipv6-icmp from any to any icmp6-type $icmp6_types_ext_if

Any comments/suggestions there?
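
For what it's worth, the sanity checks I'd run after a change like this (stock pfctl and ping6; use any IPv6-reachable host):

pfctl -nf /etc/pf.conf     # parse the ruleset without loading it
pfctl -f /etc/pf.conf      # load it for real
ping6 -c 3 example.com     # confirm echo request (type 128) still gets through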

Dan Langille boosted

@Nezchan even putting aside how terrible ads are, browsing the web w/o an ad blocker is *actively harmful* to your computer, security-wise. If places like Forbes can be made to serve malware through their 3rd-party ad networks, what chance does a user w/o an ad blocker even have?

* use the R710: it has about 5TB of disk space; I could copy the backups there, then spool to tape from local disk; that way more copies of the backup exist. Also contingent upon getting at least 120MB/s from the first bacula-sd.

Do you have other ideas / suggestions / things to consider? Recommendations?

Thank you.

When I copy to tape, I usually run multiple jobs at once. There are two bigger jobs, and one or the other usually streams alone for a while.

Here are the scenarios I have been mulling over in my head:

* use the R610, add a 128G SATA SSD drive for spooling (rough config sketch below). That should saturate the tape drive.

* use the R610, don't spool since it's a 10Gb network. This is contingent upon getting at least 120MB/s from the first bacula-sd.
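
For the spooling option, the relevant bacula-sd Device knobs look roughly like this; device path, spool directory, and sizes are placeholders, not my actual config:

Device {
  Name = LTO4-Drive
  Media Type = LTO-4
  Archive Device = /dev/nsa0          # FreeBSD non-rewind tape device
  Spool Directory = /spool            # mount the 128G SSD here
  Maximum Spool Size = 90gb           # leave headroom on the SSD
  Maximum Concurrent Jobs = 3         # let the bigger jobs interleave spools
}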

... CONTINUED

I back up to disk, then copy to LTO4 tape. All the servers in question are on a 10Gb network.

The disk-based bacula-sd has a 10x 5TB raidz3 ZFS array. I am not sure of my read speeds there, but I know there is no room to attach the LTO4 tape library to that server.

I have two spare servers which can be attached to the library:

* Dell R610 - cheaper to run - only SSDs in here.
* Dell R710 - about 2.6 times more expensive to run, no SSD.
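
The disk-to-tape step itself is a plain Bacula Copy job. An abridged sketch, with placeholder pool/storage names (a real config needs the usual Client/FileSet plumbing too):

Job {
  Name = "CopyDiskToTape"
  Type = Copy
  Selection Type = PoolUncopiedJobs   # pick up every job not yet copied
  Pool = FullsOnDisk                  # source pool on the raidz3 bacula-sd
  Messages = Standard
}
Pool {
  Name = FullsOnDisk
  Pool Type = Backup
  Storage = DiskStorage               # where the source volumes live
  Next Pool = LTO4                    # copies land in the tape pool
}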

Please check my math.

LTO4 can write at 120 MB/s

If I have 90GB free on an SSD, and use that as spooling for backup to tape, it will take about 13 minutes to empty that drive.

Am I making sense? I'm just trying to get a sense of how long the data will take to spool to tape.

Over a 10Gb link, it should take only about 90 seconds to copy that data.
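
Spelled out with shell arithmetic (decimal units; 10Gb/s ≈ 1250 MB/s):

echo $(( 90 * 1000 / 120 ))      # drain 90 GB to tape at 120 MB/s: 750 s, ~12.5 minutes
echo $(( 90 * 1000 / 1250 ))     # refill 90 GB over the wire: 72 s, ~90 s with overhead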

Now I recall why I wanted the R710 over the R610. Noise.

The R610 was louder. The fans never spun down. That seems to have resolved itself.

Using the R610 would drop the monthly cost from $21 to $8, a savings of about $150 / year.

Perhaps I'll just stay with the R610 and toss in a couple of 500GB SSDs, which would pay for themselves in 2 years.
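
The payback arithmetic behind that (the SSD price isn't stated above; breaking even in 2 years implies roughly this budget):

echo $(( (21 - 8) * 12 ))        # monthly savings x 12 = $156/year
echo $(( (21 - 8) * 12 * 2 ))    # two years of savings = $312 to spend on the pair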

Why the R710 and not the R610 I also have?

I'm not sure.

Looking at this graph, powering up the R710 requires an additional 2A of current, taking me from about 4.7A to 6.7A.

The meter says it is using about 215W at idle. Let's call that 240W and do the math. Current electricity charges are about $0.12004/kWh.

That's about $0.70 a day, or roughly $20.75 a month.
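
Same math via bc, with the R610 at its measured 90W alongside (this is where the $21 vs $8 monthly figures come from):

echo "240 * 24 / 1000 * 0.12004" | bc -l         # R710: ~$0.69/day
echo "240 * 24 / 1000 * 0.12004 * 30" | bc -l    # R710: ~$20.74/month
echo "90 * 24 / 1000 * 0.12004 * 30" | bc -l     # R610: ~$7.78/month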

Let's compare that to the R610 I was using, which draws 90W.

Even going to lesser drives, the payback period is at least two years. mastodon.social/media/DCZmjlf7