Anyone have any tips for working with large text files? (~30 GB)
I'm assuming that pretty much any normal text editor is out of the question? Is a combination of sed and grep my best bet?
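For what it's worth, sed and grep both stream the file line by line, so memory use stays flat no matter how big the file is. A minimal sketch (using a small throwaway `sample.txt` so it actually runs; swap in your real file):

```shell
# Stand-in for the 30 GB file -- the commands below work the same either way.
printf 'alpha\nERROR beta\ngamma\nERROR delta\n' > sample.txt

# grep streams, so counting matches never loads the whole file into memory.
grep -c 'ERROR' sample.txt

# sed can print a line range and quit early with 'q', so it never
# reads past the lines you asked for -- handy on a 30 GB file.
sed -n '2,3p;4q' sample.txt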
This almost seems like a case for Ed, man!
https://www.gnu.org/fun/jokes/ed-msg.html
@codesections Use grep to separate them into separate files by first character?
Use /usr/bin/split to section the data into separate files?
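A quick sketch of what that looks like in practice (file and prefix names here are just examples):

```shell
# Stand-in input file.
printf 'line1\nline2\nline3\nline4\n' > big.txt

# -l splits on line boundaries (2 lines per chunk here); use -b for raw
# byte counts, or -C for a byte limit that still keeps lines intact.
split -l 2 big.txt chunk_

# Produces chunk_aa, chunk_ab, ... which a normal editor can open.
ls chunk_*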
@drwho @codesections This is the way to go.
@lord @codesections That's a really interesting idea... wouldn't you eventually run out of inodes, though?
@drwho @codesections It was a joke.
It's doable, and you won't run out of inodes on a modern filesystem (btrfs can have 2^64 files/folders), but it won't magically give you free space.
Your files will weigh 0, but your filesystem metadata will be huge…
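Joke or not, the split-by-first-character trick is a one-liner in awk, since the redirection target of `print` can be any expression. A sketch (with a made-up `words.txt`):

```shell
# Stand-in input.
printf 'apple\nbanana\navocado\ncherry\n' > words.txt

# Append each line to a file named after its first character,
# producing files "a", "b", "c", ...
awk '{ print > substr($0, 1, 1) }' words.txt

cat a    # the lines starting with "a"
```

Whether that's ever a good idea for 30 GB of data is another question, as the thread points out.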