Lyqyd, on 20 August 2016 - 12:00 PM, said:
You could examine the text looking for the maximum block quote level and then surround it with block quotes one level greater.
That sounds complicated, and I'm a lazy bastard. I had a function that went through the file replacing every instance of "\***" with string.char(***), and for big files that took far too long. I've since replaced that code with a clever string.gsub(), but I suspect code to analyze quote level would also take too long for my purposes.
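For anyone curious, the single-pass gsub is roughly along these lines (just a sketch, and it assumes the escapes are decimal \DDD sequences; the actual pattern in my code may differ):

-- Sketch: decode escapes like "\065" back into characters in one pass.
-- Assumes each escape is a backslash followed by one to three decimal digits.
local function unescape(s)
  return (s:gsub("\\(%d%d?%d?)", function(n)
    return string.char(tonumber(n))
  end))
end

print(unescape("Hello\\044 world\\033")) --> Hello, world!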
Gorzoid, on 20 August 2016 - 12:19 PM, said:
If you used a binary file instead of a serialized table, you could shorten it down a lot more. For example, f.write(filename.."\0"..toBytes(file)..file), where toBytes returns a 4-byte string representing the file size.
That doesn't sound like a bad idea, but I'm not sure I could turn compressed files into binaries, and I don't think it would preserve comments.
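Just to check I'm reading that right, the record layout would be something like this? (A sketch only; toBytes/fromBytes here are my guess at a big-endian 4-byte encoding of the size.)

-- Guessed layout: name, "\0", 4-byte big-endian length, then the raw data.
local function toBytes(n)
  return string.char(
    math.floor(n / 16777216) % 256,
    math.floor(n / 65536) % 256,
    math.floor(n / 256) % 256,
    n % 256
  )
end

local function fromBytes(s)
  local b1, b2, b3, b4 = s:byte(1, 4)
  return b1 * 16777216 + b2 * 65536 + b3 * 256 + b4
end

-- Writing one entry, as suggested:
-- f.write(filename .. "\0" .. toBytes(#data) .. data)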
Luca_S, on 20 August 2016 - 07:21 AM, said:
I know, but what I mean is this:
You store it like this currently:
{
  ["filename"] = {
    "Line1",
    "Line2"
  }
}
Why don't you store it like this:
{
  ["filename"] = "Line1\nLine2"
}
You can just do
local f = fs.open(file, "r")
files[file] = f.readAll()
f.close()
That IS a good idea. HMM...
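If I go that route, the packing step probably collapses to something like this (rough sketch; packFiles and the variable names are just for illustration, not the actual code):

-- Rough sketch: read each file as one string and serialize the whole table.
local function packFiles(fileList, outputPath)
  local files = {}
  for _, path in ipairs(fileList) do
    local f = fs.open(path, "r")
    files[path] = f.readAll()
    f.close()
  end
  local out = fs.open(outputPath, "w")
  out.write(textutils.serialize(files))
  out.close()
end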