getBlockInfo async



#1 Alais

  • Members
  • 6 posts

Posted 12 September 2015 - 01:44 AM

I am trying to read the name and data of each block in a cuboid region into an array. I am currently using the vanilla testforblock command with execAsync, but it is a pretty poor substitute for commands.getBlockInfo. Specifically, testforblock does not return the block info in a sensible format, and it does not return the block's parent mod at all.

I believe execAsync only works with non-computercraft commands. Is there a way to execute commands.getBlockInfo asynchronously, so that I can gather the data faster than one block per tick? Apologies if I am missing something obvious.

#2 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 13 September 2015 - 01:38 AM

Yes, by running multiple coroutines and having each call commands.getBlockInfo() individually. That command yields while waiting for CC's backend to get the data it wants to return, and while it's doing that you can resume your other coroutines and have them make more calls.

The parallel API (a basic coroutine manager built into ComputerCraft) makes this fairly straightforward to do (compared to coding your own coroutine manager, anyway). An example's in WorldPorter - between lines 293 and 314, a function is defined that scans in world data via a basic loop. On line 316, a couple of hundred copies of the function pointer are dumped into a table. On line 356, all functions in the table are executed in parallel.

There are some other functions loaded into the table which additionally handle updating the progress displayed on-screen and the writing of the scanned data to disk, but just the lines mentioned should be enough to give you some ideas.
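
Roughly, the idea looks like this (a cut-down sketch rather than WorldPorter's actual code; assume the region bounds x1/y1/z1 through x2/y2/z2 are defined further up):

	local results = {}
	local x, y, z = x1, y1, z1  -- shared counters, declared outside the scanner function

	local function scanner()
		while y <= y2 do
			local myX, myY, myZ = x, y, z  -- claim a co-ord before yielding

			-- advance the shared counters so the next scanner claims a fresh co-ord
			x = x + 1
			if x > x2 then
				x, z = x1, z + 1
				if z > z2 then z, y = z1, y + 1 end
			end

			-- this call yields until its data comes back, and while it waits the
			-- other scanners get resumed and put in their own requests
			results[myX .. "," .. myY .. "," .. myZ] = commands.getBlockInfo(myX, myY, myZ)
		end
	end

	local scanfuncs = {}
	for i = 1, 200 do scanfuncs[i] = scanner end
	parallel.waitForAll(unpack(scanfuncs))

Each copy of scanner() gets its own myX / myY / myZ, but they all share x / y / z, so no two copies ever request the same co-ord.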

#3 Alais

  • Members
  • 6 posts

Posted 13 September 2015 - 06:05 PM

Thanks a lot for this, it's exactly what I was looking for. I have also learned some pretty useful things here. In particular:
1) if statements will treat nil as false (good for checking if something has been assigned)
2) unpack() is a thing (I was avoiding the parallel API because I didn't want to pass hundreds of arguments to it manually or via concatenation)
3) By replacing three nested for loops with a while loop and three manually managed counters, I can loop through a 3D region starting from any x,y,z (something that is not trivial with for loops)

#4 Alais

  • Members
  • 6 posts

Posted 13 September 2015 - 10:38 PM

Ok, so I have made a test program based on the lines you indicated from WorldPorter. I am very happy to say that it works; unfortunately, it is quite slow, and I am currently working on ways to improve this. I did notice an oddity with both WorldPorter and the test program: adding more functions in parallel is not necessarily faster. There is some testing below, and the finding is that the optimum for maxSeekerFuncs is about 28. Do you know why this might be? Is it possible that a custom coroutine handler would solve the issue?

Testing, using WorldPorter itself, region = 5280 blocks (x*y*z = 16*33*10):
  • maxSeekerFuncs = 26, time = 10.3s, 10.25s
  • maxSeekerFuncs = 28, time = 10.05s, 10.1s
  • maxSeekerFuncs = 30, time = 10.1s, 10.25s
  • maxSeekerFuncs = 230, time = 61.5s, 59.75s
As you can see, the optimum for maxSeekerFuncs is around 28, whereas if you bump it up to the theoretical safe maximum of 230, you get performance roughly 6x slower. The optimum of 28 also holds for my test program and for different region sizes. At the optimum, scan speed is ~530 blocks/sec, independent of region size. If you don't mind, could you let me know if you get similar results?

For reference, comparison times using the actual region I want to scan (x*y*z = 31*31*91 = 87451 blocks):
  • My WorldPorter-based test code, with maxSeekerFuncs = 28, execution time = 159s
  • Actual WorldPorter scan, with maxSeekerFuncs = 28, execution time = 165s
  • My old code, using testforblock, execution time = 37s
I would post the old code, but it's a bit ugly and was definitely a WIP at the point I realized it wouldn't do what I needed it to. What it did was to read all the region's blocks' names into an array. Please let me know if it would help to post it.

#5 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 14 September 2015 - 01:09 AM

Alais, on 13 September 2015 - 06:05 PM, said:

By substituting three for loops with a while loop and three manual iterators, I can loop through a 3D array from any starting x,y,z (something that is not trivial with for loops)

This may be something you're already clear on, but just to be sure: it's all about scope.

On line 295, myX / myY / myZ are declared as local to checkPos(). Each of the ~200 instances of that function started by the parallel API gets its own copy of those variables, which are inaccessible to all the others and effectively wiped from memory once the functions complete. They use these to keep track of which exact co-ord they should individually be checking during a given iteration of their "while" loops.

On line 249, x / y / z are declared as local at the script's root level. These variables are thus shared by every instance of the checkPos() function that gets started (as they all refer to the same upvalues), allowing them to each pick unique co-ords to scan with no overlaps.

When you create a "for" loop, the counter variable(s) are automatically made local to that loop - sharing them between multiple function instances would hence be a lot more difficult than using upvalues.

#6 Alais

  • Members
  • 6 posts

Posted 14 September 2015 - 03:58 PM

I think I am clear on how the parallel functions are working together, but it did take me a little while to fully grasp what was going on. It is a great, concise way to do it; I wish I had thought of it myself. The ability to restart the loop at any point is just a nice side effect - it allows you to do something like going back 200 blocks if you detect an error in the most recent group of block scans.

Not sure if you saw my previous post above; it took a few hours to be approved, so your most recent reply is appearing after it. I am very curious as to whether varying the maxSeekerFuncs value behaves the same way for you as it does for me. If it does, I have some interesting further analysis that might help nail down what's going on.

#7 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 15 September 2015 - 03:10 AM

Yes, I'd missed that last post. Thanks for bumping.

I'd briefly considered whether there was a point where adding more coroutines would hinder rather than help, but I took the lazy way out of assuming it to be a rather high number (rather than actually, you know... checking).

I've a pretty good idea what slows things down, though. ComputerCraft's coroutine management is built around a string-identified "event" based system - when a coroutine yields, it has the option of returning a name to its manager, specifying the type of event it wants to be resumed with. When the manager has an event ready of the specified type, it resumes the coroutine with it. For example, if you call os.pullEvent("key"), you're yielding your code and it'll only be resumed when a "key" event is ready.

It's designed that way because it's simple to follow (ComputerCraft's for beginner programmers, after all), but this over-simplified system has its flaws (as do most high-level constructs). Occasionally you'll hit an instance where a coroutine wants a very specific event, as opposed to just "any event of a given type".

For example, when commands.getBlockInfo() runs, it triggers a request for world data and is given a number. It then has to start a loop of yields, requesting a "task_complete" event on each iteration. Every time it's resumed with an event of that type, it checks to see if the event contains its magic number - the loop only breaks when the numbers match.
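
Roughly speaking, that loop looks something like this (a sketch built around commands.execAsync(), which also hands back a task ID - not the actual innards of commands.getBlockInfo()):

	local function runSynced(command)
		local myTask = commands.execAsync(command)  -- request queued; this is our "magic number"

		while true do
			-- yield until the coroutine manager hands us *a* task_complete event...
			local _, taskID, success, errorText = os.pullEvent("task_complete")

			-- ...and only stop once it's the one carrying our magic number
			if taskID == myTask then
				return success, errorText
			end
		end
	end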

The parallel API has no way of knowing which specific "task_complete" event belongs to which coroutine. It hence resumes ALL coroutines with ALL "task_complete" events, because they're ALL asking for events of that type. Even when it hits the correct coroutine for a given event, it has no way of knowing that, and so it continues to resume all the other coroutines with it too. You should be able to see how this wastes time.

Unfortunately, there's no "elegant" method of optimising this system. The "magic numbers" each call wants in their respective "task_complete" events are tracked by variables localised to each function instance, and we have no way of knowing which number which coroutine wants. We don't even have access to the function which triggers the world-data request in the first place (... which'd be our "getBlockInfo async" function if we did have it, the one that'd remove all need for coroutine management).

That said: guessing will probably work. It'd simply be a case of making a coroutine manager which assumes that the order in which "task_complete" events become available matches the order in which the coroutines it's handling have yielded. If that's always true, then all redundant resumes can be avoided.

I'll do some tests using such a system later this evening, and if it actually works I'll list out the modified code.

#8 Lyqyd

    Lua Liquidator

  • Moderators
  • 8,465 posts

Posted 15 September 2015 - 03:35 AM

If the magic numbers start at one and are increased monotonically per-computer, shouldn't it be possible to override the function at startup to put the value of coroutine.running() in a table with the expected ID as the key? You'd then write a coroutine manager that special-cases these events to resume whichever coroutine is pointed to by the table entry rather than resuming all of them, and then remove the entry from the table.
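
Something like this, say (very rough, with a made-up requestWorldData standing in for whatever actually hands out the task IDs - getting at that function is the sticking point):

	local owners = {}  -- expected task ID -> the coroutine waiting on it

	-- wrap the (hypothetical) request function so each caller registers itself
	local rawRequest = requestWorldData
	requestWorldData = function(...)
		local taskID = rawRequest(...)
		owners[taskID] = coroutine.running()
		return taskID
	end

	-- in the manager: resume only the coroutine that owns the event, not all of them
	local function dispatch(myEvent)  -- myEvent = { "task_complete", taskID, ... }
		local co = owners[myEvent[2]]
		if co then
			owners[myEvent[2]] = nil
			coroutine.resume(co, unpack(myEvent))
		end
	end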

#9 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 15 September 2015 - 04:28 AM

Unfortunately all commands API function calls (worldwide!) cause it to increment, resetting requires a server restart, and even if you're first in line to run code after that you can't be sure where the starting point will be.

But since you mention it, if I do run into problems, manually inspecting the numeric IDs within the events should allow easy compensation for some degree of error, yes. I could maintain a buffer of, e.g., 50 events, and every time I go to add an extra, resume the next coroutine in line with whichever event has the lowest ID. I'd imagine that'd be sufficient to compensate if the events arrive out of order... though with any luck, that won't even be an issue.
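
As a rough sketch of what I mean (untested, and with any luck never needed):

	local buffer = {}  -- held task_complete events, waiting to be dispatched

	local function bufferedResume(newEvent, nextCoroutine)
		buffer[#buffer + 1] = newEvent
		if #buffer < 50 then return end  -- keep collecting until the buffer is full

		-- find the held event with the smallest task ID (element 2 of each event)
		local lowest = 1
		for i = 2, #buffer do
			if buffer[i][2] < buffer[lowest][2] then lowest = i end
		end

		coroutine.resume(nextCoroutine, unpack(table.remove(buffer, lowest)))
	end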

#10 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 15 September 2015 - 01:55 PM

Alright, so fortunately the events always seem to appear in the order in which they were requested, so the custom coroutine management was simple to implement.

In my tests, I found that WorldPorter defaulted to around 10k blocks per minute with 230 coroutines, and did about 33k bpm with 28 coroutines.

With the new code, it still ticks along at 33k bpm with 28 coroutines, but manages 156k when cranked back up to 230. That's a bit of an improvement - it's about the same speed at which it can build. :)

In the source, the old "parallel.waitForAll(unpack(scanfuncs))" line has been replaced with this chunk:

	local scanfuncs, curFunc = {}, 1
	for i = 1, maxSeekerFuncs do
		-- Build each scanner as its own coroutine and resume it once, so it puts in
		-- its first world-data request and yields awaiting a "task_complete" event.
		scanfuncs[i] = coroutine.create(checkPos)
		coroutine.resume(scanfuncs[i])
	end
	
	while true do
		local myEvent = {os.pullEvent()}
		
		if myEvent[1] == "timer" then
			coroutine.resume(displayProgress, unpack(myEvent))
		elseif myEvent[1] == "task_complete" then
			-- Assume events arrive in the order requested: hand this one to the next
			-- scanner in line only, rather than resuming every coroutine with it.
			coroutine.resume(scanfuncs[curFunc], unpack(myEvent))
			curFunc = curFunc == maxSeekerFuncs and 1 or curFunc + 1
			coroutine.resume(processData, "task_complete")
			if coroutine.status(processData) == "dead" then
				-- processData has finished writing the scan out: give displayProgress
				-- one last update, then stop.
				coroutine.resume(displayProgress, "timer", myTimer)
				break
			end
		end
	end

The displayProgress / processData coroutines were previously additional (unnamed) functions in the scanfuncs table. They're now individually built directly as coroutines and resumed a single time each before this loop starts. Their contents could actually be dragged down into this loop, but I'm lazy.

This code is built around the rule that displayProgress only ever requests timer events and the other coroutines only ever request task_complete events. A "regular" ComputerCraft-based coroutine manager would need to track what's being requested, but here the manager can safely make assumptions. Beats me how familiar you are with how coroutines work, but I'd like to imagine that this snippet demonstrates most of what you need to know to understand the above.

Edited by Bomb Bloke, 15 September 2015 - 02:03 PM.


#11 Alais

  • Members
  • 6 posts

Posted 15 September 2015 - 02:31 PM

Awesome, I will check this out tonight when I get home from work. One thing I would like to check is the robustness of the assumption that events always arrive in the order requested. I think it is feasible that one getBlockInfo() could take fractionally longer than another, for example if your region crosses a boundary from loaded to unloaded chunks. If ComputerCraft is actually enforcing the order of receipt then there is no problem, but wouldn't that go against the philosophy of asynchronous commands?

If they don't always appear in order, as you say, a buffer would provide some protection. If you wanted to error check the whole set at once, you could probably check if the current taskID is greater than the previous one, store any that fail in a separate table, and then slot them in at the right points afterwards. Hopefully this is not needed.

#12 Lyqyd

    Lua Liquidator

  • Moderators
  • 8,465 posts

Posted 15 September 2015 - 06:37 PM

Or you could call coroutine.status after resuming the coroutine with its matching event. If the coroutine is dead, you know it worked. If it is suspended, put the event and the coroutine in a couple of "mismatches" tables. When you've received every event, resume each of the coroutines in the mismatches table with each of the mismatched events.

If you can find a case where they arrive out-of-order, of course.

#13 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 15 September 2015 - 10:56 PM

I started out by putting in a check to ensure that the ID returned by a given event always exceeded that of the previous, and to error if it didn't. This check passed over a couple of million events. In the case of WorldPorter, if a single event DOES arrive out of order, this'll be "indicated" by the whole script hanging at that point. The processData thread won't carry on until it has valid data for each co-ord, received in the correct order.

The problem is that a protection mechanism, such as a buffer, would slow execution. Even using Lyqyd's coroutine status idea requires me to allow my coroutines to frequently die and be rebuilt. So for the moment, given that it appears unneeded, I'll leave such code out.

I can't directly observe what's going on in the back-end when ComputerCraft is gathering task_complete events for our code to process, but it looks like each command runs to completion before the next can start. That is to say, they run asynchronously to our scripts, but not to each other. This is the behaviour I'd been hoping for.

It may also be worth noting that I've never had a commands.getBlockInfo() call return incorrectly since I first started using it (it'll even load / generate chunks, if it needs to). I've had other commands API functions fail to generate task_complete events (which is a whole other can of worms), but they still at least perform their work to the extent that's possible.

#14 Lyqyd

    Lua Liquidator

  • Moderators
  • 8,465 posts

Posted 15 September 2015 - 10:58 PM

Ah, I hadn't picked up that each coroutine in a given round of execution is performing multiple calls over its lifetime. Whoops.

#15 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 15 September 2015 - 11:19 PM

That'd be pretty easy to change, though, if it came to it.




