[1.7+] "task_complete" events fail to fire


5 replies to this topic

#1 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 20 February 2015 - 02:13 PM

CC 1.7 (through to at least 1.74), MC 1.7.10, Forge 10.13.1.1219.

If more than ~256 commands have been requested at a time via commands.execAsync(), their respective "task_complete" events stop firing. Among other things, this makes it difficult to track the number of commands still pending execution (a figure which may lead to a "task limit exceeded" error if it's allowed to grow unchecked, or to my other script outright "stalling" for reasons I haven't nailed down yet, though I certainly hope they don't involve commands.execAsync() yielding!).

Use the following to verify (this code fails due to the event queue being flooded; see two posts down for an example that fails without flooding it):

Spoiler: (script not preserved in this copy)
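Below is a minimal sketch of the sort of test described, assuming a Command Computer, with "say" as a harmless stand-in command and maxCommands as the variable referred to below:

-- Minimal sketch of the flood test (the original spoilered script
-- wasn't preserved). Run on a Command Computer.
local maxCommands = 1000  -- drop this to ~256 or below and it completes

-- Queue every command up front, without yielding in between:
for i = 1, maxCommands do
	commands.execAsync("say " .. i)
end

-- Now try to collect one "task_complete" event per command:
for i = 1, maxCommands do
	os.pullEvent("task_complete")
end

print("All " .. maxCommands .. " commands resolved")  -- never reached at this setting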

Tweaking maxCommands down to around 256 or lower, on the other hand, works, and you shouldn't have to go much higher than that to trigger the stall. These symptoms also exist under 1.66pr3 (I've not had access to a 1.66 build older than that).

TLDR version:

Executing too many commands at once floods the event queue, no matter how fast you try to pull the resulting events.

But even if you don't go anywhere near flooding the queue, events have a random chance of never making it in while commands are resolving (this affects timer events, "task_complete" events, and presumably others too).

Edited by Bomb Bloke, 06 November 2015 - 10:58 PM.


#2 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 02 April 2015 - 11:58 AM

Bumpity; still a thing under 1.74pr16.

#3 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 20 April 2015 - 11:44 PM

Another slightly tweaked example. The above one demonstrates that attempting to run more than ~256 commands at once is liable to fail instantly. This one demonstrates that even if you stay below that limit, it may well randomly fail anyway:

Spoiler: (script not preserved in this copy)
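Again, a sketch of the sort of test described, with commandLimit as the knob mentioned below (the batch count is arbitrary):

-- Sketch of the second test: each batch stays well below the event
-- cap, yet the script can still stall at random.
local commandLimit = 100

for batch = 1, 50 do
	for i = 1, commandLimit do
		commands.execAsync("say hi")
	end

	-- Wait for the whole batch; if even one "task_complete" event is
	-- dropped, this inner loop never finishes and the script stalls:
	for i = 1, commandLimit do
		os.pullEvent("task_complete")
	end

	print("Batch " .. batch .. " resolved")
end

print("Completed without stalling")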

As "commandLimit" gets lower, the odds of successfully executing to completion get better. For example, at 30, you might have to run the script a few times to get it to stall.

Edit: Just to clarify, it seems that OTHER event types also have a chance of failing to fire while commands.execAsync() calls are resolving. For example, just a few dozen commands are enough to randomly prevent timer events being generated by os.startTimer().
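A sketch of that symptom, again assuming a Command Computer:

-- Fire a few dozen commands, then rely on a timer while they resolve;
-- sometimes the timer event never arrives.
for i = 1, 50 do
	commands.execAsync("say " .. i)
end

local id = os.startTimer(2)
repeat
	local _, arg = os.pullEvent("timer")
until arg == id

print("Timer fired")  -- occasionally never reached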

Edited by Bomb Bloke, 12 June 2015 - 01:08 AM.


#4 MKlegoman357

  • Members
  • 1,170 posts
  • Location: Kaunas, Lithuania

Posted 20 May 2015 - 08:35 PM

I ran into this missing-event problem a few months ago, and only now realized that's what it was... The problem is that the event queue can only hold up to 256 events; past that, any new events are simply not added to the queue.
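That cap is easy to demonstrate without the commands API at all; a quick sketch:

-- Queue 300 events without yielding, then count how many come back.
for i = 1, 300 do
	os.queueEvent("ping", i)
end

local received = 0
local timer = os.startTimer(1)  -- bail out once the queue runs dry
while true do
	local event, arg = os.pullEvent()
	if event == "ping" then
		received = received + 1
	elseif event == "timer" and arg == timer then
		break
	end
end

print(received .. " of 300 events survived")  -- roughly 256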

#5 theoriginalbit

    Semi-Professional ComputerCrafter

  • Moderators
  • 7,332 posts
  • Location: Australia

Posted 20 May 2015 - 11:38 PM

I can confirm this. Following the path that queueing an event takes led me to this:
LinkedBlockingQueue queue = (LinkedBlockingQueue) m_computerTasks.get(queueObject);
if (queue == null) {
  m_computerTasks.put(queueObject, queue = new LinkedBlockingQueue(256));
}


ArrayList var4 = m_computerTasksPending;
synchronized(m_computerTasksPending) {
  queue.offer(_task);
  // ...
}

// ...
Notice the 256. And unlike normal collections in Java, a LinkedBlockingQueue will not resize when given extra elements; see its offer method:
public boolean offer(E e) {
  // ...
  if (count.get() < capacity) {
    enqueue(node);
    c = count.getAndIncrement();
    if (c + 1 < capacity)
      notFull.signal();
  }
  // ...
}
Given that the LinkedBlockingQueue JavaDoc states:
// The optional capacity bound constructor argument serves as a way to prevent excessive queue expansion.
it seems to me that this decision, while annoying, is intentional.

Edited by theoriginalbit, 20 May 2015 - 11:39 PM.


#6 Bomb Bloke

    Hobbyist Coder

  • Moderators
  • 7,099 posts
  • Location: Tasmania (AU)

Posted 21 May 2015 - 10:38 AM

Righto, thought as much. Presumably the commands generate events at a much faster pace than the script can pull them ("all at once"), resulting in the total stall.

That in itself isn't so bad - by keeping the total "unpulled event" count below 256 at all times, everything can run at an "acceptable" pace - but even if you don't flood the queue it bugs out anyway, per my second code example!
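The sort of throttling I mean would look something like this (a hypothetical sketch; throttledExec is a made-up name, and per the above it still isn't bulletproof):

local cap, pending = 200, 0  -- stay well under the 256-slot queue

-- Block until we're under the cap before firing another command,
-- pulling completions as we go:
local function throttledExec(command)
	while pending >= cap do
		os.pullEvent("task_complete")
		pending = pending - 1
	end
	commands.execAsync(command)
	pending = pending + 1
end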

Here's another bit of code I wrote later on to try to deal with it in skyTerm (not included in the uploaded version, because, well, it doesn't work):

while curCommands > 0 do
	local myTimer, myBackupTimer = os.startTimer(1), os.startTimer(1)
	local myEvent = {os.pullEvent()}

	if (myEvent[1] == "timer" and (myEvent[2] == myTimer or myEvent[2] == myBackupTimer)) or myEvent[1] == "task_complete" then
		curCommands = curCommands - 1
		os.cancelTimer(myTimer)
		os.cancelTimer(myBackupTimer)
	end
end

"curCommands" was incremented every time commands.execAsync() was called, and this loop was put in place at the end of each "line write" to ensure that all commands resolved before the script continued. Typically, when hitting it, the counter'd be somewhere below 35 (the number of characters I had the "terminal" writing per line). You'd think that even if every "task_complete" event failed to fire (low odds in itself), it would still resolve in a bit over half a minute (thanks to the timers) - but it still manages to stall *inside this loop* ... usually within a couple of minutes of constant "line writes"! The addition of a second timer only served to mitigate things somewhat. :(




