Question

I have a model that works well for fewer than 100 agents (20 * 20 world size), but one of my model requirements is now to test it with different group sizes, so I need more than 100 agents (and a 40 * 40 world). I have tried to optimize each function individually, but I am afraid there is nothing left I can change without breaking the model requirements.

In the current version I am using links to keep track of agent relationships, and there is no limit on how many links each agent can have, so the number of links grows really fast (more than 2000 links). Each link's relationship value needs to be updated after every interaction.

A little more detail on the use of links in my model:

  • Agents create or update a link's value and frequency with each other when they have a social interaction.

  • Many agents create or update their link value and frequency with one agent if they observe an unusual social activity by that agent (different groups of these activities have been defined, and different actions are called depending on the type of activity).

  • Agents observe co-located agents' link values when they are on the same patch, and depending on those values they might have different kinds of social interaction.

  • Agents in a certain age range will find a mate based on their link value and other criteria.

And maybe a few more that I can't recall right now. The point is that link values are read many times in each tick. The agent lifetime is 4000 ticks and the simulation length is 40000 ticks; for 100 agents it takes 10-15 minutes to run the whole simulation, but for 200 agents it takes 10 minutes to finish only 2000 ticks!
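Roughly, the first kind of interaction above amounts to something like the sketch below; the link variables value and frequency and the procedure name are just illustrative, not my actual code:

links-own [ value frequency ]

to interact-with [ other-agent ]   ;; illustrative procedure name, turtle context
  if out-link-to other-agent = nobody [
    create-link-to other-agent [ set value 0  set frequency 0 ]
  ]
  ask out-link-to other-agent [
    set value value + 1            ;; whatever the actual update rule is
    set frequency frequency + 1
  ]
end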

Since I have a big problem testing my model with more agents, I was thinking of eliminating all links and using a global table keyed by each pair of agents, holding their relationship value and frequency. But since using links is really easy, I think I would have difficulties setting and getting values.
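What I have in mind is roughly the following, using the table extension and keying on the pair of who numbers (again, just a sketch of the idea; the procedure and variable names are made up):

extensions [ table ]
globals [ relationships ]

to setup-relationships
  set relationships table:make
end

to-report pair-key [ a b ]             ;; order-independent key for a pair of agents
  report sort (list ([who] of a) ([who] of b))
end

to record-interaction [ a b amount ]   ;; made-up update rule
  let key pair-key a b
  let old table:get-or-default relationships key [0 0]   ;; stored as [value frequency]
  table:put relationships key (list ((first old) + amount) ((last old) + 1))
end

But every place that currently uses out-link-neighbors or asks my-out-links would then have to be rewritten around table lookups, which is where I expect the difficulty.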

Does anyone have any idea of a better way to do this, or of ways to make NetLogo models more scalable?

UPDATE: I checked again and again, and I am sure my programming style is what makes my program slow. I found 2 cases of accidental asks. One of them is really silly: I was asking all links to make themselves invisible, which is not even necessary! I could just call hide-link once, when the link is created, without ever asking again. :D
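In other words, something along these lines (paraphrased, not my exact code):

;; before: asking every link, over and over
ask links [ hide-link ]

;; after: hide each link once, when it is created
create-link-to other-agent [ hide-link ]   ;; other-agent = whoever I'm linking to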

Additionally, I have eliminated the cases where I used ask or with over out-link-neighbors, links, and out-links. For example, there was code that checked for any other agent who has a relationship in common with the calling agent. My initial version was really slow, and I replaced it with the following code, which works much faster:

let Agents_I_Met out-link-neighbors
;; is there another agent on this patch who shares an out-link neighbor with me?
if any? other agents-here with [ any? out-link-neighbors with [ member? self Agents_I_Met ] ]
[
  let Other_Agent one-of other agents-here with [ any? out-link-neighbors with [ member? self Agents_I_Met ] ]
  let CommonAgent one-of Agents_I_Met with [ member? Other_Agent in-link-neighbors ]
  ...

But there are still many cases where I need to reach other agents, so I think it is OK to have agents ask other agents in-radius X!

Finally, my system now works much better, in a more reasonable time, for 400 agents and 15000-20000 links :)

But I am sure there is still room for improvement. Thanks Seth for your helpful answer :)


Solution

No, replacing the links with a global table is not a good idea. Links in NetLogo are implemented efficiently; whatever you substitute for them is very likely to be slower rather than faster.

You seem eager for this to be NetLogo's fault, but it probably isn't; the problem is almost certainly in your own code, and it would have arisen in whatever programming language you were working in.

In most NetLogo simulations, the runtime increases linearly with the number of agents; that is, doubling the number of agents doubles the runtime. From the numbers you give, it sounds like you have written code that takes time proportional to the square of the number of agents. (It might be exponential in the number of agents, but it's much more common for people to accidentally write code that takes polynomial time.)

There are two ways this might have happened:

  • The algorithm you are implementing is inherently one that takes more than linear time to compute.
  • The algorithm you are implementing could be coded so that it executes in linear time, but you accidentally coded it in a way that takes more than linear time.

You need to ask yourself, what is it about my code that makes it not run in linear time?

You need to look at every loop in your code that loops over all of the agents in the model and ask yourself, is there anything in the body of this loop which can't be executed in constant time?

Primitives that loop over all the agents include ask and with. So the most common mistake is to write something like:

ask turtles [
  ... turtles with [ ... ] ...
]

Without knowing anything else about the code, I can look at this and know that the model it occurs in will be a slow model, because the above code requires quadratic time to execute. The body of the with is evaluated for every possible pair of turtles, so e.g. if there are 100 turtles, the code takes 10,000 steps to execute. If there are twice as many turtles, it takes 40,000 steps: doubling the number of turtles quadruples the runtime rather than merely doubling it.
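When the condition in the inner with does not depend on the asking turtle, one common fix is to evaluate the with once, outside the loop, and reuse the resulting agentset. A generic sketch (not code from the question):

;; slow: the with is re-evaluated for every turtle, so the whole ask takes quadratic time
ask turtles [
  if any? turtles with [ color = red ] [
    face one-of turtles with [ color = red ]
  ]
]

;; faster: evaluate the with once, then reuse the resulting agentset
let reds turtles with [ color = red ]
if any? reds [
  ask turtles [ face one-of reds ]
]

If the condition does depend on the asking turtle (for example, it uses myself), this hoisting isn't possible, and you instead have to shrink the inner agentset, e.g. by restricting it to turtles-here or to turtles in-radius some small distance.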

We already saw once, at Use undirected links instead of Directed links, that code of this form was the reason your model was slow.

So, you need to find where it is in your model that you have nested loops like this, where the size of both loops is proportional to the total number of agents. (The two loops might not necessarily be ask and with, and they might not even be in the same procedure.)

And when you find a place where you're doing this, you need to figure out whether you're doing a computation that inherently takes this long, or whether you've accidentally coded it in an unnecessarily slow form. If the latter, fix it. If the former, you'll need to rethink your requirements.

And it sounds like it's your requirements that are at fault. You write "I am using links to keep track of agent relationships, and there is no limit on how many links each agent can have". So the number of links each agent has grows with the number of turtles, which means the total number of links grows with the square of the number of turtles, right? That means that you don't even need nested loops in order to have written a program with quadratic runtime; if you ever do ask links [ ... ], even once, you're dead, since the number of links is already proportional to the square of the number of turtles.

As an example of how to fix it, you might consider putting a fixed cap on the number of relationships each turtle can have. That would make it possible again for your model to run in linear time.
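For instance, capped link creation might look something like the sketch below, where the weakest existing relationship is dropped to make room. Here max-relationships, value, and the procedure name are assumptions for illustration, not part of the model as described:

links-own [ value ]
globals [ max-relationships ]

to link-with-cap [ other-agent ]   ;; turtle procedure, illustrative only
  if out-link-to other-agent = nobody [
    ;; already at the cap? forget the weakest relationship first
    if count my-out-links >= max-relationships [
      ask min-one-of my-out-links [ value ] [ die ]
    ]
    create-link-to other-agent [ set value 0 hide-link ]
  ]
end

With a cap in place, ask my-out-links [ ... ] is bounded per turtle, so asking over all links is once again linear in the number of turtles.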

That doesn't mean you shouldn't also scrutinize your code for accidental problems though; it's possible that you have both problems, essential and accidental slowness. You may have accidentally done ask links [ ... ask turtles ... or ask turtles [ ... ask links ... in which case your runtime will increase as the cube of the number of turtles, or if you've done ask links [ ... ask links ..., as the fourth power.

None of this advice is especially specific to NetLogo. If it's easy to write slow programs in NetLogo, it's only because it's easy to write all kinds of programs in NetLogo — including slow ones. In particular, NetLogo makes it very easy to write loops. A short snippet of code like turtles with [color = red] is a loop, but it's so short and easy to write that it doesn't necessarily look or feel like a loop, so it's easy to miss the performance implications.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow