Real Time Crossfire Mailing List Archive

Re: CF: object structure layout.



Kjetil Torgrim Homme wrote:

> >    Another issue is that for the objects that do need to do
> >   something, the rest of the object structure is going to have to
> >   get loaded anyway.  So if you wanted to get more clever, you could
> >   have various run queues - ie, the 1 tick away run queue, the 2 tick
> >   away, 3 tick, .. up to a 5+ tick away queue.
> 
> Very interesting idea!  It needn't complicate the code much, either.
> This removes the need for a wakeup attribute, since it is implicit if
> you have enough queues. The penalty for having many queues is very low
> (it's just a pointer per queue), so that's OK.  The slowest object in
> Crossfire right now is the Skull generator with speed 0.0002.  That's
> 5000 ticks, though.  Hmm.  Does it make sense to have objects which
> only do something every 10 minutes?  The slow objects are currently
> generators and diseases.

 There is probably some upper limit on the number of queues needed.  At some
point, going beyond a certain number of queues would actually be slower than
having a single queue for long-wait-time objects.

 For example, if you had 500 queues for actions, the queues from 50 to 500 (or
maybe even sooner) are likely to be pretty sparse.  So you could have one queue
for 50+ tick objects, and then every 50 ticks go through that queue and either
shift its objects onto the other 50 queues or keep them where they are.


> Very true.  I don't have any clever ideas, there.  As long as people
> are running the server off NFS, there will be problems with loading
> times, no matter what optimisation we come up with.  Well, actually
> one very simple approach is to stop saving to disk and rely on the OS'
> paging/swapping instead.  If nothing else, this saves spending CPU on
> converting object data to/from ASCII.

 True.  But for the server to effectively use the OS paging for unused maps,
the unused objects must be disassociated from the lists the program is using
for the active maps.  So that sort of goes back to having the needed lists
referenced from the map structure, and not global lists.

> 
> >    I know in one of my mail messages a while back following up on
> >   David's idea of a more unified scale would be to only process
> >   objects in the vicinity of the player (so if you had a 50x50 map
> >   for example, only objects within 10 spaces of the player get
> >   processed).
> 
> My hunch is that the code for this will be complex.

 It is easy to do in O(p*m) time (p=players, m=other objects): go through all
the players, and then go through the map objects and see what is nearby.  But
if p gets pretty large, this could be pretty slow - slower than just updating
all objects on any map that has a player on it.


> >   1) Keep track of last tick a player was on any particular map.
> >   2) Have individual maps be settable for their swapout time and how
> >      long to still move monsters after the last player leaves.
> >   3) Only process objects for maps players are on or that fall into
> >      parameters set in #2 above.
> >
> >    There are advantages here:
> >   1) Some commonly used maps could have very long swap out times
> >      without adversely affecting server performance (only memory for
> >      the objects in ram).
> >   2) If the only references to objects on a map are within the map
> >      object itself (and not global lists), it should be easier to
> >      write threaded load/save routines since there would be less
> >      synchronization needed with other lists.
> >   3) Following up with #2 above, it would be quite conceivable for
> >      each map to have its own thread to handle the creatures on the
> >      map.
> 
> 3 is interesting, but be careful so that the design doesn't make
> teleport between maps impossible.

 Some synchronization between threads would be needed.  For example, in #3
above, is the player handled on the per-map thread or on some other thread?  If
on the per-map thread, you need some form of locking when someone does a shout,
since that goes to all players.

 To deal with moving between maps, you probably want an 'incoming object' list
in the map structure and do some locking on that (since objects are not likely
to move between maps very often, the locking you do on it is not likely to
cause thread blocking very often).  The destination map's thread looks at the
incoming-object queue once in a while and moves the objects in as needed.

 That method can work very well for unloaded maps - you allocate the basic map
structure, insert the incoming objects, and start the thread.  The thread then
loads the rest of the map, and when that is done, moves whatever is on the
incoming queue onto the map.

 Note that you probably still want to be able to swap maps out - so that if the
server crashes, you still have some map data, and some maps sort of need to be
swapped out to be useful (apartments and the like - you don't really want to
lose those; they could be a type of map with a very short swapout time).

Steven Fink wrote:
> Hmmm.... ok, so maybe I'm confused, but wouldn't 1..5 tick away queues
> require a total of 15 queues? Or for 1..5000, 12.5 million? (The
> every-tick queue, the every-other-tick queue that started at t0, the
> every-other-tick queue that started at t0+1) And how do you keep track
> of which 5 queues to look at each tick?

 Which queue is the "1 tick away" queue would rotate along.  For example, let's
have 6 queues - 1 to 5, and a 5+ queue.  We will letter them A-F for
simplicity.

 On tick 1, queue A is 1 tick away, B is 2 ticks away, etc.
 On tick 2, queue A is processed, queue B is now 1 tick away, C is 2 ticks
away, and A becomes the 5-ticks-away queue.
 On tick 3, process queue B; C is 1 tick away, etc.

 Every 5 ticks, you also go through queue F - stuff that will now run within
the next 5 ticks gets moved to queues A-E; stuff still more than 5 ticks away
stays on queue F.
-
[you can put yourself on the announcement list only or unsubscribe altogether
by sending an email stating your wishes to crossfire-request@ifi.uio.no]