Crossfire Mailing List Archive

Programming thoughts.




 Been merging the crossfire 0.7 source in with the crossfire 0.90 source.
It's been coming along OK.

 One thing I noticed was in the define.h file.  In that file, there
are about 40 lines of bitmasks, with macros to apply them to the correct
flag in the object structure.

 One problem I see with the present method is that many of the flags have
the same value; it is the assign/unassign or test macros that make sure
the correct bit gets used.
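
 For illustration only (these names and values are made up, not
necessarily what define.h actually contains), the present scheme amounts
to something like this, where two flags share a value and only the macros
keep them pointed at the right word:

#define F_ALIVE  0x0001                 /* lives in flags[0] */
#define F_SLEEP  0x0001                 /* same value, lives in flags[1] */
#define SET_ALIVE(xyz)  ((xyz)->flags[0] |= F_ALIVE)
#define SET_SLEEP(xyz)  ((xyz)->flags[1] |= F_SLEEP)

Use the wrong macro and you silently set the wrong bit in the wrong word.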

 I was thinking of changing all those values to integers (from about 0 to 40,
all unique), and then having generic functions (set_bitmask(obj *, bit),
clear_bitmask(obj *, bit), and test_bitmask(obj *, bit)) that operate on
the correct bit.  For example, if the bit is from 0-31, they use flags[0];
if from 32-63, flags[1]; and so on.  Shift operations select the
appropriate bit within the specific integer.

 This would seem to clean up the code a bit, and make the chances of
things being applied to the incorrect bit virtually nil.  If those functions
are done as macros or inline functions, the cost over the present method
should be virtually nothing, assuming the compiler does decent optimization.

 I say this because the bit position to change/look at would be a constant
when the program is compiled, so I would think/hope that the compiler would
evaluate the division and modulo at compile time rather than at run time.

 For example, something like:
#define SET_BIT(xyz, p) \
        ((xyz)->flags[(p) / 32] |= (1 << ((p) % 32)))

p would still be the defined value (like F_ALIVE).

 As such, something like SET_ALIVE(player) would become SET_BIT(player, F_ALIVE).
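
 Rounding out the set (a minimal sketch along the same lines; the
CLEAR_BIT and TEST_BIT names are just suggestions matching the
clear_bitmask/test_bitmask operations above, not anything in the current
source):

#define CLEAR_BIT(xyz, p) \
        ((xyz)->flags[(p) / 32] &= ~(1 << ((p) % 32)))  /* turn bit p off */
#define TEST_BIT(xyz, p) \
        (((xyz)->flags[(p) / 32] >> ((p) % 32)) & 1)    /* 1 if bit p set */

The per-flag macros would collapse the same way: CLEAR_ALIVE(player)
becomes CLEAR_BIT(player, F_ALIVE), and so on.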
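
 To make the compile-time argument concrete, suppose (purely for
illustration) that F_ALIVE were defined as 35.  Then SET_BIT(player, F_ALIVE)
expands and folds as:

        player->flags[35 / 32] |= (1 << (35 % 32));
        player->flags[1]       |= 0x8;   /* what the compiler should emit */

which is the same single or-operation the hand-written macros do today.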

Any thoughts on why this should not be done?  It seems to me that this
method would also make it easier to add additional bits (just increase
the size of the flags array if needed, and add a define for the new position).

   Mark Wedel
master@rahul.net