Tsenzouken wrote:
Hrm. It seems like you believe that strategy = precision prediction.
I would not go so far as to say precision, but one of the key maxims of war, "know thy enemy," assumes that the knowledge is meaningful, on account of the behavior being to some degree predictable.
I would not want 100% predictability, since I find that rather dry, but 'predictable to within the scope of long-range plans' is important. Even big, game-changing upsets are cool from time to time, since they force the player to rethink everything. But having something as ubiquitous as research finishing times be overly jumpy would only cause me stress and headache. (The key word being overly; some jumpiness is nice, of course.)
Tsenzouken wrote:
In my experience, that is not always necessarily the case. A scalar probability would still give you a general idea of when you would get a theory breakthrough, but not pin it down. I do agree that the numbers probably need to be changed, but there is a specific reason that I allowed for significant overruns in theoretical research.
If theoretical research takes a long time, it prolongs the "ages" of technology. A research-focused empire will probably get to the applications and refinements faster, but shouldn't be able to just run through all of the theoretical techs and leap from Laser Mk1 to Quantum Phase Nanocannon Mk60 in one jump. That is why in the diagram the guaranteed-discovery point is at the 150% RPs invested to cost ratio.
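To make the quoted guaranteed-discovery idea concrete, here is a toy sketch. The diagram is not reproduced here, so the linear ramp from a 0% chance at 100% invested up to a guaranteed breakthrough at 150% invested is my assumption, as are the cost and RP figures:

```python
import random

def breakthrough_chance(invested_ratio, start=1.0, guaranteed=1.5):
    """Chance of a theory breakthrough this turn, assumed to ramp
    linearly from 0 at 100% of cost invested to 1 at the 150%
    guaranteed-discovery point. The ramp shape is illustrative."""
    if invested_ratio <= start:
        return 0.0
    if invested_ratio >= guaranteed:
        return 1.0
    return (invested_ratio - start) / (guaranteed - start)

def turns_to_breakthrough(rp_per_turn, cost, rng=random.random):
    """Simulate turns until the breakthrough roll succeeds."""
    invested, turns = 0.0, 0
    while True:
        invested += rp_per_turn
        turns += 1
        if rng() < breakthrough_chance(invested / cost):
            return turns

random.seed(1)
samples = [turns_to_breakthrough(10, 100) for _ in range(1000)]
print(min(samples), max(samples))  # spread of finishing turns
```

With these placeholder numbers the finishing turn is never pinned down exactly, but it always lands in a narrow band between the first possible roll and the 150% guarantee, which is the "general idea of when" behavior described above.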
I definitely concur that jumping from Laser Mk1 to anything Mk60 is ludicrous, but that is far, far removed from anything I was even remotely suggesting.
I would restate for the record that, under an exponential system, the 150% marker would be unreasonably high, since the user can only direct focus for the final percentages of the total research cost anyway; the vast majority is 'filler' that establishes the correct time between techs. Unless you defined it as 150% of the amount accrued since the user was able to direct RP. But this is a mere technical detail.
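To put a number on why the 150%-of-total-cost marker sits oddly with an exponential system, here is a toy calculation. The decay rate and the 80% direction threshold are illustrative assumptions, not proposed values:

```python
def natural_progress(cost, rate=0.1, threshold=0.8):
    """Count turns of undirected 'filler' progress under exponential
    decay: each turn closes a fixed fraction of the remaining gap.
    Returns (turns, progress) when the direction threshold is hit.
    rate and threshold are placeholder numbers."""
    progress, turns = 0.0, 0
    while progress < threshold * cost:
        progress += rate * (cost - progress)
        turns += 1
    return turns, progress

turns, progress = natural_progress(100.0)
directable = 100.0 - progress  # the slice the player can actually steer
print(turns, round(progress, 2), round(directable, 2))
```

With these numbers roughly four fifths of the cost accrues on its own, so the player-directable slice is under 20 cost units; measuring a 150% threshold against the full 100-unit cost would dwarf that slice, whereas measuring it against the post-threshold amount keeps it in proportion.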
Tsenzouken wrote:
The exponential percentage method seems like a more complicated way of requiring a minimum turn length on research. Granted, probably a more elegant way of requiring minimum turns--but still such a method. Though, come to think of it, the numbers could be manipulated such that your chance to make a breakthrough began at 66.67% of the way through the research, but you could start researching applications as soon as you got 33.3% into the theory, using the curve from my earlier graph as a basis for RP factoring. The end result would be a theory which takes a long time to complete, and a use for raw RP output to be channeled into other things at the same time.
I would very much be interested in a simpler method for requiring minimum turns before applying RP that allows all the dynamic behaviors I have outlined as being possible under a simple exponential decay system. I have yet to find one.
But that aside, I would put the can-direct-RP level around the 80% completion point (earlier for refinement techs, later for theory techs). I would then have the probability of self-completion start no earlier than 95%, or even 100%.
Having the threshold for direction so close to completion keeps the 'between techs' period as flexible as possible, since it has very few other game effects; it allows the maximal room for balancing for the minimal coding/intellectual effort, and minimizes some of the effects of non-linear systems that some find undesirable.
Also, the question of where the probability of completion first becomes non-zero is somewhat academic, considering that the 'zero' point and the '100%' point are both arbitrary. Somewhere between investing no RP and investing lots of RP, the probability of finishing a tech next turn goes from pure zero to some non-zero value. If you define the 'complete' point as a 'complete enough' point, then it makes the most intrinsic sense to the user that after reaching 'complete enough' you have a chance of finishing the tech, and before that you don't. Crossing the 100% line ought to have some distinct effect, at least in my mind, and I'm sure I'm not alone. That can be either finishing the tech or gaining the possibility of finishing it, but it ought to be somewhat important either way. (Otherwise, why does it exist?)
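As a toy illustration of the scheme above, one turn of research might look like this. The natural rate, the directed RP amount, and the per-turn finish chance after crossing 100% are all placeholder numbers, not proposals:

```python
import random

def turn_step(progress, cost, directed_rp, natural_rate=0.05,
              direct_threshold=0.8, finish_chance=0.5, rng=random.random):
    """One research turn: natural exponential 'filler' progress always
    applies; directed RP only counts once past the 80% direction
    threshold; crossing the 100% 'complete enough' line unlocks a
    per-turn chance to actually finish. All rates are illustrative."""
    progress += natural_rate * (cost - progress)   # exponential filler
    if progress >= direct_threshold * cost:
        progress += directed_rp                    # player-directed RP
    finished = progress >= cost and rng() < finish_chance
    return progress, finished

random.seed(7)
progress, turn, finished = 0.0, 0, False
while not finished and turn < 200:
    progress, finished = turn_step(progress, 100.0, directed_rp=5.0)
    turn += 1
print(turn)
```

The point of the structure is that crossing 100% has a distinct effect (the finish roll becomes possible), while everything before the 80% threshold is pure pacing filler the player cannot influence.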
Tsenzouken wrote:
I also like the idea of an inefficiency factor built into researching, which things like Semi-sentient Databases would correct as Learning research progressed. That way not ALL research techs have to be +X max research infrastructure meter. Just a thought.
I'm afraid I do not understand this point. Although the most common effect may be +X max meter, +X% to the natural research rate would also be common, at least with an exponential system.
Could you clarify for me? Thank you.
Tsenzouken wrote:
P.S. I also like 'weak' pre-reqs. Trying to figure out an easy way to implement them, more on that later.
ok
Best wishes, thank you all,
Robbie Price.