Most choices-and-consequences systems have two flaws.
1) It's often really obvious what consequences your choice will have, with no acknowledgement that you rarely have laser-precise knowledge of what the future holds. It also indulges the conceit that the player character is a demigod who can bend the game world to his will.
2) Save-and-reload exploits and walkthroughs mean that, even if a game is bold enough to make the choice difficult, meta-gaming can lead you to the 'right' answer.
So my solution is to make the consequences variable: choice A leads to a 60% chance of consequence 1, a 30% chance of consequence 2, a 10% chance of consequence 3, and so on. Instead of infallibly selecting a result, you are choosing between probability distributions, which are not revealed to you directly (although, obviously, they should usually resemble what the choice seems to predict: sensible choices should carry a higher chance of good results).
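To make that concrete, here's a minimal sketch of the mechanic (all choice and outcome names are made up for illustration; this is just a weighted random draw):

```python
import random

# Hypothetical mapping: each choice carries a hidden probability
# distribution over consequences, not a single fixed outcome.
CHOICE_OUTCOMES = {
    "A": [("ally_survives", 0.60), ("ally_wounded", 0.30), ("ally_dies", 0.10)],
}

def resolve(choice, rng=random):
    """Draw one consequence for a choice according to its hidden weights."""
    outcomes, weights = zip(*CHOICE_OUTCOMES[choice])
    return rng.choices(outcomes, weights=weights, k=1)[0]
```

The player only ever sees the drawn outcome, never the weights, so from their side it looks like ordinary (if unpredictable) cause and effect.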
This, I think, gives a more organic model of how decision-making really works: you can fluke out even if you fuck up, and conversely you really can do 'everything right' and still have things go horribly wrong. So what are the problems?
There might be principled objections to having exactly the same choice turn out differently on different occasions, but I don't see why: the player character shouldn't have omnipotence over the game world, so it's reasonable to say some things are simply outside his control. Suppose (following ME2) he assigns the personnel under his command in the optimal way to achieve the mission. That shouldn't guarantee they'll succeed. Maybe they just catch a stray bullet or whatever.
This system can still be meta-gamed, if only by pooling playthrough data or directly reverse-engineering the game. There isn't going to be a good way to obscure the mapping of choices onto consequences, so there's no defence against that sort of meta-gaming - but even knowing the odds won't guarantee the optimal result. The bigger risk is the player simply saving and reloading the choice until he gets the consequence he wants. The simplest way to stop this is to ensure the consequences hit a fair amount of play time after the choice is made.
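One way to implement that delay (my own guess at a mechanic, with invented names): roll the outcome the moment the choice is made and bake it into the save file, but only reveal it hours later. Reloading any save from after the choice can't reroll it, so save-scumming would mean replaying all the intervening content.

```python
import random

class GameState:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.pending = {}  # choice_id -> rolled outcome, hidden from the player

    def make_choice(self, choice_id, outcomes, weights):
        # Roll immediately; the result travels with the save file.
        result = self.rng.choices(outcomes, weights=weights, k=1)[0]
        self.pending[choice_id] = result

    def reveal(self, choice_id):
        # Called much later in play, when the consequence actually lands.
        return self.pending.pop(choice_id)
```

The alternative is rolling at reveal time, but then a player who saved just before the reveal can still reroll cheaply, which defeats the point.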
The final issue is implementation. It probably requires more work to develop an array of plausible consequences than the common 'this or that' decision. Moreover, it might be difficult to set limits on this: it seems unfair for the player to fail just because he kept getting unlucky, but it may be hard to balance that imperative against maintaining genuine risk.
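One crude way to cap bad luck without removing risk entirely (a sketch of my own, not a worked-out design): each unlucky result shifts some probability mass toward the good outcomes on the next roll, so a player can be unlucky but not unlucky forever.

```python
def adjust_weights(weights, bad_streak, boost=0.15):
    """Shift up to boost * bad_streak of probability mass from the worst
    outcome to the best one. weights are ordered best -> worst and sum to 1."""
    shift = min(weights[-1], boost * bad_streak)
    adjusted = list(weights)
    adjusted[0] += shift    # make the best outcome likelier
    adjusted[-1] -= shift   # drain the worst outcome
    return adjusted
```

The streak counter would reset whenever the player lands a good outcome, so risk stays real for anyone who hasn't just been hammered by the dice.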
Thoughts?