Goldilocks problems contrast with simpler, one-sided problems, where the only goal is to "not have too little" (or to "not have too much"): one extreme must be avoided, but the other extreme is acceptable.
An example of the simpler type of problem is keeping enough water on hand to survive. Above a certain threshold of supply you'll be healthy, and below it you'll get sick or die.
This is a simpler problem because you can just stockpile water and you'll be fine. Going well above the threshold causes no (or relatively few) problems, so you stock up. You're not aiming for a small region of solutions; you're aiming for a nearly infinite one.
In the case of drinking water, the "Goldilocks range" is quite wide: if you threw a dart to choose an amount of water to drink, you'd most likely be fine. But it's nonetheless a Goldilocks problem, because both extremes (dehydration at one end, fatal water intoxication at the other) will kill you.
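For what it's worth, the dart intuition checks out numerically even under made-up bounds. The sketch below assumes a survivable daily intake of roughly 0.5 to 10 liters and darts thrown uniformly over 0 to 15 liters; both ranges are invented for illustration, not medical claims.

```python
# A back-of-the-envelope check on the dart-throwing intuition. The numbers
# (survivable intake of 0.5 to 10 liters, darts landing between 0 and 15
# liters) are made-up illustrative bounds.
import random

SAFE_LOW, SAFE_HIGH = 0.5, 10.0   # assumed survivable liters per day
DART_MAX = 15.0                   # assumed range the dart can land in

random.seed(0)
trials = 100_000
hits = sum(SAFE_LOW <= random.uniform(0, DART_MAX) <= SAFE_HIGH
           for _ in range(trials))
print(f"dart lands in the Goldilocks range {hits / trials:.0%} of the time")
```

Under those assumed bounds, a random dart is safe roughly 63% of the time: wide, but still bounded on both sides.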
I also use "fishtailing" to describe problems where there is a common etiology behind those who overcompensate and those who undercompensate. For example, in 12-step groups that meet to discuss problem eating, overeaters often meet in the same group as undereaters, because both groups are thought to have similar problems and similar solutions. The task is to stop all compulsive behavior related to food: neither compulsively eating when already full, nor compulsively avoiding food when very hungry. There may be common reasons why someone resorts to compulsive behavior in either direction.
A clear example of this is prisoner behavior when deciding whether to break out of jail. There is no middle ground in a jailbreak. If you are half-hearted about breaking out, the jailers will discover your plan before you carry it out (through failed attempts, or because you take longer than the minimum time needed to execute the plan). Once your plans are discovered, the jailers will greatly increase their scrutiny of you, and your future plans are likely to fail. As a result, prisoners must either avoid trying to break out entirely, or dedicate all of their resources to succeeding on the first attempt.
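To make that concrete, here's a minimal toy model of the payoff structure. The functional forms and numbers (success chance rising steeply with commitment, escape worth V = 10, capture costing C = 8) are purely illustrative assumptions, not part of the original argument.

```python
# A toy expected-value model of the "no middle ground" claim above. All
# numbers and functional forms are illustrative assumptions.

V, C = 10.0, 8.0                         # assumed value of escape / cost of capture

def expected_value(effort):
    """Expected payoff of an escape attempt at a given effort level in [0, 1]."""
    p_success = effort ** 4              # only near-total commitment succeeds
    p_caught = effort * (1 - p_success)  # partial attempts leave evidence
    return p_success * V - p_caught * C

for e in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"effort {e:.2f}: expected payoff {expected_value(e):+.2f}")
```

Printed out, the expected payoff dips below zero for every moderate effort level and is only clearly positive at full commitment: exactly the valley between the two viable strategies described above.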
This has been seen in game theory and in population simulations related to tit-for-tat strategies. One simulation I read about (damn, I can't find the reference right now) looked at whether individuals in a population were, in general, liars or truth-tellers. It found that once the share of liars in the population dropped below about 10%, there was suddenly a large penalty for lying (lying went from being a cultural norm to being relatively rare), and this large social penalty caused most remaining liars to become truth-tellers.
So a population on the whole tends to be made up of either liars or truth-tellers. There's a region of instability between the two, from which societies quickly move toward being predominantly liars or predominantly truth-tellers.
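That tipping behavior is easy to reproduce with simple replicator-style dynamics. In the sketch below, the payoff gap and the 10% threshold are assumptions chosen to echo the simulation described above, not taken from it (since I can't locate the reference).

```python
# A minimal replicator-dynamics sketch of the lying/truth-telling tipping
# point. The payoff gap and the 10% threshold are illustrative assumptions.

def step(f, rate=0.5):
    """One update of the liar fraction f.

    Liars out-earn truth-tellers only while lying is common (f above the
    assumed 10% tipping point); below that, the social penalty makes
    lying a losing strategy.
    """
    liar_advantage = f - 0.10          # assumed payoff gap: positive iff f > 10%
    return f + rate * f * (1 - f) * liar_advantage

for f0 in (0.08, 0.12):                # just below / just above the tipping point
    f = f0
    for _ in range(500):
        f = step(f)
    print(f"start with {f0:.0%} liars -> settle near {f:.0%} liars")
```

Starting just below the threshold, the liars die out; starting just above it, lying takes over the whole population. The interior fixed point at 10% is unstable in both directions, which is the "region of instability" in action.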