Sample_weight causes strange classifier performance issues

Are there any restrictions on what sample_weight can be? For example, does it matter whether the weights fall in the range [0, 0.1] versus [0, 100], assuming the relative weight of each sample instance stays the same? I see strange behavior when the sample weights are small: the trainer basically learns nothing and classifies everything as False. If I multiply the weights by 100, it seems to work much better. A minimal sketch of the kind of comparison I mean is below.
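This sketch assumes a scikit-learn LogisticRegression purely as an example; the actual estimator and data in my setup are different, and the synthetic dataset here is just a placeholder. The idea is that with very small weights, the data-fit term of the regularized loss shrinks relative to the penalty term, so the model can collapse to predicting the majority class:

```python
# Hypothetical reproduction sketch (scikit-learn assumed; classifier/data are placeholders).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced toy dataset: ~70% negatives, ~30% positives.
X, y = make_classification(n_samples=1000, weights=[0.7, 0.3], random_state=0)

rng = np.random.default_rng(0)
small_weights = rng.uniform(0.0, 0.1, size=len(y))   # weights in [0, 0.1]
scaled_weights = small_weights * 100                  # same relative values, 100x larger

for name, w in [("small", small_weights), ("scaled 100x", scaled_weights)]:
    clf = LogisticRegression(C=1.0).fit(X, y, sample_weight=w)
    n_pos = int((clf.predict(X) == 1).sum())
    print(f"{name}: predicted positives = {n_pos}")
```

With the small weights, the L2 regularization (controlled by C) dominates and the coefficients are pushed toward zero, so nearly everything gets classified as the majority (negative) class; with the 100x-scaled weights, the fit term matters again. I am not sure this is exactly what is happening in my case, which is why I am asking whether the absolute scale of sample_weight is supposed to matter.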

I also found another thread reporting similar strangeness. Does anyone have any idea what is going on? A related question: can sample_weight leak information into the trainer?