Classifier predict numerical precision issue with large raw_score #6405

Open
drblarg opened this issue Apr 3, 2024 · 4 comments
drblarg commented Apr 3, 2024

Description

I have a trained model with a binary objective using n_estimators=1000. The model's performance (AUC) is quite good. I need the raw probabilities in order to rank predictions for selection. However, the probabilities returned by predict_proba or predict include a very large number of values that are exactly 0 or exactly 1, and the distribution is an odd bowl shape.

When I use raw_score=True, I get scores from -11k to +135k, with no large group squashed to the min or max, and the expected distribution. Applying a simple sigmoid to these raw scores reproduces the non-raw scores. This clearly shows that the numerical precision is insufficient to distinguish the very low and very high values, so they get flattened to 0 and 1 respectively.

I believe the raw_score values should be normalized in some way first to avoid this problem. I have an older model built with v2 of LightGBM that does not have this issue (with nearly identical training data and parameters). Perhaps the old version averaged, rather than summed, the random forest raw scores before applying the sigmoid function, to avoid a dependence on the number of trees? This seems like the right approach.
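The saturation described above can be reproduced with plain float64 arithmetic, independent of LightGBM. This sketch shows that the logistic sigmoid becomes exactly 1.0 once the raw score exceeds roughly 37, because 1 + exp(-x) rounds to 1.0 as soon as exp(-x) drops below 2**-53 (about 1.1e-16):

```python
import math

def sigmoid(x):
    """Numerically safe logistic sigmoid."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)  # exp underflows to 0.0 for very negative x, no error
    return z / (1.0 + z)

# In float64, 1 + exp(-x) rounds to exactly 1.0 once exp(-x) < 2**-53,
# which happens near x = 37; every larger raw score then maps to 1.0.
print(sigmoid(36.0) < 1.0)       # True: still distinguishable
print(sigmoid(37.0) == 1.0)      # True: saturated
print(sigmoid(135000.0) == 1.0)  # True: same value as x = 37
```

So any raw score above ~37 (or below about -745, where exp underflows to 0.0) loses all ranking information after the sigmoid.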

Environment info

LightGBM version or commit hash: 4.3.0
Running on python 3.11 in AWS (sagemaker)

@jameslamb
Collaborator

Thanks for using LightGBM.

Are you able to share some minimal code showing precisely what you mean? I'm unsure how to interpret some of these statements like "a very large number with value 0 or value 1".


drblarg commented Apr 3, 2024

I cannot share much in the way of specifics, but here is the workflow:

# X = features, y = known outcome

from sklearn.pipeline import Pipeline
import lightgbm

model_pipeline = Pipeline(
    steps=[
        # Pipeline requires (name, estimator) tuples; "clf" is just a label
        ("clf", lightgbm.LGBMClassifier(
            objective="binary",
            boosting="rf",
            n_estimators=1000,
            # etc., mostly default values
        )),
    ]
)

model_pipeline = model_pipeline.fit(X, y)

scores = model_pipeline.predict_proba(X)[:,1]

The scores are distributed from about 1e-5 to 1.0 in a bowl shape (high population at the min and max), with a large number of them having a value of exactly 1.0 (loss of ranking information).

If instead I look at:

scores_raw = model_pipeline.predict_proba(X, raw_score=True)

Then scores_raw is distributed from about -11000 to +136000, with a shape more closely resembling a decaying exponential and no repeated values at the maximum score (no loss of ranking information). I can apply the basic sigmoid function to scores_raw to reproduce scores, which illustrates the numerical precision limit at the upper end. If the scores_raw distribution were first scaled down to something close to 1, the sigmoid would not run into numerical precision limitations, and the score ranking could again be used as intended.
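Since the sigmoid is strictly increasing, it never changes the order of the scores, so a practical workaround is to rank on the raw scores directly. A small sketch with made-up values in the range reported above (hypothetical numbers, not from the actual model):

```python
import math

def sigmoid(x):
    """Numerically safe logistic sigmoid."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

# Hypothetical raw scores spanning the range described above
raw = [-11000.0, 50.0, 40.0, 136000.0]

# The sigmoid collapses everything above ~37 to exactly 1.0,
# so the top three scores become indistinguishable ties
probs = [sigmoid(s) for s in raw]
print(probs)  # [0.0, 1.0, 1.0, 1.0] -- ranking lost

# The sigmoid is strictly monotone, so sorting by raw score is
# equivalent to sorting by the exact (unsaturated) probabilities
order = sorted(range(len(raw)), key=lambda i: raw[i], reverse=True)
print(order)  # [3, 1, 2, 0] -- full ranking preserved
```

This sidesteps the precision limit without any rescaling, though it only helps when ranks (not calibrated probabilities) are needed.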

As I mentioned, a previous version of lightgbm did not behave in the current way, avoiding this problem.

@jameslamb
Collaborator

Ok, so to clarify:

  • you are not actually using Random Forest mode? (boosting="rf")
  • you are using the built-in binary loss function (objective="binary")?


drblarg commented Apr 3, 2024

Apologies, yes, I am using boosting="rf"; I have edited my previous comment to include that. I am also using the built-in binary loss function.

@drblarg drblarg closed this as completed Apr 12, 2024
@drblarg drblarg reopened this Apr 12, 2024