
RegistryBundle can't be imported #2460

Closed
kulcsarm opened this issue May 15, 2024 · 10 comments
Assignees
Labels
question Further information is requested

Comments


kulcsarm commented May 15, 2024

Hello, I am trying to use the Developer API of Ax. I would like to run the code on an HPC cluster that has a maximum allowed runtime, so I need to be able to save and load my progress. I looked at the tutorial gpei_hartmann_developer.py but ran into an issue when trying to run it. At first, importing RegistryBundle (line 296) raised an error saying the sqlalchemy module was missing. After installing that package, I get the following error:
[screenshot of the import error]
How can I fix this error?

Thank you for your help!

@kulcsarm kulcsarm changed the title from "RegistryBundle can't be imported and a question about arms" to "RegistryBundle can't be imported" May 15, 2024
@bernardbeckerman
Contributor

Hi @kulcsarm, Thanks for reporting this! While I look into this, is there anything preventing you from being able to migrate to the Service API (tutorial)? This API tends to be most robust and can generally serve a wide range of use-cases.

@bernardbeckerman bernardbeckerman self-assigned this May 15, 2024
@bernardbeckerman bernardbeckerman added the question Further information is requested label May 15, 2024
@kulcsarm
Author

Hello @bernardbeckerman, thank you for your quick reply! I decided to use the Developer API because my problem has non-linear constraints (which are very unlikely to be violated but would break the evaluation function) which the three built in constraint classes can't handle as far as I understand, so I filter out unfeasible points "by hand" during the optimization loop.


Fa20 commented May 15, 2024

@kulcsarm could you please explain how you handled such nonlinear constraints in Ax? Because I have the same problem.

@bernardbeckerman
Contributor

@kulcsarm Interesting! It sounds like you're doing rejection sampling, i.e., if the point suggested by Ax violates your nonlinear parameter constraint, you skip evaluation and just leave the point out, is that right? If so, I'd imagine that Ax might eventually keep re-suggesting the same constraint-violating points, since most Ax generation strategies try to sample yet-unsampled parts of the search space, which it doesn't know violate a constraint. I'm not sure if we have a good setup to handle this - let me loop in one of our researchers to help.

@kulcsarm
Author

@Fa20, just as @bernardbeckerman said, I'm using the rejection-and-resampling method. I have a function that, given the parameters of a generator run, evaluates the constraints and returns whether the point is feasible. This is the part of my code that handles it in the main loop (my batch size is 1 for now, so I only select a single point):

    # Generate extra candidates so infeasible ones can be rejected
    # (batch size is 1, so only a single point is kept).
    candidates = model.gen(batch_size * 3)

    # Fallback: rejection-sample a random feasible Sobol point.
    new_point = Models.SOBOL(search_space=mvg_search_space).gen(batch_size)
    while not is_feasible(new_point.arms[0].parameters):
        new_point = Models.SOBOL(search_space=mvg_search_space).gen(batch_size)

    # Keep the first feasible candidate, if any; otherwise the random point stands.
    for i in range(batch_size * 3):
        if is_feasible(candidates.arms[i].parameters):
            new_point.arms[0]._parameters = candidates.arms[i].parameters
            print(f"parameters accepted: {candidates.arms[i].parameters}")
            break
        print(f"parameters rejected: {candidates.arms[i].parameters}")

To avoid re-sampling the same point, if I don't find a feasible point among the top three candidates I just evaluate a random (but feasible) point instead, which hopefully changes the acquisition function enough not to re-suggest the same points. Any help is greatly appreciated, as I am quite new to this.
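For others like @Fa20 asking about the pattern: the reject-then-fall-back-to-random logic described above can be sketched as a self-contained function. Everything here is hypothetical scaffolding, not Ax code: a toy l2-norm check stands in for the real `is_feasible` constraint, and uniform random vectors stand in for model-generated candidates and Sobol fallback points.

```python
import numpy as np

rng = np.random.default_rng(42)


def is_feasible(point: np.ndarray) -> bool:
    """Toy nonlinear constraint: accept points inside an l2 ball of radius 1.25."""
    return float(np.sqrt((point ** 2).sum())) <= 1.25


def sample_random_point(dim: int = 6) -> np.ndarray:
    """Stand-in for a quasi-random (Sobol) fallback point on [0, 1]^dim."""
    return rng.random(dim)


def pick_next_point(candidates: list) -> np.ndarray:
    """Return the first feasible candidate; otherwise a random feasible point.

    Mirrors the loop above: scan the model's top candidates, and if all of
    them violate the constraint, fall back to rejection-sampled randomness
    so the model sees fresh data instead of re-suggesting the same points.
    """
    for cand in candidates:
        if is_feasible(cand):
            return cand
    # No feasible candidate: rejection-sample a random feasible fallback.
    point = sample_random_point()
    while not is_feasible(point):
        point = sample_random_point()
    return point


# Example: both candidates have l2norm > 1.25, so a random feasible
# fallback is returned instead.
bad_candidates = [np.full(6, 1.0), np.full(6, 0.9)]
chosen = pick_next_point(bad_candidates)
print(is_feasible(chosen))
```

The usual caveat of rejection sampling applies: if the feasible region is small, the fallback `while` loop can take many iterations, so a cheap `is_feasible` check matters.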

@bernardbeckerman
Contributor

@kulcsarm You can accomplish this in the Service API tutorial (link), by substituting the evaluate function for something like this:

import numpy as np
from ax.utils.measurement.synthetic_functions import hartmann6

def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    l2norm = np.sqrt((x**2).sum())
    if l2norm > 1.25:
        # Constraint violated: skip the (potentially expensive) objective
        # and return only the constraint metric.
        return {"l2norm": (l2norm, 0.0)}
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (l2norm, 0.0)}

This will accomplish two things:

  1. Computes the constraint upfront and skips computation of the objective in the case that the constraint is violated. This is useful in the case that the constraint is cheap to compute relative to the objective (which I would imagine is the case if your constraint is a cheap mathematical function of the parameters) since it lets you skip evaluating the objective in cases where you know it won't be useful due to a violated constraint.
  2. In the case that the constraint is violated, the constraint metric values are still returned so that the model can learn to avoid them in the future. This should help you skip the manual step of throwing out trials.

Note that it may take more trials than the 25 that the tutorial uses in order to produce the 12 complete trials that Ax needs to proceed to the modeling stage (I've been using 50 trials, which usually does the trick). Once the optimization proceeds to the model-based "BoTorch" phase, Ax will use its internal understanding of the space to try to avoid bad parameterizations.
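To make the trial-count point concrete, here is a rough, illustrative NumPy estimate of how often the l2norm <= 1.25 constraint is satisfied. It assumes uniform random points on [0, 1]^6 as a crude stand-in for the Sobol initialization, so the numbers are approximate, not a statement about Ax's actual behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Uniform points on [0, 1]^6 roughly stand in for Sobol initialization.
x = rng.random((n, 6))
l2norm = np.sqrt((x ** 2).sum(axis=1))
feasible_rate = (l2norm <= 1.25).mean()
print(f"feasible fraction: {feasible_rate:.2f}")
print(f"expected complete trials out of 25: {25 * feasible_rate:.1f}")
print(f"expected complete trials out of 50: {50 * feasible_rate:.1f}")
```

Under this assumption, only roughly a quarter of random points are feasible, so 25 trials would be expected to yield well under the 12 complete trials needed, while 50 trials comfortably clears it, which is consistent with the suggestion above.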

Let me know if this helps!

@esantorella
Contributor

> @kulcsarm Interesting! It sounds like you're doing rejection sampling, i.e., if the point suggested by Ax violates your nonlinear parameter constraint, you skip evaluation and just leave the point out, is that right? If so, I'd imagine that Ax might eventually keep re-suggesting the same constraint-violating points

Yeah, I'd expect so. If you want to keep doing this sort of manual rejection sampling, you could avoid that problem by attaching these trials with "Pending" status and never evaluating them. In the Developer API, that could be achieved by doing trial.run() but never trial.mark_completed(). Then in your evaluation code, you would need to do something to ensure that running such trials doesn't actually trigger the expensive evaluation. This is also doable and actually easier with the Service API, where you would run ax_client.get_next_trial() as usual but never do ax_client.complete_trial(...).

For what it's worth, Ax actually does support nonlinear constraints, but only with BoTorch models, which do Bayesian optimization. By default, Ax starts with a batch of quasi-random Sobol points, and that step doesn't support nonlinear constraints.

@bernardbeckerman
Contributor

@kulcsarm any luck with the above suggestions? I'm closing this out for now but please feel free to comment or reopen for further help!


Fa20 commented May 31, 2024

@bernardbeckerman another question, if possible: besides this problem with nonlinear constraints on the parameters, which can be solved as you explained, I have 3 other constraints on the objective functions rather than on the search parameters, which can only be checked after evaluating the objective function. Should I add these constraints to the evaluation function, to the outcome constraints, or what is the best way to handle this?

@kulcsarm
Author

> @kulcsarm any luck with the above suggestions? I'm closing this out for now but please feel free to comment or reopen for further help!

I'm sorry for not replying, I was away. The original issue still stands: I can't import the RegistryBundle class. I haven't tried the Service API yet. Do I understand correctly that to choose which model I want to use in the Service API, I have to set up a GenerationStrategy?

> For what it's worth, Ax actually does support nonlinear constraints, but only with BoTorch models, which do Bayesian optimization. By default, Ax starts with a batch of quasi-random Sobol points, and that step doesn't support nonlinear constraints.

@esantorella I am using a BoTorch model for the Bayesian part, as I only switched to Ax for its feature to save and continue the experiments. Can I pass the inequality_constraints inside the BoTorchModel() and if yes, how? I think my method should work fine for the initial points.

bernardbeckerman pushed a commit to bernardbeckerman/Ax that referenced this issue Jun 4, 2024
…nstraints

Summary:
This diff adds to tutorials a method to early-exit trial evaluation based on an easy-to-calculate constraint metric (e.g., a nonlinear function of parameter values) in order to mimic behavior of parameter constraints when parameter constraints cannot be used. See this discussion for details ([link](facebook#2460)).

Adds the following to the end of the `Special Cases` section of the Service API tutorial ([link](https://ax.dev/tutorials/gpei_hartmann_service.html#Special-Cases)):

{F1670348258}

Differential Revision: D58146231
facebook-github-bot pushed a commit that referenced this issue Jun 7, 2024
…nstraints (#2500)

Summary:
Pull Request resolved: #2500

This diff adds to tutorials a method to early-exit trial evaluation based on an easy-to-calculate constraint metric (e.g., a nonlinear function of parameter values) in order to mimic behavior of parameter constraints when parameter constraints cannot be used. See this discussion for details ([link](#2460)).

Adds the following to the end of the `Special Cases` section of the Service API tutorial ([link](https://ax.dev/tutorials/gpei_hartmann_service.html#Special-Cases)):

{F1670348258}

Reviewed By: saitcakmak

Differential Revision: D58146231

fbshipit-source-id: 1fc460a707a3acad45b200d88c17941d80910fc9