
Run Without Distributed #26

Open
Maddy12 opened this issue Mar 25, 2022 · 3 comments

Comments

Maddy12 commented Mar 25, 2022

Hello, I am trying to run your code, but I keep running into issues with the distributed training. Is it possible to run without it?

ArrowLuo (Contributor) commented

Hi @Maddy12, what error do you get when you run with the distributed launch? It can run on just one GPU. Otherwise, I think you would have to modify the code into a non-distributed version.
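For readers with the same question: one common way to let such a script degrade gracefully to a non-distributed run is to key the setup off the LOCAL_RANK environment variable that torch.distributed.launch (or torchrun) sets for each worker. The sketch below is only illustrative and assumes nothing about this repository's actual structure; setup_model is a hypothetical helper.

import os

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_model(model):
    # LOCAL_RANK is set by torch.distributed.launch / torchrun; when it is
    # absent, fall back to a plain single-process (single-GPU or CPU) run.
    local_rank = int(os.environ.get("LOCAL_RANK", -1))
    if local_rank >= 0:
        torch.distributed.init_process_group(backend="nccl")
        torch.cuda.set_device(local_rank)
        model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    elif torch.cuda.is_available():
        model = model.cuda()
    return model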

Maddy12 (Author) commented Apr 11, 2022

I apologize for wasting your time. There was an error, but it was not in the distributed part: it is in metrics.py, in the function compute_metrics.

My hack is:

if isinstance(x, list):
    x = np.concatenate(x)
sx = np.sort(-x, axis=1)

I do not know if this is the expected behavior or if I am now implementing something incorrectly.

ArrowLuo (Contributor) commented Apr 12, 2022

Hi @Maddy12, what is the error? Can you paste it here? Or you can test x = np.concatenate(tuple(x), axis=0) as follows:

if isinstance(x, list):
    x = np.concatenate(tuple(x), axis=0)
sx = np.sort(-x, axis=1)
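For context, here is a minimal, self-contained sketch of how this guard might sit at the top of a compute_metrics-style function. The Recall@K and median-rank arithmetic below (ranking the diagonal match of a square similarity matrix) is a common retrieval-metrics pattern and is an assumption here, not code copied from the repository.

import numpy as np

def compute_metrics(x):
    # x may arrive as a list of per-batch similarity blocks rather than a
    # single (num_queries, num_candidates) array; merge it first.
    if isinstance(x, list):
        x = np.concatenate(tuple(x), axis=0)
    # Sort each row by descending similarity (negate, then ascending sort).
    sx = np.sort(-x, axis=1)
    # The correct match is assumed to sit on the diagonal; its rank is the
    # column where the sorted row equals the diagonal value.
    d = np.diag(-x)[:, np.newaxis]
    ind = np.where(sx - d == 0)[1]
    return {
        "R1": 100.0 * float(np.sum(ind == 0)) / len(ind),
        "R5": 100.0 * float(np.sum(ind < 5)) / len(ind),
        "R10": 100.0 * float(np.sum(ind < 10)) / len(ind),
        "MedianR": float(np.median(ind)) + 1.0,
    }

# Toy check: two 2x4 blocks whose "diagonal" entries are forced to rank first.
blocks = [np.random.rand(2, 4), np.random.rand(2, 4)]
for i in range(2):
    blocks[0][i, i] = 2.0
    blocks[1][i, i + 2] = 2.0
print(compute_metrics(blocks))  # expects R1 == 100.0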
