
feat: let's have compressed keys as an option [WIP] #519

Draft · wants to merge 5 commits into main from make_experiment_with_compressed_keys

Conversation

bcm-at-zama
Collaborator

[WIP for now]

Let's check how it behaves; are there issues sometimes?

@bcm-at-zama bcm-at-zama requested a review from a team as a code owner February 16, 2024 16:39
@cla-bot cla-bot bot added the cla-signed label Feb 16, 2024
@bcm-at-zama bcm-at-zama marked this pull request as draft February 16, 2024 16:39
@bcm-at-zama
Collaborator Author

bcm-at-zama commented Feb 16, 2024

E.g., python -m pytest -xsvv "tests/sklearn/test_sklearn_models.py::test_separated_inference[True-6-fhe-XGBClassifier0]" --forcing_random_seed 17118411869559400649 --randomly-dont-reset-seed --randomly-seed=4004100345 fails, but it's not deterministic, and it's not the fault of our seeding system:

  assert array_allclose_and_same_shape(a, b, rtol, atol), error_message

E AssertionError: Not equal to tolerance rtol=0, atol=0.001
E a: [[0.06188408 0.93811592]]
E b: [[0.06611255 0.93388745]]
E
E assert False
E + where False = array_allclose_and_same_shape(array([[0.06188408, 0.93811592]]), array([[0.06611255, 0.93388745]]), 0, 0.001)
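For reference, the tolerance check behaves like the following sketch (a hypothetical re-implementation for illustration; Concrete ML's actual `array_allclose_and_same_shape` test helper may differ):

```python
import numpy as np

def array_allclose_and_same_shape(a, b, rtol=0, atol=0.001):
    """Return True when both arrays share a shape and are element-wise close."""
    a, b = np.asarray(a), np.asarray(b)
    return a.shape == b.shape and np.allclose(a, b, rtol=rtol, atol=atol)

# The two prediction vectors from the failing test run:
a = np.array([[0.06188408, 0.93811592]])
b = np.array([[0.06611255, 0.93388745]])

# The max absolute difference is ~0.0042, above the 0.001 tolerance,
# hence the AssertionError above.
print(array_allclose_and_same_shape(a, b))  # False
```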

@bcm-at-zama bcm-at-zama force-pushed the make_experiment_with_compressed_keys branch from d926f43 to d3eb96e Compare February 16, 2024 18:30
@BourgerieQuentin
Member

Please retry the experiments with the coming nightly (fixed in zama-ai/concrete#691).

For now, I'm making it the default, but we can change that before merging.
@bcm-at-zama bcm-at-zama force-pushed the make_experiment_with_compressed_keys branch from d3eb96e to 1c92647 Compare March 6, 2024 14:12
@bcm-at-zama
Collaborator Author

Rebasing + using the new CP (which comes tomorrow and has a fix for key compression).

@bcm-at-zama
Collaborator Author

Can't test for now, since concrete-python==2024.3.6 does not exist.

@bcm-at-zama
Collaborator Author

Trying to run the https://github.com/zama-ai/concrete-ml/actions/workflows/update_licenses.yaml workflow even though the Mac distribution is not there. I wonder if we need Mac for regular builds; I would say no.

@bcm-at-zama
Collaborator Author

The action fails because it tries to install CP for Mac.

@bcm-at-zama
Collaborator Author

At the same time, I have the impression that CP was not built for Linux either; let's wait.

@bcm-at-zama bcm-at-zama force-pushed the make_experiment_with_compressed_keys branch from 1c92647 to 553d995 Compare March 11, 2024 10:54
@bcm-at-zama bcm-at-zama force-pushed the make_experiment_with_compressed_keys branch from 553d995 to 0d332b4 Compare March 12, 2024 08:21
@bcm-at-zama
Collaborator Author

Let's launch with the new nightly.


Coverage passed ✅

Coverage details

---------- coverage: platform linux, python 3.8.18-final-0 -----------
Name    Stmts   Miss  Cover   Missing
-------------------------------------
TOTAL    7125      0   100%

52 files skipped due to complete coverage.

@bcm-at-zama
Collaborator Author

Tests are green now!

Collaborator

@andrei-stoian-zama andrei-stoian-zama left a comment


  • I think we should only set "compress_keys" through the Configuration, if that's possible
  • We should find out whether there is any runtime latency difference with/without compression
  • We should set the proper default based on the answer to the previous question

@@ -632,6 +632,7 @@ def compile(
global_p_error: Optional[float] = None,
verbose: bool = False,
inputs_encryption_status: Optional[Sequence[str]] = None,
compress_evaluation_keys: bool = True,
Collaborator


I would rather we set this through the configuration and keep only ML-related params here.

Collaborator Author


Then it would be awesome to have a single place/class for crypto-related configs: we would change just one place and it would apply to all the models we support (linear, tree, QNN, Torch). Maybe that's already the case!
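A hypothetical sketch of what such a shared crypto-configuration class could look like (names are invented for illustration and are not Concrete ML's actual API):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CryptoConfig:
    """Single place for crypto-related compilation settings, shared by all
    model families (linear, tree, QNN, Torch). Hypothetical sketch."""

    compress_evaluation_keys: bool = True
    p_error: Optional[float] = None
    global_p_error: Optional[float] = None

    def as_compile_kwargs(self) -> dict:
        """Expand to keyword arguments for a model's compile() call,
        dropping the settings that were left unset."""
        return {k: v for k, v in asdict(self).items() if v is not None}

config = CryptoConfig(p_error=0.01)
print(config.as_compile_kwargs())
# {'compress_evaluation_keys': True, 'p_error': 0.01}
```

With something like this, every model's compile path could take a single `CryptoConfig` instead of a growing list of crypto keywords.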

@BourgerieQuentin
Member

Also, there is runtime latency with compression, as you need to decompress on the server side.

You should have quicker key generation on the client side but longer evaluation time on the server side, as the keys are decompressed on demand.
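The general tradeoff can be illustrated with ordinary byte compression (zlib here as a stand-in; Concrete's key compression is a different, crypto-specific scheme): the client ships far fewer bytes, and the server pays a one-off decompression cost before it can use the key.

```python
import zlib

# Stand-in for a serialized evaluation key: highly redundant bytes compress well.
key_bytes = bytes(1000) * 100  # 100 kB of zeros

# Client side: compress before sending (smaller transfer, extra CPU).
compressed = zlib.compress(key_bytes, level=6)
print(f"original:   {len(key_bytes)} bytes")
print(f"compressed: {len(compressed)} bytes")

# Server side: decompress on demand before evaluation (extra latency).
restored = zlib.decompress(compressed)
assert restored == key_bytes  # lossless round trip
```

Whether the default should be on or off then depends on which side of that tradeoff matters more for typical deployments, which is exactly the open question in the review above.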
