Feature/scikit pr code #14
tutrie wants to merge 272 commits into diana-hep:master from tutrie:FEATURE/scikit_pr_code
Conversation
up to date
CUDA device agnostic implementation
fix minor argparse bug
…developer new prior features
discovered a bug with shared state in the result object: self.result.update was updating lists of result objects across different problems
we use the ExcursionResult to store data, so we don't need two copies
…d test case and new example
…xity Feature/refactor for reduced complexity
commented out unused sampler/latin.py and added notes for PES.py. fixed bug in test_full_simple_dimensions.py
added some boilerplate for sklearn excursionmodel
created a new model class and a notebook for basic tests. some issues with not converging; I think they existed in the old library too, but it could also be the newer sklearn package version
there is an issue with hnormal signature overloading in acquisition/utils.py. going to change the gpytorch PES implementation to fix this
no improvement, still doesn't work
had to decrease the size of the grid. also using Latin sampling of the grid. need to implement this for the learner overall
requires a jump start. when using builders, be careful about datatypes. need a more robust proxy system.
clean code base with only sklearn backend code. all examples run
last commit
@@ -1 +0,0 @@
- recursive-include testcases/data
why was this removed?
I was not confident this code was used anywhere, and I didn't know what it was being used for.
@@ -1,6 +1,14 @@
# `excursion` — Efficient Excursion Set Estimation
this diff is probably not intended
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel


def get_kernel(ndim, kernel_type):
this could probably be moved to sklearn_gp.py
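For illustration, a plausible shape for the `get_kernel` factory built from the kernels imported in the diff above; the `kernel_type` strings and hyperparameter defaults are assumptions, not the actual implementation:

```python
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel


def get_kernel(ndim, kernel_type):
    # Anisotropic RBF: one length scale per input dimension, scaled by a
    # constant amplitude; optionally add a white-noise term for noisy data.
    if kernel_type == "const_rbf":
        return ConstantKernel(1.0) * RBF(length_scale=[1.0] * ndim)
    if kernel_type == "rbf_white":
        return (ConstantKernel(1.0) * RBF(length_scale=[1.0] * ndim)
                + WhiteKernel(noise_level=1e-5))
    raise ValueError(f"Unknown kernel_type: {kernel_type!r}")
```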
from excursion.models import ExcursionModel


class _Optimizer(object):
(object) is probably not needed. why the underscore?
The leading underscore is the Python naming convention for marking a module as internal: it signals that the module is not part of the public API, and wildcard imports (`from package import *`) skip underscore-prefixed names. https://stackoverflow.com/questions/12192207/what-is-the-underscore-prefix-for-python-file-name
We had previously discussed the usage of abstract classes and interfaces in the file structure. Even though this is not necessary in Python, OOP convention and the style guide suggest it. https://www.python.org/dev/peps/pep-0008/#package-and-module-names
In terms of usefulness to the end user, it gives them a way to easily read and understand what the abstract class is, and then write their own version that is compatible with the rest of the excursion package.
excursion/optimizer/optimizer.py
Outdated
# Store acquisition function (must be a str originally, for now, if None,
# then base_model was None, else use _point_sampler):
self.acq_func = acq_func
# currently do not support any batches, or acq_func_kwargs. This is place holder
lots of commented out code here, would be good to clean up
def plot_confusion_matrix(confusion_matrix, pct_correct: int):
    fig, ax = plt.subplots(1, figsize=(5, 5))
    plt.title("Confusion Matrix, "+str(pct_correct)+"% Accuracy")
    # plt.xticks(tick_marks, c, rotation=45)
remove commented out code
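A cleaned-up version of the function might look like the sketch below, with the commented-out tick code dropped. Everything beyond the signature, the subplot call, and the title is an assumption, since the diff only shows those lines:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np


def plot_confusion_matrix(confusion_matrix, pct_correct: int):
    fig, ax = plt.subplots(1, figsize=(5, 5))
    ax.set_title("Confusion Matrix, " + str(pct_correct) + "% Accuracy")
    ax.imshow(confusion_matrix, cmap="Blues")
    # Annotate each cell with its count (assumed behavior, for illustration).
    for (i, j), v in np.ndenumerate(confusion_matrix):
        ax.text(j, i, str(v), ha="center", va="center")
    return fig, ax
```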
also changed the way device and dtype are stored in the optimizer object; they now live in a dictionary so they can be passed along easily as kwargs
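A minimal sketch of that dictionary-of-kwargs pattern; all names here are illustrative, and the stand-in constructor plays the role of a backend call such as a tensor factory:

```python
class Optimizer:
    def __init__(self, device="cpu", dtype="float64"):
        # Keep device/dtype together so they can be splatted as **kwargs.
        self.tensor_opts = {"device": device, "dtype": dtype}


def make_array(shape, device="cpu", dtype="float64"):
    # Stand-in for a backend constructor (e.g. torch.zeros(shape, **opts)).
    return {"shape": shape, "device": device, "dtype": dtype}


opt = Optimizer(device="cuda", dtype="float32")
arr = make_array((3, 3), **opt.tensor_opts)
```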
The current learner (wrapper class) exposes an ask-and-tell API with a scikit-learn backend. This merge request updates the package with a new learner whose ask-and-tell API is modeled on scikit-optimize, using the scikit-learn backend and implementing the same heuristic as the previous implementation. The goals are to support higher-dimensional excursion problems and to make it possible to add a gpytorch backend in the future.
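The ask-and-tell workflow summarized above can be illustrated with a toy learner in the style of scikit-optimize; the class and method names below are illustrative, not the excursion package's actual API:

```python
import random


class ToyLearner:
    """Minimal ask-and-tell learner: ask() proposes a point, tell() records it."""

    def __init__(self, bounds):
        self.bounds = bounds
        self.X, self.y = [], []

    def ask(self):
        # A real learner would optimize an acquisition function here;
        # this sketch just samples uniformly within the bounds.
        lo, hi = self.bounds
        return random.uniform(lo, hi)

    def tell(self, x, y):
        self.X.append(x)
        self.y.append(y)


def truth(x):
    return x * x - 1.0  # toy black box; excursion set = {x : truth(x) > 0}


learner = ToyLearner(bounds=(-2.0, 2.0))
for _ in range(10):
    x = learner.ask()          # learner proposes where to evaluate next
    learner.tell(x, truth(x))  # user reports the observed value
```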