
Hyperparameters for the results reported in the paper #8

@xqlin98

Dear author,

I am trying to replicate the results reported in your original paper. However, some parameters are not clearly specified there. Specifically, I have the following questions:

  • How many iterations do you run to get the results in Table 2 of your paper?
  • How many initial random data points are drawn before the BO iterations start? Specifically, what is the value of N_INIT?
  • In the paper, you mention that for "the number of tokens in soft prompts, we search for the best value among {3, 5, 10} based on the validation set performance". Did you tune the number of tokens separately for each dataset, or did you use the same value (e.g., 5, as in your code) for all datasets?
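To make the questions above concrete, here is a minimal sketch of how I currently understand the setup: N_INIT random configurations drawn before the optimization loop, a fixed number of BO iterations afterwards, and an outer search over the soft-prompt length on the validation set. Everything here is an assumption on my side, not your code: `validation_score` is a hypothetical stand-in for training and evaluating a soft prompt, and the proposal step is plain random sampling rather than a real acquisition function.

```python
import random

# Hypothetical stand-in for "train a soft prompt of this length at this
# point and return validation performance". Deterministic per input so the
# skeleton is reproducible; NOT the actual objective from the paper.
def validation_score(num_tokens, point):
    rng = random.Random(hash((num_tokens, tuple(point))))
    return rng.random()

N_INIT = 5               # initial random points before BO starts (question 2)
N_ITERS = 20             # number of BO iterations (question 1)
TOKEN_GRID = [3, 5, 10]  # soft-prompt lengths searched per the paper (question 3)

def run_search(num_tokens, dim=4, seed=0):
    """N_INIT random draws, then N_ITERS sequential proposals.

    The proposal here is uniform random sampling as a placeholder; in the
    actual method this would be a BO acquisition step.
    """
    rng = random.Random(seed)
    best = float("-inf")
    for _ in range(N_INIT + N_ITERS):
        point = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        best = max(best, validation_score(num_tokens, point))
    return best

# Outer search over the number of soft-prompt tokens on the validation set.
best_tokens = max(TOKEN_GRID, key=run_search)
```

If this structure matches your setup, concrete values for N_INIT, N_ITERS, and the per-dataset token counts would be enough for me to reproduce the runs.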

It would be extremely helpful if you could post the best hyperparameters for each dataset here, so that I can reproduce all the results in the paper. Thank you in advance for your help!
