
Encoding representations #64

Open
hkmztrk opened this issue May 17, 2017 · 5 comments

Comments

@hkmztrk

hkmztrk commented May 17, 2017

Hello,

I'm pretty new to autoencoders, and I know we can utilize them for unsupervised learning. Is it possible to use this model to create representations (i.e., encodings) for a set of SMILES?

If so, I guess I would first have to preprocess my data set, then use sample.py?

Thanks!
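A minimal sketch of the preprocessing step being asked about, assuming the usual setup for SMILES autoencoders: strings are padded to a fixed length and one-hot encoded over a character set before being fed to the model. The character set, padding length, and function names here are illustrative, not this repository's actual values.

```python
import numpy as np

# Hypothetical preprocessing: one-hot encode SMILES strings into
# fixed-size matrices, the input format a SMILES autoencoder expects.
SMILES = ["CCO", "c1ccccc1", "CC(=O)O"]
CHARSET = sorted(set("".join(SMILES)) | {" "})  # " " used for padding
MAX_LEN = 12

def one_hot(smiles, charset=CHARSET, max_len=MAX_LEN):
    idx = {c: i for i, c in enumerate(charset)}
    out = np.zeros((len(smiles), max_len, len(charset)), dtype=np.float32)
    for row, s in enumerate(smiles):
        # pad with spaces on the right, truncate if too long
        for col, ch in enumerate(s.ljust(max_len)[:max_len]):
            out[row, col, idx[ch]] = 1.0
    return out

X = one_hot(SMILES)
print(X.shape)  # (3, 12, 8)
```

Passing the resulting array through the trained encoder (rather than the sampling script) would then yield the latent representations.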

@pechersky
Collaborator

pechersky commented May 17, 2017 via email

@hkmztrk
Author

hkmztrk commented May 17, 2017

Yes, it was that paper that led me here. Thanks!

@liamnaka

Last time I checked, the VAE encodings were pretty substantial in size, so if you're trying to learn from a large set of SMILES it might be more feasible to generate them on the fly.

@hkmztrk
Author

hkmztrk commented May 25, 2017

Hello, thanks for your suggestion. Sorry for asking, but how do we generate them on the fly? Aren't we supposed to train the model first? How do we do that?
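One way to read "on the fly": yes, the model is trained first, but the latent vectors for a large set of SMILES need not be computed and stored all at once; a generator can encode each minibatch as it is requested. A minimal sketch follows, where `encode` is a stand-in (a fixed random projection) for the trained VAE's encoder, and the dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 56
INPUT_DIM = 120 * 35  # e.g. 120 characters x 35-symbol charset, flattened
W = rng.standard_normal((INPUT_DIM, LATENT_DIM)).astype(np.float32)

def encode(batch):
    """Placeholder for the trained encoder, e.g. encoder.predict(batch)."""
    return batch.reshape(len(batch), -1) @ W

def latent_batches(X, batch_size=32):
    """Yield latent vectors batch by batch, never materializing them all."""
    for start in range(0, len(X), batch_size):
        yield encode(X[start:start + batch_size])

# 100 one-hot-shaped inputs, encoded lazily 32 at a time
X = rng.random((100, 120, 35)).astype(np.float32)
total = sum(len(z) for z in latent_batches(X))
print(total)  # 100
```

The point of the generator is memory: only one batch of latent vectors exists at a time, which matters when the encoded set would otherwise be substantial in size.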

@pechersky
Collaborator

pechersky commented May 25, 2017 via email
