
How to understand output to generate crystal graphs without training model #21

vmeschke opened this issue Sep 29, 2020 · 6 comments


@vmeschke

Good morning!

I was wondering if there is an easy way to generate the crystal graphs for visualization with this code. While I would like to train a model later, I currently just want to visualize my dataset, and I'm finding it particularly hard with this code.

Thank you, and wonderful project! Everything runs smoothly with essentially no errors.

Vanessa

@sgbaird

sgbaird commented Dec 18, 2020

@txie-93 I think this is a related question on our part, but we are looking to extract an intermediate layer from CGCNN. Any suggestions on how to go about this?

@txie-93
Owner

txie-93 commented Jan 3, 2021

@vmeschke and @sgbaird Sorry for my slow reply. In case you haven't solved the problem, I feel the easiest way is to return the feature vector of the intermediate layer alongside `out`. For example, you may return both `out` and `crys_fea`.

cgcnn/cgcnn/model.py, lines 152 to 166 in d612a69:

```python
atom_fea = self.embedding(atom_fea)
for conv_func in self.convs:
    atom_fea = conv_func(atom_fea, nbr_fea, nbr_fea_idx)
crys_fea = self.pooling(atom_fea, crystal_atom_idx)
crys_fea = self.conv_to_fc(self.conv_to_fc_softplus(crys_fea))
crys_fea = self.conv_to_fc_softplus(crys_fea)
if self.classification:
    crys_fea = self.dropout(crys_fea)
if hasattr(self, 'fcs') and hasattr(self, 'softpluses'):
    for fc, softplus in zip(self.fcs, self.softpluses):
        crys_fea = softplus(fc(crys_fea))
out = self.fc_out(crys_fea)
if self.classification:
    out = self.logsoftmax(out)
return out
```
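If it helps, the suggested pattern can be sketched with a toy model. `ToyNet` below is a stand-in invented for illustration, not the real `CrystalGraphConvNet`; only the two-value `return out, crys_fea` mirrors the actual change:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for CrystalGraphConvNet: forward() returns both the
# final prediction and the intermediate 128-dim feature vector.
class ToyNet(nn.Module):
    def __init__(self, in_dim=16, fea_dim=128, out_dim=1):
        super().__init__()
        self.conv_to_fc = nn.Linear(in_dim, fea_dim)
        self.fc_out = nn.Linear(fea_dim, out_dim)

    def forward(self, x):
        crys_fea = F.softplus(self.conv_to_fc(x))
        out = self.fc_out(crys_fea)
        return out, crys_fea  # second value exposes the feature vector

model = ToyNet()
out, crys_fea = model(torch.randn(4, 16))
print(out.shape, crys_fea.shape)  # torch.Size([4, 1]) torch.Size([4, 128])
```

Every caller of the model then has to unpack two values instead of one, which is why `main.py` needs the matching edits discussed below.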

@sgbaird

sgbaird commented Jan 7, 2021

Hi @txie-93,

Thank you for your response! By including `crys_fea` as a second output of `CrystalGraphConvNet.forward()` in model.py, I also need to update `train()` and `validate()` in main.py

cgcnn/main.py, line 251 in d612a69:

```python
output = model(*input_var)
```

cgcnn/main.py, line 350 in d612a69:

```python
output = model(*input_var)
```

to be `output, crys_fea = model(*input_var)` to avoid errors.
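The unpack-and-collect pattern can be sketched end to end with placeholder pieces. The toy model, the `loader` list, and the `all_feas` accumulator are assumptions for illustration; only the `output, crys_fea = model(...)` unpacking mirrors the actual edit to `train()`/`validate()`:

```python
import torch
import torch.nn as nn

# Toy stand-in: any model whose forward() returns (out, crys_fea).
fc = nn.Linear(16, 128)
head = nn.Linear(128, 1)

def model(x):
    fea = torch.relu(fc(x))
    return head(fea), fea

loader = [torch.randn(4, 16) for _ in range(3)]  # 3 batches of 4 crystals

all_feas = []
for x in loader:
    output, crys_fea = model(x)         # unpack the second return value
    all_feas.append(crys_fea.detach())  # drop the autograd graph
features = torch.cat(all_feas, dim=0)   # one row per crystal
print(features.shape)  # torch.Size([12, 128])
```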

However, I'm a bit stuck at this point.
Each of these lines is nested inside two for loops:

cgcnn/main.py, lines 173 to 178 in d612a69:

```python
for epoch in range(args.start_epoch, args.epochs):
    # train for one epoch
    train(train_loader, model, criterion, optimizer, epoch, normalizer)
    # evaluate on validation set
    mae_error = validate(val_loader, model, criterion, normalizer)
```

train():

cgcnn/main.py, line 226 in d612a69:

```python
for i, (input, target, _) in enumerate(train_loader):
```

validate():

cgcnn/main.py, line 325 in d612a69:

```python
for i, (input, target, batch_cif_ids) in enumerate(val_loader):
```

Using Spyder's debugger, `crys_fea` has the following properties for the sample-regression data: type `Tensor`, size `torch.Size([1, 128])`. Does the innermost `for` statement loop through each training (or validation) crystal structure?

Sterling

P.S. I've been looking into how to extract the intermediate layers using PyTorch hooks, but haven't been able to figure it out, hence the above approach. I'm relatively new to Python (the bulk of my experience is in MATLAB and Mathematica).
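For reference, a forward hook can capture a layer's output without editing `forward()` at all. This sketch hooks the first layer of a toy `nn.Sequential`; which CGCNN layer to hook (e.g. `conv_to_fc`) is an assumption about what is wanted:

```python
import torch
import torch.nn as nn

# Toy network standing in for CGCNN; net[0] plays the role of the
# layer whose output we want (an assumption, not the real model).
net = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 1))

captured = {}

def hook(module, inputs, output):
    # Called after every forward pass of the hooked module.
    captured['fea'] = output.detach()

handle = net[0].register_forward_hook(hook)
net(torch.randn(4, 16))   # normal forward pass; hook fires as a side effect
handle.remove()           # detach the hook when done
print(captured['fea'].shape)  # torch.Size([4, 128])
```

The advantage over modifying `forward()` is that `main.py` never has to change how it calls the model.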

@mliu7051

@txie-93
Owner

txie-93 commented Jan 8, 2021

> Each of these lines is nested inside two for loops:

One iteration of the outer loop trains the model on the entire dataset. One iteration of the inner loop trains the model on one batch.

> Using Spyder and debugging, crys_fea has the following properties for the sample-regression data: Type: Tensor, Size: torch.Size([1, 128]).

I think this is because you are using a batch size of 1?

I am not exactly sure what you want. Sometimes people want to visualize the learned feature vectors for each crystal. These feature vectors are usually only meaningful after training. You may want to get all the feature vectors for the testing data?

@sgbaird

sgbaird commented Jan 8, 2021

We're interested in extracting feature vectors for all the input crystals and trying to feed these into a different model. We were thinking that the first iteration might have the most "correspondence" to the original crystal structures, but we were also considering using the last iteration (i.e. I think before applying softmax?).

Since we're not exactly sure whether we want the first, last, or an intermediate iteration's feature vectors (we will probably come to know through testing), I think what I want is to be able to extract crys_fea for each crystal (both training and test) at a given iteration. If you have a way of accessing the final iteration's feature vectors, we could start with that.

> These feature vectors are usually only meaningful after the training.

Could you elaborate? (Also, when you say "after the training," do you mean after all iterations of the outer loop?)

@Myles091

Myles091 commented Oct 7, 2023

Hi @txie-93,
I want to visualize the learned feature vectors for each crystal, and I want to get all the feature vectors for the testing data. I need these for my other training.
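One way to export per-crystal features for downstream use, sketched with placeholder ids and a toy layer: in main.py the ids would come from `batch_cif_ids`, and the one-row-per-crystal CSV layout is an assumption:

```python
import csv
import torch
import torch.nn as nn

# Toy layer standing in for the trained model's feature extractor.
fc = nn.Linear(16, 128)

# Placeholder cif ids; in main.py these come from the data loader.
cif_ids = ['mp-1', 'mp-2', 'mp-3']
feas = torch.relu(fc(torch.randn(3, 16))).detach()  # (3, 128) features

with open('test_features.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for cid, fea in zip(cif_ids, feas):
        writer.writerow([cid] + fea.tolist())  # 1 id + 128 feature values
```

The resulting file can then be loaded by any other training pipeline without touching CGCNN again.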
