Input Pipeline rework #245

Merged: 14 commits merged into dev from otf_nl on Mar 26, 2024
Conversation

M-R-Schaefer (Contributor)

I have reworked the data pipeline. The neighbor list (NL) is no longer precomputed for the entire dataset and stored in memory.
This used up a lot of RAM, and the conversion of unpadded NLs to tf ragged tensors was terribly slow.
This rework drastically speeds up startup times and reduces memory consumption without compromising performance.
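For illustration, the on-the-fly approach can be sketched roughly as follows (hypothetical names; `neighbor_list` and `sample_generator` are not the actual functions from this PR). The neighbor list is built per structure as samples are drawn, so nothing is precomputed or held in memory for the whole dataset:

```python
import numpy as np

def neighbor_list(positions, r_max):
    """Brute-force neighbor list: all index pairs (i, j), i != j,
    with distance below the cutoff r_max."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    mask = (dist < r_max) & ~np.eye(len(positions), dtype=bool)
    return np.argwhere(mask)  # shape (n_pairs, 2)

def sample_generator(dataset, r_max):
    """Yield one structure at a time; the NL is computed on the fly
    instead of being precomputed for the entire dataset."""
    for positions in dataset:
        yield positions, neighbor_list(positions, r_max)
```

A generator like this can then back a `tf.data.Dataset.from_generator` pipeline, which avoids the slow up-front conversion of unpadded NLs to ragged tensors.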

@M-R-Schaefer M-R-Schaefer added the enhancement New feature or request label Mar 25, 2024
Comment on lines 38 to 60
# def initialize_dataset(
#     config,
#     atoms_list,
#     read_labels: bool = True,
#     calc_stats: bool = True,
# ):
#     if calc_stats and not read_labels:
#         raise ValueError(
#             "Cannot calculate scale/shift parameters without reading labels."
#         )
#     inputs = process_inputs(
#         atoms_list,
#         r_max=config.model.r_max,
#         disable_pbar=config.progress_bar.disable_nl_pbar,
#         pos_unit=config.data.pos_unit,
#     )
#     labels = atoms_to_labels(
#         atoms_list,
#         additional_properties_info=config.data.additional_properties_info,
#         read_labels=read_labels,
#         pos_unit=config.data.pos_unit,
#         energy_unit=config.data.energy_unit,
#     )
Contributor

Are these the placeholders for implementing arbitrary labels?

Contributor Author

This was part of the old data pipeline. I forgot to remove the comment.

self.n_jit_steps = 1
if pre_shuffle:
    shuffle(atoms)
self.sample_atoms = atoms[0]
Contributor

I think in general we should be more consistent with atom, atoms, and lists of atoms. atoms should refer to one structure. Should we make an issue?

Contributor Author

yes, I agree

def enqueue(self, num_elements):
    for _ in range(num_elements):
        data = self.prepare_item(self.count)
Contributor

Maybe prepare_data? In my opinion, item is somewhat misleading.
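
For context, the enqueue pattern under discussion can be sketched like this (a hypothetical, simplified version using the suggested prepare_data name; the real class holds more state and feeds an actual prefetch queue):

```python
from collections import deque

class PrefetchPipeline:
    """Minimal sketch of a queue-based input pipeline.
    prepare_data mirrors the suggested rename of prepare_item."""

    def __init__(self, samples):
        self.samples = samples
        self.count = 0          # index of the next sample to prepare
        self.buffer = deque()   # stands in for the prefetch queue

    def prepare_data(self, index):
        # Stand-in for per-sample preprocessing (e.g. padding, NL build).
        return self.samples[index]

    def enqueue(self, num_elements):
        # Never enqueue past the end of the dataset.
        space_left = len(self.samples) - self.count
        for _ in range(min(num_elements, space_left)):
            self.buffer.append(self.prepare_data(self.count))
            self.count += 1
```
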

config.data.shift_options,
config.data.scale_options,
)
# TODO IMPL DELETE FILES
Contributor

Which files should be deleted here?

Contributor Author

A leftover reminder comment from the .cache-based implementation.

@Tetracarbonylnickel (Contributor) left a comment

If everything is addressed, the PR can be merged.

@M-R-Schaefer M-R-Schaefer merged commit 10470b1 into dev Mar 26, 2024
3 checks passed
@M-R-Schaefer M-R-Schaefer deleted the otf_nl branch August 7, 2024 06:29