gt4sd.training_pipelines.pytorch_lightning.molformer.core module¶
Molformer training utilities.
Summary¶
Classes:
MolformerDataArguments – Data arguments related to Molformer trainer.
MolformerModelArguments – Model arguments related to Molformer trainer.
MolformerSavingArguments – Saving arguments related to Molformer trainer.
MolformerTrainingArguments – Training arguments related to Molformer trainer.
MolformerTrainingPipeline – Molformer training pipelines for crystals.
Reference¶
- class MolformerTrainingPipeline(**kwargs)[source]¶
Bases:
PyTorchLightningTrainingPipeline
Molformer training pipelines for crystals.
- get_data_and_model_modules(model_args, dataset_args, **kwargs)[source]¶
Get data and model modules for training.
- Parameters
  model_args (Dict[str, Union[float, str, int]]) – model arguments passed to the configuration.
  dataset_args (Dict[str, Union[float, str, int]]) – dataset arguments passed to the configuration.
- Return type
  Tuple[LightningDataModule, LightningModule]
- Returns
  the data and model modules.
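Both arguments are plain dictionaries. A minimal sketch of the expected shapes, with key names taken from the argument classes documented below (the commented-out pipeline call is illustrative, not part of this reference):

```python
from typing import Dict, Union

# Hypothetical argument dictionaries matching the documented
# Dict[str, Union[float, str, int]] signatures.
model_args: Dict[str, Union[float, str, int]] = {
    "type": "classification",  # training type
    "n_head": 8,               # number of attention heads
    "n_layer": 12,             # number of transformer layers
    "lr_start": 3e-4,          # initial learning rate
}
dataset_args: Dict[str, Union[float, str, int]] = {
    "batch_size": 512,
    "max_len": 100,            # max length of SMILES
    "dataset_name": "sol",
}

# Illustrative call (requires gt4sd to be installed):
# pipeline = MolformerTrainingPipeline()
# data_module, model_module = pipeline.get_data_and_model_modules(
#     model_args, dataset_args
# )
```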
- get_pretraining_modules(model_args, dataset_args)[source]¶
Get data and model modules for pretraining.
- Parameters
  model_args (Dict[str, Union[float, str, int]]) – model arguments passed to the configuration.
  dataset_args (Dict[str, Union[float, str, int]]) – dataset arguments passed to the configuration.
- Return type
  Tuple[LightningDataModule, LightningModule]
- Returns
  the data and model modules.
- get_classification_modules(model_args, dataset_args)[source]¶
Get data and model modules for classification.
- Parameters
  model_args (Dict[str, Union[float, str, int]]) – model arguments passed to the configuration.
  dataset_args (Dict[str, Union[float, str, int]]) – dataset arguments passed to the configuration.
- Return type
  Tuple[LightningDataModule, LightningModule]
- Returns
  the data and model modules.
- get_multitask_classification_modules(model_args, dataset_args)[source]¶
Get data and model modules for multitask classification.
- Parameters
  model_args (Dict[str, Union[float, str, int]]) – model arguments passed to the configuration.
  dataset_args (Dict[str, Union[float, str, int]]) – dataset arguments passed to the configuration.
- Return type
  Tuple[LightningDataModule, LightningModule]
- Returns
  the data and model modules.
- get_regression_modules(model_args, dataset_args)[source]¶
Get data and model modules for regression.
- Parameters
  model_args (Dict[str, Union[float, str, int]]) – model arguments passed to the configuration.
  dataset_args (Dict[str, Union[float, str, int]]) – dataset arguments passed to the configuration.
- Return type
  Tuple[LightningDataModule, LightningModule]
- Returns
  the data and model modules.
- __annotations__ = {}¶
- __doc__ = 'Molformer training pipelines for crystals.'¶
- __module__ = 'gt4sd.training_pipelines.pytorch_lightning.molformer.core'¶
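The `type` model argument plausibly selects which of the getters above is used. The mapping below is an illustrative sketch; the key strings are assumptions, not values confirmed by this reference:

```python
# Hypothetical mapping from the `type` model argument to the
# corresponding module getter documented above. The key strings
# are illustrative assumptions.
GETTER_BY_TYPE = {
    "pretraining": "get_pretraining_modules",
    "classification": "get_classification_modules",
    "multitask_classification": "get_multitask_classification_modules",
    "regression": "get_regression_modules",
}


def getter_name(training_type: str) -> str:
    """Return the getter method name for a given training type."""
    return GETTER_BY_TYPE.get(training_type, "get_data_and_model_modules")
```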
- class MolformerDataArguments(batch_size=512, data_path='', max_len=100, train_load=None, num_workers=1, dataset_name='sol', measure_name='measure', data_root='my_data_root', train_dataset_length=None, eval_dataset_length=None, aug=False, measure_names=<factory>)[source]¶
Bases:
TrainingPipelineArguments
Data arguments related to Molformer trainer.
- __name__ = 'MolformerDataArguments'¶
- batch_size: int = 512¶
- data_path: str = ''¶
- max_len: int = 100¶
- train_load: Optional[str] = None¶
- num_workers: Optional[int] = 1¶
- dataset_name: str = 'sol'¶
- measure_name: str = 'measure'¶
- data_root: str = 'my_data_root'¶
- train_dataset_length: Optional[int] = None¶
- eval_dataset_length: Optional[int] = None¶
- aug: bool = False¶
- measure_names: List[str]¶
- __annotations__ = {'aug': <class 'bool'>, 'batch_size': <class 'int'>, 'data_path': <class 'str'>, 'data_root': <class 'str'>, 'dataset_name': <class 'str'>, 'eval_dataset_length': typing.Optional[int], 'max_len': <class 'int'>, 'measure_name': <class 'str'>, 'measure_names': typing.List[str], 'num_workers': typing.Optional[int], 'train_dataset_length': typing.Optional[int], 'train_load': typing.Optional[str]}¶
- __dataclass_fields__ = {'aug': Field(name='aug',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'aug.'}),kw_only=False,_field_type=_FIELD), 'batch_size': Field(name='batch_size',type=<class 'int'>,default=512,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Batch size.'}),kw_only=False,_field_type=_FIELD), 'data_path': Field(name='data_path',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Pretraining - path to the data file.'}),kw_only=False,_field_type=_FIELD), 'data_root': Field(name='data_root',type=<class 'str'>,default='my_data_root',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Finetuning - Data root for the dataset.'}),kw_only=False,_field_type=_FIELD), 'dataset_name': Field(name='dataset_name',type=<class 'str'>,default='sol',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Finetuning - Name of the dataset to be found in the data root directory.'}),kw_only=False,_field_type=_FIELD), 'eval_dataset_length': Field(name='eval_dataset_length',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Finetuning - Length of evaluation dataset.'}),kw_only=False,_field_type=_FIELD), 'max_len': Field(name='max_len',type=<class 'int'>,default=100,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Max of length of SMILES.'}),kw_only=False,_field_type=_FIELD), 'measure_name': Field(name='measure_name',type=<class 
'str'>,default='measure',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Finetuning - Measure name to be used as groundtruth.'}),kw_only=False,_field_type=_FIELD), 'measure_names': Field(name='measure_names',type=typing.List[str],default=<dataclasses._MISSING_TYPE object>,default_factory=<function MolformerDataArguments.<lambda>>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Class names for multitask classification.'}),kw_only=False,_field_type=_FIELD), 'num_workers': Field(name='num_workers',type=typing.Optional[int],default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Number of workers.'}),kw_only=False,_field_type=_FIELD), 'train_dataset_length': Field(name='train_dataset_length',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Finetuning - Length of training dataset.'}),kw_only=False,_field_type=_FIELD), 'train_load': Field(name='train_load',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Where to load the model.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Data arguments related to Molformer trainer.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(batch_size=512, data_path='', max_len=100, train_load=None, num_workers=1, dataset_name='sol', measure_name='measure', data_root='my_data_root', train_dataset_length=None, eval_dataset_length=None, aug=False, measure_names=<factory>)¶
- __match_args__ = ('batch_size', 'data_path', 'max_len', 'train_load', 'num_workers', 'dataset_name', 'measure_name', 'data_root', 'train_dataset_length', 'eval_dataset_length', 'aug', 'measure_names')¶
- __module__ = 'gt4sd.training_pipelines.pytorch_lightning.molformer.core'¶
- __repr__()¶
Return repr(self).
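The `<factory>` placeholder in the signature means `measure_names` gets its default from a `default_factory` rather than a shared literal. A self-contained mirror of that pattern, with field names copied from the class above (this is a sketch, not the GT4SD implementation):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DataArgsSketch:
    """Illustrative mirror of a few MolformerDataArguments fields."""

    batch_size: int = 512
    max_len: int = 100
    train_load: Optional[str] = None
    # Mutable defaults must come from a factory, hence the
    # `<factory>` placeholder in the rendered signature.
    measure_names: List[str] = field(default_factory=list)


args = DataArgsSketch()
```

Each instance gets its own fresh list, so mutating `args.measure_names` does not leak into other instances.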
- class MolformerModelArguments(type='classification', n_head=8, n_layer=12, q_dropout=0.5, d_dropout=0.1, n_embd=768, fc_h=512, dropout=0.1, dims=<factory>, num_classes=None, restart_path='', lr_start=0.00030000000000000003, lr_multiplier=1, seed=12345, min_len=1, root_dir='.', num_feats=32, pooling_mode='cls', fold=0, pretrained_path=None, results_dir='.', debug=False)[source]¶
Bases:
TrainingPipelineArguments
Model arguments related to Molformer trainer.
- __name__ = 'MolformerModelArguments'¶
- type: str = 'classification'¶
- n_head: int = 8¶
- n_layer: int = 12¶
- q_dropout: float = 0.5¶
- d_dropout: float = 0.1¶
- n_embd: int = 768¶
- fc_h: int = 512¶
- dropout: float = 0.1¶
- dims: List[int]¶
- num_classes: Optional[int] = None¶
- restart_path: str = ''¶
- lr_start: float = 0.00030000000000000003¶
- lr_multiplier: int = 1¶
- seed: int = 12345¶
- min_len: int = 1¶
- root_dir: str = '.'¶
- num_feats: int = 32¶
- pooling_mode: str = 'cls'¶
- fold: int = 0¶
- pretrained_path: Optional[str] = None¶
- results_dir: str = '.'¶
- debug: bool = False¶
- __annotations__ = {'d_dropout': <class 'float'>, 'debug': <class 'bool'>, 'dims': typing.List[int], 'dropout': <class 'float'>, 'fc_h': <class 'int'>, 'fold': <class 'int'>, 'lr_multiplier': <class 'int'>, 'lr_start': <class 'float'>, 'min_len': <class 'int'>, 'n_embd': <class 'int'>, 'n_head': <class 'int'>, 'n_layer': <class 'int'>, 'num_classes': typing.Optional[int], 'num_feats': <class 'int'>, 'pooling_mode': <class 'str'>, 'pretrained_path': typing.Optional[str], 'q_dropout': <class 'float'>, 'restart_path': <class 'str'>, 'results_dir': <class 'str'>, 'root_dir': <class 'str'>, 'seed': <class 'int'>, 'type': <class 'str'>}¶
- __dataclass_fields__ = {'d_dropout': Field(name='d_dropout',type=<class 'float'>,default=0.1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Decoder layers dropout.'}),kw_only=False,_field_type=_FIELD), 'debug': Field(name='debug',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Debug training'}),kw_only=False,_field_type=_FIELD), 'dims': Field(name='dims',type=typing.List[int],default=<dataclasses._MISSING_TYPE object>,default_factory=<function MolformerModelArguments.<lambda>>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'dropout': Field(name='dropout',type=<class 'float'>,default=0.1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Dropout used in finetuning.'}),kw_only=False,_field_type=_FIELD), 'fc_h': Field(name='fc_h',type=<class 'int'>,default=512,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Fully connected hidden dimensionality.'}),kw_only=False,_field_type=_FIELD), 'fold': Field(name='fold',type=<class 'int'>,default=0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'number of folds for fine tuning.'}),kw_only=False,_field_type=_FIELD), 'lr_multiplier': Field(name='lr_multiplier',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'lr weight multiplier.'}),kw_only=False,_field_type=_FIELD), 'lr_start': Field(name='lr_start',type=<class 'float'>,default=0.00030000000000000003,default_factory=<dataclasses._MISSING_TYPE 
object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Initial lr value.'}),kw_only=False,_field_type=_FIELD), 'min_len': Field(name='min_len',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'minimum length to be generated.'}),kw_only=False,_field_type=_FIELD), 'n_embd': Field(name='n_embd',type=<class 'int'>,default=768,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Latent vector dimensionality.'}),kw_only=False,_field_type=_FIELD), 'n_head': Field(name='n_head',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'GPT number of heads.'}),kw_only=False,_field_type=_FIELD), 'n_layer': Field(name='n_layer',type=<class 'int'>,default=12,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'GPT number of layers.'}),kw_only=False,_field_type=_FIELD), 'num_classes': Field(name='num_classes',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Finetuning - Number of classes'}),kw_only=False,_field_type=_FIELD), 'num_feats': Field(name='num_feats',type=<class 'int'>,default=32,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'number of random features for FAVOR+.'}),kw_only=False,_field_type=_FIELD), 'pooling_mode': Field(name='pooling_mode',type=<class 'str'>,default='cls',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'type of pooling to use.'}),kw_only=False,_field_type=_FIELD), 'pretrained_path': 
Field(name='pretrained_path',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Path to the base pretrained model.'}),kw_only=False,_field_type=_FIELD), 'q_dropout': Field(name='q_dropout',type=<class 'float'>,default=0.5,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Encoder layers dropout.'}),kw_only=False,_field_type=_FIELD), 'restart_path': Field(name='restart_path',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'path to trainer file to continue training.'}),kw_only=False,_field_type=_FIELD), 'results_dir': Field(name='results_dir',type=<class 'str'>,default='.',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Path to save evaluation results during training.'}),kw_only=False,_field_type=_FIELD), 'root_dir': Field(name='root_dir',type=<class 'str'>,default='.',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'location of root dir.'}),kw_only=False,_field_type=_FIELD), 'seed': Field(name='seed',type=<class 'int'>,default=12345,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Seed.'}),kw_only=False,_field_type=_FIELD), 'type': Field(name='type',type=<class 'str'>,default='classification',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The training type, for example pretraining or classification.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Model arguments related to Molformer trainer.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(type='classification', n_head=8, n_layer=12, q_dropout=0.5, d_dropout=0.1, n_embd=768, fc_h=512, dropout=0.1, dims=<factory>, num_classes=None, restart_path='', lr_start=0.00030000000000000003, lr_multiplier=1, seed=12345, min_len=1, root_dir='.', num_feats=32, pooling_mode='cls', fold=0, pretrained_path=None, results_dir='.', debug=False)¶
- __match_args__ = ('type', 'n_head', 'n_layer', 'q_dropout', 'd_dropout', 'n_embd', 'fc_h', 'dropout', 'dims', 'num_classes', 'restart_path', 'lr_start', 'lr_multiplier', 'seed', 'min_len', 'root_dir', 'num_feats', 'pooling_mode', 'fold', 'pretrained_path', 'results_dir', 'debug')¶
- __module__ = 'gt4sd.training_pipelines.pytorch_lightning.molformer.core'¶
- __repr__()¶
Return repr(self).
- class MolformerTrainingArguments(accumulate_grad_batches=1, strategy='ddp', gpus=-1, max_epochs=1, monitor=None, save_top_k=1, mode='min', every_n_train_steps=None, every_n_epochs=None, save_last=None, save_dir='logs', basename='lightning_logs', val_check_interval=1.0, gradient_clip_val=50, resume_from_checkpoint=None)[source]¶
Bases:
TrainingPipelineArguments
Training arguments related to Molformer trainer.
- __name__ = 'MolformerTrainingArguments'¶
- accumulate_grad_batches: int = 1¶
- strategy: str = 'ddp'¶
- gpus: int = -1¶
- max_epochs: int = 1¶
- monitor: Optional[str] = None¶
- save_top_k: int = 1¶
- mode: str = 'min'¶
- every_n_train_steps: Optional[int] = None¶
- every_n_epochs: Optional[int] = None¶
- save_last: Optional[bool] = None¶
- save_dir: Optional[str] = 'logs'¶
- basename: Optional[str] = 'lightning_logs'¶
- val_check_interval: float = 1.0¶
- gradient_clip_val: float = 50¶
- resume_from_checkpoint: Optional[str] = None¶
- __annotations__ = {'accumulate_grad_batches': <class 'int'>, 'basename': typing.Optional[str], 'every_n_epochs': typing.Optional[int], 'every_n_train_steps': typing.Optional[int], 'gpus': <class 'int'>, 'gradient_clip_val': <class 'float'>, 'max_epochs': <class 'int'>, 'mode': <class 'str'>, 'monitor': typing.Optional[str], 'resume_from_checkpoint': typing.Optional[str], 'save_dir': typing.Optional[str], 'save_last': typing.Optional[bool], 'save_top_k': <class 'int'>, 'strategy': <class 'str'>, 'val_check_interval': <class 'float'>}¶
- __dataclass_fields__ = {'accumulate_grad_batches': Field(name='accumulate_grad_batches',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Accumulates grads every k batches or as set up in the dict.'}),kw_only=False,_field_type=_FIELD), 'basename': Field(name='basename',type=typing.Optional[str],default='lightning_logs',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Experiment name.'}),kw_only=False,_field_type=_FIELD), 'every_n_epochs': Field(name='every_n_epochs',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Number of epochs between checkpoints.'}),kw_only=False,_field_type=_FIELD), 'every_n_train_steps': Field(name='every_n_train_steps',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Number of training steps between checkpoints.'}),kw_only=False,_field_type=_FIELD), 'gpus': Field(name='gpus',type=<class 'int'>,default=-1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'number of gpus to use.'}),kw_only=False,_field_type=_FIELD), 'gradient_clip_val': Field(name='gradient_clip_val',type=<class 'float'>,default=50,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Gradient clipping value.'}),kw_only=False,_field_type=_FIELD), 'max_epochs': Field(name='max_epochs',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'max number of epochs.'}),kw_only=False,_field_type=_FIELD), 'mode': 
Field(name='mode',type=<class 'str'>,default='min',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Quantity to monitor in order to store a checkpoint.'}),kw_only=False,_field_type=_FIELD), 'monitor': Field(name='monitor',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Quantity to monitor in order to store a checkpoint.'}),kw_only=False,_field_type=_FIELD), 'resume_from_checkpoint': Field(name='resume_from_checkpoint',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Path/URL of the checkpoint from which training is resumed.'}),kw_only=False,_field_type=_FIELD), 'save_dir': Field(name='save_dir',type=typing.Optional[str],default='logs',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Save directory for logs and output.'}),kw_only=False,_field_type=_FIELD), 'save_last': Field(name='save_last',type=typing.Optional[bool],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'When True, always saves the model at the end of the epoch to a file last.ckpt'}),kw_only=False,_field_type=_FIELD), 'save_top_k': Field(name='save_top_k',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The best k models according to the quantity monitored will be saved.'}),kw_only=False,_field_type=_FIELD), 'strategy': Field(name='strategy',type=<class 'str'>,default='ddp',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The accelerator backend to use 
(previously known as distributed_backend).'}),kw_only=False,_field_type=_FIELD), 'val_check_interval': Field(name='val_check_interval',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': ' How often to check the validation set.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Training arguments related to Molformer trainer.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(accumulate_grad_batches=1, strategy='ddp', gpus=-1, max_epochs=1, monitor=None, save_top_k=1, mode='min', every_n_train_steps=None, every_n_epochs=None, save_last=None, save_dir='logs', basename='lightning_logs', val_check_interval=1.0, gradient_clip_val=50, resume_from_checkpoint=None)¶
- __match_args__ = ('accumulate_grad_batches', 'strategy', 'gpus', 'max_epochs', 'monitor', 'save_top_k', 'mode', 'every_n_train_steps', 'every_n_epochs', 'save_last', 'save_dir', 'basename', 'val_check_interval', 'gradient_clip_val', 'resume_from_checkpoint')¶
- __module__ = 'gt4sd.training_pipelines.pytorch_lightning.molformer.core'¶
- __repr__()¶
Return repr(self).
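Several of the training arguments default to None (monitor, every_n_train_steps, every_n_epochs, save_last, resume_from_checkpoint). A common pattern, shown here as an assumption about how such optional arguments are consumed rather than as GT4SD's actual code, is to drop unset values before forwarding the rest to the Lightning Trainer and checkpoint callback:

```python
from typing import Any, Dict

# Defaults copied from MolformerTrainingArguments above.
training_args: Dict[str, Any] = {
    "accumulate_grad_batches": 1,
    "strategy": "ddp",
    "gpus": -1,
    "max_epochs": 1,
    "monitor": None,
    "every_n_train_steps": None,
    "resume_from_checkpoint": None,
}

# Hypothetical filtering step: keep only explicitly-set values so
# the consumer's own defaults apply for the rest.
trainer_kwargs = {k: v for k, v in training_args.items() if v is not None}
```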
- class MolformerSavingArguments[source]¶
Bases:
TrainingPipelineArguments
Saving arguments related to Molformer trainer.
- __name__ = 'MolformerSavingArguments'¶
- __annotations__ = {}¶
- __dataclass_fields__ = {}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Saving arguments related to Molformer trainer.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__()¶
- __match_args__ = ()¶
- __module__ = 'gt4sd.training_pipelines.pytorch_lightning.molformer.core'¶
- __repr__()¶
Return repr(self).