gt4sd.training_pipelines.regression_transformer.core module¶
Regression Transformer training utilities.
Summary¶
Classes:
RegressionTransformerDataArguments | Arguments related to RegressionTransformer data loading.
RegressionTransformerSavingArguments | Saving arguments related to RegressionTransformer trainer.
RegressionTransformerTrainingArguments | Arguments related to RegressionTransformer trainer.
Reference¶
- class RegressionTransformerTrainingArguments(output_dir, overwrite_output_dir=False, do_train=False, do_eval=False, do_predict=False, evaluation_strategy='no', prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, eval_delay=0, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=10, max_steps=-1, lr_scheduler_type='linear', warmup_ratio=0.0, warmup_steps=0, log_level='passive', log_level_replica='passive', log_on_each_node=True, logging_dir=None, logging_strategy='steps', logging_first_step=False, logging_steps=500, logging_nan_inf_filter=True, save_strategy='steps', save_steps=500, save_total_limit=None, save_on_each_node=False, no_cuda=False, use_mps_device=False, seed=42, data_seed=None, jit_mode_eval=False, use_ipex=False, bf16=False, fp16=False, fp16_opt_level='O1', half_precision_backend='auto', bf16_full_eval=False, fp16_full_eval=False, tf32='no', local_rank=-1, xpu_backend=None, tpu_num_cores=None, tpu_metrics_debug=False, debug='', dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name=None, disable_tqdm='no', remove_unused_columns='yes', label_names=None, load_best_model_at_end=None, metric_for_best_model=None, greater_is_better='no', ignore_data_skip=False, sharded_ddp='', fsdp='', fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, deepspeed=None, label_smoothing_factor=0.0, optim='adamw_hf', adafactor=False, group_by_length=False, length_column_name='length', report_to=None, ddp_find_unused_parameters='no', ddp_bucket_cap_mb=None, dataloader_pin_memory=True, skip_memory_metrics=True, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, hub_model_id=None, hub_strategy='every_save', hub_token=None, hub_private_repo=False, gradient_checkpointing=False, include_inputs_for_metrics=False, fp16_backend='auto', push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=None, mp_parameters='', auto_find_batch_size=False, full_determinism=False, torchdynamo=None, ray_scope='last', ddp_timeout=1800, training_name='rt_training', batch_size=16, log_interval=100, gradient_interval=1, max_span_length=5, plm_probability=0.16666666666666666, alternate_steps=50, cc_loss=False, cg_collator='vanilla_cg', entity_to_mask=-1, entity_separator_token='.', mask_entity_separator=False)[source]¶
Bases: TrainingPipelineArguments, TransformersTrainingArgumentsCLI
Arguments related to RegressionTransformer trainer. NOTE: All arguments from transformers.training_args.TrainingArguments can be used. Only additional ones are specified below.
- training_name: str = 'rt_training'¶
Name used to identify the training.
- num_train_epochs: int = 10¶
Number of training epochs.
- batch_size: int = 16¶
Size of the batch.
- log_interval: int = 100¶
Number of steps between log events.
- gradient_interval: int = 1¶
Number of gradient accumulation steps.
- eval_steps: int = 1000¶
Step interval at which validation is performed.
- max_span_length: int = 5¶
Maximum length of a span of masked tokens for permutation language modeling (PLM).
- plm_probability: float = 0.16666666666666666¶
Ratio of the length of a span of masked tokens to the surrounding context length for PLM (default: 1/6).
- alternate_steps: int = 50¶
Per default, training alternates between property prediction and conditional generation; this argument specifies the alternation frequency. If set to 0, no alternation occurs and training falls back to vanilla permutation language modeling (PLM). Defaults to 50.
- cc_loss: bool = False¶
Whether the cycle-consistency loss is computed during the conditional-generation task. Defaults to False.
- cg_collator: str = 'vanilla_cg'¶
The collator class. Two options are implemented: 'vanilla_cg', which does not mask the properties but masks anything else like a regular DataCollatorForPermutationLanguageModeling (it can optionally replace the properties with sampled values and can deal with multiple properties), and 'multientity_cg', a training collator for the conditional-generation task that can handle multiple entities. Defaults to 'vanilla_cg'.
- entity_to_mask: int = -1¶
Only applies if cg_collator='multientity_cg'. The entity that is masked during training; 0 corresponds to the first entity and so on. -1 selects a random sampling scheme where the entity to be masked is determined at runtime in the collator. NOTE: if mask_entity_separator is True, this argument has no effect. Defaults to -1.
- entity_separator_token: str = '.'¶
Only applies if cg_collator='multientity_cg'. The token used to separate entities in the input. Defaults to '.' (applicable to SMILES and SELFIES).
- mask_entity_separator: bool = False¶
Only applies if cg_collator='multientity_cg'. Whether the entity separator token can be masked. If True, all textual tokens can be masked and the collator behaves like 'vanilla_cg' even though it is a 'multientity_cg'. If False, the exact behavior depends on the entity_to_mask argument. Defaults to False.
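A minimal usage sketch (the output path is hypothetical; the other values shown are the documented defaults or documented options, and all inherited transformers arguments keep their upstream defaults):

    from gt4sd.training_pipelines.regression_transformer.core import (
        RegressionTransformerTrainingArguments,
    )

    # Only output_dir is required; every other argument falls back to the
    # defaults listed in the signature above.
    training_args = RegressionTransformerTrainingArguments(
        output_dir="/tmp/rt_model",        # hypothetical output path
        num_train_epochs=10,
        batch_size=16,
        eval_steps=1000,
        alternate_steps=50,                # alternate PP/CG tasks every 50 steps
        cg_collator="multientity_cg",      # use the multi-entity collator
        entity_to_mask=-1,                 # mask a randomly sampled entity
        entity_separator_token=".",        # separator for SMILES/SELFIES entities
    )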
- class RegressionTransformerDataArguments(train_data_path, test_data_path, augment=0, save_datasets=False)[source]¶
Bases: TrainingPipelineArguments
Arguments related to RegressionTransformer data loading.
- train_data_path: str¶
Path to a `.csv` file with the input training data. The file has to contain a `text` column (with the string input, e.g., SMILES, AAS or natural text) and an arbitrary number of numerical columns.
- test_data_path: str¶
Path to a `.csv` file with the input testing data. The file has to contain a `text` column (with the string input, e.g., SMILES, AAS or natural text) and an arbitrary number of numerical columns.
- augment: Optional[int] = 0¶
Factor by which the training data is augmented. The data modality (SMILES, SELFIES, AAS or natural text) is inferred from the tokenizer. NOTE: no augmentation is supported for natural text. Defaults to 0, meaning no augmentation.
- save_datasets: Optional[bool] = False¶
Whether to save the datasets to disk. Datasets will be saved as `.txt` files to the same location where `train_data_path` and `test_data_path` live. Defaults to False.
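As an illustration of the expected data layout (file contents and paths are hypothetical), a training file is a `.csv` with a `text` column plus one or more numerical property columns, and the arguments are constructed as follows:

    # train.csv (hypothetical contents):
    #     text,qed
    #     CCO,0.41
    #     c1ccccc1,0.44
    from gt4sd.training_pipelines.regression_transformer.core import (
        RegressionTransformerDataArguments,
    )

    data_args = RegressionTransformerDataArguments(
        train_data_path="train.csv",   # hypothetical path
        test_data_path="test.csv",     # hypothetical path
        augment=2,                     # augment the training data by a factor of 2
        save_datasets=True,            # also dump the processed datasets as .txt
    )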
- class RegressionTransformerSavingArguments(model_path, checkpoint_name='')[source]¶
Bases: TrainingPipelineArguments
Saving arguments related to RegressionTransformer trainer.
- model_path: str¶
Path where the model artifacts are stored.
- checkpoint_name: str = ''¶
Name of the checkpoint that should be copied to the inference model. Has to be a subfolder of `model_path`. Defaults to the empty string, meaning that files are taken directly from `model_path` (i.e., after training finished).
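A short sketch (paths and checkpoint name are hypothetical); the checkpoint, if given, must be a subfolder of `model_path`:

    from gt4sd.training_pipelines.regression_transformer.core import (
        RegressionTransformerSavingArguments,
    )

    saving_args = RegressionTransformerSavingArguments(
        model_path="/tmp/rt_model",          # hypothetical: where artifacts live
        checkpoint_name="checkpoint-5000",   # hypothetical subfolder of model_path
    )

Within GT4SD, the three argument classes documented on this page are typically assembled together by the training pipeline tooling (e.g., the `gt4sd-trainer` command line utility); consult the GT4SD user documentation for the exact entry point.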