gt4sd.training_pipelines.regression_transformer.utils module

Summary

Classes:

Property

TransformersTrainingArgumentsCLI

GT4SD ships with a CLI to launch training.

Functions:

add_tokens_from_lists

Adding tokens to a tokenizer from parsed datasets held in memory.

get_hf_training_arg_object

A method to convert a training_args dictionary into a HuggingFace TrainingArguments object.

get_train_config_dict

Return type: Dict[str, Any]

prepare_datasets_from_files

Converts datasets stored at the provided .csv paths into RT-compatible datasets.

Reference

class Property(name)[source]

Bases: object

minimum: float = 0
maximum: float = 0
expression_separator: str = '|'
normalize: bool = False
__init__(name)[source]
name: str
update(line)[source]
property mask_length: int

The number of tokens masked for this property.

Return type

int

__annotations__ = {'expression_separator': <class 'str'>, 'mask_lengths': 'List', 'maximum': <class 'float'>, 'minimum': <class 'float'>, 'name': <class 'str'>, 'normalize': <class 'bool'>}
__dict__ = mappingproxy({'__module__': 'gt4sd.training_pipelines.regression_transformer.utils', '__annotations__': {'name': <class 'str'>, 'minimum': <class 'float'>, 'maximum': <class 'float'>, 'expression_separator': <class 'str'>, 'normalize': <class 'bool'>, 'mask_lengths': 'List'}, 'minimum': 0, 'maximum': 0, 'expression_separator': '|', 'normalize': False, '__init__': <function Property.__init__>, 'update': <function Property.update>, 'mask_length': <property object>, '__dict__': <attribute '__dict__' of 'Property' objects>, '__weakref__': <attribute '__weakref__' of 'Property' objects>, '__doc__': None})
__doc__ = None
__module__ = 'gt4sd.training_pipelines.regression_transformer.utils'
__weakref__

list of weak references to the object (if defined)
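
Example (illustrative only): a minimal sketch inspecting a Property object and its documented defaults. The property name 'qed' is a hypothetical example, not something these docs prescribe.

    from gt4sd.training_pipelines.regression_transformer.utils import Property

    # Bookkeeping object for one property tracked in the RT samples.
    prop = Property("qed")             # 'qed' is a hypothetical property name
    print(prop.name)                   # 'qed'
    print(prop.minimum, prop.maximum)  # 0 0 (class defaults before any update() call)
    print(prop.expression_separator)   # '|'
    print(prop.normalize)              # False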

add_tokens_from_lists(tokenizer, train_data, test_data)[source]

Adding tokens to a tokenizer from parsed datasets held in memory.

Parameters
  • tokenizer (ExpressionBertTokenizer) – The tokenizer.

  • train_data (List[str]) – List of strings, one per sample.

  • test_data (List[str]) – List of strings, one per sample.

Returns

A tuple containing:
  • the tokenizer with updated vocabulary,
  • a dictionary mapping property names to Property objects,
  • a list of strings with training samples,
  • a list of strings with testing samples.

Return type

Tuple
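
Example (illustrative only): a hedged usage sketch. The tokenizer path, the import of ExpressionBertTokenizer from the terminator package, and the RT-style sample format '<property>value|sequence' are assumptions for illustration, not taken from these docs.

    from terminator.tokenization import ExpressionBertTokenizer
    from gt4sd.training_pipelines.regression_transformer.utils import add_tokens_from_lists

    # Hypothetical location of a pretrained RT tokenizer.
    tokenizer = ExpressionBertTokenizer.from_pretrained("/path/to/rt/tokenizer")

    # One string per sample, assuming the usual RT layout: property tokens, then the sequence.
    train_data = ["<qed>0.85|CC(=O)OC1=CC=CC=C1C(=O)O"]
    test_data = ["<qed>0.35|CCO"]

    tokenizer, properties, train_data, test_data = add_tokens_from_lists(
        tokenizer=tokenizer, train_data=train_data, test_data=test_data
    )
    # properties maps property names to Property objects parsed from the samples.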

prepare_datasets_from_files(tokenizer, train_path, test_path, augment=0)[source]

Converts datasets stored at the provided .csv paths into RT-compatible datasets. NOTE: this also adds the new tokens from the train/test data to the provided tokenizer.

Parameters
  • tokenizer (ExpressionBertTokenizer) – The tokenizer.

  • train_path (str) – Path to the training data.

  • test_path (str) – Path to the testing data.

  • augment (int) – Factor by which each training sample is augmented.

Returns

A tuple containing:
  • the tokenizer with updated vocabulary,
  • a dictionary mapping property names to Property objects,
  • a list of strings with training samples,
  • a list of strings with testing samples.

Return type

Tuple
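
Example (illustrative only): a hedged sketch of the file-based variant. The paths, the CSV layout, and the tokenizer import are assumptions.

    from terminator.tokenization import ExpressionBertTokenizer
    from gt4sd.training_pipelines.regression_transformer.utils import prepare_datasets_from_files

    tokenizer = ExpressionBertTokenizer.from_pretrained("/path/to/rt/tokenizer")

    # Also adds the new tokens found in the train/test data to the tokenizer.
    tokenizer, properties, train_samples, test_samples = prepare_datasets_from_files(
        tokenizer=tokenizer,
        train_path="data/train.csv",  # hypothetical paths
        test_path="data/test.csv",
        augment=2,                    # each training sample is augmented twice
    )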

get_train_config_dict(training_args, properties)[source]
Return type

Dict[str, Any]

class TransformersTrainingArgumentsCLI(output_dir, overwrite_output_dir=False, do_train=False, do_eval=False, do_predict=False, evaluation_strategy='no', prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, eval_delay=0, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type='linear', warmup_ratio=0.0, warmup_steps=0, log_level='passive', log_level_replica='passive', log_on_each_node=True, logging_dir=None, logging_strategy='steps', logging_first_step=False, logging_steps=500, logging_nan_inf_filter=True, save_strategy='steps', save_steps=500, save_total_limit=None, save_on_each_node=False, no_cuda=False, use_mps_device=False, seed=42, data_seed=None, jit_mode_eval=False, use_ipex=False, bf16=False, fp16=False, fp16_opt_level='O1', half_precision_backend='auto', bf16_full_eval=False, fp16_full_eval=False, tf32='no', local_rank=-1, xpu_backend=None, tpu_num_cores=None, tpu_metrics_debug=False, debug='', dataloader_drop_last=False, eval_steps=None, dataloader_num_workers=0, past_index=-1, run_name=None, disable_tqdm='no', remove_unused_columns='yes', label_names=None, load_best_model_at_end=None, metric_for_best_model=None, greater_is_better='no', ignore_data_skip=False, sharded_ddp='', fsdp='', fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, deepspeed=None, label_smoothing_factor=0.0, optim='adamw_hf', adafactor=False, group_by_length=False, length_column_name='length', report_to=None, ddp_find_unused_parameters='no', ddp_bucket_cap_mb=None, dataloader_pin_memory=True, skip_memory_metrics=True, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, hub_model_id=None, hub_strategy='every_save', hub_token=None, hub_private_repo=False, gradient_checkpointing=False, include_inputs_for_metrics=False, fp16_backend='auto', push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=None, mp_parameters='', auto_find_batch_size=False, full_determinism=False, torchdynamo=None, ray_scope='last', ddp_timeout=1800)[source]

Bases: TrainingArguments

GT4SD ships with a CLI to launch training. This conflicts with some data types native to transformers.training_arguments.TrainingArguments, especially iterables, which cannot easily be passed from the CLI. Therefore, this class changes the affected attributes to CLI-compatible data types.

label_names: Optional[str] = None
report_to: Optional[str] = None
sharded_ddp: str = ''
tf32: Optional[str] = 'no'
disable_tqdm: Optional[str] = 'no'
greater_is_better: Optional[str] = 'no'
remove_unused_columns: Optional[str] = 'yes'
load_best_model_at_end: Optional[str] = None
ddp_find_unused_parameters: Optional[str] = 'no'
evaluation_strategy: Optional[str] = 'no'
lr_scheduler_type: Optional[str] = 'linear'
logging_strategy: Optional[str] = 'steps'
save_strategy: Optional[str] = 'steps'
hub_strategy: Optional[str] = 'every_save'
optim: Optional[str] = 'adamw_hf'
__post_init__()[source]

Necessary because our ArgumentParser (which is based on argparse) converts empty strings to None. This is problematic since the HuggingFace Trainer relies on them being actual strings. This only concerns a few arguments.
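
Example (illustrative only): a hedged sketch of constructing the CLI-compatible arguments object directly in Python; all values are illustrative. List-like fields take a comma-separated string and several boolean-like fields take 'yes'/'no' strings, matching the defaults listed above.

    from gt4sd.training_pipelines.regression_transformer.utils import (
        TransformersTrainingArgumentsCLI,
    )

    args = TransformersTrainingArgumentsCLI(
        output_dir="rt_training_output",  # hypothetical output directory
        do_train=True,
        num_train_epochs=5.0,
        label_names="labels",             # single string; multiple keys as 'key1,key2'
        report_to="tensorboard",          # single string instead of a list
        disable_tqdm="no",                # CLI-compatible string instead of a bool
    )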

__annotations__ = {'_n_gpu': 'int', 'adafactor': 'bool', 'adam_beta1': 'float', 'adam_beta2': 'float', 'adam_epsilon': 'float', 'auto_find_batch_size': 'bool', 'bf16': 'bool', 'bf16_full_eval': 'bool', 'data_seed': 'Optional[int]', 'dataloader_drop_last': 'bool', 'dataloader_num_workers': 'int', 'dataloader_pin_memory': 'bool', 'ddp_bucket_cap_mb': 'Optional[int]', 'ddp_find_unused_parameters': typing.Optional[str], 'ddp_timeout': 'Optional[int]', 'debug': 'str', 'deepspeed': 'Optional[str]', 'disable_tqdm': typing.Optional[str], 'do_eval': 'bool', 'do_predict': 'bool', 'do_train': 'bool', 'eval_accumulation_steps': 'Optional[int]', 'eval_delay': 'Optional[float]', 'eval_steps': 'Optional[int]', 'evaluation_strategy': typing.Optional[str], 'fp16': 'bool', 'fp16_backend': 'str', 'fp16_full_eval': 'bool', 'fp16_opt_level': 'str', 'fsdp': 'str', 'fsdp_min_num_params': 'int', 'fsdp_transformer_layer_cls_to_wrap': 'Optional[str]', 'full_determinism': 'bool', 'gradient_accumulation_steps': 'int', 'gradient_checkpointing': 'bool', 'greater_is_better': typing.Optional[str], 'group_by_length': 'bool', 'half_precision_backend': 'str', 'hub_model_id': 'Optional[str]', 'hub_private_repo': 'bool', 'hub_strategy': typing.Optional[str], 'hub_token': 'Optional[str]', 'ignore_data_skip': 'bool', 'include_inputs_for_metrics': 'bool', 'jit_mode_eval': 'bool', 'label_names': typing.Optional[str], 'label_smoothing_factor': 'float', 'learning_rate': 'float', 'length_column_name': 'Optional[str]', 'load_best_model_at_end': typing.Optional[str], 'local_rank': 'int', 'log_level': 'Optional[str]', 'log_level_replica': 'Optional[str]', 'log_on_each_node': 'bool', 'logging_dir': 'Optional[str]', 'logging_first_step': 'bool', 'logging_nan_inf_filter': 'bool', 'logging_steps': 'int', 'logging_strategy': typing.Optional[str], 'lr_scheduler_type': typing.Optional[str], 'max_grad_norm': 'float', 'max_steps': 'int', 'metric_for_best_model': 'Optional[str]', 'mp_parameters': 'str', 'no_cuda': 'bool', 'num_train_epochs': 'float', 'optim': typing.Optional[str], 'output_dir': 'str', 'overwrite_output_dir': 'bool', 'past_index': 'int', 'per_device_eval_batch_size': 'int', 'per_device_train_batch_size': 'int', 'per_gpu_eval_batch_size': 'Optional[int]', 'per_gpu_train_batch_size': 'Optional[int]', 'prediction_loss_only': 'bool', 'push_to_hub': 'bool', 'push_to_hub_model_id': 'Optional[str]', 'push_to_hub_organization': 'Optional[str]', 'push_to_hub_token': 'Optional[str]', 'ray_scope': 'Optional[str]', 'remove_unused_columns': typing.Optional[str], 'report_to': typing.Optional[str], 'resume_from_checkpoint': 'Optional[str]', 'run_name': 'Optional[str]', 'save_on_each_node': 'bool', 'save_steps': 'int', 'save_strategy': typing.Optional[str], 'save_total_limit': 'Optional[int]', 'seed': 'int', 'sharded_ddp': <class 'str'>, 'skip_memory_metrics': 'bool', 'tf32': typing.Optional[str], 'torchdynamo': 'Optional[str]', 'tpu_metrics_debug': 'bool', 'tpu_num_cores': 'Optional[int]', 'use_ipex': 'bool', 'use_legacy_prediction_loop': 'bool', 'use_mps_device': 'bool', 'warmup_ratio': 'float', 'warmup_steps': 'int', 'weight_decay': 'float', 'xpu_backend': 'Optional[str]'}
__dataclass_fields__ = {'_n_gpu': Field(name='_n_gpu',type=<class 'int'>,default=-1,default_factory=<dataclasses._MISSING_TYPE object>,init=False,repr=False,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'adafactor': Field(name='adafactor',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to replace AdamW by Adafactor.'}),kw_only=False,_field_type=_FIELD), 'adam_beta1': Field(name='adam_beta1',type=<class 'float'>,default=0.9,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Beta1 for AdamW optimizer'}),kw_only=False,_field_type=_FIELD), 'adam_beta2': Field(name='adam_beta2',type=<class 'float'>,default=0.999,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Beta2 for AdamW optimizer'}),kw_only=False,_field_type=_FIELD), 'adam_epsilon': Field(name='adam_epsilon',type=<class 'float'>,default=1e-08,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Epsilon for AdamW optimizer.'}),kw_only=False,_field_type=_FIELD), 'auto_find_batch_size': Field(name='auto_find_batch_size',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to automatically decrease the batch size in half and rerun the training loop again each time a CUDA Out-of-Memory was reached'}),kw_only=False,_field_type=_FIELD), 'bf16': Field(name='bf16',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to use bf16 (mixed) precision instead of 32-bit. Requires Ampere or higher NVIDIA architecture or using CPU (no_cuda). This is an experimental API and it may change.'}),kw_only=False,_field_type=_FIELD), 'bf16_full_eval': Field(name='bf16_full_eval',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to use full bfloat16 evaluation instead of 32-bit. This is an experimental API and it may change.'}),kw_only=False,_field_type=_FIELD), 'data_seed': Field(name='data_seed',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Random seed to be used with data samplers.'}),kw_only=False,_field_type=_FIELD), 'dataloader_drop_last': Field(name='dataloader_drop_last',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Drop the last incomplete batch if it is not divisible by the batch size.'}),kw_only=False,_field_type=_FIELD), 'dataloader_num_workers': Field(name='dataloader_num_workers',type=<class 'int'>,default=0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Number of subprocesses to use for data loading (PyTorch only). 
0 means that the data will be loaded in the main process.'}),kw_only=False,_field_type=_FIELD), 'dataloader_pin_memory': Field(name='dataloader_pin_memory',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to pin memory for DataLoader.'}),kw_only=False,_field_type=_FIELD), 'ddp_bucket_cap_mb': Field(name='ddp_bucket_cap_mb',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'When using distributed training, the value of the flag `bucket_cap_mb` passed to `DistributedDataParallel`.'}),kw_only=False,_field_type=_FIELD), 'ddp_find_unused_parameters': Field(name='ddp_find_unused_parameters',type=typing.Optional[str],default='no',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'When using distributed training, the value of the flag `find_unused_parameters` passed to `DistributedDataParallel`.'}),kw_only=False,_field_type=_FIELD), 'ddp_timeout': Field(name='ddp_timeout',type=typing.Optional[int],default=1800,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Overrides the default timeout for distributed training (value should be given in seconds).'}),kw_only=False,_field_type=_FIELD), 'debug': Field(name='debug',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to enable debug mode. Current options: `underflow_overflow` (Detect underflow and overflow in activations and weights), `tpu_metrics_debug` (print debug metrics on TPU).'}),kw_only=False,_field_type=_FIELD), 'deepspeed': Field(name='deepspeed',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Enable deepspeed and pass the path to deepspeed json config file (e.g. 
ds_config.json) or an already loaded json file as a dict'}),kw_only=False,_field_type=_FIELD), 'disable_tqdm': Field(name='disable_tqdm',type=typing.Optional[str],default='no',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to disable the tqdm progress bars.'}),kw_only=False,_field_type=_FIELD), 'do_eval': Field(name='do_eval',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to run eval on the dev set.'}),kw_only=False,_field_type=_FIELD), 'do_predict': Field(name='do_predict',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to run predictions on the test set.'}),kw_only=False,_field_type=_FIELD), 'do_train': Field(name='do_train',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to run training.'}),kw_only=False,_field_type=_FIELD), 'eval_accumulation_steps': Field(name='eval_accumulation_steps',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Number of predictions steps to accumulate before moving the tensors to the CPU.'}),kw_only=False,_field_type=_FIELD), 'eval_delay': Field(name='eval_delay',type=typing.Optional[float],default=0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Number of epochs or steps to wait for before the first evaluation can be performed, depending on the evaluation_strategy.'}),kw_only=False,_field_type=_FIELD), 'eval_steps': Field(name='eval_steps',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Run an evaluation every X steps.'}),kw_only=False,_field_type=_FIELD), 'evaluation_strategy': Field(name='evaluation_strategy',type=typing.Optional[str],default='no',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "The evaluation strategy to adopt during training. Possible values are: - 'no': No evaluation is done during training. - 'steps'`: Evaluation is done (and logged) every `eval_steps`. - 'epoch': Evaluation is done at the end of each epoch."}),kw_only=False,_field_type=_FIELD), 'fp16': Field(name='fp16',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to use fp16 (mixed) precision instead of 32-bit'}),kw_only=False,_field_type=_FIELD), 'fp16_backend': Field(name='fp16_backend',type=<class 'str'>,default='auto',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Deprecated. 
Use half_precision_backend instead', 'choices': ['auto', 'cuda_amp', 'apex', 'cpu_amp']}),kw_only=False,_field_type=_FIELD), 'fp16_full_eval': Field(name='fp16_full_eval',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to use full float16 evaluation instead of 32-bit'}),kw_only=False,_field_type=_FIELD), 'fp16_opt_level': Field(name='fp16_opt_level',type=<class 'str'>,default='O1',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']. See details at https://nvidia.github.io/apex/amp.html"}),kw_only=False,_field_type=_FIELD), 'fsdp': Field(name='fsdp',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to use PyTorch Fully Sharded Data Parallel (FSDP) training (in distributed training only). The base option should be `full_shard`, `shard_grad_op` or `no_shard` and you can add CPU-offload to `full_shard` or `shard_grad_op` like this: full_shard offload` or `shard_grad_op offload`. You can add auto-wrap to `full_shard` or `shard_grad_op` with the same syntax: full_shard auto_wrap` or `shard_grad_op auto_wrap`.'}),kw_only=False,_field_type=_FIELD), 'fsdp_min_num_params': Field(name='fsdp_min_num_params',type=<class 'int'>,default=0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "FSDP's minimum number of parameters for Default Auto Wrapping. (useful only when `fsdp` field is passed)."}),kw_only=False,_field_type=_FIELD), 'fsdp_transformer_layer_cls_to_wrap': Field(name='fsdp_transformer_layer_cls_to_wrap',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Transformer layer class name (case-sensitive) to wrap ,e.g, `BertLayer`, `GPTJBlock`, `T5Block` .... 
(useful only when `fsdp` flag is passed).'}),kw_only=False,_field_type=_FIELD), 'full_determinism': Field(name='full_determinism',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to call enable_full_determinism instead of set_seed for reproducibility in distributed training'}),kw_only=False,_field_type=_FIELD), 'gradient_accumulation_steps': Field(name='gradient_accumulation_steps',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Number of updates steps to accumulate before performing a backward/update pass.'}),kw_only=False,_field_type=_FIELD), 'gradient_checkpointing': Field(name='gradient_checkpointing',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'If True, use gradient checkpointing to save memory at the expense of slower backward pass.'}),kw_only=False,_field_type=_FIELD), 'greater_is_better': Field(name='greater_is_better',type=typing.Optional[str],default='no',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether the `metric_for_best_model` should be maximized or not.'}),kw_only=False,_field_type=_FIELD), 'group_by_length': Field(name='group_by_length',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to group samples of roughly the same length together when batching.'}),kw_only=False,_field_type=_FIELD), 'half_precision_backend': Field(name='half_precision_backend',type=<class 'str'>,default='auto',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The backend to be used for half precision.', 'choices': ['auto', 'cuda_amp', 'apex', 'cpu_amp']}),kw_only=False,_field_type=_FIELD), 'hub_model_id': Field(name='hub_model_id',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The name of the repository to keep in sync with the local `output_dir`.'}),kw_only=False,_field_type=_FIELD), 'hub_private_repo': Field(name='hub_private_repo',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether the model repository is private or not.'}),kw_only=False,_field_type=_FIELD), 'hub_strategy': Field(name='hub_strategy',type=typing.Optional[str],default='every_save',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Optional, defaults to `every_save`. Defines the scope of what is pushed to the Hub and when. Possible values are: - `end`: push the model, its configuration, the tokenizer (if passed        along to the Trainer and a draft of a model card when the        Trainer.save_model method is called. - `every_save`: push the model, its configuration, the tokenizer (if        passed along to the Trainer and a draft of a model card each time        there is a model save. 
The pushes are asynchronous to not block        training, and in case the save are very frequent, a new push is        only attempted if the previous one is finished. A last push is made        with the final model at the end of training. - `checkpoint`: like `every_save` but the latest checkpoint is also        pushed in a subfolder named last-checkpoint, allowing you to resume        training easily with `trainer.train(resume_from_checkpoint)`. - `all_checkpoints`: like `checkpoint` but all checkpoints are pushed        like they appear in the output folder (so you will get one        checkpoint folder per folder in your final repository).'}),kw_only=False,_field_type=_FIELD), 'hub_token': Field(name='hub_token',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The token to use to push to the Model Hub.'}),kw_only=False,_field_type=_FIELD), 'ignore_data_skip': Field(name='ignore_data_skip',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'When resuming training, whether or not to skip the first epochs and batches to get to the same training data.'}),kw_only=False,_field_type=_FIELD), 'include_inputs_for_metrics': Field(name='include_inputs_for_metrics',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not the inputs will be passed to the `compute_metrics` function.'}),kw_only=False,_field_type=_FIELD), 'jit_mode_eval': Field(name='jit_mode_eval',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to use PyTorch jit trace for inference'}),kw_only=False,_field_type=_FIELD), 'label_names': Field(name='label_names',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'A string containing keys in your dictionary of inputs that correspond to the labels.A single string, but can contain multiple keys separated with comma: `key1,key2`'}),kw_only=False,_field_type=_FIELD), 'label_smoothing_factor': Field(name='label_smoothing_factor',type=<class 'float'>,default=0.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The label smoothing epsilon to apply (zero means no label smoothing).'}),kw_only=False,_field_type=_FIELD), 'learning_rate': Field(name='learning_rate',type=<class 'float'>,default=5e-05,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The initial learning rate for AdamW.'}),kw_only=False,_field_type=_FIELD), 'length_column_name': Field(name='length_column_name',type=typing.Optional[str],default='length',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Column name with precomputed lengths to use when grouping by length.'}),kw_only=False,_field_type=_FIELD), 'load_best_model_at_end': Field(name='load_best_model_at_end',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or 
not to load the best model found during training at the end of training.'}),kw_only=False,_field_type=_FIELD), 'local_rank': Field(name='local_rank',type=<class 'int'>,default=-1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'For distributed training: local_rank'}),kw_only=False,_field_type=_FIELD), 'log_level': Field(name='log_level',type=typing.Optional[str],default='passive',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "Logger log level to use on the main node. Possible choices are the log levels as strings: 'debug', 'info', 'warning', 'error' and 'critical', plus a 'passive' level which doesn't set anything and lets the application set the level. Defaults to 'passive'.", 'choices': dict_keys(['debug', 'info', 'warning', 'error', 'critical', 'passive'])}),kw_only=False,_field_type=_FIELD), 'log_level_replica': Field(name='log_level_replica',type=typing.Optional[str],default='passive',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Logger log level to use on replica nodes. Same choices and defaults as ``log_level``', 'choices': dict_keys(['debug', 'info', 'warning', 'error', 'critical', 'passive'])}),kw_only=False,_field_type=_FIELD), 'log_on_each_node': Field(name='log_on_each_node',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'When doing a multinode distributed training, whether to log once per node or just once on the main node.'}),kw_only=False,_field_type=_FIELD), 'logging_dir': Field(name='logging_dir',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Tensorboard log dir.'}),kw_only=False,_field_type=_FIELD), 'logging_first_step': Field(name='logging_first_step',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Log the first global_step'}),kw_only=False,_field_type=_FIELD), 'logging_nan_inf_filter': Field(name='logging_nan_inf_filter',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Filter nan and inf losses for logging.'}),kw_only=False,_field_type=_FIELD), 'logging_steps': Field(name='logging_steps',type=<class 'int'>,default=500,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Log every X updates steps.'}),kw_only=False,_field_type=_FIELD), 'logging_strategy': Field(name='logging_strategy',type=typing.Optional[str],default='steps',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "The logging strategy to adopt during training. Possible values are: - 'no': No logging is done during training. - 'steps'`: Logging is done every `logging_steps`. 
- 'epoch': Logging is done at the end of each epoch."}),kw_only=False,_field_type=_FIELD), 'lr_scheduler_type': Field(name='lr_scheduler_type',type=typing.Optional[str],default='linear',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The scheduler type to use. See the documentation of `transformers.SchedulerType` for all possible values.'}),kw_only=False,_field_type=_FIELD), 'max_grad_norm': Field(name='max_grad_norm',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Max gradient norm.'}),kw_only=False,_field_type=_FIELD), 'max_steps': Field(name='max_steps',type=<class 'int'>,default=-1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'If > 0: set total number of training steps to perform. Override num_train_epochs.'}),kw_only=False,_field_type=_FIELD), 'metric_for_best_model': Field(name='metric_for_best_model',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The metric to use to compare two different models.'}),kw_only=False,_field_type=_FIELD), 'mp_parameters': Field(name='mp_parameters',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Used by the SageMaker launcher to send mp-specific args. Ignored in Trainer'}),kw_only=False,_field_type=_FIELD), 'no_cuda': Field(name='no_cuda',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Do not use CUDA even when it is available'}),kw_only=False,_field_type=_FIELD), 'num_train_epochs': Field(name='num_train_epochs',type=<class 'float'>,default=3.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Total number of training epochs to perform.'}),kw_only=False,_field_type=_FIELD), 'optim': Field(name='optim',type=typing.Optional[str],default='adamw_hf',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The optimizer to use. One of  `adamw_hf`, `adamw_torch`, `adafactor` or `adamw_apex_fused`.'}),kw_only=False,_field_type=_FIELD), 'output_dir': Field(name='output_dir',type=<class 'str'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The output directory where the model predictions and checkpoints will be written.'}),kw_only=False,_field_type=_FIELD), 'overwrite_output_dir': Field(name='overwrite_output_dir',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Overwrite the content of the output directory. 
Use this to continue training if output_dir points to a checkpoint directory.'}),kw_only=False,_field_type=_FIELD), 'past_index': Field(name='past_index',type=<class 'int'>,default=-1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'If >=0, uses the corresponding part of the output as the past state for next step.'}),kw_only=False,_field_type=_FIELD), 'per_device_eval_batch_size': Field(name='per_device_eval_batch_size',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Batch size per GPU/TPU core/CPU for evaluation.'}),kw_only=False,_field_type=_FIELD), 'per_device_train_batch_size': Field(name='per_device_train_batch_size',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Batch size per GPU/TPU core/CPU for training.'}),kw_only=False,_field_type=_FIELD), 'per_gpu_eval_batch_size': Field(name='per_gpu_eval_batch_size',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Deprecated, the use of `--per_device_eval_batch_size` is preferred. Batch size per GPU/TPU core/CPU for evaluation.'}),kw_only=False,_field_type=_FIELD), 'per_gpu_train_batch_size': Field(name='per_gpu_train_batch_size',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Deprecated, the use of `--per_device_train_batch_size` is preferred. Batch size per GPU/TPU core/CPU for training.'}),kw_only=False,_field_type=_FIELD), 'prediction_loss_only': Field(name='prediction_loss_only',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'When performing evaluation and predictions, only returns the loss.'}),kw_only=False,_field_type=_FIELD), 'push_to_hub': Field(name='push_to_hub',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to upload the trained model to the model hub after training.'}),kw_only=False,_field_type=_FIELD), 'push_to_hub_model_id': Field(name='push_to_hub_model_id',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The name of the repository to which push the `Trainer`.'}),kw_only=False,_field_type=_FIELD), 'push_to_hub_organization': Field(name='push_to_hub_organization',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The name of the organization in with to which push the `Trainer`.'}),kw_only=False,_field_type=_FIELD), 'push_to_hub_token': Field(name='push_to_hub_token',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The token to use to push to the Model Hub.'}),kw_only=False,_field_type=_FIELD), 'ray_scope': Field(name='ray_scope',type=typing.Optional[str],default='last',default_factory=<dataclasses._MISSING_TYPE 
object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The scope to use when doing hyperparameter search with Ray. By default, `"last"` will be used. Ray will then use the last checkpoint of all trials, compare those, and select the best one. However, other options are also available. See the Ray documentation (https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) for more options.'}),kw_only=False,_field_type=_FIELD), 'remove_unused_columns': Field(name='remove_unused_columns',type=typing.Optional[str],default='yes',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Remove columns not required by the model when using an nlp.Dataset.'}),kw_only=False,_field_type=_FIELD), 'report_to': Field(name='report_to',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The list of integrations to report the results and logs to.A single string, but can contain multiple keys separated with comma: `i1,i2`'}),kw_only=False,_field_type=_FIELD), 'resume_from_checkpoint': Field(name='resume_from_checkpoint',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The path to a folder with a valid checkpoint for your model.'}),kw_only=False,_field_type=_FIELD), 'run_name': Field(name='run_name',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'An optional descriptor for the run. Notably used for wandb logging.'}),kw_only=False,_field_type=_FIELD), 'save_on_each_node': Field(name='save_on_each_node',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one'}),kw_only=False,_field_type=_FIELD), 'save_steps': Field(name='save_steps',type=<class 'int'>,default=500,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Save checkpoint every X updates steps.'}),kw_only=False,_field_type=_FIELD), 'save_strategy': Field(name='save_strategy',type=typing.Optional[str],default='steps',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "The saving strategy to adopt during training. Possible values are: - 'no': No saving is done during training. - 'steps'`: Saving is done every `saving_steps`. - 'epoch': Saving is done at the end of each epoch."}),kw_only=False,_field_type=_FIELD), 'save_total_limit': Field(name='save_total_limit',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Limit the total amount of checkpoints. Deletes the older checkpoints in the output_dir. 
Default is unlimited checkpoints'}),kw_only=False,_field_type=_FIELD), 'seed': Field(name='seed',type=<class 'int'>,default=42,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Random seed that will be set at the beginning of training.'}),kw_only=False,_field_type=_FIELD), 'sharded_ddp': Field(name='sharded_ddp',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to use sharded DDP training (in distributed training only). The base option should be `simple`, `zero_dp_2` or `zero_dp_3` and you can add CPU-offload to `zero_dp_2` or `zero_dp_3` like this: zero_dp_2 offload` or `zero_dp_3 offload`. You can add auto-wrap to `zero_dp_2` or with the same syntax: zero_dp_2 auto_wrap` or `zero_dp_3 auto_wrap`.'}),kw_only=False,_field_type=_FIELD), 'skip_memory_metrics': Field(name='skip_memory_metrics',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to skip adding of memory profiler reports to metrics.'}),kw_only=False,_field_type=_FIELD), 'tf32': Field(name='tf32',type=typing.Optional[str],default='no',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to enable tf32 mode, available in Ampere and newer GPU architectures. This is an experimental API and it may change.'}),kw_only=False,_field_type=_FIELD), 'torchdynamo': Field(name='torchdynamo',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Sets up the backend compiler for TorchDynamo. TorchDynamo is a Python level JIT compiler designed to make unmodified PyTorch programs faster. TorchDynamo dynamically modifies the Python bytecode right before its executed. It rewrites Python bytecode to extract sequences of PyTorch operations and lifts them up into Fx graph. We can then pass these Fx graphs to other backend compilers. There are two options - eager and nvfuser. Eager defaults to pytorch eager and is useful for debugging. nvfuser path uses AOT Autograd and nvfuser compiler to optimize the models.', 'choices': ['eager', 'nvfuser', 'fx2trt', 'fx2trt-fp16']}),kw_only=False,_field_type=_FIELD), 'tpu_metrics_debug': Field(name='tpu_metrics_debug',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Deprecated, the use of `--debug tpu_metrics_debug` is preferred. 
TPU: Whether to print debug metrics'}),kw_only=False,_field_type=_FIELD), 'tpu_num_cores': Field(name='tpu_num_cores',type=typing.Optional[int],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'TPU: Number of TPU cores (automatically passed by launcher script)'}),kw_only=False,_field_type=_FIELD), 'use_ipex': Field(name='use_ipex',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "Use Intel extension for PyTorch when it is available, installation: 'https://github.com/intel/intel-extension-for-pytorch'"}),kw_only=False,_field_type=_FIELD), 'use_legacy_prediction_loop': Field(name='use_legacy_prediction_loop',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to use the legacy prediction_loop in the Trainer.'}),kw_only=False,_field_type=_FIELD), 'use_mps_device': Field(name='use_mps_device',type=<class 'bool'>,default=False,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether to use Apple Silicon chip based `mps` device.'}),kw_only=False,_field_type=_FIELD), 'warmup_ratio': Field(name='warmup_ratio',type=<class 'float'>,default=0.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Linear warmup over warmup_ratio fraction of total steps.'}),kw_only=False,_field_type=_FIELD), 'warmup_steps': Field(name='warmup_steps',type=<class 'int'>,default=0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Linear warmup over warmup_steps.'}),kw_only=False,_field_type=_FIELD), 'weight_decay': Field(name='weight_decay',type=<class 'float'>,default=0.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Weight decay for AdamW if we apply some.'}),kw_only=False,_field_type=_FIELD), 'xpu_backend': Field(name='xpu_backend',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'The backend to be used for distributed training on Intel XPU.', 'choices': ['mpi', 'ccl', 'gloo']}),kw_only=False,_field_type=_FIELD)}
__dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)
__doc__ = '\n    GT4SD ships with a CLI to launch training. This conflicts with some data types\n    native in `transformers.training_arguments.TrainingArguments` especially iterables\n    which cannot be easily passed from CLI.\n    Therefore, this class changes the affected attributes to CLI compatible datatypes.\n    '
__eq__(other)

Return self==value.

__hash__ = None
__init__(output_dir, overwrite_output_dir=False, do_train=False, do_eval=False, do_predict=False, evaluation_strategy='no', prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, eval_delay=0, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type='linear', warmup_ratio=0.0, warmup_steps=0, log_level='passive', log_level_replica='passive', log_on_each_node=True, logging_dir=None, logging_strategy='steps', logging_first_step=False, logging_steps=500, logging_nan_inf_filter=True, save_strategy='steps', save_steps=500, save_total_limit=None, save_on_each_node=False, no_cuda=False, use_mps_device=False, seed=42, data_seed=None, jit_mode_eval=False, use_ipex=False, bf16=False, fp16=False, fp16_opt_level='O1', half_precision_backend='auto', bf16_full_eval=False, fp16_full_eval=False, tf32='no', local_rank=-1, xpu_backend=None, tpu_num_cores=None, tpu_metrics_debug=False, debug='', dataloader_drop_last=False, eval_steps=None, dataloader_num_workers=0, past_index=-1, run_name=None, disable_tqdm='no', remove_unused_columns='yes', label_names=None, load_best_model_at_end=None, metric_for_best_model=None, greater_is_better='no', ignore_data_skip=False, sharded_ddp='', fsdp='', fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, deepspeed=None, label_smoothing_factor=0.0, optim='adamw_hf', adafactor=False, group_by_length=False, length_column_name='length', report_to=None, ddp_find_unused_parameters='no', ddp_bucket_cap_mb=None, dataloader_pin_memory=True, skip_memory_metrics=True, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, hub_model_id=None, hub_strategy='every_save', hub_token=None, hub_private_repo=False, gradient_checkpointing=False, include_inputs_for_metrics=False, fp16_backend='auto', push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=None, mp_parameters='', auto_find_batch_size=False, full_determinism=False, torchdynamo=None, ray_scope='last', ddp_timeout=1800)
__match_args__ = ('output_dir', 'overwrite_output_dir', 'do_train', 'do_eval', 'do_predict', 'evaluation_strategy', 'prediction_loss_only', 'per_device_train_batch_size', 'per_device_eval_batch_size', 'per_gpu_train_batch_size', 'per_gpu_eval_batch_size', 'gradient_accumulation_steps', 'eval_accumulation_steps', 'eval_delay', 'learning_rate', 'weight_decay', 'adam_beta1', 'adam_beta2', 'adam_epsilon', 'max_grad_norm', 'num_train_epochs', 'max_steps', 'lr_scheduler_type', 'warmup_ratio', 'warmup_steps', 'log_level', 'log_level_replica', 'log_on_each_node', 'logging_dir', 'logging_strategy', 'logging_first_step', 'logging_steps', 'logging_nan_inf_filter', 'save_strategy', 'save_steps', 'save_total_limit', 'save_on_each_node', 'no_cuda', 'use_mps_device', 'seed', 'data_seed', 'jit_mode_eval', 'use_ipex', 'bf16', 'fp16', 'fp16_opt_level', 'half_precision_backend', 'bf16_full_eval', 'fp16_full_eval', 'tf32', 'local_rank', 'xpu_backend', 'tpu_num_cores', 'tpu_metrics_debug', 'debug', 'dataloader_drop_last', 'eval_steps', 'dataloader_num_workers', 'past_index', 'run_name', 'disable_tqdm', 'remove_unused_columns', 'label_names', 'load_best_model_at_end', 'metric_for_best_model', 'greater_is_better', 'ignore_data_skip', 'sharded_ddp', 'fsdp', 'fsdp_min_num_params', 'fsdp_transformer_layer_cls_to_wrap', 'deepspeed', 'label_smoothing_factor', 'optim', 'adafactor', 'group_by_length', 'length_column_name', 'report_to', 'ddp_find_unused_parameters', 'ddp_bucket_cap_mb', 'dataloader_pin_memory', 'skip_memory_metrics', 'use_legacy_prediction_loop', 'push_to_hub', 'resume_from_checkpoint', 'hub_model_id', 'hub_strategy', 'hub_token', 'hub_private_repo', 'gradient_checkpointing', 'include_inputs_for_metrics', 'fp16_backend', 'push_to_hub_model_id', 'push_to_hub_organization', 'push_to_hub_token', 'mp_parameters', 'auto_find_batch_size', 'full_determinism', 'torchdynamo', 'ray_scope', 'ddp_timeout')
__module__ = 'gt4sd.training_pipelines.regression_transformer.utils'
__repr__()

Return repr(self).

get_hf_training_arg_object(training_args)[source]

A method to convert a training_args dictionary into a HuggingFace TrainingArguments object. This routine also removes arguments that are not needed.

Parameters

training_args (Dict[str, Any]) – A dictionary of training arguments.

Return type

TrainingArguments

Returns

object of type TrainingArguments.
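
Example (illustrative only): a hedged sketch. The dictionary keys shown are assumptions; entries not needed by TrainingArguments would be removed by the routine.

    from gt4sd.training_pipelines.regression_transformer.utils import get_hf_training_arg_object

    training_args = {
        "output_dir": "rt_training_output",  # hypothetical values
        "learning_rate": 5e-5,
        "num_train_epochs": 3.0,
        "per_device_train_batch_size": 8,
    }
    hf_args = get_hf_training_arg_object(training_args)
    print(type(hf_args).__name__)  # TrainingArguments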