gt4sd.algorithms.generation.hugging_face.core module¶
HuggingFace generation algorithm.
Summary¶
Classes:
- Configuration to generate text using CTRL.
- Basic configuration for a Hugging Face algorithm.
- Configuration to generate text using GPT2.
- Configuration to generate text using OpenAIGPT.
- Configuration to generate text using Seq2Seq LMs.
- Configuration to generate text using TransfoXL.
- Configuration to generate text using XLM.
- Configuration to generate text using XLNet.
Reference¶
- class HuggingFaceGenerationAlgorithm(configuration, target=None)[source]¶
Bases: GeneratorAlgorithm[S, None]
- __init__(configuration, target=None)[source]¶
HuggingFace generation algorithm.
- Parameters
configuration (AlgorithmConfiguration) – domain and application specification, defining types and validations.
target (None) – unused since it is not a conditional generator.
Example
An example for using a generative algorithm from HuggingFace:

configuration = HuggingFaceXLMGenerator()
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
items = list(algorithm.sample(1))
print(items)
- get_generator(configuration, target)[source]¶
Get the function to sample batches.
- Parameters
configuration (AlgorithmConfiguration[S, None]) – helps to set up the application.
target (None) – context or condition for the generation. Unused in the algorithm.
- Return type
Callable[[], Iterable[Any]]
- Returns
callable generating a batch of items.
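For illustration, a minimal sketch of obtaining and calling the returned batch generator directly, assuming the same configuration as in the example above (in normal use, sample() drives this internally):

configuration = HuggingFaceXLMGenerator()
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
generator = algorithm.get_generator(configuration=configuration, target=None)
# the callable takes no arguments and yields a batch of generated texts
batch = list(generator())
print(batch)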
- validate_configuration(configuration)[source]¶
Overload to validate a configuration for the algorithm.
- Parameters
configuration (AlgorithmConfiguration) – the algorithm configuration.
- Raises
InvalidAlgorithmConfiguration – in case the configuration for the algorithm is invalid.
- Return type
AlgorithmConfiguration
- Returns
the validated configuration.
- __abstractmethods__ = frozenset({})¶
- __annotations__ = {'generate': 'Untargeted', 'generator': 'Union[Untargeted, Targeted[T]]', 'max_runtime': 'int', 'max_samples': 'int', 'target': 'Optional[T]'}¶
- __doc__ = None¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __orig_bases__ = (gt4sd.algorithms.core.GeneratorAlgorithm[~S, NoneType],)¶
- __parameters__ = (~S,)¶
- _abc_impl = <_abc._abc_data object>¶
- class HuggingFaceConfiguration(*args, **kwargs)[source]¶
Bases: HuggingFaceConfiguration, Generic[T]
Basic configuration for a Hugging Face algorithm.
- algorithm_type: ClassVar[str] = 'generation'¶
General type of generative algorithm.
- domain: ClassVar[str] = 'nlp'¶
General application domain. Hints at input/output types.
- model_type: str = ''¶
Type of the model. Supported: gpt2, ctrl, openai-gpt, xlnet, transfo-xl, xlm, auto-seq2seq-lm.
- prompt: str = "I'm a stochastic parrot."¶
Prompt for text generation.
- length: int = 20¶
Length of the generated text.
- stop_token: str = ''¶
Stop token for text generation.
- num_beams: int = 1¶
Number of beams for beam search.
- do_sample: bool = True¶
Whether or not to use sampling; use greedy decoding otherwise.
- temperature: float = 1.0¶
Temperature for sampling, the lower the greedier the sampling.
- repetition_penalty: float = 1.0¶
Primarily useful for CTRL model, where 1.2 should be used.
- k: int = 50¶
Number of top-k probability tokens to keep.
- p: float = 1.0¶
Only tokens with cumulative probabilities summing up to this value are kept.
- prefix: str = ''¶
Text defining context provided prior to the prompt.
- number_of_sequences: int = 8¶
Number of text sequences to generate.
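As an illustrative sketch (the parameter values are chosen purely for demonstration), the decoding behaviour can be tuned through these fields on any concrete configuration, for instance the XLM generator documented below:

configuration = HuggingFaceXLMGenerator(
    prompt="Hello world",    # text to condition the generation on
    length=40,               # length of the generated text
    temperature=0.8,         # lower values make sampling greedier
    p=0.9,                   # nucleus (top-p) cutoff
    number_of_sequences=4,   # number of sequences to generate
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
print(list(algorithm.sample(4)))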
- get_target_description()[source]¶
Get description of the target for generation.
- Return type
Optional[Dict[str, str]]
- Returns
target description, returns None in case no target is used.
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': typing.ClassVar[str], 'algorithm_version': 'str', 'do_sample': <class 'bool'>, 'domain': typing.ClassVar[str], 'k': <class 'int'>, 'length': <class 'int'>, 'model_type': <class 'str'>, 'num_beams': <class 'int'>, 'number_of_sequences': <class 'int'>, 'p': <class 'float'>, 'prefix': <class 'str'>, 'prompt': <class 'str'>, 'repetition_penalty': <class 'float'>, 'stop_token': <class 'str'>, 'temperature': <class 'float'>}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Basic configuration for an hugging face algorithm.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __orig_bases__ = (<class 'types.HuggingFaceConfiguration'>, typing.Generic[~T])¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='', description='Type of the model. Supported: gpt2, ctrl, openai-gpt, xlnet, transfo-xl, xlm, auto-seq2seq-lm', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (algorithm_version: 'str' = '', model_type: str = '', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of HuggingFaceConfiguration
- class HuggingFaceXLMGenerator(*args, **kwargs)[source]¶
Bases: HuggingFaceXLMGenerator
Configuration to generate text using XLM.
- algorithm_version: str = 'xlm-mlm-en-2048'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- model_type: str = 'xlm'¶
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as algorithm_version for the environment.
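For illustration, a small sketch of checking the available versions before instantiating (the resulting set depends on the local cache and the configured S3 storage):

available_versions = HuggingFaceXLMGenerator.list_versions()
print(available_versions)
configuration = HuggingFaceXLMGenerator(algorithm_version="xlm-mlm-en-2048")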
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': 'ClassVar[str]', 'algorithm_version': <class 'str'>, 'do_sample': 'bool', 'domain': 'ClassVar[str]', 'k': 'int', 'length': 'int', 'model_type': <class 'str'>, 'num_beams': 'int', 'number_of_sequences': 'int', 'p': 'float', 'prefix': 'str', 'prompt': 'str', 'repetition_penalty': 'float', 'stop_token': 'str', 'temperature': 'float'}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Configuration to generate text using XLM.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='xlm-mlm-en-2048', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='xlm', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (*args: Any, algorithm_version: str = 'xlm-mlm-en-2048', model_type: str = 'xlm', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of HuggingFaceXLMGenerator
- algorithm_application: ClassVar[str] = 'HuggingFaceXLMGenerator'¶
Unique name for the application that is the use of this configuration together with a specific algorithm.
Will be set when registering to ApplicationsRegistry, but can be given by direct registration (see register_algorithm_application).
- algorithm_name: ClassVar[str] = 'HuggingFaceGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to ApplicationsRegistry.
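As a hedged sketch of an alternative decoding setup (the values are illustrative, not recommended defaults), beam search can be requested instead of sampling through the fields inherited from HuggingFaceConfiguration:

configuration = HuggingFaceXLMGenerator(
    do_sample=False,         # disable sampling, use greedy/beam decoding
    num_beams=5,             # number of beams for beam search
    length=30,
    number_of_sequences=2,
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
print(list(algorithm.sample(2)))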
- class HuggingFaceCTRLGenerator(*args, **kwargs)[source]¶
Bases: HuggingFaceCTRLGenerator
Configuration to generate text using CTRL.
- algorithm_version: str = 'ctrl'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- model_type: str = 'ctrl'¶
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as algorithm_version for the environment.
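A minimal sketch applying the repetition penalty recommended for CTRL in the field description above (all other fields keep their defaults; CTRL prompts conventionally start with a control code, which is not covered here):

configuration = HuggingFaceCTRLGenerator(repetition_penalty=1.2)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
items = list(algorithm.sample(1))
print(items)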
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': 'ClassVar[str]', 'algorithm_version': <class 'str'>, 'do_sample': 'bool', 'domain': 'ClassVar[str]', 'k': 'int', 'length': 'int', 'model_type': <class 'str'>, 'num_beams': 'int', 'number_of_sequences': 'int', 'p': 'float', 'prefix': 'str', 'prompt': 'str', 'repetition_penalty': 'float', 'stop_token': 'str', 'temperature': 'float'}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Configuration to generate text using CTRL.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_core_schema__ = {'cls': <class 'gt4sd.algorithms.generation.hugging_face.core.HuggingFaceCTRLGenerator'>, 'config': {'title': 'HuggingFaceCTRLGenerator'}, 'fields': ['algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences'], 'frozen': False, 'post_init': False, 'ref': 'types.HuggingFaceCTRLGenerator:94427942128832', 'schema': {'collect_init_only': False, 'computed_fields': [], 'dataclass_name': 'HuggingFaceCTRLGenerator', 'fields': [{'type': 'dataclass-field', 'name': 'algorithm_version', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': 'ctrl'}, 'kw_only': False, 'init': True, 'metadata': {}}, {'type': 'dataclass-field', 'name': 'model_type', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': 'ctrl'}, 'kw_only': False, 'init': True, 'metadata': {}}, {'type': 'dataclass-field', 'name': 'prompt', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': "I'm a stochastic parrot."}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Prompt for text generation.'}}}, {'type': 'dataclass-field', 'name': 'length', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 20}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Length of the generated text.'}}}, {'type': 'dataclass-field', 'name': 'stop_token', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': ''}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Stop token for text generation.'}}}, {'type': 'dataclass-field', 'name': 'num_beams', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 1}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of beams for beam search.'}}}, {'type': 'dataclass-field', 'name': 'do_sample', 'schema': {'type': 'default', 'schema': {'type': 'bool'}, 'default': True}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}}}, {'type': 'dataclass-field', 'name': 'temperature', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Temperature for sampling, the lower the greedier the sampling.'}}}, {'type': 'dataclass-field', 'name': 'repetition_penalty', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}}}, {'type': 'dataclass-field', 'name': 'k', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 50}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of top-k probability tokens to keep.'}}}, {'type': 'dataclass-field', 'name': 'p', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}}}, {'type': 'dataclass-field', 'name': 'prefix', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': ''}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Text defining context provided prior to the prompt.'}}}, 
{'type': 'dataclass-field', 'name': 'number_of_sequences', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 8}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of text sequences to generate.'}}}], 'type': 'dataclass-args'}, 'slots': True, 'type': 'dataclass'}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='ctrl', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='ctrl', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __pydantic_serializer__ = SchemaSerializer(serializer=Dataclass(...), definitions=[])¶
- __pydantic_validator__ = SchemaValidator(title="HuggingFaceCTRLGenerator", validator=Dataclass(...), definitions=[], cache_strings=True)¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (*args: Any, algorithm_version: str = 'ctrl', model_type: str = 'ctrl', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of HuggingFaceCTRLGenerator
- algorithm_application: ClassVar[str] = 'HuggingFaceCTRLGenerator'¶
Unique name for the application, i.e., the use of this configuration together with a specific algorithm.
Will be set when registering to ApplicationsRegistry, but can be given by direct registration (see register_algorithm_application).
- algorithm_name: ClassVar[str] = 'HuggingFaceGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to ApplicationsRegistry.
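Example
A minimal sketch of CTRL-based generation (the prompt and parameter values are illustrative; per the field description above, a repetition_penalty of 1.2 is recommended for CTRL, and CTRL prompts typically start with a control code such as "Links"):
from gt4sd.algorithms.generation.hugging_face.core import (
    HuggingFaceCTRLGenerator,
    HuggingFaceGenerationAlgorithm,
)
# Configure CTRL generation; 1.2 is the repetition penalty suggested for CTRL.
configuration = HuggingFaceCTRLGenerator(
    prompt="Links In the year 2100,",
    length=50,
    repetition_penalty=1.2,
    number_of_sequences=2,
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
items = list(algorithm.sample(2))
print(items)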
- class HuggingFaceGPT2Generator(*args, **kwargs)[source]¶
Bases:
HuggingFaceGPT2Generator
Configuration to generate text using GPT2.
- algorithm_version: str = 'gpt2'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- model_type: str = 'gpt2'¶
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search, adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as algorithm_version for the environment.
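For example, available versions can be inspected before building a configuration (the exact result depends on the configured S3 storage and local cache):
from gt4sd.algorithms.generation.hugging_face.core import HuggingFaceGPT2Generator
# Query S3 and the local cache for viable GPT2 algorithm versions.
versions = HuggingFaceGPT2Generator.list_versions()
print(sorted(versions))  # includes 'gpt2', the configured default version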
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': 'ClassVar[str]', 'algorithm_version': <class 'str'>, 'do_sample': 'bool', 'domain': 'ClassVar[str]', 'k': 'int', 'length': 'int', 'model_type': <class 'str'>, 'num_beams': 'int', 'number_of_sequences': 'int', 'p': 'float', 'prefix': 'str', 'prompt': 'str', 'repetition_penalty': 'float', 'stop_token': 'str', 'temperature': 'float'}¶
- __dataclass_fields__ = {'algorithm_application': Field(name='algorithm_application',type=typing.ClassVar[str],default='HuggingFaceGPT2Generator',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_name': Field(name='algorithm_name',type=typing.ClassVar[str],default='HuggingFaceGenerationAlgorithm',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_type': Field(name='algorithm_type',type=typing.ClassVar[str],default='generation',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_version': Field(name='algorithm_version',type=<class 'str'>,default='gpt2',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'do_sample': Field(name='do_sample',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}),kw_only=False,_field_type=_FIELD), 'domain': Field(name='domain',type=typing.ClassVar[str],default='nlp',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'k': Field(name='k',type=<class 'int'>,default=50,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of top-k probability tokens to keep.'}),kw_only=False,_field_type=_FIELD), 'length': Field(name='length',type=<class 'int'>,default=20,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Length of the generated text.'}),kw_only=False,_field_type=_FIELD), 'model_type': Field(name='model_type',type=<class 'str'>,default='gpt2',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'num_beams': Field(name='num_beams',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of beams for beam search.'}),kw_only=False,_field_type=_FIELD), 'number_of_sequences': Field(name='number_of_sequences',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of text sequences to generate.'}),kw_only=False,_field_type=_FIELD), 'p': Field(name='p',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}),kw_only=False,_field_type=_FIELD), 'prefix': Field(name='prefix',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Text 
defining context provided prior to the prompt.'}),kw_only=False,_field_type=_FIELD), 'prompt': Field(name='prompt',type=<class 'str'>,default="I'm a stochastic parrot.",default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Prompt for text generation.'}),kw_only=False,_field_type=_FIELD), 'repetition_penalty': Field(name='repetition_penalty',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}),kw_only=False,_field_type=_FIELD), 'stop_token': Field(name='stop_token',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Stop token for text generation.'}),kw_only=False,_field_type=_FIELD), 'temperature': Field(name='temperature',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Temperature for sampling, the lower the greedier the sampling.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Configuration to generate text using GPT2.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_core_schema__ = {'cls': <class 'gt4sd.algorithms.generation.hugging_face.core.HuggingFaceGPT2Generator'>, 'config': {'title': 'HuggingFaceGPT2Generator'}, 'fields': ['algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences'], 'frozen': False, 'post_init': False, 'ref': 'types.HuggingFaceGPT2Generator:94427942099968', 'schema': {'collect_init_only': False, 'computed_fields': [], 'dataclass_name': 'HuggingFaceGPT2Generator', 'fields': [{'type': 'dataclass-field', 'name': 'algorithm_version', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': 'gpt2'}, 'kw_only': False, 'init': True, 'metadata': {}}, {'type': 'dataclass-field', 'name': 'model_type', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': 'gpt2'}, 'kw_only': False, 'init': True, 'metadata': {}}, {'type': 'dataclass-field', 'name': 'prompt', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': "I'm a stochastic parrot."}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Prompt for text generation.'}}}, {'type': 'dataclass-field', 'name': 'length', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 20}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Length of the generated text.'}}}, {'type': 'dataclass-field', 'name': 'stop_token', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': ''}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Stop token for text generation.'}}}, {'type': 'dataclass-field', 'name': 'num_beams', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 1}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of beams for beam search.'}}}, {'type': 'dataclass-field', 'name': 'do_sample', 'schema': {'type': 'default', 'schema': {'type': 'bool'}, 'default': True}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}}}, {'type': 'dataclass-field', 'name': 'temperature', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Temperature for sampling, the lower the greedier the sampling.'}}}, {'type': 'dataclass-field', 'name': 'repetition_penalty', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}}}, {'type': 'dataclass-field', 'name': 'k', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 50}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of top-k probability tokens to keep.'}}}, {'type': 'dataclass-field', 'name': 'p', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}}}, {'type': 'dataclass-field', 'name': 'prefix', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': ''}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Text defining context provided prior to the prompt.'}}}, 
{'type': 'dataclass-field', 'name': 'number_of_sequences', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 8}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of text sequences to generate.'}}}], 'type': 'dataclass-args'}, 'slots': True, 'type': 'dataclass'}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='gpt2', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='gpt2', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __pydantic_serializer__ = SchemaSerializer(serializer=Dataclass(...), definitions=[])¶
- __pydantic_validator__ = SchemaValidator(title="HuggingFaceGPT2Generator", validator=Dataclass(...), definitions=[], cache_strings=True)¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (*args: Any, algorithm_version: str = 'gpt2', model_type: str = 'gpt2', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of HuggingFaceGPT2Generator
- algorithm_application: ClassVar[str] = 'HuggingFaceGPT2Generator'¶
Unique name for the application, i.e., the use of this configuration together with a specific algorithm.
Will be set when registering to ApplicationsRegistry, but can be given by direct registration (see register_algorithm_application).
- algorithm_name: ClassVar[str] = 'HuggingFaceGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to ApplicationsRegistry.
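Example
A minimal sketch of GPT2 generation using beam search instead of sampling (parameter values are illustrative; number_of_sequences is kept at or below num_beams, as beam search generally requires):
from gt4sd.algorithms.generation.hugging_face.core import (
    HuggingFaceGPT2Generator,
    HuggingFaceGenerationAlgorithm,
)
# Disable sampling and use beam search for more deterministic output.
configuration = HuggingFaceGPT2Generator(
    prompt="Generative models can",
    length=30,
    do_sample=False,
    num_beams=4,
    number_of_sequences=1,
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
print(list(algorithm.sample(1)))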
- class HuggingFaceOpenAIGPTGenerator(*args, **kwargs)[source]¶
Bases:
HuggingFaceOpenAIGPTGenerator
Configuration to generate text using OpenAIGPT.
- algorithm_version: str = 'openai-gpt'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- model_type: str = 'openai-gpt'¶
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search, adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as algorithm_version for the environment.
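For instance, a sketch of nucleus (top-p) sampling with the OpenAIGPT configuration (parameter values are illustrative):
from gt4sd.algorithms.generation.hugging_face.core import (
    HuggingFaceOpenAIGPTGenerator,
    HuggingFaceGenerationAlgorithm,
)
# Keep only the most probable tokens whose cumulative probability reaches p=0.9
# and soften the temperature for slightly greedier sampling.
configuration = HuggingFaceOpenAIGPTGenerator(
    prompt="The laboratory notebook reads:",
    temperature=0.8,
    p=0.9,
    number_of_sequences=3,
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
print(list(algorithm.sample(3)))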
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': 'ClassVar[str]', 'algorithm_version': <class 'str'>, 'do_sample': 'bool', 'domain': 'ClassVar[str]', 'k': 'int', 'length': 'int', 'model_type': <class 'str'>, 'num_beams': 'int', 'number_of_sequences': 'int', 'p': 'float', 'prefix': 'str', 'prompt': 'str', 'repetition_penalty': 'float', 'stop_token': 'str', 'temperature': 'float'}¶
- __dataclass_fields__ = {'algorithm_application': Field(name='algorithm_application',type=typing.ClassVar[str],default='HuggingFaceOpenAIGPTGenerator',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_name': Field(name='algorithm_name',type=typing.ClassVar[str],default='HuggingFaceGenerationAlgorithm',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_type': Field(name='algorithm_type',type=typing.ClassVar[str],default='generation',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_version': Field(name='algorithm_version',type=<class 'str'>,default='openai-gpt',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'do_sample': Field(name='do_sample',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}),kw_only=False,_field_type=_FIELD), 'domain': Field(name='domain',type=typing.ClassVar[str],default='nlp',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'k': Field(name='k',type=<class 'int'>,default=50,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of top-k probability tokens to keep.'}),kw_only=False,_field_type=_FIELD), 'length': Field(name='length',type=<class 'int'>,default=20,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Length of the generated text.'}),kw_only=False,_field_type=_FIELD), 'model_type': Field(name='model_type',type=<class 'str'>,default='openai-gpt',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'num_beams': Field(name='num_beams',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of beams for beam search.'}),kw_only=False,_field_type=_FIELD), 'number_of_sequences': Field(name='number_of_sequences',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of text sequences to generate.'}),kw_only=False,_field_type=_FIELD), 'p': Field(name='p',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}),kw_only=False,_field_type=_FIELD), 'prefix': Field(name='prefix',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE 
object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Text defining context provided prior to the prompt.'}),kw_only=False,_field_type=_FIELD), 'prompt': Field(name='prompt',type=<class 'str'>,default="I'm a stochastic parrot.",default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Prompt for text generation.'}),kw_only=False,_field_type=_FIELD), 'repetition_penalty': Field(name='repetition_penalty',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}),kw_only=False,_field_type=_FIELD), 'stop_token': Field(name='stop_token',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Stop token for text generation.'}),kw_only=False,_field_type=_FIELD), 'temperature': Field(name='temperature',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Temperature for sampling, the lower the greedier the sampling.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Configuration to generate text using OpenAIGPT.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_core_schema__ = {'cls': <class 'gt4sd.algorithms.generation.hugging_face.core.HuggingFaceOpenAIGPTGenerator'>, 'config': {'title': 'HuggingFaceOpenAIGPTGenerator'}, 'fields': ['algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences'], ...}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='openai-gpt', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='openai-gpt', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __pydantic_serializer__ = SchemaSerializer(...)¶
- __pydantic_validator__ = SchemaValidator(...)¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (*args: Any, algorithm_version: str = 'openai-gpt', model_type: str = 'openai-gpt', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of HuggingFaceOpenAIGPTGenerator
- algorithm_application: ClassVar[str] = 'HuggingFaceOpenAIGPTGenerator'¶
Unique name for the application, i.e. the use of this configuration together with a specific algorithm.
Will be set when registering to ApplicationsRegistry, but can be given by direct registration (see register_algorithm_application).
- algorithm_name: ClassVar[str] = 'HuggingFaceGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to ApplicationsRegistry.
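Example
A minimal sketch of using this configuration with non-default sampling parameters; the prompt and parameter values below are illustrative assumptions, and a compatible openai-gpt model must be resolvable through the standard S3/cache lookup:
from gt4sd.algorithms.generation.hugging_face.core import (
    HuggingFaceGenerationAlgorithm,
    HuggingFaceOpenAIGPTGenerator,
)

# Override a few generation fields (prompt, length, temperature, k, number_of_sequences);
# all other fields keep the defaults documented above.
configuration = HuggingFaceOpenAIGPTGenerator(
    prompt="Generative models for science",
    length=32,
    temperature=0.8,
    k=40,
    number_of_sequences=2,
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
items = list(algorithm.sample(2))
print(items)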
- class HuggingFaceXLNetGenerator(*args, **kwargs)[source]¶
Bases: HuggingFaceXLNetGenerator
Configuration to generate text using XLNet.
- algorithm_version: str = 'xlnet-large-cased'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- model_type: str = 'xlnet'¶
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search, adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as algorithm_version for the environment.
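Example
A minimal sketch pairing this configuration with HuggingFaceGenerationAlgorithm, assuming the default xlnet-large-cased version is available via the standard S3/cache lookup (prompt and length are illustrative):
from gt4sd.algorithms.generation.hugging_face.core import (
    HuggingFaceGenerationAlgorithm,
    HuggingFaceXLNetGenerator,
)

# XLNet-based text generation with a custom prompt and output length.
configuration = HuggingFaceXLNetGenerator(prompt="The chemistry of polymers", length=30)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
print(list(algorithm.sample(1)))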
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': 'ClassVar[str]', 'algorithm_version': <class 'str'>, 'do_sample': 'bool', 'domain': 'ClassVar[str]', 'k': 'int', 'length': 'int', 'model_type': <class 'str'>, 'num_beams': 'int', 'number_of_sequences': 'int', 'p': 'float', 'prefix': 'str', 'prompt': 'str', 'repetition_penalty': 'float', 'stop_token': 'str', 'temperature': 'float'}¶
- __dataclass_fields__ = {'algorithm_application': Field(name='algorithm_application',type=typing.ClassVar[str],default='HuggingFaceXLNetGenerator',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_name': Field(name='algorithm_name',type=typing.ClassVar[str],default='HuggingFaceGenerationAlgorithm',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_type': Field(name='algorithm_type',type=typing.ClassVar[str],default='generation',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_version': Field(name='algorithm_version',type=<class 'str'>,default='xlnet-large-cased',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'do_sample': Field(name='do_sample',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}),kw_only=False,_field_type=_FIELD), 'domain': Field(name='domain',type=typing.ClassVar[str],default='nlp',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'k': Field(name='k',type=<class 'int'>,default=50,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of top-k probability tokens to keep.'}),kw_only=False,_field_type=_FIELD), 'length': Field(name='length',type=<class 'int'>,default=20,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Length of the generated text.'}),kw_only=False,_field_type=_FIELD), 'model_type': Field(name='model_type',type=<class 'str'>,default='xlnet',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'num_beams': Field(name='num_beams',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of beams for beam search.'}),kw_only=False,_field_type=_FIELD), 'number_of_sequences': Field(name='number_of_sequences',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of text sequences to generate.'}),kw_only=False,_field_type=_FIELD), 'p': Field(name='p',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}),kw_only=False,_field_type=_FIELD), 'prefix': Field(name='prefix',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE 
object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Text defining context provided prior to the prompt.'}),kw_only=False,_field_type=_FIELD), 'prompt': Field(name='prompt',type=<class 'str'>,default="I'm a stochastic parrot.",default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Prompt for text generation.'}),kw_only=False,_field_type=_FIELD), 'repetition_penalty': Field(name='repetition_penalty',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}),kw_only=False,_field_type=_FIELD), 'stop_token': Field(name='stop_token',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Stop token for text generation.'}),kw_only=False,_field_type=_FIELD), 'temperature': Field(name='temperature',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Temperature for sampling, the lower the greedier the sampling.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Configuration to generate text using XLNet.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_core_schema__ = {'cls': <class 'gt4sd.algorithms.generation.hugging_face.core.HuggingFaceXLNetGenerator'>, 'config': {'title': 'HuggingFaceXLNetGenerator'}, 'fields': ['algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences'], ...}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='xlnet-large-cased', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='xlnet', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __pydantic_serializer__ = SchemaSerializer(...)¶
- __pydantic_validator__ = SchemaValidator(...)¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (*args: Any, algorithm_version: str = 'xlnet-large-cased', model_type: str = 'xlnet', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of HuggingFaceXLNetGenerator
- algorithm_application: ClassVar[str] = 'HuggingFaceXLNetGenerator'¶
Unique name for the application, i.e. the use of this configuration together with a specific algorithm.
Will be set when registering to ApplicationsRegistry, but can be given by direct registration (see register_algorithm_application).
- algorithm_name: ClassVar[str] = 'HuggingFaceGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to ApplicationsRegistry.
- class HuggingFaceTransfoXLGenerator(*args, **kwargs)[source]¶
Bases: HuggingFaceTransfoXLGenerator
Configuration to generate text using TransfoXL.
- algorithm_version: str = 'transfo-xl-wt103'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- model_type: str = 'transfo-xl'¶
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search, adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as algorithm_version for the environment.
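Example
A minimal sketch of inspecting the available versions before instantiating the configuration; the import path follows this module's location and the version string is the documented default:
from gt4sd.algorithms.generation.hugging_face.core import HuggingFaceTransfoXLGenerator

# Standard S3 and cache search; the configured default 'transfo-xl-wt103' is included in the set.
versions = HuggingFaceTransfoXLGenerator.list_versions()
print(sorted(versions))

configuration = HuggingFaceTransfoXLGenerator(algorithm_version='transfo-xl-wt103')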
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': 'ClassVar[str]', 'algorithm_version': <class 'str'>, 'do_sample': 'bool', 'domain': 'ClassVar[str]', 'k': 'int', 'length': 'int', 'model_type': <class 'str'>, 'num_beams': 'int', 'number_of_sequences': 'int', 'p': 'float', 'prefix': 'str', 'prompt': 'str', 'repetition_penalty': 'float', 'stop_token': 'str', 'temperature': 'float'}¶
- __dataclass_fields__ = {'algorithm_application': Field(name='algorithm_application',type=typing.ClassVar[str],default='HuggingFaceTransfoXLGenerator',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_name': Field(name='algorithm_name',type=typing.ClassVar[str],default='HuggingFaceGenerationAlgorithm',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_type': Field(name='algorithm_type',type=typing.ClassVar[str],default='generation',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_version': Field(name='algorithm_version',type=<class 'str'>,default='transfo-xl-wt103',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'do_sample': Field(name='do_sample',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}),kw_only=False,_field_type=_FIELD), 'domain': Field(name='domain',type=typing.ClassVar[str],default='nlp',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'k': Field(name='k',type=<class 'int'>,default=50,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of top-k probability tokens to keep.'}),kw_only=False,_field_type=_FIELD), 'length': Field(name='length',type=<class 'int'>,default=20,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Length of the generated text.'}),kw_only=False,_field_type=_FIELD), 'model_type': Field(name='model_type',type=<class 'str'>,default='transfo-xl',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'num_beams': Field(name='num_beams',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of beams for beam search.'}),kw_only=False,_field_type=_FIELD), 'number_of_sequences': Field(name='number_of_sequences',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of text sequences to generate.'}),kw_only=False,_field_type=_FIELD), 'p': Field(name='p',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}),kw_only=False,_field_type=_FIELD), 'prefix': Field(name='prefix',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE 
object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Text defining context provided prior to the prompt.'}),kw_only=False,_field_type=_FIELD), 'prompt': Field(name='prompt',type=<class 'str'>,default="I'm a stochastic parrot.",default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Prompt for text generation.'}),kw_only=False,_field_type=_FIELD), 'repetition_penalty': Field(name='repetition_penalty',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}),kw_only=False,_field_type=_FIELD), 'stop_token': Field(name='stop_token',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Stop token for text generation.'}),kw_only=False,_field_type=_FIELD), 'temperature': Field(name='temperature',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Temperature for sampling, the lower the greedier the sampling.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Configuration to generate text using TransfoXL.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_core_schema__ = {'cls': <class 'gt4sd.algorithms.generation.hugging_face.core.HuggingFaceTransfoXLGenerator'>, 'config': {'title': 'HuggingFaceTransfoXLGenerator'}, 'fields': ['algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences'], ...}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='transfo-xl-wt103', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='transfo-xl', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __pydantic_serializer__ = SchemaSerializer(...)¶
- __pydantic_validator__ = SchemaValidator(title="HuggingFaceTransfoXLGenerator", validator=Dataclass(DataclassValidator { dataclass_name: "HuggingFaceTransfoXLGenerator", positional_count: 13, ... }), definitions=[], cache_strings=True)¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (*args: Any, algorithm_version: str = 'transfo-xl-wt103', model_type: str = 'transfo-xl', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of
HuggingFaceTransfoXLGenerator
- algorithm_application: ClassVar[str] = 'HuggingFaceTransfoXLGenerator'¶
Unique name for the application that is the use of this configuration together with a specific algorithm.
Will be set when registering to
ApplicationsRegistry
, but can be given by direct registration (See register_algorithm_application).
- algorithm_name: ClassVar[str] = 'HuggingFaceGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to
ApplicationsRegistry
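Example
A minimal usage sketch for this configuration, following the sampling pattern of HuggingFaceGenerationAlgorithm; the prompt, length, and sample count below are illustrative values only, not recommended settings:
from gt4sd.algorithms.generation.hugging_face.core import (
    HuggingFaceGenerationAlgorithm,
    HuggingFaceTransfoXLGenerator,
)

# Override a couple of the defaults listed in __signature__ above.
configuration = HuggingFaceTransfoXLGenerator(
    prompt="Generative models for science",
    length=32,
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
# Draw a small batch of generated texts.
items = list(algorithm.sample(3))
print(items)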
- class HuggingFaceSeq2SeqGenerator(*args, **kwargs)[source]¶
Bases:
HuggingFaceSeq2SeqGenerator
Configuration to generate text using Seq2Seq LMs.
- algorithm_version: str = 't5-small'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- model_type: str = 'auto-seq2seq-lm'¶
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as
algorithm_version
for the environment.
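Example
A short sketch of checking the available versions before instantiating the configuration; the fallback choice below is an arbitrary illustration, since the returned set depends on S3 access and the local cache in your environment:
from gt4sd.algorithms.generation.hugging_face.core import HuggingFaceSeq2SeqGenerator

# Query S3 and the local cache for versions usable with this configuration.
versions = HuggingFaceSeq2SeqGenerator.list_versions()
print(versions)

# Prefer the documented default, otherwise fall back to any listed version.
version = "t5-small" if "t5-small" in versions else sorted(versions)[0]
configuration = HuggingFaceSeq2SeqGenerator(algorithm_version=version)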
- __annotations__ = {'algorithm_application': 'ClassVar[str]', 'algorithm_name': 'ClassVar[str]', 'algorithm_type': 'ClassVar[str]', 'algorithm_version': <class 'str'>, 'do_sample': 'bool', 'domain': 'ClassVar[str]', 'k': 'int', 'length': 'int', 'model_type': <class 'str'>, 'num_beams': 'int', 'number_of_sequences': 'int', 'p': 'float', 'prefix': 'str', 'prompt': 'str', 'repetition_penalty': 'float', 'stop_token': 'str', 'temperature': 'float'}¶
- __dataclass_fields__ = {'algorithm_application': Field(name='algorithm_application',type=typing.ClassVar[str],default='HuggingFaceSeq2SeqGenerator',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_name': Field(name='algorithm_name',type=typing.ClassVar[str],default='HuggingFaceGenerationAlgorithm',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_type': Field(name='algorithm_type',type=typing.ClassVar[str],default='generation',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'algorithm_version': Field(name='algorithm_version',type=<class 'str'>,default='t5-small',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'do_sample': Field(name='do_sample',type=<class 'bool'>,default=True,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}),kw_only=False,_field_type=_FIELD), 'domain': Field(name='domain',type=typing.ClassVar[str],default='nlp',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE object>,_field_type=_FIELD_CLASSVAR), 'k': Field(name='k',type=<class 'int'>,default=50,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of top-k probability tokens to keep.'}),kw_only=False,_field_type=_FIELD), 'length': Field(name='length',type=<class 'int'>,default=20,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Length of the generated text.'}),kw_only=False,_field_type=_FIELD), 'model_type': Field(name='model_type',type=<class 'str'>,default='auto-seq2seq-lm',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'num_beams': Field(name='num_beams',type=<class 'int'>,default=1,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of beams for beam search.'}),kw_only=False,_field_type=_FIELD), 'number_of_sequences': Field(name='number_of_sequences',type=<class 'int'>,default=8,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Number of text sequences to generate.'}),kw_only=False,_field_type=_FIELD), 'p': Field(name='p',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}),kw_only=False,_field_type=_FIELD), 'prefix': Field(name='prefix',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE 
object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Text defining context provided prior to the prompt.'}),kw_only=False,_field_type=_FIELD), 'prompt': Field(name='prompt',type=<class 'str'>,default="I'm a stochastic parrot.",default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Prompt for text generation.'}),kw_only=False,_field_type=_FIELD), 'repetition_penalty': Field(name='repetition_penalty',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}),kw_only=False,_field_type=_FIELD), 'stop_token': Field(name='stop_token',type=<class 'str'>,default='',default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Stop token for text generation.'}),kw_only=False,_field_type=_FIELD), 'temperature': Field(name='temperature',type=<class 'float'>,default=1.0,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'description': 'Temperature for sampling, the lower the greedier the sampling.'}),kw_only=False,_field_type=_FIELD)}¶
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)¶
- __doc__ = 'Configuration to generate text using Seq2Seq LMs.'¶
- __eq__(other)¶
Return self==value.
- __hash__ = None¶
- __init__(*args, **kwargs)¶
- __match_args__ = ('algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences')¶
- __module__ = 'gt4sd.algorithms.generation.hugging_face.core'¶
- __parameters__ = (~T,)¶
- __pydantic_complete__ = True¶
- __pydantic_config__ = {}¶
- __pydantic_core_schema__ = {'cls': <class 'gt4sd.algorithms.generation.hugging_face.core.HuggingFaceSeq2SeqGenerator'>, 'config': {'title': 'HuggingFaceSeq2SeqGenerator'}, 'fields': ['algorithm_version', 'model_type', 'prompt', 'length', 'stop_token', 'num_beams', 'do_sample', 'temperature', 'repetition_penalty', 'k', 'p', 'prefix', 'number_of_sequences'], 'frozen': False, 'post_init': False, 'ref': 'types.HuggingFaceSeq2SeqGenerator:94427942298560', 'schema': {'collect_init_only': False, 'computed_fields': [], 'dataclass_name': 'HuggingFaceSeq2SeqGenerator', 'fields': [{'type': 'dataclass-field', 'name': 'algorithm_version', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': 't5-small'}, 'kw_only': False, 'init': True, 'metadata': {}}, {'type': 'dataclass-field', 'name': 'model_type', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': 'auto-seq2seq-lm'}, 'kw_only': False, 'init': True, 'metadata': {}}, {'type': 'dataclass-field', 'name': 'prompt', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': "I'm a stochastic parrot."}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Prompt for text generation.'}}}, {'type': 'dataclass-field', 'name': 'length', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 20}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Length of the generated text.'}}}, {'type': 'dataclass-field', 'name': 'stop_token', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': ''}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Stop token for text generation.'}}}, {'type': 'dataclass-field', 'name': 'num_beams', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 1}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of beams for beam search.'}}}, {'type': 'dataclass-field', 'name': 'do_sample', 'schema': {'type': 'default', 'schema': {'type': 'bool'}, 'default': True}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Whether or not to use sampling; use greedy decoding otherwise.'}}}, {'type': 'dataclass-field', 'name': 'temperature', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Temperature for sampling, the lower the greedier the sampling.'}}}, {'type': 'dataclass-field', 'name': 'repetition_penalty', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Primarily useful for CTRL model, where 1.2 should be used.'}}}, {'type': 'dataclass-field', 'name': 'k', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 50}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of top-k probability tokens to keep.'}}}, {'type': 'dataclass-field', 'name': 'p', 'schema': {'type': 'default', 'schema': {'type': 'float'}, 'default': 1.0}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Only tokens with cumulative probabilities summing up to this value are kept.'}}}, {'type': 'dataclass-field', 'name': 'prefix', 'schema': {'type': 'default', 'schema': {'type': 'str'}, 'default': ''}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Text defining context provided 
prior to the prompt.'}}}, {'type': 'dataclass-field', 'name': 'number_of_sequences', 'schema': {'type': 'default', 'schema': {'type': 'int'}, 'default': 8}, 'kw_only': False, 'init': True, 'metadata': {'pydantic_js_updates': {'description': 'Number of text sequences to generate.'}}}], 'type': 'dataclass-args'}, 'slots': True, 'type': 'dataclass'}¶
- __pydantic_decorators__ = DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})¶
- __pydantic_fields__ = {'algorithm_version': FieldInfo(annotation=str, required=False, default='t5-small', init=True, init_var=False, kw_only=False), 'do_sample': FieldInfo(annotation=bool, required=False, default=True, description='Whether or not to use sampling; use greedy decoding otherwise.', init=True, init_var=False, kw_only=False), 'k': FieldInfo(annotation=int, required=False, default=50, description='Number of top-k probability tokens to keep.', init=True, init_var=False, kw_only=False), 'length': FieldInfo(annotation=int, required=False, default=20, description='Length of the generated text.', init=True, init_var=False, kw_only=False), 'model_type': FieldInfo(annotation=str, required=False, default='auto-seq2seq-lm', init=True, init_var=False, kw_only=False), 'num_beams': FieldInfo(annotation=int, required=False, default=1, description='Number of beams for beam search.', init=True, init_var=False, kw_only=False), 'number_of_sequences': FieldInfo(annotation=int, required=False, default=8, description='Number of text sequences to generate.', init=True, init_var=False, kw_only=False), 'p': FieldInfo(annotation=float, required=False, default=1.0, description='Only tokens with cumulative probabilities summing up to this value are kept.', init=True, init_var=False, kw_only=False), 'prefix': FieldInfo(annotation=str, required=False, default='', description='Text defining context provided prior to the prompt.', init=True, init_var=False, kw_only=False), 'prompt': FieldInfo(annotation=str, required=False, default="I'm a stochastic parrot.", description='Prompt for text generation.', init=True, init_var=False, kw_only=False), 'repetition_penalty': FieldInfo(annotation=float, required=False, default=1.0, description='Primarily useful for CTRL model, where 1.2 should be used.', init=True, init_var=False, kw_only=False), 'stop_token': FieldInfo(annotation=str, required=False, default='', description='Stop token for text generation.', init=True, init_var=False, kw_only=False), 'temperature': FieldInfo(annotation=float, required=False, default=1.0, description='Temperature for sampling, the lower the greedier the sampling.', init=True, init_var=False, kw_only=False)}¶
- __pydantic_serializer__ = SchemaSerializer(serializer=Dataclass(DataclassSerializer { name: "HuggingFaceSeq2SeqGenerator", required_fields: 13, ... }), definitions=[])¶
- __pydantic_validator__ = SchemaValidator(title="HuggingFaceSeq2SeqGenerator", validator=Dataclass(DataclassValidator { dataclass_name: "HuggingFaceSeq2SeqGenerator", positional_count: 13, ... }), definitions=[], cache_strings=True)¶
- __repr__()¶
Return repr(self).
- __signature__ = <Signature (*args: Any, algorithm_version: str = 't5-small', model_type: str = 'auto-seq2seq-lm', prompt: str = "I'm a stochastic parrot.", length: int = 20, stop_token: str = '', num_beams: int = 1, do_sample: bool = True, temperature: float = 1.0, repetition_penalty: float = 1.0, k: int = 50, p: float = 1.0, prefix: str = '', number_of_sequences: int = 8) -> None>¶
- __wrapped__¶
alias of
HuggingFaceSeq2SeqGenerator
- algorithm_application: ClassVar[str] = 'HuggingFaceSeq2SeqGenerator'¶
Unique name for the application that is the use of this configuration together with a specific algorithm.
Will be set when registering to
ApplicationsRegistry
, but can be given by direct registration (See register_algorithm_application).
- algorithm_name: ClassVar[str] = 'HuggingFaceGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to
ApplicationsRegistry
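Example
An end-to-end sketch combining this configuration with HuggingFaceGenerationAlgorithm; the T5-style translation prompt and the length are illustrative choices, not defaults of this class:
from gt4sd.algorithms.generation.hugging_face.core import (
    HuggingFaceGenerationAlgorithm,
    HuggingFaceSeq2SeqGenerator,
)

# Seq2Seq configuration using a handful of the documented keyword arguments.
configuration = HuggingFaceSeq2SeqGenerator(
    algorithm_version="t5-small",
    prompt="translate English to German: The house is wonderful.",
    length=32,
)
algorithm = HuggingFaceGenerationAlgorithm(configuration=configuration)
items = list(algorithm.sample(2))
print(items)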