gt4sd.algorithms.conditional_generation.key_bert.core module¶
Algorithms for keyword generation using BERT models.
Summary¶
Classes:

KeyBERTGenerator | Configuration to generate keywords.
KeywordBERTGenerationAlgorithm | Topics prediction algorithm.

Reference¶
- class KeywordBERTGenerationAlgorithm(configuration, target)[source]¶
Bases: GeneratorAlgorithm[S, T]
Topics prediction algorithm.
- __init__(configuration, target)[source]¶
Instantiate KeywordBERTGenerationAlgorithm ready to predict topics.
- Parameters
configuration (AlgorithmConfiguration[S, T]) – domain and application specification defining parameters, types and validations.
target (Optional[T]) – a target for which to generate items.
Example
An example for predicting topics for a given text:
config = KeyBERTGenerator()
algorithm = KeywordBERTGenerationAlgorithm(configuration=config, target="This is a text I want to understand better")
items = list(algorithm.sample(1))
print(items)
- get_generator(configuration, target)[source]¶
Get the function to perform the prediction via KeywordBERTGenerationAlgorithm’s generator.
- Parameters
configuration (AlgorithmConfiguration[S, T]) – helps to set up specific application of KeywordBERTGenerationAlgorithm.
target (Optional[T]) – context or condition for the generation.
- Return type
Callable[[T], Iterable[Any]]
- Returns
callable with target generating keywords sorted by relevance.
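The returned callable ranks candidate phrases by embedding similarity to the target text. Below is a minimal self-contained sketch of that ranking, with toy bag-of-words count vectors standing in for the sentence-BERT embeddings the real algorithm uses; the embed, cosine and rank_keywords helpers are illustrative, not part of gt4sd:

```python
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Hypothetical stand-in embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_keywords(document: str, candidates: list[str], top_n: int = 10) -> list[tuple[str, float]]:
    """Score each candidate phrase against the document and keep the top_n."""
    doc_vec = embed(document)
    scored = [(c, cosine(embed(c), doc_vec)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]


document = "generative models for scientific discovery"
candidates = ["generative models", "scientific discovery", "weather"]
print(rank_keywords(document, candidates, top_n=2))
```

With real BERT embeddings the scores capture semantic rather than lexical overlap, but the sorting by relevance is the same idea.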
- class KeyBERTGenerator(*args, **kwargs)[source]¶
Bases: KeyBERTGenerator, Generic[T]
Configuration to generate keywords.
If the model is not found in the cache, models are collected from https://www.sbert.net/docs/pretrained_models.html. distilbert-base-nli-stsb-mean-tokens is recommended for English, while xlm-r-bert-base-nli-stsb-mean-tokens is recommended for all other languages, as it supports 100+ languages.
- algorithm_name: ClassVar[str] = 'KeywordBERTGenerationAlgorithm'¶
Name of the algorithm to use with this configuration.
Will be set when registering to ApplicationsRegistry.
- algorithm_type: ClassVar[str] = 'conditional_generation'¶
General type of generative algorithm.
- domain: ClassVar[str] = 'nlp'¶
General application domain. Hints at input/output types.
- algorithm_version: str = 'distilbert-base-nli-mean-tokens'¶
To differentiate between different versions of an application.
There is no imposed naming convention.
- minimum_keyphrase_ngram: int = 1¶
Lower bound for phrase size.
- maximum_keyphrase_ngram: int = 2¶
Upper bound for phrase size.
- stop_words: str = 'english'¶
Language for the stop words removal.
- top_n: int = 10¶
Number of keywords to extract.
- use_maxsum: bool = False¶
Control usage of max sum similarity for keywords generated.
- use_mmr: bool = False¶
Control usage of max marginal relevance for keywords generated.
- diversity: float = 0.5¶
Diversity for the results when enabling use_mmr.
- number_of_candidates: int = 20¶
Candidates considered when enabling use_maxsum.
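The use_mmr and diversity fields select a re-ranking strategy. A toy sketch of maximal marginal relevance, the strategy use_mmr enables: each pick trades relevance to the document against similarity to already-selected keywords. The mmr_select helper and the similarity numbers below are illustrative, not gt4sd APIs:

```python
def mmr_select(doc_sim: dict[str, float],
               pair_sim: dict[tuple[str, str], float],
               top_n: int,
               diversity: float = 0.5) -> list[str]:
    """Greedy maximal-marginal-relevance selection over scored candidates."""
    candidates = sorted(doc_sim, key=doc_sim.get, reverse=True)
    selected = [candidates.pop(0)]  # start from the most relevant candidate
    while candidates and len(selected) < top_n:
        def score(c: str) -> float:
            # Redundancy: highest similarity to anything already selected.
            redundancy = max(pair_sim.get((c, s), pair_sim.get((s, c), 0.0))
                             for s in selected)
            return (1 - diversity) * doc_sim[c] - diversity * redundancy
        best = max(candidates, key=score)
        candidates.remove(best)
        selected.append(best)
    return selected


# Illustrative similarities: "neural networks" is highly relevant but
# nearly a duplicate of "deep learning".
doc_sim = {"deep learning": 0.9, "neural networks": 0.85, "cooking": 0.3}
pair_sim = {("deep learning", "neural networks"): 0.95,
            ("deep learning", "cooking"): 0.05,
            ("neural networks", "cooking"): 0.05}
# With high diversity the near-duplicate "neural networks" is demoted.
print(mmr_select(doc_sim, pair_sim, top_n=2, diversity=0.7))
```

At diversity=0.0 the selection reduces to plain relevance ordering; raising diversity pushes the result set toward dissimilar keywords.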
- get_target_description()[source]¶
Get description of the target for generation.
- Return type
Dict[str, str]
- Returns
target description.
- get_conditional_generator(resources_path)[source]¶
Instantiate the actual generator implementation.
- Parameters
resources_path (str) – local path to model files.
- Return type
- Returns
instance with generate_batch method for targeted generation.
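The generate_batch contract above can be illustrated with a placeholder generator; ToyKeywordGenerator below is a stand-in for the real KeyBERT-backed implementation and simply returns the distinct words of the target:

```python
class ToyKeywordGenerator:
    """Placeholder implementation of the generate_batch contract."""

    def generate_batch(self, target: str) -> list[str]:
        # Placeholder "keywords": the distinct words of the target text.
        return sorted(set(target.lower().split()))


generator = ToyKeywordGenerator()
print(generator.generate_batch("keywords from keywords"))
```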
- classmethod list_versions()[source]¶
Get possible algorithm versions.
Standard S3 and cache search adding the version used in the configuration.
- Return type
Set[str]
- Returns
viable values as algorithm_version for the environment.
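The version search described above amounts to a set union over the S3 bucket, the local cache, and the version pinned in the configuration. A toy sketch under that reading (the function and the version names below are illustrative, not the gt4sd implementation):

```python
def collect_versions(s3_versions: set[str],
                     cached_versions: set[str],
                     configured: str) -> set[str]:
    """Union of remotely listed, locally cached, and configured versions."""
    return s3_versions | cached_versions | {configured}


print(collect_versions({"v1"}, {"v0"}, "distilbert-base-nli-mean-tokens"))
```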
- __signature__ = <Signature (algorithm_version: str = 'distilbert-base-nli-mean-tokens', minimum_keyphrase_ngram: int = 1, maximum_keyphrase_ngram: int = 2, stop_words: str = 'english', top_n: int = 10, use_maxsum: bool = False, use_mmr: bool = False, diversity: float = 0.5, number_of_candidates: int = 20) -> None>¶