gt4sd.frameworks.gflownet.ml.module module¶
Model module.
Summary¶
Classes:
GFlowNetAlgorithm: a generic algorithm for gflownet.
GFlowNetModule: module from gflownet.
Reference¶
- class GFlowNetAlgorithm[source]¶
Bases:
object
A generic algorithm for gflownet.
- compute_batch_losses(model, batch, num_bootstrap=0)[source]¶
Computes the loss for a batch of data and provides logging information.
- Parameters
model (Module) – the model being trained or evaluated.
batch (Batch) – a batch of graphs.
num_bootstrap (Optional[int]) – the number of trajectories with reward targets in the batch (if applicable).
- Returns
the loss for that batch. info: logged information about model predictions.
- Return type
loss
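To illustrate the interface, a minimal sketch of a concrete algorithm. The subclass below is hypothetical (not part of gt4sd), and plain Python callables and lists of (input, reward_target) pairs stand in for the torch Module and Batch objects:

```python
class GFlowNetAlgorithm:
    """Generic interface: subclasses implement compute_batch_losses."""

    def compute_batch_losses(self, model, batch, num_bootstrap=0):
        raise NotImplementedError


class MeanSquaredErrorAlgorithm(GFlowNetAlgorithm):
    """Toy algorithm: squared error against per-trajectory reward targets."""

    def compute_batch_losses(self, model, batch, num_bootstrap=0):
        # batch: list of (input, reward_target) pairs standing in for a graph Batch.
        preds = [model(x) for x, _ in batch]
        targets = [t for _, t in batch]
        loss = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(batch)
        # info mirrors the documented second return value: logged model information.
        info = {"loss": loss, "num_bootstrap": num_bootstrap}
        return loss, info
```

A caller receives the (loss, info) pair, e.g. `loss, info = algo.compute_batch_losses(lambda x: 2 * x, [(1.0, 2.0), (2.0, 3.0)])`.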
- class GFlowNetModule(configuration, dataset, environment, context, task, algorithm, model)[source]¶
Bases:
LightningModule
Module from gflownet.
- __init__(configuration, dataset, environment, context, task, algorithm, model)[source]¶
Construct GFNModule.
- Parameters
configuration (Dict[str, Any]) – the configuration of the module.
dataset (GFlowNetDataset) – the dataset to use.
environment (GraphBuildingEnv) – the environment to use.
context (GraphBuildingEnvContext) – the context to use.
task (GFlowNetTask) – the task to solve.
algorithm (GFlowNetAlgorithm) – the algorithm to use (trajectory_balance or td_loss).
model (Module) – the model architecture (graph_transformer_gfn or graph_transformer).
- training_step(batch, batch_idx, optimizer_idx, *args, **kwargs)[source]¶
Training step implementation.
- Parameters
batch (Batch) – batch representation.
batch_idx (int) – batch index.
optimizer_idx (int) – optimizer index.
- Return type
Dict[str, Any]
- Returns
loss and logs.
- training_step_end(batch_parts)[source]¶
Use this when training with dp because training_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.
Note
If you later switch to ddp or some other mode, this will still be called so that you don't have to change your code.
# pseudocode
sub_batches = split_batches_for_dp(batch)
step_output = [training_step(sub_batch) for sub_batch in sub_batches]
training_step_end(step_output)
- Parameters
step_output – What you return in training_step for each batch part.
- Returns
Anything
When using the DP strategy, only a portion of the batch is inside the training_step:
def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch
    out = self(x)
    # softmax uses only a portion of the batch in the denominator
    loss = self.softmax(out)
    loss = nce_loss(loss)
    return loss
If you wish to do something with all the parts of the batch, then use this method to do it:
def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch
    out = self.encoder(x)
    return {"pred": out}

def training_step_end(self, training_step_outputs):
    gpu_0_pred = training_step_outputs[0]["pred"]
    gpu_1_pred = training_step_outputs[1]["pred"]
    gpu_n_pred = training_step_outputs[n]["pred"]
    # this softmax now uses the full batch
    loss = nce_loss([gpu_0_pred, gpu_1_pred, gpu_n_pred])
    return loss
See also
See the Multi GPU Training guide for more details.
- validation_step(batch, batch_idx, *args, **kwargs)[source]¶
Validation step implementation.
- Parameters
batch (Batch) – batch representation.
- Return type
Dict[str, Any]
- Returns
loss and logs.
- prediction_step(batch)[source]¶
Inference step.
- Parameters
batch – batch data.
- Return type
Tensor
- Returns
the output of the forward pass.
- train_epoch_end(outputs)[source]¶
Train epoch end.
- Parameters
outputs (List[Dict[str, Any]]) – list of outputs collected over the epoch.
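train_epoch_end receives every training_step output collected over the epoch; a typical use is aggregating the per-batch losses. A hypothetical pure-Python sketch of such an aggregation:

```python
def train_epoch_end_sketch(outputs):
    # outputs: List[Dict[str, Any]], one dict per training_step call.
    losses = [out["loss"] for out in outputs]
    # Average the per-batch losses into a single epoch-level metric.
    epoch_loss = sum(losses) / len(losses)
    return {"epoch_loss": epoch_loss}
```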
- configure_optimizers()[source]¶
Configure optimizers.
- Returns
an optimizer, currently only Adam is supported.
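Since only Adam is supported, a sketch of what one Adam update computes, written for a single scalar parameter with the standard default hyperparameters (in practice the module would return a torch.optim.Adam over the model's parameters; this pure-Python version is only illustrative):

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

With a positive gradient the parameter moves down by roughly lr on the first step, since the bias-corrected moments make the initial update approximately lr * sign(grad).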