rotor package
Module contents
- class rotor.CheckpointOptim(*args, **kwargs)
Bases: torch.autograd.function.Function
This computes a sequence of functions, following the sequence of operations given as argument. A selected subset of activations is stored during the forward phase, some with their computation graph, some without. The backward phase follows the sequence from its end, with some recomputations when values are missing.
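CheckpointOptim is normally driven by Checkpointable rather than called directly. The sketch below only illustrates the calling convention of forward/backward; obtaining the schedule through Checkpointable.compute_sequence is an assumption about the intended workflow, and all sizes are illustrative:

    import torch
    import rotor

    # Sketch only: in practice Checkpointable drives CheckpointOptim internally.
    model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU())
    functions = list(model.children())
    names = [name for name, _ in model.named_children()]

    # Assumed workflow: borrow a precomputed schedule from a Checkpointable wrapper.
    chk = rotor.Checkpointable(model, custom_input=torch.rand(16, 128))
    seq = chk.compute_sequence()

    x = torch.rand(16, 128, requires_grad=True)  # requires_grad so backward runs
    out = rotor.CheckpointOptim.apply(functions, seq, names, True, x)
    out.sum().backward()  # replays the sequence, recomputing missing activations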
- static backward(ctx, *args)
Computes the gradients of the tensors given as input during the forward call with respect to the args tensors.
- Parameters
ctx (pytorch context) – holds information from the forward call as tensors, torch modules and the sequence.
*args (torch.Tensor or tuple[torch.Tensor]) – gradient tensors.
- Returns
Gradients of the tensors given as input during the forward call.
- Return type
torch.Tensor or tuple[torch.Tensor]
- Raises
ValueError – if a sequence.Loss operation is encountered.
- static forward(ctx, functions, sequence, names, preserve_rng_state, arg)
Overrides the torch.autograd.Function.forward method. Applies functions to the input in the order defined by sequence. The backward sequence and intermediate tensors are stored in ctx.
- ctx
Holds information for backward.
- Type
pytorch context
- functions
List of the functions to apply sequentially in the model.
- Type
List[torch.nn.modules]
- names
Names of functions.
- Type
List[str]
- preserve_rng_state
If the model contains randomized operations, save the random states.
- Type
bool
- arg
Inputs to be forwarded.
- Type
torch.Tensor or tuple[torch.Tensor]
- Returns
Tensor resulting from the operations applied to arg.
- Return type
torch.Tensor or tuple[torch.Tensor]
Warning
A warning is emitted if none of the inputs has requires_grad set to True.
- Raises
ValueError – if a sequence.Backward operation is encountered.
AttributeError – if an unknown operation is encountered.
- class rotor.Checkpointable(model: torch.nn.modules.container.Sequential, custom_input: Optional[torch.Tensor] = None, mem_limit: Optional[int] = None, mem_slots: int = 500, verbosity=0, preserve_rng_state: bool = True, loss_tmp_memory_usage=0)
Bases: torch.nn.modules.module.Module
Main class of the rotor module.
Holds the PyTorch sequential module, the user parameters, the measurements, and the computed optimal forward sequence.
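A minimal usage sketch, assuming the constructor uses custom_input to run its measurements; the model sizes and the 1 GiB budget below are illustrative:

    import torch
    import rotor

    model = torch.nn.Sequential(
        torch.nn.Linear(128, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, 128), torch.nn.ReLU(),
    )

    # Wrap the sequential model under an illustrative 1 GiB memory budget.
    chk = rotor.Checkpointable(model, custom_input=torch.rand(16, 128), mem_limit=2**30)

    x = torch.rand(16, 128, requires_grad=True)
    chk.train()
    out = chk(x)          # checkpointed forward in training mode
    out.sum().backward()  # backward with recomputations as scheduled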
- model
PyTorch sequential module.
- Type
torch.nn.Sequential
- names
List of names of each module of the sequential model.
- Type
List[str]
- functions
List of the modules of the sequential model.
- Type
List[torch.nn.modules]
- verbosity
Output verbosity level.
- Type
int
- mem_slots
Discretization level of the optimiser. Default: 500.
- Type
int
- preserve_rng_state
If the model contains randomized operations, save random states.
- Type
bool
- inspection_values
Time and memory measures of the forward and backward passes.
- loss_tmp_memory_usage
Additional memory consumption of the Loss operation in bytes.
- Type
int
- mem_limit
User- or hardware-defined maximum memory peak in bytes during the forward and backward phases. Defaults to 90% of the device's free memory.
- Type
int
- build_chain(mem_limit: int) → rotor.inspection.Chain
Builds self.chain after the measurements have been made.
The memory measures held in self.all_values are converted and rounded up (ceiled) according to mem_limit and self.mem_slots.
- Parameters
mem_limit (int) – Custom memory limit in bytes. Used to determine the size in bytes of a chain memory bucket.
- Returns
The chain built from the recorded measures.
- Return type
rotor.inspection.Chain
- Raises
ValueError – if no measures were recorded before the call.
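The bucket arithmetic behind this discretization can be sketched as follows; this illustrates the idea, not rotor's internal code:

    import math

    def to_slots(memory_bytes: int, mem_limit: int, mem_slots: int = 500) -> int:
        """Convert a raw memory measure to discrete slots (sketch of the idea).

        Each slot stands for mem_limit / mem_slots bytes; measures are rounded
        up (ceiled) so the discretized chain never underestimates memory.
        """
        bucket = mem_limit / mem_slots           # size of one memory bucket, in bytes
        return math.ceil(memory_bytes / bucket)  # ceiled number of buckets

    # e.g. a 3 MiB activation under a 1 GiB limit with 500 slots:
    print(to_slots(3 * 2**20, 2**30))  # -> 2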
- compute_sequence(custom_mem_limit_bytes: Optional[int] = None, mem_slots: Optional[int] = None, algo: rotor.algorithms.utils.DynamicAlgorithm = <rotor.algorithms.persistent.Persistent object>, **algo_kwargs) → rotor.algorithms.sequence.Sequence
Computes the optimal rotor checkpointed sequence.
- Parameters
custom_mem_limit_bytes (int) – Use this parameter to set a custom memory limit in bytes.
mem_slots (int) – Quantity of memory buckets. Used for discretization in the dynamic sequence optimisation module.
algo (rotor.algorithms.DynamicAlgorithm) – Dynamic algorithm used to compute sequence.
algo_kwargs – Optional algo keyword arguments.
- Returns
The optimized sequence of operations.
- Return type
rotor.algorithms.sequence.Sequence
- Raises
ValueError – if running fully checkpointed forward and backward phases requires more than custom_mem_limit_bytes.
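A short sketch of guarding against a too-tight budget, continuing the chk wrapper from the example above (the 512 MiB figure is illustrative):

    try:
        # Ask for a schedule under an illustrative 512 MiB budget.
        seq = chk.compute_sequence(custom_mem_limit_bytes=512 * 2**20)
    except ValueError:
        # Even the fully checkpointed schedule needs more memory than that;
        # fall back to the wrapper's default limit.
        seq = chk.compute_sequence()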
- forward(inputs)
Overrides torch.nn.Module.forward. Computes the optimal sequence in training mode.
- Parameters
inputs (torch.Tensor or tuple[torch.Tensor]) – input data of the model.
- Returns
Result of the rotor forward computation of inputs in training mode; the plain model computation output otherwise.
- Return type
torch.Tensor or tuple[torch.Tensor]
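Since the checkpointed schedule only runs in training mode, the module's mode switch selects which path is taken (continuing the sketch above):

    chk.train()
    out_ckpt = chk(x)   # rotor's checkpointed computation
    chk.eval()
    out_plain = chk(x)  # plain model forward, no checkpointing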
- measure(custom_input: torch.Tensor)
Profiles time and memory usage. Performs time and memory measurements on a sequential forward and backward pass of self.model on the custom input.
- Parameters
custom_input (torch.Tensor) – input data.
- Raises
ValueError – if no measures were made beforehand.
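A sketch of re-profiling on a new representative input and rebuilding the chain, continuing the example above (the 1 GiB limit is illustrative):

    # Re-profile timings and memory usage on a representative input ...
    chk.measure(torch.rand(16, 128))
    # ... then rebuild the discretized chain under the new measures.
    chain = chk.build_chain(mem_limit=2**30)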
- training: bool