pyhgf.distribution.HGFPointwise

class pyhgf.distribution.HGFPointwise(input_data: Array | ndarray | bool_ | number | bool | int | float | complex = nan, time_steps: ndarray | Array | bool_ | number | bool | int | float | complex | None = None, model_type: str = 'continuous', n_levels: int = 2, response_function: Callable | None = None, response_function_inputs: ndarray | Array | bool_ | number | bool | int | float | complex | None = None)

The HGF distribution returning pointwise log probabilities.

This class should be used in the context of model comparison, where pointwise log probabilities are required for cross-validation (e.g. approximate leave-one-out cross-validation).
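For illustration, the sketch below shows one way the pointwise output could feed approximate leave-one-out cross-validation with PyMC and ArviZ, mirroring the two-level continuous HGF examples from the pyhgf tutorials. The simulated data, the response function first_level_gaussian_surprise, the companion HGFDistribution Op used as the sampling likelihood, and the keyword argument tonic_volatility_2 are assumptions for this sketch rather than part of this class's documented signature.

import arviz as az
import numpy as np
import pymc as pm

from pyhgf.distribution import HGFDistribution, HGFPointwise
from pyhgf.response import first_level_gaussian_surprise

# Simulated continuous observations, shaped (n_models, n_timepoints).
input_data = np.random.standard_normal((1, 200))

# Summed log probability: used as the sampling likelihood.
hgf_logp_op = HGFDistribution(
    n_levels=2,
    model_type="continuous",
    input_data=input_data,
    response_function=first_level_gaussian_surprise,
)

# Pointwise log probabilities: stored for cross-validation.
hgf_pointwise_op = HGFPointwise(
    n_levels=2,
    model_type="continuous",
    input_data=input_data,
    response_function=first_level_gaussian_surprise,
)

with pm.Model() as two_level_hgf:
    # NOTE: the keyword `tonic_volatility_2` follows the two-level
    # continuous HGF examples and is an assumption here.
    tonic_volatility_2 = pm.Normal("tonic_volatility_2", -11.0, 2.0)
    pm.Potential(
        "hgf_loglike", hgf_logp_op(tonic_volatility_2=tonic_volatility_2)
    )
    pm.Deterministic(
        "pointwise_loglikelihood",
        hgf_pointwise_op(tonic_volatility_2=tonic_volatility_2),
    )
    idata = pm.sample()

# Expose the pointwise values as a log_likelihood group so that
# az.loo() / az.compare() can use them.
idata.add_groups(log_likelihood=idata.posterior[["pointwise_loglikelihood"]])
az.loo(idata, var_name="pointwise_loglikelihood")

Two competing models can each store their own pointwise log-likelihood this way and then be ranked with az.compare().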

__init__(input_data: Array | ndarray | bool_ | number | bool | int | float | complex = nan, time_steps: ndarray | Array | bool_ | number | bool | int | float | complex | None = None, model_type: str = 'continuous', n_levels: int = 2, response_function: Callable | None = None, response_function_inputs: ndarray | Array | bool_ | number | bool | int | float | complex | None = None)

Distribution initialization.

Parameters:
input_data

List of input data. When n models are to be fitted, the list contains n 1d Numpy arrays. By default, the associated time points are the integers starting at 0 (i.e. unit time steps); a different vector can be passed via the time_steps argument.

time_steps

List of 1d Numpy arrays containing the time_steps vectors for each input time series. If one of the list items is None, or if None is provided instead, the time vector defaults to an integer vector starting at 0.

model_type

The model type to use (can be “continuous” or “binary”).

n_levels

The number of hierarchies in the perceptual model (can be 2 or 3). If None, the node hierarchy is not created and can be provided afterwards using add_nodes().

response_function

The response function to use to compute the model surprise.

response_function_inputs

A list of tuples with the same length as the number of models. Each tuple contains additional data and parameters that are made accessible to the response functions.
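As an illustration of these parameters, the sketch below instantiates the distribution for two input time series fitted in parallel. The simulated data, the stacked-array layout used for input_data and time_steps, and the response function first_level_gaussian_surprise are assumptions for this sketch.

import numpy as np

from pyhgf.distribution import HGFPointwise
from pyhgf.response import first_level_gaussian_surprise

# Two hypothetical continuous time series fitted in parallel,
# shaped (n_models, n_timepoints).
input_data = np.random.standard_normal((2, 200))

# Explicit (here regular) time steps for each series; when omitted,
# unit time steps are assumed.
time_steps = np.ones_like(input_data)

hgf_pointwise_op = HGFPointwise(
    input_data=input_data,
    time_steps=time_steps,
    model_type="continuous",
    n_levels=2,
    response_function=first_level_gaussian_surprise,
)

If the response function also needs behavioural data (e.g. a participant's decisions), these can be passed as one tuple per model through response_function_inputs.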

Methods

L_op(inputs, outputs, output_grads)

Construct a graph for the L-operator.

R_op(inputs, eval_points)

Construct a graph for the R-operator.

__init__([input_data, time_steps, ...])

Distribution initialization.

add_tag_trace(thing[, user_line])

Add tag.trace to a node or variable.

do_constant_folding(fgraph, node)

Determine whether or not constant folding should be performed for the given node.

grad(inputs, output_grads)

Construct a graph for the gradient with respect to each input variable.

make_node([mean_1, mean_2, mean_3, ...])

Convert inputs to symbolic variables.

make_py_thunk(node, storage_map, ...[, debug])

Make a Python thunk.

make_thunk(node, storage_map, compute_map, ...)

Create a thunk.

perform(node, inputs, outputs)

Run the function forward.

prepare_node(node, storage_map, compute_map, ...)

Make any special modifications that the Op needs before doing Op.make_thunk().
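For a sense of how these methods fit together in practice: calling the Op builds the symbolic graph via make_node(), and evaluating that graph runs perform(). The minimal sketch below assumes simulated data, the response function first_level_gaussian_surprise, and a tonic_volatility_2 keyword accepted by make_node(); these are assumptions, not documented guarantees.

import numpy as np

from pyhgf.distribution import HGFPointwise
from pyhgf.response import first_level_gaussian_surprise

# Build the Op for a single simulated time series (hypothetical data).
hgf_pointwise_op = HGFPointwise(
    input_data=np.random.standard_normal((1, 200)),
    response_function=first_level_gaussian_surprise,
)

# Calling the Op triggers make_node() and returns a symbolic tensor with
# one log probability per observation; .eval() compiles and runs the
# graph, which executes perform() under the hood.
pointwise_logp = hgf_pointwise_op(tonic_volatility_2=-2.0)
print(pointwise_logp.eval())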

Attributes

default_output

An int that specifies which output Op.__call__() should return.

destroy_map

A dict that maps output indices to the input indices upon which they operate in-place.

itypes

otypes

view_map

A dict that maps output indices to the input indices of which they are a view.