pyhgf.distribution.HGFDistribution#
- class pyhgf.distribution.HGFDistribution(input_data: Array | ndarray | bool_ | number | bool | int | float | complex = nan, time_steps: ndarray | Array | bool_ | number | bool | int | float | complex | None = None, model_type: str = 'continuous', n_levels: int = 2, response_function: Callable | None = None, response_function_inputs: ndarray | Array | bool_ | number | bool | int | float | complex | None = None)[source]#
The HGF distribution, compatible with PyMC >= 5.0.
Examples
This class should be used in the context of a PyMC probabilistic model to estimate the probability density of one or many parameters with MCMC sampling.
>>> import arviz as az
>>> import numpy as np
>>> import pymc as pm
>>> from math import log
>>> from pyhgf import load_data
>>> from pyhgf.distribution import HGFDistribution
>>> from pyhgf.response import total_gaussian_surprise
Create the data (value and time vectors)
>>> timeserie = load_data("continuous")
We create the PyMC distribution here, specifying the model type and the response function we want to use (i.e. the total Gaussian surprise).
>>> hgf_logp_op = HGFDistribution(
...     n_levels=2,
...     input_data=timeserie[np.newaxis],
...     response_function=total_gaussian_surprise,
... )
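As a hedged illustration (a plain-Python sketch, not pyhgf's actual implementation), a response function such as total_gaussian_surprise accumulates the Gaussian surprise, i.e. the negative log-likelihood of each observation under a Gaussian prediction:

```python
from math import log, pi

def gaussian_surprise(x: float, mu: float, precision: float) -> float:
    """Surprise (negative log-likelihood) of x under N(mu, 1/precision)."""
    sigma2 = 1.0 / precision
    return 0.5 * (log(2 * pi * sigma2) + (x - mu) ** 2 / sigma2)

# The total surprise of a time series is the sum over observations.
total = sum(gaussian_surprise(x, 0.0, 1.0) for x in [0.1, -0.2, 0.05])
```

A lower total surprise means the model predicted the series better, which is why minimising surprise is used as the optimisation target.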
Create the PyMC model. Here we are fixing all the parameters to arbitrary values and sampling the posterior probability density of the tonic volatility at the second level, using a normal distribution as prior:
\[\omega_2 \sim \mathcal{N}(-11.0, 2.0)\]

>>> with pm.Model() as model:
...     # prior over tonic volatility
...     ω_2 = pm.Normal("ω_2", -11.0, 2)
...     pm.Potential(
...         "hgf_loglike",
...         hgf_logp_op(
...             tonic_volatility_2=ω_2,
...             precision_1=1.0,
...             precision_2=1.0,
...             mean_1=1.0,
...         ),
...     )
Sample the model, using 2 chains, 1 core and 1,000 warmup iterations:
>>> with model:
...     idata = pm.sample(chains=2, cores=1, tune=1000)
Print a summary of the results using ArviZ:
>>> az.summary(idata, var_names="ω_2")
          mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
ω_2    -14.776  1.065 -16.766  -12.944      0.038    0.027       806       938      1
- __init__(input_data: Array | ndarray | bool_ | number | bool | int | float | complex = nan, time_steps: ndarray | Array | bool_ | number | bool | int | float | complex | None = None, model_type: str = 'continuous', n_levels: int = 2, response_function: Callable | None = None, response_function_inputs: ndarray | Array | bool_ | number | bool | int | float | complex | None = None)[source]#
Distribution initialization.
- Parameters:
- input_data
List of input data. When n models should be fitted, the list contains n 1d Numpy arrays. By default, the associated time vector is a vector of unit time steps starting at 0; a different vector can be passed via the time_steps argument.
- time_steps
List of 1d Numpy arrays containing the time-step vectors for each input time series. If one of the list items is None, or if None is provided instead, the time vector defaults to a vector of integers starting at 0.
- model_type
The model type to use (can be “continuous” or “binary”).
- n_levels
The number of hierarchies in the perceptual model (can be 2 or 3). If None, the node hierarchy is not created and can be provided afterwards using add_nodes().
- response_function
The response function to use to compute the model surprise.
- response_function_inputs
A list of tuples with the same length as the number of models. Each tuple contains additional data and parameters that are made accessible to the response functions.
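As a hedged sketch of the shapes described above (the series values here are made up for illustration), two input series of equal length can be stacked row-wise, with a matching time_steps array of unit steps:

```python
import numpy as np

# Hypothetical data: two input time series of equal length,
# stacked so that each row holds one model's observations.
series_a = np.array([0.5, 0.7, 0.61, 0.64])
series_b = np.array([1.2, 1.1, 1.15, 1.18])
input_data = np.stack([series_a, series_b])  # shape (2, 4)

# A matching time_steps array: unit steps between observations,
# mirroring the default behaviour described above.
time_steps = np.ones_like(input_data)
```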
Methods

L_op(inputs, outputs, output_grads)
    Construct a graph for the L-operator.

R_op(inputs, eval_points)
    Construct a graph for the R-operator.

__init__([input_data, time_steps, ...])
    Distribution initialization.

add_tag_trace(thing[, user_line])
    Add tag.trace to a node or variable.

do_constant_folding(fgraph, node)
    Determine whether or not constant folding should be performed for the given node.

grad(inputs, output_gradients)
    Gradient of the function.

make_node([mean_1, mean_2, mean_3, ...])
    Convert inputs to symbolic variables.

make_py_thunk(node, storage_map, ...[, debug])
    Make a Python thunk.

make_thunk(node, storage_map, compute_map, ...)
    Create a thunk.

perform(node, inputs, outputs)
    Run the function forward.

prepare_node(node, storage_map, compute_map, ...)
    Make any special modifications that the Op needs before doing Op.make_thunk().

Attributes
default_output
    An int that specifies which output Op.__call__() should return.

destroy_map
    A dict that maps output indices to the input indices upon which they operate in-place.

itypes

otypes

view_map
    A dict that maps output indices to the input indices of which they are a view.