Function to check the Python environment and install necessary packages | check_and_install |
Function to check if inputs are supported by corresponding fit function | check_input_args_fit |
Function to choose a kernel initializer for a torch layer | choose_kernel_initializer_torch |
Method for extracting ensemble coefficient estimates | coef.drEnsemble |
Character-to-parameter collection function needed for mixtures of the same distribution (torch) | collect_distribution_parameters |
Function to combine two penalties | combine_penalties |
Function to create (custom) family | create_family |
Function to create (custom) family | create_family_torch |
Function to create mgcv-type penalty | create_penalty |
Generic cv function (see the CV sketch after this index) | cv |
Fitting Semi-Structured Deep Distributional Regression (see the usage sketch after this index) | deepregression |
Function to define output distribution based on dist_fun | distfun_to_dist |
Generic deep ensemble function | ensemble |
Ensembling deepregression models (see the ensembling sketch after this index) | ensemble.deepregression |
Extract the smooth term from a deepregression term specification | extract_pure_gam_part |
Convenience function to extract penalty matrix and value | extract_S |
Formula helpers | extractlen extractval extractvals form2text |
Extract variable from term | extractvar |
Character-tfd mapping function | family_to_tfd |
Character-to-transformation mapping function | family_to_trafo |
Character-to-transformation mapping function | family_to_trafo_torch |
Character-torch mapping function | family_to_trochd |
Method for extracting the fitted values of an ensemble | fitted.drEnsemble |
Options for formula parsing | form_control |
Function to transform a distribution layer output into a loss function | from_dist_to_loss |
Function to transform a distribution layer output into a loss function | from_dist_to_loss_torch |
Function to define output distribution based on dist_fun | from_distfun_to_dist_torch |
Define Predictor of a Deep Distributional Regression Model | from_preds_to_dist |
Define Predictor of a Deep Distributional Regression Model | from_preds_to_dist_torch |
Plot data helper used by gam_processor | gam_plot_data |
Function to return the fitted distribution | get_distribution |
Obtain the conditional ensemble distribution | get_ensemble_distribution |
Extract gam part from wrapped term | get_gam_part |
Extract property of gamdata | get_gamdata |
Extract number in matching table of reduced gam term | get_gamdata_reduced_nr |
Helper function to calculate the number of layers needed when shared layers are used, since shared layers have the same names | get_help_forward_torch |
Function to return layer given model and name | get_layer_by_opname |
Function to return layer number given model and name | get_layernr_by_opname |
Function to return layer numbers with trainable weights | get_layernr_trainable |
Helper function to create a function that generates R6 instances of class dataset | get_luz_dataset |
Extract term names from the parsed formula content | get_names_pfc |
Extract variables from wrapped node term | get_node_term |
Extract attributes/hyper-parameters of the node term | get_nodedata |
Return partial effect of one smooth term | get_partial_effect |
Extract processor name from term | get_processor_name |
Extract terms defined by specials in formula | get_special |
Function to subset parsed formulas | get_type_pfc |
Function to retrieve the weights of a structured layer | get_weight_by_name |
Function to return weight given model and name | get_weight_by_opname |
Function to define smoothness and call mgcv's smooth constructor | handle_gam_term |
Function to import required packages | import_packages |
Function to import required packages for TensorFlow (tensorflow, tfprobability, keras) | import_tf_dependings |
Function to import required packages for Torch (torch, torchvision, luz) | import_torch_dependings |
Compile a Deep Distributional Regression Model | keras_dr |
Convenience layer function | layer_add_identity layer_concatenate_identity |
Function to create custom nn_linear module to overwrite reset_parameters | layer_dense_module |
Function to define a torch layer similar to a tf dense layer | layer_dense_torch |
Function that creates a layer for each processor | autogam_processor gam_processor int_processor layer_generator lin_processor node_processor ri_processor |
NODE/ODTs Layer | layer_node |
Sparse Batch Normalization layer | layer_sparse_batch_normalization |
Sparse 2D Convolutional layer | layer_sparse_conv_2d |
Function to define spline as TensorFlow layer | layer_spline |
Function to define spline as Torch layer | layer_spline_torch |
Function to return the log_score | log_score |
Function to loop through parsed formulas and apply data trafo | loop_through_pfc_and_call_trafo |
Generate folds for CV out of a one-hot-encoded matrix | make_folds |
Creates a generator for training | make_generator |
Make a DataGenerator from a data.frame or matrix | make_generator_from_matrix |
Families for deepregression | make_tfd_dist make_torch_dist |
Convenience layer function | makeInputs |
Function that takes a term and creates a layer name | makelayername |
Function to initialize a nn_module; the forward function works with a list whose entries are the inputs of the subnetworks | model_torch |
Function to define an optimizer combining multiple optimizers | multioptimizer |
Function to exclude NA values | na_omit_list |
Returns the parameter names for a given family | names_families |
Custom nn_linear module to overwrite reset_parameters (nn_init_constant works only for scalar values, so warm starts for GAM terms do not work) | nn_init_no_grad_constant_deepreg |
Options for orthogonalization | orthog_control |
Function to compute adjusted penalty when orthogonalizing | orthog_P |
Orthogonalize a Semi-Structured Model Post-hoc | orthog_post_fitting |
Orthogonalize structured term by another matrix | orthog_structured_smooths_Z |
Options for penalty setup in the pre-processing | penalty_control |
Plot CV results from deepregression | plot_cv |
Generic functions for deepregression models | coef.deepregression cv.deepregression fit.deepregression fitted.deepregression mean.deepregression plot.deepregression predict.deepregression print.deepregression quant.deepregression stddev.deepregression |
Pre-calculate all gam parts from the list of formulas | precalc_gam |
Handler for prediction with gam terms | predict_gam_handler |
Generator function for deepregression objects | predict_gen |
Function to prepare data based on parsed formulas | prepare_data |
Function to additionally prepare data for the fit process (torch) | prepare_data_torch |
Function to prepare the input list for the fit process, which differs between engines | prepare_input_list_model |
Function to prepare new data based on parsed formulas | prepare_newdata |
Prepares distributions for mixture process | prepare_torch_distr_mixdistr |
Control function to define the processor for terms in the formula | process_terms |
Generic quantile function | quant |
Random effect layer | pen_layer re_layer |
Generic function to re-initialize model weights | reinit_weights |
Method to re-initialize the weights of a "deepregression" model | reinit_weights.deepregression |
Function to define orthogonalization connections in the formula | separate_define_relation |
Hadamard-type layers torch | simplyconnected_layer_torch tibgroup_layer_torch tiblinlasso_layer_torch tib_layer_torch |
Generic sd function | stddev |
Function to get the stopping iteration from CV | stop_iter_cv_result |
Initializes a Subnetwork based on the Processed Additive Predictor | subnetwork_init |
Initializes a Subnetwork based on the Processed Additive Predictor | subnetwork_init_torch |
TensorFlow repeat function which is not available for TF 2.0 | tf_repeat |
Row-wise tensor product using TensorFlow | tf_row_tensor |
Split tensor in multiple parts | tf_split_multiple |
Function to index tensor columns | tf_stride_cols |
Function to index a tensor's last dimension | tf_stride_last_dim_tensor |
For using mean squared error via TFP | tfd_mse |
Implementation of a zero-inflated negbinom distribution for TFP | tfd_zinb |
Implementation of a zero-inflated poisson distribution for TFP | tfd_zip |
Hadamard-type layers | inverse_group_lasso_pen layer_group_hadamard layer_hadamard layer_hadamard_diff regularizer_group_lasso simplyconnected_layer tibgroup_layer tib_layer |
Compile a Deep Distributional Regression Model (Torch) | torch_dr |
Function to update miniconda and packages | update_miniconda_deepregression |
Options for weights of layers | weight_control |
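
Usage sketch. A minimal, illustrative sketch of the central entry point deepregression() and its generic methods (fit, coef, predict), assuming the standard interface with 'y', 'list_of_formulas', 'list_of_deep_models' and 'data'. The toy data, network architecture and tuning values below are assumptions for illustration only, not recommended settings.

library(deepregression)

set.seed(42)
n <- 200
toy <- data.frame(x = rnorm(n), z = rnorm(n))
toy$y <- 2 * toy$x + sin(toy$z) + rnorm(n)

# illustrative deep network for the unstructured part of the predictor
deep_mod <- function(x) x |>
  keras::layer_dense(units = 8, activation = "relu") |>
  keras::layer_dense(units = 1)

mod <- deepregression(
  y = toy$y,
  list_of_formulas = list(loc = ~ 1 + x + s(z) + d(x, z), scale = ~ 1),
  list_of_deep_models = list(d = deep_mod),
  data = toy,
  family = "normal"
)

fit(mod, epochs = 10, verbose = FALSE)   # fit.deepregression
coef(mod)                                # coef.deepregression
head(predict(mod))                       # predict.deepregression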
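
CV sketch. A hedged sketch of cross-validation via the generic cv() together with plot_cv() and stop_iter_cv_result(); the fold count and epochs are placeholders, and plot_cv() is assumed to accept the object returned by cv().

cv_res <- cv(mod, cv_folds = 3, epochs = 10)   # cv.deepregression
plot_cv(cv_res)                                # plot CV results
stop_iter_cv_result(cv_res)                    # stopping iteration from CV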
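
Ensembling sketch. A hedged sketch of deep ensembling via ensemble() and the drEnsemble methods; the number of members and epochs are illustrative, and get_ensemble_distribution() is assumed to take the fitted ensemble directly.

ens <- ensemble(mod, n_ensemble = 3, epochs = 10)   # ensemble.deepregression
coef(ens)                        # coef.drEnsemble: per-member coefficient estimates
str(fitted(ens))                 # fitted.drEnsemble: fitted values of each member
get_ensemble_distribution(ens)   # conditional ensemble distribution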