BaseEvaluatorWrapper

Bases: ModelExtractor, ABC

Base class for wrappers handling model evaluation processes.

This class serves as a foundational structure for evaluator wrappers, offering methods to initialize, prepare, and evaluate models according to specified parameters. It provides core functionality to streamline evaluation, feature importance analysis, patient inference, and jackknife resampling.

Inherits
  • ModelExtractor: Loads configuration parameters and handles model extraction.
  • ABC: Specifies abstract methods that must be implemented by subclasses.

Parameters:
  • learners_dict (Dict): Dictionary containing models and their metadata. Required.
  • criterion (str): Criterion for selecting models (e.g., 'f1', 'brier_score'). Required.
  • aggregate (bool): Whether to aggregate metrics. Required.
  • verbose (bool): Controls verbosity in the evaluation process. Required.
  • random_state (int): Random state for resampling. Required.
  • path (Path): Path to the directory containing processed data files. Required.

Attributes:
  • learners_dict (Dict): Holds learners and metadata.
  • criterion (str): Evaluation criterion to select the optimal model.
  • aggregate (bool): Indicates if metrics should be aggregated.
  • verbose (bool): Flag for controlling logging verbosity.
  • random_state (int): Random state for resampling.
  • model (object): Best-ranked model for the given criterion.
  • encoding (str): Encoding type, either 'one_hot' or 'target'.
  • learner (str): The learner associated with the best model.
  • task (str): Task associated with the model ('pocketclosure', 'improve', etc.).
  • factor (Optional[float]): Resampling factor if applicable.
  • sampling (Optional[str]): Resampling strategy used (e.g., 'smote').
  • classification (str): Classification type ('binary' or 'multiclass').
  • dataloader (ProcessedDataLoader): Data loader and transformer.
  • resampler (Resampler): Resampling strategy for training and testing.
  • df (DataFrame): Loaded dataset.
  • df_processed (DataFrame): Processed dataset.
  • train_df (DataFrame): Training data after splitting.
  • test_df (DataFrame): Test data after splitting.
  • X_train (DataFrame): Training features.
  • y_train (Series): Training labels.
  • X_test (DataFrame): Test features.
  • y_test (Series): Test labels.
  • base_target (Optional[ndarray]): Baseline target for evaluations.
  • baseline (Baseline): Baseline class for model analysis.
  • evaluator (ModelEvaluator): Evaluator for model metrics and feature importance.
  • inference_engine (ModelInference): Model inference manager.
  • trainer (Trainer): Trainer for model evaluation and optimization.

Inherited Properties
  • criterion (str): Retrieves or sets current evaluation criterion for model selection. Supports 'f1', 'brier_score', and 'macro_f1'.
  • model (object): Retrieves best-ranked model dynamically based on the current criterion. Recalculates when criterion is updated.
Abstract Methods
  • wrapped_evaluation: Performs model evaluation and generates specified plots.
  • evaluate_cluster: Performs clustering and calculates Brier scores.
  • evaluate_feature_importance: Computes feature importance using specified methods.
  • average_over_splits: Aggregates metrics over multiple splits for model robustness.
  • wrapped_patient_inference: Runs inference on individual patient data.
  • wrapped_jackknife: Executes jackknife resampling on patient data for confidence interval estimation.
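
A minimal sketch of how a concrete wrapper might be built on this base. The subclass name, the origin of learners_dict, and the data path are illustrative assumptions, and the import presumes BaseEvaluatorWrapper is re-exported from periomod.wrapper (its source lives in periomod/wrapper/_basewrapper.py).

from pathlib import Path

from periomod.wrapper import BaseEvaluatorWrapper  # assumed re-export


class DemoEvaluatorWrapper(BaseEvaluatorWrapper):
    """Toy concrete wrapper: stub bodies satisfy the abstract interface."""

    def wrapped_evaluation(self, cm, cm_base, brier_groups, calibration, tight_layout):
        ...

    def evaluate_cluster(
        self, n_cluster, base, revaluation, true_preds, brier_threshold, tight_layout
    ):
        ...

    def evaluate_feature_importance(
        self, fi_types, base, revaluation, true_preds, brier_threshold
    ):
        ...

    def average_over_splits(self, num_splits, n_jobs):
        ...

    def wrapped_patient_inference(self, patient):
        ...

    def wrapped_jackknife(self, patient, results, sample_fraction, n_jobs, max_plots):
        ...


# learners_dict comes from a prior periomod training/benchmark run; its
# exact structure is documented elsewhere and not reproduced here.
learners_dict = ...
wrapper = DemoEvaluatorWrapper(
    learners_dict=learners_dict,
    criterion="f1",
    aggregate=True,
    verbose=False,
    random_state=0,
    path=Path("data/processed"),  # hypothetical processed-data directory
)
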
Source code in periomod/wrapper/_basewrapper.py
class BaseEvaluatorWrapper(ModelExtractor, ABC):
    """Base class for wrappers handling model evaluation processes.

    This class serves as a foundational structure for evaluator wrappers, offering
    methods to initialize, prepare, and evaluate models according to specified
    parameters. It provides core functionality to streamline evaluation, feature
    importance analysis, patient inference, and jackknife resampling.

    Inherits:
        - `ModelExtractor`: Loads configuration parameters and handles model
          extraction.
        - `ABC`: Specifies abstract methods that must be implemented by subclasses.

    Args:
        learners_dict (Dict): Dictionary containing models and their metadata.
        criterion (str): Criterion for selecting models (e.g., 'f1', 'brier_score').
        aggregate (bool): Whether to aggregate metrics.
        verbose (bool): Controls verbosity in the evaluation process.
        random_state (int): Random state for resampling.
        path (Path): Path to the directory containing processed data files.

    Attributes:
        learners_dict (Dict): Holds learners and metadata.
        criterion (str): Evaluation criterion to select the optimal model.
        aggregate (bool): Indicates if metrics should be aggregated.
        verbose (bool): Flag for controlling logging verbosity.
        random_state (int): Random state for resampling.
        model (object): Best-ranked model for the given criterion.
        encoding (str): Encoding type, either 'one_hot' or 'target'.
        learner (str): The learner associated with the best model.
        task (str): Task associated with the model ('pocketclosure', 'improve', etc.).
        factor (Optional[float]): Resampling factor if applicable.
        sampling (Optional[str]): Resampling strategy used (e.g., 'smote').
        classification (str): Classification type ('binary' or 'multiclass').
        dataloader (ProcessedDataLoader): Data loader and transformer.
        resampler (Resampler): Resampling strategy for training and testing.
        df (pd.DataFrame): Loaded dataset.
        df_processed (pd.DataFrame): Processed dataset.
        train_df (pd.DataFrame): Training data after splitting.
        test_df (pd.DataFrame): Test data after splitting.
        X_train (pd.DataFrame): Training features.
        y_train (pd.Series): Training labels.
        X_test (pd.DataFrame): Test features.
        y_test (pd.Series): Test labels.
        base_target (Optional[np.ndarray]): Baseline target for evaluations.
        baseline (Baseline): Baseline class for model analysis.
        evaluator (ModelEvaluator): Evaluator for model metrics and feature importance.
        inference_engine (ModelInference): Model inference manager.
        trainer (Trainer): Trainer for model evaluation and optimization.

    Inherited Properties:
        - `criterion (str)`: Retrieves or sets current evaluation criterion for model
            selection. Supports 'f1', 'brier_score', and 'macro_f1'.
        - `model (object)`: Retrieves best-ranked model dynamically based on the current
            criterion. Recalculates when criterion is updated.

    Abstract Methods:
        - `wrapped_evaluation`: Performs model evaluation and generates specified plots.
        - `evaluate_cluster`: Performs clustering and calculates Brier scores.
        - `evaluate_feature_importance`: Computes feature importance using specified
          methods.
        - `average_over_splits`: Aggregates metrics over multiple splits for model
          robustness.
        - `wrapped_patient_inference`: Runs inference on individual patient data.
        - `wrapped_jackknife`: Executes jackknife resampling on patient data for
          confidence interval estimation.
    """

    def __init__(
        self,
        learners_dict: Dict,
        criterion: str,
        aggregate: bool,
        verbose: bool,
        random_state: int,
        path: Path,
    ):
        """Initializes the evaluator wrapper base class with common parameters."""
        super().__init__(
            learners_dict=learners_dict,
            criterion=criterion,
            aggregate=aggregate,
            verbose=verbose,
            random_state=random_state,
        )
        self.path = path
        self.dataloader = ProcessedDataLoader(task=self.task, encoding=self.encoding)
        self.resampler = Resampler(
            classification=self.classification, encoding=self.encoding
        )
        (
            self.df,
            self.df_processed,
            self.train_df,
            self.test_df,
            self.X_train,
            self.y_train,
            self.X_test,
            self.y_test,
            self.base_target,
        ) = self._prepare_data_for_evaluation()
        self.baseline = Baseline(
            task=self.task,
            encoding=self.encoding,
            random_state=self.random_state,
            path=self.path,
        )
        self.evaluator = ModelEvaluator(
            model=self.model,
            X=self.X_test,
            y=self.y_test,
            encoding=self.encoding,
            aggregate=self.aggregate,
        )
        self.inference_engine = ModelInference(
            classification=self.classification,
            model=self.model,
            verbose=self.verbose,
        )
        self.trainer = Trainer(
            classification=self.classification,
            criterion=self.criterion,
            tuning=None,
            hpo=None,
        )

    def _prepare_data_for_evaluation(
        self,
    ) -> Tuple[
        pd.DataFrame,
        pd.DataFrame,
        pd.DataFrame,
        pd.DataFrame,
        pd.DataFrame,
        pd.DataFrame,
        pd.DataFrame,
        pd.DataFrame,
        Optional[np.ndarray],
    ]:
        """Prepares data for evaluation.

        Returns:
            Tuple: df, df_processed, train_df, test_df, X_train, y_train, X_test,
                y_test, and optionally base_target.
        """
        df = self.dataloader.load_data(path=self.path)

        task = "pocketclosure" if self.task == "pocketclosureinf" else self.task

        if task in ["pocketclosure", "pdgrouprevaluation"]:
            base_target = self._generate_base_target(df=df)
        else:
            base_target = None

        df_processed = self.dataloader.transform_data(df=df)
        train_df, test_df = self.resampler.split_train_test_df(
            df=df_processed, seed=self.random_state
        )
        if task in ["pocketclosure", "pdgrouprevaluation"] and base_target is not None:
            test_patient_ids = test_df[self.group_col]
            base_target = (
                base_target.reindex(df_processed.index)
                .loc[df_processed[self.group_col].isin(test_patient_ids)]
                .values
            )

        X_train, y_train, X_test, y_test = self.resampler.split_x_y(
            train_df=train_df, test_df=test_df
        )

        return (
            df,
            df_processed,
            train_df,
            test_df,
            X_train,
            y_train,
            X_test,
            y_test,
            base_target,
        )

    def _generate_base_target(self, df: pd.DataFrame) -> pd.Series:
        """Generates the target column before treatment based on the task.

        Args:
            df (pd.DataFrame): The input dataframe.

        Returns:
            pd.Series: The pre-treatment target column for evaluation.
        """
        if self.task in ["pocketclosure", "pocketclosureinf"]:
            return df.apply(
                lambda row: (
                    0
                    if (row["pdbaseline"] == 4 and row["bop"] == 2)
                    or row["pdbaseline"] > 4
                    else 1
                ),
                axis=1,
            )
        elif self.task == "pdgrouprevaluation":
            return df["pdgroupbase"]
        else:
            raise ValueError(f"Task '{self.task}' is not recognized.")

    def _train_and_get_metrics(
        self, seed: int, learner: str, test_set_size: float = 0.2, n_jobs: int = -1
    ) -> dict:
        """Helper function to run `train_final_model` with a specific seed.

        Args:
            seed (int): Seed value for train-test split.
            learner (str): Type of learner, used for MLP-specific training logic.
            test_set_size (float): Size of test set. Defaults to 0.2.
            n_jobs (int): Number of parallel jobs. Defaults to -1 (use all processors).

        Returns:
            dict: Metrics from `train_final_model`.
        """
        best_params = (
            self.model.get_params() if hasattr(self.model, "get_params") else {}
        )
        best_threshold = getattr(self.model, "best_threshold", None)
        model_tuple = (learner, best_params, best_threshold)

        result = self.trainer.train_final_model(
            df=self.df_processed,
            resampler=self.resampler,
            model=model_tuple,
            sampling=self.sampling,
            factor=self.factor,
            n_jobs=n_jobs,
            seed=seed,
            test_size=test_set_size,
            verbose=self.verbose,
        )
        return result["metrics"]

    def _subset_test_set(
        self, base: str, revaluation: str
    ) -> Tuple[pd.DataFrame, pd.DataFrame]:
        """Creates a subset of the test set based on differences in raw data variables.

        Args:
            base (str): Baseline variable to compare against in the raw `df`.
            revaluation (str): Revaluation variable to check for changes in the
                raw `df`.

        Returns:
            Tuple: Subsets of X_test and y_test where
                `revaluation` differs from `base`.
        """
        changed_indices = self.df.index[self.df[revaluation] != self.df[base]]
        X_test_subset = self.X_test.reindex(changed_indices)
        y_test_subset = self.y_test.reindex(changed_indices)
        return X_test_subset, y_test_subset

    def _test_filters(
        self,
        X: pd.DataFrame,
        y: pd.Series,
        base: Optional[str],
        revaluation: Optional[str],
        true_preds: bool,
        brier_threshold: Optional[float],
    ) -> Tuple[pd.DataFrame, pd.Series, int]:
        """Applies subsetting filters to the evaluator's test set.

        Args:
            X (pd.DataFrame): Feature set.
            y (pd.Series): Label set.
            base (Optional[str]): Baseline variable for comparison. If provided with
                `revaluation`, subsets to cases where `revaluation` differs from `base`.
            revaluation (Optional[str]): Revaluation variable for comparison. Used only
                if `base` is also provided.
            true_preds (bool): If True, further subsets to cases where the model's
                predictions match the true labels.
            brier_threshold (Optional[float]): Threshold for filtering Brier scores. If
                provided, further subsets to cases with Brier scores below threshold.

        Returns:
            Tuple: Filtered feature set, labels and number of unique patients.
        """
        if base and revaluation:
            X, y = self._subset_test_set(base=base, revaluation=revaluation)
            X, y = X.dropna(), y.dropna()

        if true_preds:
            pred = self.evaluator.model_predictions().reindex(y.index)
            correct_indices = y.index[pred == y]
            X, y = X.loc[correct_indices].dropna(), y.loc[correct_indices].dropna()

        if brier_threshold is not None:
            brier_scores = self.evaluator.brier_scores().reindex(y.index)
            threshold_indices = brier_scores[brier_scores < brier_threshold].index
            X, y = X.loc[threshold_indices].dropna(), y.loc[threshold_indices].dropna()

        subset_patient_ids = self.test_df.loc[y.index, self.group_col]
        num_patients = subset_patient_ids.nunique()

        return X, y, num_patients

    @abstractmethod
    def wrapped_evaluation(
        self,
        cm: bool,
        cm_base: bool,
        brier_groups: bool,
        calibration: bool,
        tight_layout: bool,
    ):
        """Runs evaluation on the best-ranked model based on specified criteria.

        Args:
            cm (bool): If True, plots the confusion matrix.
            cm_base (bool): If True, plots the confusion matrix against the
                value before treatment. Only applicable for specific tasks.
            brier_groups (bool): If True, calculates Brier score groups.
            calibration (bool): If True, plots model calibration.
            tight_layout (bool): If True, applies tight layout to the plot.
        """

    @abstractmethod
    def evaluate_cluster(
        self,
        n_cluster: int,
        base: Optional[str],
        revaluation: Optional[str],
        true_preds: bool,
        brier_threshold: Optional[float],
        tight_layout: bool,
    ):
        """Performs cluster analysis with Brier scores, with optional subsetting.

        Args:
            n_cluster (int): Number of clusters for Brier score clustering analysis.
            base (Optional[str]): Baseline variable for comparison.
            revaluation (Optional[str]): Revaluation variable for comparison.
            true_preds (bool): If True, further subsets to cases where model predictions
                match the true labels.
            brier_threshold (Optional[float]): Threshold for Brier score filtering.
            tight_layout (bool): If True, applies tight layout to the plot.
        """

    @abstractmethod
    def evaluate_feature_importance(
        self,
        fi_types: List[str],
        base: Optional[str],
        revaluation: Optional[str],
        true_preds: bool,
        brier_threshold: Optional[float],
    ):
        """Evaluates feature importance using specified types, with optional subsetting.

        Args:
            fi_types (List[str]): List of feature importance types to evaluate.
            base (Optional[str]): Baseline variable for comparison.
            revaluation (Optional[str]): Revaluation variable for comparison.
            true_preds (bool): If True, further subsets to cases where model predictions
                match the true labels.
            brier_threshold (Optional[float]): Threshold for Brier score filtering.
        """

    @abstractmethod
    def average_over_splits(self, num_splits: int, n_jobs: int):
        """Trains the final model over multiple splits with different seeds.

        Args:
            num_splits (int): Number of random seeds/splits to train the model on.
            n_jobs (int): Number of parallel jobs.
        """

    @abstractmethod
    def wrapped_patient_inference(
        self,
        patient: Patient,
    ):
        """Runs inference on the patient's data using the best-ranked model.

        Args:
            patient (Patient): A `Patient` dataclass instance containing patient-level,
                tooth-level, and side-level information.
        """

    @abstractmethod
    def wrapped_jackknife(
        self,
        patient: Patient,
        results: pd.DataFrame,
        sample_fraction: float,
        n_jobs: int,
        max_plots: int,
    ) -> pd.DataFrame:
        """Runs jackknife resampling for inference on a given patient's data.

        Args:
            patient (Patient): `Patient` dataclass instance containing patient-level
                information, tooth-level, and side-level details.
            results (pd.DataFrame): DataFrame to store results from jackknife inference.
            sample_fraction (float): The fraction of patient data to use for
                jackknife resampling.
            n_jobs (int): The number of parallel jobs to run.
            max_plots (int): Maximum number of plots for jackknife intervals.
        """

__init__(learners_dict, criterion, aggregate, verbose, random_state, path)

Initializes the evaluator wrapper base class with common parameters.
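
Once constructed, the prepared splits and helper objects listed under Attributes are available directly on the instance. A brief sketch, assuming wrapper is an instance of a concrete subclass such as the one outlined near the top of this page:

# Data splits populated by _prepare_data_for_evaluation.
print(wrapper.X_train.shape, wrapper.X_test.shape)
print(wrapper.task, wrapper.learner, wrapper.classification)

# The inherited criterion property re-selects the best-ranked model
# when changed (see Inherited Properties above).
wrapper.criterion = "brier_score"
best_model = wrapper.model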

Source code in periomod/wrapper/_basewrapper.py
def __init__(
    self,
    learners_dict: Dict,
    criterion: str,
    aggregate: bool,
    verbose: bool,
    random_state: int,
    path: Path,
):
    """Initializes the evaluator wrapper base class with common parameters."""
    super().__init__(
        learners_dict=learners_dict,
        criterion=criterion,
        aggregate=aggregate,
        verbose=verbose,
        random_state=random_state,
    )
    self.path = path
    self.dataloader = ProcessedDataLoader(task=self.task, encoding=self.encoding)
    self.resampler = Resampler(
        classification=self.classification, encoding=self.encoding
    )
    (
        self.df,
        self.df_processed,
        self.train_df,
        self.test_df,
        self.X_train,
        self.y_train,
        self.X_test,
        self.y_test,
        self.base_target,
    ) = self._prepare_data_for_evaluation()
    self.baseline = Baseline(
        task=self.task,
        encoding=self.encoding,
        random_state=self.random_state,
        path=self.path,
    )
    self.evaluator = ModelEvaluator(
        model=self.model,
        X=self.X_test,
        y=self.y_test,
        encoding=self.encoding,
        aggregate=self.aggregate,
    )
    self.inference_engine = ModelInference(
        classification=self.classification,
        model=self.model,
        verbose=self.verbose,
    )
    self.trainer = Trainer(
        classification=self.classification,
        criterion=self.criterion,
        tuning=None,
        hpo=None,
    )

average_over_splits(num_splits, n_jobs) abstractmethod

Trains the final model over multiple splits with different seeds.

Parameters:
  • num_splits (int): Number of random seeds/splits to train the model on. Required.
  • n_jobs (int): Number of parallel jobs. Required.
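
A brief call sketch, assuming wrapper is a concrete instance; that the method returns the aggregated metrics is an assumption about the implementing subclass, not something this abstract signature guarantees.

# Train the final model on five different random splits, using all
# processor cores, and collect the aggregated metrics.
avg_metrics = wrapper.average_over_splits(num_splits=5, n_jobs=-1)
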
Source code in periomod/wrapper/_basewrapper.py
@abstractmethod
def average_over_splits(self, num_splits: int, n_jobs: int):
    """Trains the final model over multiple splits with different seeds.

    Args:
        num_splits (int): Number of random seeds/splits to train the model on.
        n_jobs (int): Number of parallel jobs.
    """

evaluate_cluster(n_cluster, base, revaluation, true_preds, brier_threshold, tight_layout) abstractmethod

Performs cluster analysis with Brier scores, with optional subsetting.

Parameters:
  • n_cluster (int): Number of clusters for Brier score clustering analysis. Required.
  • base (Optional[str]): Baseline variable for comparison. Required.
  • revaluation (Optional[str]): Revaluation variable for comparison. Required.
  • true_preds (bool): If True, further subsets to cases where model predictions match the true labels. Required.
  • brier_threshold (Optional[float]): Threshold for Brier score filtering. Required.
  • tight_layout (bool): If True, applies tight layout to the plot. Required.
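
A brief call sketch on a concrete instance; the cluster count and Brier threshold are illustrative values.

# Cluster test-set Brier scores into three groups, keeping only cases
# with a Brier score below 0.1; no baseline/revaluation subsetting.
wrapper.evaluate_cluster(
    n_cluster=3,
    base=None,
    revaluation=None,
    true_preds=False,
    brier_threshold=0.1,
    tight_layout=True,
)
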
Source code in periomod/wrapper/_basewrapper.py
@abstractmethod
def evaluate_cluster(
    self,
    n_cluster: int,
    base: Optional[str],
    revaluation: Optional[str],
    true_preds: bool,
    brier_threshold: Optional[float],
    tight_layout: bool,
):
    """Performs cluster analysis with Brier scores, with optional subsetting.

    Args:
        n_cluster (int): Number of clusters for Brier score clustering analysis.
        base (Optional[str]): Baseline variable for comparison.
        revaluation (Optional[str]): Revaluation variable for comparison.
        true_preds (bool): If True, further subsets to cases where model predictions
            match the true labels.
        brier_threshold (Optional[float]): Threshold for Brier score filtering.
        tight_layout (bool): If True, applies tight layout to the plot.
    """

evaluate_feature_importance(fi_types, base, revaluation, true_preds, brier_threshold) abstractmethod

Evaluates feature importance using specified types, with optional subsetting.

Parameters:
  • fi_types (List[str]): List of feature importance types to evaluate. Required.
  • base (Optional[str]): Baseline variable for comparison. Required.
  • revaluation (Optional[str]): Revaluation variable for comparison. Required.
  • true_preds (bool): If True, further subsets to cases where model predictions match the true labels. Required.
  • brier_threshold (Optional[float]): Threshold for Brier score filtering. Required.
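
A brief call sketch on a concrete instance; "shap" is an assumed importance type, since the supported fi_types values are defined by the implementing subclass.

# Compute feature importance only on test cases the model predicted
# correctly; "shap" is an assumed importance type.
wrapper.evaluate_feature_importance(
    fi_types=["shap"],
    base=None,
    revaluation=None,
    true_preds=True,
    brier_threshold=None,
)
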
Source code in periomod/wrapper/_basewrapper.py
@abstractmethod
def evaluate_feature_importance(
    self,
    fi_types: List[str],
    base: Optional[str],
    revaluation: Optional[str],
    true_preds: bool,
    brier_threshold: Optional[float],
):
    """Evaluates feature importance using specified types, with optional subsetting.

    Args:
        fi_types (List[str]): List of feature importance types to evaluate.
        base (Optional[str]): Baseline variable for comparison.
        revaluation (Optional[str]): Revaluation variable for comparison.
        true_preds (bool): If True, further subsets to cases where model predictions
            match the true labels.
        brier_threshold (Optional[float]): Threshold for Brier score filtering.
    """

wrapped_evaluation(cm, cm_base, brier_groups, calibration, tight_layout) abstractmethod

Runs evaluation on the best-ranked model based on specified criteria.

Parameters:
  • cm (bool): If True, plots the confusion matrix. Required.
  • cm_base (bool): If True, plots the confusion matrix against the value before treatment. Only applicable for specific tasks. Required.
  • brier_groups (bool): If True, calculates Brier score groups. Required.
  • calibration (bool): If True, plots model calibration. Required.
  • tight_layout (bool): If True, applies tight layout to the plot. Required.
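
A brief call sketch on a concrete instance:

# Plot the confusion matrix and the calibration curve for the
# best-ranked model; skip the pre-treatment comparison and the
# Brier score groups.
wrapper.wrapped_evaluation(
    cm=True,
    cm_base=False,
    brier_groups=False,
    calibration=True,
    tight_layout=True,
)
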
Source code in periomod/wrapper/_basewrapper.py
@abstractmethod
def wrapped_evaluation(
    self,
    cm: bool,
    cm_base: bool,
    brier_groups: bool,
    calibration: bool,
    tight_layout: bool,
):
    """Runs evaluation on the best-ranked model based on specified criteria.

    Args:
        cm (bool): If True, plots the confusion matrix.
        cm_base (bool): If True, plots the confusion matrix against the
            value before treatment. Only applicable for specific tasks.
        brier_groups (bool): If True, calculates Brier score groups.
        calibration (bool): If True, plots model calibration.
        tight_layout (bool): If True, applies tight layout to the plot.
    """

wrapped_jackknife(patient, results, sample_fraction, n_jobs, max_plots) abstractmethod

Runs jackknife resampling for inference on a given patient's data.

Parameters:
  • patient (Patient): Patient dataclass instance containing patient-level information, tooth-level, and side-level details. Required.
  • results (DataFrame): DataFrame to store results from jackknife inference. Required.
  • sample_fraction (float): The fraction of patient data to use for jackknife resampling. Required.
  • n_jobs (int): The number of parallel jobs to run. Required.
  • max_plots (int): Maximum number of plots for jackknife intervals. Required.
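
A brief call sketch; patient is assumed to be a populated Patient dataclass instance, and the sample fraction and plot count are illustrative values.

import pandas as pd

# Jackknife-resample the patient's data, collecting inference results
# into a fresh DataFrame and drawing at most 12 interval plots.
jackknife_df = wrapper.wrapped_jackknife(
    patient=patient,
    results=pd.DataFrame(),
    sample_fraction=0.8,
    n_jobs=-1,
    max_plots=12,
)
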
Source code in periomod/wrapper/_basewrapper.py
@abstractmethod
def wrapped_jackknife(
    self,
    patient: Patient,
    results: pd.DataFrame,
    sample_fraction: float,
    n_jobs: int,
    max_plots: int,
) -> pd.DataFrame:
    """Runs jackknife resampling for inference on a given patient's data.

    Args:
        patient (Patient): `Patient` dataclass instance containing patient-level
            information, tooth-level, and side-level details.
        results (pd.DataFrame): DataFrame to store results from jackknife inference.
        sample_fraction (float): The fraction of patient data to use for
            jackknife resampling.
        n_jobs (int): The number of parallel jobs to run.
        max_plots (int): Maximum number of plots for jackknife intervals.
    """

wrapped_patient_inference(patient) abstractmethod

Runs inference on the patient's data using the best-ranked model.

Parameters:
  • patient (Patient): A Patient dataclass instance containing patient-level, tooth-level, and side-level information. Required.
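
A brief call sketch; constructing the Patient instance is covered elsewhere in the periomod documentation, and the shape of the returned predictions depends on the implementing subclass.

# Run single-patient inference with the best-ranked model; patient is
# assumed to be a populated Patient dataclass instance.
predictions = wrapper.wrapped_patient_inference(patient=patient)
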
Source code in periomod/wrapper/_basewrapper.py
@abstractmethod
def wrapped_patient_inference(
    self,
    patient: Patient,
):
    """Runs inference on the patient's data using the best-ranked model.

    Args:
        patient (Patient): A `Patient` dataclass instance containing patient-level,
            tooth-level, and side-level information.
    """