wsinfer.modellib.run_inference#

Run inference.

From the original paper (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7369575/): "In the prediction (test) phase, no data augmentation was applied except for the normalization of the color channels."

Functions#

run_inference(...) → tuple[list[str], list[str]]

Run model inference on a directory of whole slide images and save results to CSV.

Module Contents#

wsinfer.modellib.run_inference.run_inference(wsi_dir: str | pathlib.Path, results_dir: str | pathlib.Path, model_info: wsinfer_zoo.client.HFModelTorchScript | wsinfer.modellib.models.LocalModelTorchScript, batch_size: int = 32, num_workers: int = 0, speedup: bool = False) tuple[list[str], list[str]][source]#

Run model inference on a directory of whole slide images and save results to CSV.

This assumes the patching has already been done and the results are stored in results_dir. An error will be raised otherwise.

Output CSV files are written to {results_dir}/model-outputs/.

Parameters:
  • wsi_dir (str or Path) – Directory containing whole slide images. This directory must contain only whole slide images; otherwise, an error will be raised during model inference.

  • results_dir (str or Path) – Directory containing results of patching.

  • model_info – Instance of HFModelTorchScript or LocalModelTorchScript, including the model object and information about how to apply the model to new data.

  • batch_size (int) – The batch size during the forward pass (default is 32).

  • num_workers (int) – Number of workers for data loading (default is 0, meaning data are loaded in the main process).

  • speedup (bool) – If True, JIT-compile the model. This has a startup cost but model inference should be faster (default False).

Returns:

  A tuple of two lists of strings. The first list contains the slide IDs for which patching failed, and the second list contains the slide IDs for which model inference failed.
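
A minimal usage sketch, not taken from this page: it assumes that wsinfer_zoo's registry API (load_registry, get_model_by_name, load_model_torchscript) and the model name "breast-tumor-resnet34.tcga-brca" are available in your installed version, and that patching results for the slides already exist under results/.

```python
from pathlib import Path

from wsinfer_zoo.client import load_registry

from wsinfer.modellib.run_inference import run_inference

# Assumption: the registry API and model name below may differ between
# wsinfer_zoo versions; adjust to a model available in your registry.
registry = load_registry()
model_info = registry.get_model_by_name(
    "breast-tumor-resnet34.tcga-brca"
).load_model_torchscript()

# wsi_dir must contain only whole slide images, and patching results for
# them must already exist in results_dir (from the earlier patching step).
failed_patching, failed_inference = run_inference(
    wsi_dir=Path("slides"),
    results_dir=Path("results"),
    model_info=model_info,
    batch_size=32,
    num_workers=4,
    speedup=False,
)

# Per-slide CSV outputs are written to results/model-outputs/.
if failed_patching:
    print("Patching failed for:", failed_patching)
if failed_inference:
    print("Inference failed for:", failed_inference)
```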