Kidney | Vanderbilt Advanced Lab for Immersive AI Translation (VALIANT)

Glo-In-One-v2: holistic identification of glomerular cells, tissues, and lesions in human and mouse histopathology /valiant/2026/01/28/glo-in-one-v2-holistic-identification-of-glomerular-cells-tissues-and-lesions-in-human-and-mouse-histopathology/ Wed, 28 Jan 2026 17:02:41 +0000 Yu, Lining; Yin, Mengmeng; Deng, Ruining; Liu, Quan; Yao, Tianyuan; Cui, Can; Guo, Junlin; Wang, Yu; Wang, Yaohong; Zhao, Shilin; Yang, Haichun; & Huo, Yuankai. (2025). Journal of Medical Imaging, 12(6), 61406.

Segmenting structures and lesions inside kidney glomeruli usually requires expert nephropathologists to carefully examine tissue morphology, a process that is time-consuming and can vary between observers. Building on their earlier Glo-In-One toolkit for detecting and segmenting glomeruli, the authors developed Glo-In-One-v2, which adds more detailed segmentation capabilities. They created a large annotated dataset containing 14 labels that cover tissue regions, cell types, and glomerular lesions across 23,529 glomeruli from both human and mouse kidney histopathology images, making it one of the largest datasets of its kind. Using this dataset, they trained a single deep learning model with a dynamic-head architecture to segment all 14 classes from partially labeled whole slide images. The model was trained on 368 annotated kidney slides and learned to identify five intraglomerular tissue types and nine lesion types. It achieved solid performance, with an average Dice similarity coefficient of 76.5 percent for glomerulus segmentation. In addition, transfer learning, where knowledge learned from mouse data is applied to human data, improved lesion segmentation accuracy by more than 3 percent across lesion types. Overall, this work introduces a publicly available convolutional neural network that enables detailed, multiclass segmentation of glomerular tissue and lesions, helping reduce manual workload and variability in kidney pathology analysis.
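The reported numbers rest on the Dice similarity coefficient, which scores the overlap between a predicted mask and an expert annotation. Below is a minimal sketch of that metric for a multi-class label map; the function names and flat-list representation are illustrative choices, not taken from the Glo-In-One-v2 code.

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Both masks empty: perfect agreement by convention.
    return 2.0 * intersection / total if total else 1.0

def mean_dice(pred_labels, target_labels, classes):
    """Average Dice over the given foreground classes of a label map."""
    scores = [dice_coefficient([int(p == c) for p in pred_labels],
                               [int(t == c) for t in target_labels])
              for c in classes]
    return sum(scores) / len(scores)
```

For a 14-class problem like Glo-In-One-v2's, `classes` would simply run over the annotated tissue and lesion labels present in each slide.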

Fig.1

This figure presents fine-grained classes of intraglomerular tissue, including Bowman’s capsule (Cap), tuft (Tuft), mesangium (Mes), mesangial cells (Mec), and podocytes (Pod). It also highlights the glomerular lesions observed in rodents and humans: AH, adhesion; CD, capsular drop; GS, global sclerosis; HS, hyalinosis; ML, mesangial lysis; MA, microaneurysm; NS, nodular sclerosis; ME, mesangial expansion; SS, segmental sclerosis.

Evaluating cell AI foundation models in kidney pathology with human-in-the-loop enrichment /valiant/2025/12/19/evaluating-cell-ai-foundation-models-in-kidney-pathology-with-human-in-the-loop-enrichment/ Fri, 19 Dec 2025 16:47:48 +0000 Guo, J., Lu, S., Cui, C., Deng, R., Yao, T., Tao, Z., Lin, Y., Lionts, M., Liu, Q., Xiong, J., Wang, Y., Zhao, S., Chang, C. E., Wilkes, M., Fogo, A. B., Yin, M., Yang, H., & Huo, Y. (2025). Communications Medicine, 5(1), 495.

Large artificial intelligence foundation models are becoming important tools in healthcare, including digital pathology, where they help analyze medical images. Many of these models have been trained to handle complex tasks such as diagnosing diseases or measuring tissue features using very large and diverse datasets. However, it is less clear how well they perform on more focused tasks, such as identifying and outlining cell nuclei within images from a single organ like the kidney. This study examines how well current cell foundation models perform on this task and explores practical ways to improve them.

To do this, the researchers assembled a large dataset of 2,542 kidney whole slide images collected from multiple medical centers, covering different kidney diseases and even different species. They evaluated three widely used, state-of-the-art cell foundation models—Cellpose, StarDist, and CellViT—for their ability to segment cell nuclei. To improve performance without requiring extensive, time-consuming pixel-level annotations from experts, the team introduced a “human-in-the-loop” approach. This method combines predictions from multiple models to create higher-quality training labels and then refines a subset of difficult cases with corrections from pathologists. The models were fine-tuned using this enriched dataset, and their segmentation accuracy was carefully measured.
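The enrichment idea can be sketched in a few lines: predictions from several models are fused by pixel-wise voting into pseudo-labels, and a small set of pathologist corrections then overrides the fused labels on hard regions. This is a simplified illustration assuming flat binary masks; the function names are not from the paper's code.

```python
def fuse_predictions(masks, min_votes=2):
    """Pixel-wise majority vote over binary masks from several models.

    A pixel enters the pseudo-label when at least `min_votes` models
    mark it as belonging to a nucleus.
    """
    return [int(sum(pixel_votes) >= min_votes) for pixel_votes in zip(*masks)]

def enrich_labels(fused, corrections):
    """Overwrite fused pseudo-labels with expert corrections on hard pixels.

    `corrections` maps a flat pixel index to the 0/1 pathologist label
    assigned during human-in-the-loop review.
    """
    enriched = list(fused)
    for idx, label in corrections.items():
        enriched[idx] = label
    return enriched
```

The design choice here mirrors the paper's motivation: cheap automatic consensus covers most pixels, so scarce expert time is spent only where the models disagree or fail.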

The results show that accurately segmenting cell nuclei in kidney pathology remains challenging and benefits from models that are more specifically tailored to this organ. Among the three models, CellViT showed the best initial performance, with an F1 score of 0.78. After fine-tuning with the improved training data, all models performed better, with StarDist reaching the highest F1 score of 0.82. Importantly, combining automatically generated labels from foundation models with a smaller set of pathologist-corrected “hard” image regions consistently improved performance across all models.
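F1 scores for nuclei segmentation are typically computed at the instance level: predicted nuclei are matched to ground-truth nuclei by overlap, and matches above an IoU threshold count as true positives. The sketch below illustrates that scheme with each nucleus represented as a set of pixel coordinates; the greedy matching rule is a simplifying assumption, not necessarily the paper's exact evaluation protocol.

```python
def instance_f1(pred_instances, gt_instances, iou_thresh=0.5):
    """Instance-level F1: greedily match predicted nuclei to ground truth
    by IoU; matches at or above the threshold count as true positives."""
    def iou(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    matched_gt = set()
    tp = 0
    for p in pred_instances:
        best, best_iou = None, 0.0
        for i, g in enumerate(gt_instances):
            if i in matched_gt:
                continue
            v = iou(p, g)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None and best_iou >= iou_thresh:
            matched_gt.add(best)
            tp += 1
    fp = len(pred_instances) - tp
    fn = len(gt_instances) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
```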

Overall, this study provides a clear benchmark for evaluating and improving cell AI foundation models in real-world pathology settings. It also demonstrates that high-quality nuclei segmentation can be achieved with much less expert annotation, supporting more efficient and scalable workflows in clinical pathology without sacrificing accuracy.

Fig. 1: Overall framework.

The upper panel (a–c) illustrates the diverse evaluation dataset consisting of 2,542 kidney WSIs. (a) shows the number of kidney WSIs in publicly available cell nuclei datasets versus our evaluation dataset, which exceeds existing datasets by a large margin. (b) depicts the diverse data sources included in our dataset. (c) indicates that these WSIs were stained using Hematoxylin and Eosin (H&E), Periodic acid–Schiff methenamine (PASM), and Periodic acid–Schiff (PAS). Performance: Kidney cell nuclei instance segmentation was performed using three SOTA cell foundation models: Cellpose, StarDist, and CellViT. Model performance was evaluated based on qualitative human feedback for each prediction mask. Data Enrichment: A human-in-the-loop (HITL) design integrates prediction masks from performance evaluation into the model’s continual learning process, reducing reliance on pixel-level human annotation.

SynStitch: A Self-Supervised Learning Network for Ultrasound Image Stitching Using Synthetic Training Pairs and Indirect Supervision /valiant/2025/06/20/synstitch-a-self-supervised-learning-network-for-ultrasound-image-stitching-using-synthetic-training-pairs-and-indirect-supervision/ Fri, 20 Jun 2025 18:23:23 +0000 Yao, Xing; Yu, Runxuan; Hu, Dewei; Yang, Hao; Lou, Ange; Wang, Jiacheng; Lu, Daiwei; Arenas, Gabriel; Oguz, Baris; Pouch, Alison; Schwartz, Nadav; Byram, Brett C.; Oguz, Ipek. Proceedings – International Symposium on Biomedical Imaging (2025).

Ultrasound (US) imaging is commonly used to see inside the body, but each image only shows a small area. To get a bigger picture, doctors can “stitch” multiple ultrasound images together—kind of like making a panoramic photo. However, it’s hard to accurately combine these images when they only partly overlap or show slightly different views of the same body part.

In this work, we introduce SynStitch, a new self-supervised method that helps stitch together 2D ultrasound images more effectively. SynStitch has two main parts: a Synthetic Stitching Pair Generation Module (SSPGM) and an Image Stitching Module (ISM). The SSPGM uses an AI model called ControlNet to create realistic pairs of ultrasound images from a single image, where the relationship between the two images is known. These pairs are used to teach the ISM how to properly stitch ultrasound images together.
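Only the geometric half of SSPGM is easy to sketch: sampling a known affine transform that relates the two images of a synthetic pair, so the ISM can be supervised against a ground-truth alignment (ControlNet's appearance synthesis is omitted). The function names and parameter ranges below are illustrative assumptions, not the released code.

```python
import math
import random

def random_affine(max_rot=0.2, max_shift=10.0, rng=None):
    """Sample a known 2D affine transform (rotation + translation) as a
    3x3 matrix, standing in for the transform baked into a synthetic pair."""
    rng = rng or random.Random()
    theta = rng.uniform(-max_rot, max_rot)
    tx = rng.uniform(-max_shift, max_shift)
    ty = rng.uniform(-max_shift, max_shift)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def warp_point(A, x, y):
    """Apply the affine matrix to one pixel coordinate."""
    return (A[0][0] * x + A[0][1] * y + A[0][2],
            A[1][0] * x + A[1][1] * y + A[1][2])
```

Because the matrix is sampled rather than estimated, the stitching network always has an exact target to learn from, which is the core of the indirect supervision idea.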

We tested SynStitch on kidney ultrasound images and found that it worked better than several top existing methods. It produced clearer and more accurate stitched images, as shown by both visual results and quantitative measurements. The code for this project is publicly available.

Fig. 1.

SynStitch overview. We first train the SSPGM to generate a realistic 2D US image I_s from an input image I with a random affine matrix A. Then we freeze the SSPGM and train the ISM on the synthetic stitching pairs.

GloFinder: AI-empowered QuPath plugin for WSI-level glomerular detection, visualization, and curation /valiant/2025/04/23/glofinder-ai-empowered-qupath-plugin-for-wsi-level-glomerular-detection-visualization-and-curation/ Wed, 23 Apr 2025 14:05:14 +0000 Yue, Jialin; Yao, Tianyuan; Deng, Ruining; Lu, Siqi; Guo, Junlin; Liu, Quan; Xiong, Juming; Yin, Mengmeng; Yang, Haichun; Huo, Yuankai. Journal of Pathology Informatics 17 (2025): 100433.

Artificial intelligence (AI) has made it easier to automatically detect glomeruli—the tiny filtering units in the kidney—using high-resolution images of kidney tissue. But many of the existing AI tools are hard to use unless you have advanced programming skills, which makes them less useful for doctors and other healthcare professionals. On top of that, current tools are often trained on only one type of data and don’t let users adjust how confident the system needs to be before marking something as a glomerulus.

To solve these problems, we created GloFinder, a user-friendly tool that works as a plugin for the QuPath image viewer. With just one click, GloFinder can scan an entire kidney slide image and find glomeruli automatically. It also lets users review and edit the results directly on the screen.

GloFinder uses an advanced detection method called CircleNet, which represents glomeruli as circles to help the system find them more precisely. It was trained using around 160,000 manually labeled glomeruli to boost accuracy. To make the results even better, GloFinder fuses the detections of several AI models, weighting each by its confidence score to improve overall performance.
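The confidence-weighted fusion step can be illustrated as follows: circle detections (x, y, radius, score) from several models are clustered by center distance, and each cluster is collapsed into one circle whose parameters are the confidence-weighted average of its members. The clustering rule and function names here are simplifying assumptions for illustration, not GloFinder's released implementation.

```python
def fuse_circles(detections, overlap=0.5):
    """Confidence-weighted fusion of circle detections (x, y, r, score)
    pooled from several models.

    Detections are visited in descending score order; a detection joins
    an existing cluster when its center lies within `overlap` times the
    sum of radii of the cluster's highest-scoring member, else it seeds
    a new cluster. Each cluster is averaged with scores as weights.
    """
    clusters = []
    for x, y, r, s in sorted(detections, key=lambda d: -d[3]):
        for cl in clusters:
            cx, cy, cr, _ = cl[0]
            if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 < overlap * (r + cr):
                cl.append((x, y, r, s))
                break
        else:
            clusters.append([(x, y, r, s)])
    fused = []
    for cl in clusters:
        w = sum(d[3] for d in cl)
        fused.append((sum(d[0] * d[3] for d in cl) / w,
                      sum(d[1] * d[3] for d in cl) / w,
                      sum(d[2] * d[3] for d in cl) / w,
                      w / len(cl)))
    return fused
```

Weighting by confidence means a high-scoring model pulls the fused circle toward its prediction, while redundant detections of the same glomerulus collapse into one.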

This tool is designed to make it easier for clinicians and researchers to analyze kidney images quickly and accurately—no programming required—making it a valuable resource for kidney disease research and diagnosis.

Fig. 1.

Glomerular detection results using the GloFinder plugin. Detected glomeruli are represented as circles with various colors indicating detection confidence.

Spatial Pathomics Toolkit for Quantitative Analysis of Podocyte Nuclei with Histology and Spatial Transcriptomics Data in Renal Pathology /valiant/2024/06/20/spatial-pathomics-toolkit-for-quantitative-analysis-of-podocyte-nuclei-with-histology-and-spatial-transcriptomics-data-in-renal-pathology/ Thu, 20 Jun 2024 14:29:52 +0000 Jiayuan Chen, Yu Wang, Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Yilin Liu, Jianyong Zhong, Agnes B. Fogo, Haichun Yang, Shilin Zhao, and Yuankai Huo. Proceedings of SPIE Medical Imaging 2024: Digital and Computational Pathology, vol. 12933, 1293310, 2024, San Diego, California.

Podocytes are specialized cells essential for kidney function, wrapping around glomerular capillaries. Analyzing these cells on pathology slides has been challenging due to limitations in current methods. To address this, researchers developed the Spatial Pathomics Toolkit (SPT) to improve the assessment of podocyte characteristics in kidney tissue images. The SPT has three key features:

  1. Instance Object Segmentation: Accurately identifies individual podocyte nuclei.
  2. Pathomics Feature Generation: Extracts a wide range of detailed quantitative features from the identified nuclei.
  3. Robust Statistical Analyses: Analyzes the spatial relationships between these features and other spatial transcriptomics data.
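As a toy illustration of step 2, a few basic shape features can be read directly off a binary nucleus mask. The real toolkit extracts a far richer pathomics feature set (morphology plus texture); the function and feature names below are illustrative assumptions.

```python
def nucleus_features(mask):
    """Basic pathomics-style shape features from one binary nucleus mask
    (a list of 0/1 rows): area, centroid, and bounding-box aspect ratio."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    area = len(coords)
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    return {"area": area,
            "centroid": (sum(rows) / area, sum(cols) / area),
            "aspect_ratio": max(h, w) / min(h, w)}
```

Features like these, computed per podocyte nucleus, are what the statistical step then correlates with spatial transcriptomics measurements.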

Using the SPT, researchers were able to identify and analyze various morphological and textural features of podocyte nuclei. This analysis provided insights into the spatial distribution of podocytes and their association with glomerular injury. The toolkit aims to be a valuable resource for the research community, enhancing the study of kidney disease. The SPT and its source code are available on GitHub for public use.

Figure 1: Overview of the experiment: segmentation and mask generation of podocytes from pathological images, extraction of features, and statistical analysis.
Evaluation Kidney Layer Segmentation on Whole Slide Imaging using Convolutional Neural Networks and Transformers /valiant/2024/06/20/evaluation-kidney-layer-segmentation-on-whole-slide-imaging-using-convolutional-neural-networks-and-transformers/ Thu, 20 Jun 2024 14:07:14 +0000 Muhao Liu, Chenyang Qi, Shunxing Bao, Quan Liu, Ruining Deng, Yu Wang, Shilin Zhao, Haichun Yang, and Yuankai Huo. Proceedings of SPIE Medical Imaging 2024: Digital and Computational Pathology, vol. 12933, 129330I, 2024.

Segmenting different layers of kidney structures, like the cortex and medulla, is crucial for analyzing kidney pathology images. Currently, this process is done manually, which is time-consuming and impractical for large-scale digital images. To address this, researchers have tested various deep learning methods to automate the segmentation of kidney layers. They evaluated several advanced models, including CNN-based and Transformer-based architectures, using kidney images from mice. The results show that Transformer models generally outperform CNN-based models, achieving higher mean Intersection over Union (mIoU) scores. These findings suggest that deep learning can significantly improve the efficiency and accuracy of kidney layer segmentation in medical pathology.
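The mIoU metric used for this comparison averages the per-class Intersection over Union between a predicted layer map and the ground truth. A minimal sketch, assuming flat label lists and illustrative names:

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union over classes present in either the
    prediction or the ground truth (flat integer label lists, e.g.
    background / cortex / medulla)."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, target))
        union = sum(p == c or t == c for p, t in zip(pred, target))
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)
```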
