Trustworthy AI in Digital Health: A Comprehensive Review of Robustness and Explainability

We present a structured overview of methods, challenges, and solutions, aiming to support researchers and practitioners in developing reliable and explainable AI solutions for digital health. The paper also offers detailed discussions of contributions toward robustness and explainability in digital health, the development of trustworthy AI systems in the era of LLMs, and evaluation metrics for measuring trust and related properties such as validity, fidelity, and diversity.

AI-Powered Wearable Sensors for Health Monitoring and Clinical Decision Making

A comprehensive review of AI-powered wearable biosensors emphasizing how machine learning and edge AI enable real-time health monitoring and personalized care, with insights on digital twins, LLMs, and challenges in privacy, scalability, and clinical integration.

LLM-Powered Prediction of Hyperglycemia and Discovery of Behavioral Treatment Pathways from Wearables and Diet

We developed GlucoLens, which takes sensor-driven inputs and combines advanced data processing, large language models, and explainable machine learning models to predict postprandial area under the curve (AUC) and hyperglycemia from diet, physical activity, and recent glucose patterns.
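To make the prediction target concrete: postprandial AUC is typically the area under the glucose curve in a fixed window after a meal, computed with the trapezoidal rule, and hyperglycemia is often flagged against a clinical threshold (commonly 180 mg/dL). The sketch below is illustrative only, with hypothetical readings and a hypothetical `trapezoid_auc` helper; it is not the GlucoLens pipeline.

```python
def trapezoid_auc(times, values):
    """Area under the curve via the trapezoidal rule (units: value * time)."""
    return sum(
        (values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )

# Hypothetical glucose readings (mg/dL) every 15 min for 2 h after a meal
times_min = [0, 15, 30, 45, 60, 75, 90, 105, 120]
glucose = [95, 120, 150, 165, 158, 140, 125, 112, 100]

# Total postprandial AUC over the 2-hour window (mg/dL * min)
total_auc = trapezoid_auc(times_min, glucose)

# Incremental AUC above the pre-meal baseline, clipped at zero
baseline = glucose[0]
incremental_auc = trapezoid_auc(
    times_min, [max(g - baseline, 0) for g in glucose]
)

# Simple hyperglycemia flag against a common 180 mg/dL threshold
hyperglycemia = any(g >= 180 for g in glucose)
```

A learned model would predict `total_auc` (or the hyperglycemia flag) from features such as meal composition and recent activity, rather than computing it from future readings as done here for illustration.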