Federated and Generative AI Models for Secure, Cross-Institutional Healthcare Data Interoperability

Authors

  • Sateesh Kumar Rongali

Keywords:

Federated Learning, Generative AI, Cross-Institutional Healthcare Interoperability, Data Safety, Semantic Interoperability, Evaluation Metrics, Implementation Playbooks

Abstract

In a clinical landscape of siloed information, federated AI models can be trained by exchanging only the learned model parameters rather than the raw data itself, preserving the privacy of local patients without any manual copying of records. Synthetic data, produced by generative models trained under a privacy-by-design philosophy, can likewise be admitted into real clinical use, giving the medical community additional data within established trust frameworks and Machine-Learning-as-a-Service paradigms. What is still missing for a truly collaborative, cross-institutional clinical AI ecosystem is an interoperability step in which the participating stakeholders, willing to exchange their locally generated knowledge, converse in a common language that guarantees a shared understanding of the same data semantics. Generating semantically rich data is the first building block of this step: semantic standards and ontologies expressed over the data guide the pipeline so that data requests and responses follow a clearly defined schema, even when the generative act itself is not trusted. Trust in use and data provenance are the remaining safeguards: users play a crucial role in judging whether content is useful or harmful, and a disentangled evaluation of data provenance along the exchange path supports that judgment. Finally, an adoption threat model documents the dangers underlying the technology, covering high-level clinical applications down to low-level implementation details.
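The parameter-exchange idea at the heart of the abstract can be illustrated with a minimal federated averaging (FedAvg) sketch. This is not the paper's method; all names, learning rates, and numbers below are hypothetical, chosen only to show how institutions share weights instead of raw patient records.

```python
# Minimal FedAvg sketch: each hospital trains locally and sends back only
# model weights; the server aggregates them weighted by local dataset size.
# Everything here is illustrative, not taken from the article.

def local_update(weights, gradient, lr=0.1):
    """One gradient step at a single institution on its private data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: dataset-size-weighted mean of client weights."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# One round with three hypothetical hospitals holding 100/200/100 records.
global_model = [0.0, 0.0]
updates = [
    local_update(global_model, g)
    for g in ([1.0, -1.0], [0.5, 0.5], [2.0, 0.0])
]
global_model = federated_average(updates, client_sizes=[100, 200, 100])
```

Only `updates` (the weight lists) cross institutional boundaries; in a production deployment these would additionally pass through secure aggregation or differential-privacy noise, as several of the cited works discuss.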


References

Singireddy, J. (2024). Deep Learning Architectures for Automated Fraud Detection in Payroll and Financial Management Services: Towards Safer Small Business Transactions. Journal of Artificial Intelligence and Big Data Disciplines, 1(1), 75-85.

Bae, J., Kim, H., & Park, S. (2024). Federated learning for privacy-preserving clinical analytics across hospital networks. Nature Medicine, 30(1), 112–120. https://doi.org/10.1038/s41591-023-02689-1

Zhang, Y., Huang, Z., & Xu, X. (2024). Secure federated clinical modeling using differential privacy in heterogeneous EHR systems. IEEE Journal of Biomedical and Health Informatics, 28(2), 745–757. https://doi.org/10.1109/JBHI.2024.3341122

Sheelam, G. K. (2024). AI-Driven Spectrum Management: Using Machine Learning and Agentic Intelligence for Dynamic Wireless Optimization. European Advanced Journal for Emerging Technologies (EAJET), p-ISSN 3050-9734 and e-ISSN 3050-9742, 2(1).

Huang, Z., Xu, X., Liu, R., Jiang, W., & Fu, Z. (2022). A survey on privacy-preserving machine learning for healthcare. ACM Computing Surveys, 55(8), 1–36. https://doi.org/10.1145/3527151

Kumar, S., Patel, V., & Lee, J. (2024). Interoperable federated architectures for multi-institutional health data exchange. Journal of the American Medical Informatics Association, 31(3), 412–423. https://doi.org/10.1093/jamia/ocad313

Nandan, B. P. (2024). Revolutionizing Semiconductor Chip Design through Generative AI and Reinforcement Learning: A Novel Approach to Mask Patterning and Resolution Enhancement. International Journal of Medical Toxicology and Legal Medicine, 27(5), 759-772.

Rasouli, M., Chen, R. J., & Mahmood, F. (2024). Synthetic data governance in healthcare AI: A review of safeguards and failure modes. Nature Digital Medicine, 7(1), 19. https://doi.org/10.1038/s41746-024-00923-3

Chen, R. J., Lu, M. Y., Chen, T. Y., Williamson, D. F., & Mahmood, F. (2021). Synthetic data in machine learning for medicine and healthcare. Nature Biomedical Engineering, 5(6), 493–497. https://doi.org/10.1038/s41551-021-00751-8

Pandiri, L., & Chitta, S. (2024). Machine Learning-Powered Actuarial Science: Revolutionizing Underwriting and Policy Pricing for Enhanced Predictive Analytics in Life and Health Insurance.

Huang, G., Liu, Y., & Chen, Z. (2024). Secure cross-border data interoperability using federated identity and privacy-preserving computation. IEEE Transactions on Information Forensics and Security, 19, 441–455. https://doi.org/10.1109/TIFS.2023.3332011

Blandfort, P., Fauw, J. D., & Kohli, P. (2024). Evaluating trustworthiness in clinical generative models. NPJ Digital Medicine, 7(2), 31. https://doi.org/10.1038/s41746-024-00942-0

Meda, R. (2024). Predictive Maintenance of Spray Equipment Using Machine Learning in Paint Application Services. European Data Science Journal (EDSJ), p-ISSN 3050-9572 and e-ISSN 3050-9580, 2(1).

Li, Q., Yang, F., & Tan, J. (2024). Federated multimodal learning for diagnostic imaging across distributed hospital systems. Medical Image Analysis, 94, 103125. https://doi.org/10.1016/j.media.2024.103125

Bourquard, A., Maier, A., & Rueckert, D. (2024). Privacy-preserving medical AI: Emerging standards and interoperability frameworks. Artificial Intelligence in Medicine, 146, 102789. https://doi.org/10.1016/j.artmed.2023.102789

Somu, B. (2024). Agentic AI and Financial Compliance: Autonomous Systems for Regulatory Monitoring in Banking. European Data Science Journal (EDSJ), p-ISSN 3050-9572 and e-ISSN 3050-9580, 2(1).

ang, Z., Wu, M., & Shen, D. (2024). Benchmarking federated learning algorithms for clinical image classification. IEEE Transactions on Medical Imaging, 43(1), 88–101. https://doi.org/10.1109/TMI.2023.3328420

Gopinath, K., Hou, L., & Morris, Q. (2024). Harmonizing heterogeneous medical ontologies using foundation models. Nature Communications, 15, 2009. https://doi.org/10.1038/s41467-024-41945-9

Inala, R., & Somu, B. (2024). Agentic AI in Retail Banking: Redefining Customer Service and Financial Decision-Making. Journal of Artificial Intelligence and Big Data Disciplines, 1(1).

Perry, A., Azizi, S., & Lungren, M. (2024). Language-model-guided evaluation pipelines for clinical safety and toxicity in synthetic data. Lancet Digital Health, 6(1), e25–e36. https://doi.org/10.1016/S2589-7500(23)00224-1

Zhou, Y., Chen, S., & Wang, F. (2024). Detecting membership inference attacks on federated medical models. IEEE Transactions on Dependable and Secure Computing, 21(2), 256–270. https://doi.org/10.1109/TDSC.2023.3320154

Motamary, S. (2024). Data Engineering Strategies for Scaling AI-Driven OSS/BSS Platforms in Retail Manufacturing (December 10, 2024).

Badar, R., Jha, D., & Tschannen, M. (2024). Foundation models for interoperable medical data semantics across institutions. Patterns, 5(1), 100978. https://doi.org/10.1016/j.patter.2023.100978

Sun, H., Xu, W., & Li, J. (2024). Multicenter federated training with adaptive privacy budgets in health data ecosystems. IEEE Access, 12, 10231–10245. https://doi.org/10.1109/ACCESS.2024.3365524

Lakkarasu, P. (2024). From Model to Value: Engineering End-to-End AI Systems with Scalable Data Infrastructure and Continuous ML Delivery. European Journal of Analytics and Artificial Intelligence (EJAAI), p-ISSN 3050-9556 and e-ISSN 3050-9564, 2(1).

Chung, H., Ravi, N., & Ghaffari, M. (2024). Protecting sensitive patient attributes via adversarially-trained generative models. Machine Learning for Healthcare, 12(1), 1–19. https://doi.org/10.1145/3639278

Mao, Y., Zhou, X., & Fan, J. (2024). Threat modeling for federated healthcare pipelines using STRIDE+. Computers & Security, 139, 103010. https://doi.org/10.1016/j.cose.2023.103010

Hameed, S., Zhang, T., & Albarqouni, S. (2024). Practical deployment pathways for federated AI in hospitals: Lessons from pilot to practice. Health Information Science and Systems, 12(1), 14. https://doi.org/10.1007/s13755-024-00252-7

Kruse, C., Glick, G., & Stewart, L. (2024). Interoperability and semantic harmonization for AI-driven clinical systems. International Journal of Medical Informatics, 184, 105413. https://doi.org/10.1016/j.ijmedinf.2023.105413

Shao, Z., Feng, Z., & Lu, Y. (2024). Evaluating safety, utility, and calibration of generative medical imaging models. Radiology: Artificial Intelligence, 6(2), e220409. https://doi.org/10.1148/ryai.220409

Luo, Y., Chen, P., & Zhang, J. (2024). Privacy-enhanced federated medical analytics using secure aggregation and adaptive noise injection. IEEE Transactions on Neural Networks and Learning Systems, 35(4), 512–525. https://doi.org/10.1109/TNNLS.2023.3338129

Rao, A., Choi, J., & Sun, L. (2024). Synthetic EHR generation with transformer-based diffusion models for clinical research. Journal of Biomedical Informatics, 152, 104502. https://doi.org/10.1016/j.jbi.2024.104502

Gao, X., Mitchell, R., & Li, K. (2024). Federated foundation models for cross-institutional healthcare interoperability. IEEE Transactions on Big Data, 10(1), 77–92. https://doi.org/10.1109/TBDATA.2023.3340028

Veličković, P., Neil, M., & Esmaili, N. (2024). Trust assessments for clinical generative AI systems: A multi-metric evaluation framework. Nature Communications, 15, 3201. https://doi.org/10.1038/s41467-024-43107-3

Published

2024-12-14

How to Cite

1.
Rongali SK. Federated and Generative AI Models for Secure, Cross-Institutional Healthcare Data Interoperability. J Neonatal Surg [Internet]. 2024 Dec. 14 [cited 2026 Feb. 3];13(1):1683-94. Available from: https://jneonatalsurg.com/index.php/jns/article/view/9558

Issue

Section

Original Article