Detection of Oral Cancer in Smartphone Images Using Deep Learning for Early Diagnosis
Keywords:
Artificial intelligence, Oral cancer, Convolutional neural networks (CNN), Binary and multi-class classification, Colour spaces, Early detection, Feature importance, White light images

Abstract
Oral cancer (OC) is a prevalent, complex, and severe disease, and a major public health concern; early diagnosis is critical for effective treatment and improved survival rates. In India, oral cancer ranks as the eighth most common cancer and contributes to approximately 130,000 deaths annually. Advanced technologies and deep learning (DL) algorithms hold significant promise for the early detection and classification of oral cancer, and in recent years DL has gained momentum as a powerful tool in the early diagnosis of various diseases, including oral cancer. Integrating artificial intelligence (AI) into cancer screening and detection, however, demands a well-structured and strategic approach. This study presents a deep learning-based method for detecting oral cancer. The system is built using Python as the main programming language, Flask as the backend framework, and HTML, CSS, and JavaScript for the frontend interface. Two deep learning architectures, ResNet152V2 and MobileNet, are employed to classify oral images. The ResNet152V2 model achieves a training accuracy of 98.00% and a validation accuracy of 93.00%, while the MobileNet model achieves a training accuracy of 97.00% and a validation accuracy of 92.00%. The findings also indicate that integrating multiple data modalities yields more accurate early detection of potential malignancies than using image data alone. These outcomes could support improved clinical decision-making and better patient outcomes.
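The abstract names the stack (Python, Flask, ResNet152V2, MobileNet) but gives no implementation details. The following is a minimal sketch of how such a transfer-learning classifier could be assembled in Keras, assuming ImageNet-pretrained backbones, a 224x224 input size, a two-class (cancer / non-cancer) label set, and a data/train and data/val directory layout; none of these specifics are taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution, not stated in the paper

def build_classifier(backbone_name="MobileNet", num_classes=2):
    """Transfer-learning classifier on an ImageNet-pretrained backbone."""
    if backbone_name == "ResNet152V2":
        backbone = tf.keras.applications.ResNet152V2(
            include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
        preprocess = tf.keras.applications.resnet_v2.preprocess_input
    else:
        backbone = tf.keras.applications.MobileNet(
            include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
        preprocess = tf.keras.applications.mobilenet.preprocess_input

    backbone.trainable = False  # freeze pretrained weights for the initial training phase

    inputs = layers.Input(shape=IMG_SIZE + (3,))
    x = preprocess(inputs)                 # backbone-specific scaling baked into the graph
    x = backbone(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)             # assumed regularisation, not from the paper
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Hypothetical layout: data/train/<cancer|non_cancer>/*.jpg (and data/val likewise)
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/val", image_size=IMG_SIZE, batch_size=32)

    model = build_classifier("ResNet152V2")
    model.fit(train_ds, validation_data=val_ds, epochs=20)
    model.save("oral_cancer_model.keras")
```

For the Flask backend mentioned in the abstract, a prediction endpoint might look like the sketch below; the route name, form field, saved-model path, and label order are illustrative assumptions rather than details from the study.

```python
import io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("oral_cancer_model.keras")  # hypothetical saved model
CLASS_NAMES = ["cancer", "non_cancer"]  # hypothetical; must match the training class order

@app.route("/predict", methods=["POST"])
def predict():
    # Expects an image uploaded under the form field "image";
    # preprocessing is assumed to be embedded in the saved model graph.
    file = request.files["image"]
    img = Image.open(io.BytesIO(file.read())).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32), axis=0)
    probs = model.predict(batch)[0]
    return jsonify({"label": CLASS_NAMES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(debug=True)
```

The HTML/CSS/JavaScript frontend described in the abstract would then POST the captured photo to this endpoint and display the returned label and confidence.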
References
Esra Yildiz, Mehmet Kaya, and Ayşe Demir (2025), “Automatic Detection of Oral Cancer in Smartphone-Based Images Using Deep Learning for Early Diagnosis”, BMC Oral Health, Vol. 26, Issue 8, p.086007
Hirthik Mathesh G. V., Kavin Chakravarthy M., and Sentil Pandi S. (2025), “A Novel Approach Using CapsNet and Deep Belief Network for Detection and Identification of Oral Leukoplakia”, arXiv preprint, DOI: 10.48550/arXiv.2501.00876.
José J. M. Uliana and Renato A. Krohling (2025), “Diffusion Models Applied to Skin and Oral Cancer Classification”, arXiv preprint, DOI: 10.48550/arXiv.2504.00026.
K. Warin and S. Suebnukarn (2024), “Development of an Oral Cancer Detection System through Deep Learning Using a Portable Oral Endoscope”, BMC Oral Health, vol. 24, no. 1, Article 51.
G. A. I. Devindi, D. M. D. R. Dissanayake, S. N. Liyanage, F. B. A. H. Francis, M. B. D. Pavithya, N. S. Piyarathne, P. V. K. S. Hettiarachchi, R. M. S. G. K. Rasnayaka, R. D. Jayasinghe, R. G. Ragel, and I. Nawinne (2024), “Multimodal Deep Convolutional Neural Network Pipeline for AI-Assisted Early Detection of Oral Cancer”, IEEE Access, vol. 12, pp. 124375–124390.
B. Goswami, M. K. Bhuyan, S. Alfarhood, and M. Safran (2024), “Classification of Oral Cancer into Pre-Cancerous Stages from White Light Images Using LightGBM Algorithm”, IEEE Access, vol. 12, pp. 31626–31639.
M. Al Duhayyim, A. A. Malibari, S. Dhahbi, M. K. Nour, I. Al-Turaiki, M. Obayya, and A. Mohamed, (2023), ‘‘Sailfish optimization with deep learning based oral cancer classification model,’’ Comput. Syst. Sci. Eng., vol. 45, no. 1, pp. 753–767.
T. Flügge, R. Gaudin, A. Sabatakakis, D. Tröltzsch, M. Heiland, N. van Nistelrooij, and S. Vinayahalingam, (2023), ‘‘Detection of oral squamous cell carcinoma in clinical photographs using a vision transformer,’’ Sci. Rep., vol. 13, no. 1, pp. 1–7.
S. H. Begum and P. Vidyullatha, (2023), ‘‘Automatic detection and classification of oral cancer from photographic images using attention maps and deep learning,’’ Int. J. Intell. Syst. Appl. Eng., vol. 11, no. 11s, pp. 221–229.
P. Pahadiya, R. Vijay, K. K. Gupta, S. Saxena, and T. Shahapurkar, (2023), ‘‘Digital image based segmentation and classification of tongue cancer using CNN,’’ Wireless Pers. Commun., vol. 132, no. 1, pp. 609–627.
V. Talwar, P. Singh, N. Mukhia, A. Shetty, P. Birur, K. M. Desai, C. Sunkavalli, K. S. Varma, R. Sethuraman, C. V. Jawahar, and P. K. Vinod, (2023), ‘‘AI-assisted screening of oral potentially malignant disorders using smartphone-based photographic images,’’ Cancers, vol. 15, no. 16, p. 4120.
M. V. Anand, B. KiranBala, S. R. Srividhya, K. C., M. Younus, and M. H. Rahman, (2022), ‘‘Gaussian Naïve Bayes algorithm: A reliable technique involved in the assortment of the segregation in cancer,’’ Mobile Inf. Syst., vol. 2022, pp. 1–7.
B. R. Nanditha and A. Geetha, (2022), ‘‘Oral cancer detection using machine learning and deep learning techniques,’’ Int. J. Current Res. Rev., vol. 14, no. 1, pp. 64–70.
P. Shah, N. Roy, and P. Dhandhukia, (2022), ‘‘Algorithm mediated early detection of oral cancer from image analysis,’’ Oral Surgery, Oral Med., Oral Pathol., Oral Radiol., vol. 133, no. 1, pp. 70–79.
K. C. Figueroa, B. Song, S. Sunny, S. Li, K. Gurushanth, P. Mendonca, and N. Mukhia, (2022), ‘‘Interpretable deep learning approach for oral cancer classification using guided attention inference network,’’ J. Biomed. Opt., vol. 27, no. 1, Art. no. 015001.
B. Song, S. P. Sunny, P. Mendonca, N. Mukhia, S. Li, S. Patrick, T. Imchen, S. T. Leivon, T. Kolur, V. Shetty, V. Bhushan, D. Vaibhavi, S. Rajeev, S. Pednekar, A. D. Banik, R. M. Ramesh, V. Pillai, P.W. Smith, A. Sigamani, and M. A. Kuriakose, (2022), ‘‘Field validation of deep learning based Point-of-Care device for early detection of oral malignant and potentially malignant disorders,’’ Sci. Rep., vol. 12, no. 1, pp. 1–11.
K. Warin, W. Limprasert, S. Suebnukarn, S. Jinaporntham, P. Jantana, and S. Vicharueang, (2022), ‘‘AI-based analysis of oral lesions using novel deep convolutional neural networks for early detection of oral cancer,’’ PLoS ONE, vol. 17, no. 8, Art. no. e0273508.
B. Song et al., (2021), ‘‘Mobile-based oral cancer classification for point-of-care screening,’’ J. Biomed. Opt., vol. 26, no. 6, Art. no. 065003.
H. Lin, H. Chen, L. Weng, J. Shao, and J. Lin, (2021), ‘‘Automatic detection of oral cancer in smartphone-based images using deep learning for early diagnosis,’’ J. Biomed. Opt., vol. 26, no. 8, Art. no. 086007.
R. A. Welikala, P. Remagnino, J. H. Lim, C. S. Chan, S. Rajendran, T. G. Kallarakkal, R. B. Zain, R. D. Jayasinghe, J. Rimal, A. R. Kerr, R. Amtha, K. Patil, W. M. Tilakaratne, J. Gibson, S. C. Cheong, and S. A. Barman (2020), ‘‘Automated detection and classification of oral lesions using deep learning for early detection of oral cancer,’’ IEEE Access, vol. 8, pp. 132677–132693.
Q. Fu et al., (2020), ‘‘A deep learning algorithm for detection of oral cavity squamous cell carcinoma from photographic images: A retrospective study,’’ EClinicalMedicine, vol. 27, Art. no. 100558.
B. Song, S. Sunny, R. D. Uthoff, S. Patrick, A. Suresh, T. Kolur, G. Keerthi, A. Anbarani, P. Wilder-Smith, M. A. Kuriakose, P. Birur, J. J. Rodriguez, and R. Liang (2018), ‘‘Automatic classification of dual-modality, smartphone-based oral dysplasia and malignancy images using deep learning,’’ Biomed. Opt. Exp., vol. 9, no. 11, p. 5318.
B. Thomas, V. Kumar, and S. Saini (2013), ‘‘Texture analysis based segmentation and classification of oral cancer lesions in colour images using ANN,’’ in Proc. IEEE Int. Conf. Signal Process., Comput. Control (ISPCC), Solan, India, pp. 1–5.
License

This work is licensed under a Creative Commons Attribution 4.0 International License.