
 
REVIEW ARTICLE
Ahead of print publication  

Computational Intelligence in Otorhinolaryngology


1 Department of ENT, Institute of Naval Medicine, INHS Asvini, Mumbai, Maharashtra, India
2 Department of Neurotology, Madras ENT Research Foundation, Chennai, Tamil Nadu, India
3 Family Clinic, SHO, Mumbai, Maharashtra, India

Date of Submission19-Oct-2022
Date of Decision02-Dec-2022
Date of Acceptance05-Jan-2023
Date of Web Publication18-Feb-2023

Correspondence Address:
Sunil Mathews,
Institute of Naval Medicine, INHS Asvini, Mumbai, Maharashtra
India

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/jmms.jmms_159_22

  Abstract 


There have been major advances in the field of artificial intelligence (AI) in the last few decades, and its use in otorhinolaryngology has shown promising results. In machine learning, a subset of AI, computers learn from historical data to gather insights and make diagnoses about new input data based on the information they have learned. The objective of this study was to provide a comprehensive review of the current applications, future possibilities, and limitations of AI with respect to the specialty of otorhinolaryngology. A search of the literature was performed using the PubMed and Medline search engines. Search terms related to AI or machine learning in otorhinolaryngology were identified and queried to select recent and relevant articles. AI has implications in various areas of otorhinolaryngology, such as automatically diagnosing hearing loss, improving the performance of hearing aids, restoring speech in paralyzed individuals, predicting speech and language outcomes in cochlear implant candidates, diagnosing various otologic conditions using otoscopic images, training in otologic surgery using virtual reality simulators, classifying and quantifying opacification in computed tomography images of the paranasal sinuses, distinguishing various laryngeal pathologies based on laryngoscopic images, automatically segmenting anatomical structures to accelerate radiotherapy planning, and assisting pathologists in the reporting of thyroid cytopathology. The results of various studies show that machine learning might be used by general practitioners, in remote areas where specialist care is not readily available, and as a supportive diagnostic tool in otorhinolaryngology setups, for better diagnosis and faster decision-making.

Keywords: Artificial intelligence, convolutional neural network, deep learning, hearing, laryngeal pathology, machine learning, thyroid cytopathology



How to cite this URL:
Mathews S, Dham R, Dutta A, Jose AT. Computational Intelligence in Otorhinolaryngology. J Mar Med Soc [Epub ahead of print] [cited 2023 Mar 23]. Available from: https://www.marinemedicalsociety.in/preprintarticle.asp?id=369945




  Introduction


The term computational intelligence or artificial intelligence (AI) refers to technology that allows computer systems to perform tasks that normally require human intelligence, such as word and object recognition, visual perception, and complex decision-making. In computational learning, algorithms acquire information from the data provided to the computer and make predictions about new, unanalyzed data using the information they have learned. The last few decades have witnessed a computational advancement and technological revolution in the field of medicine.[1]

Machine learning refers to a set of AI procedures that gain accuracy over time by discovering patterns. In machine learning, computers learn from historical data to gather insights and make predictions about new input data using the information they have learned. There are several types of machine learning algorithms, and one class that deserves special mention is artificial neural networks (ANNs). ANNs are modeled on how the synapses of the human brain work. They consist of multiple layers of interconnected nodes; each node performs a series of calculations based on the inputs and signals it receives and is connected to other nodes. Data in an ANN travel from the input layer to the output layer, traversing one or more hidden layers. Complex ANNs that contain multiple hidden layers are called "deep neural networks," which are equipped to solve the most complex problems, including image and speech recognition.[2] This subset of machine learning is referred to as "deep learning," where the algorithm is trained to pick up patterns in data.[3]
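As a purely illustrative sketch (not taken from any cited study), the data flow of a small ANN with one hidden layer can be written in a few lines of Python with NumPy. The layer sizes and the random, untrained weights below are arbitrary assumptions:

```python
import numpy as np

def relu(x):
    # Hidden-layer activation: pass positive signals, zero out the rest.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Output activation: squash the result into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    """Input layer -> hidden layer (ReLU) -> output layer (sigmoid)."""
    h = relu(W1 @ x + b1)          # hidden-layer node activations
    return sigmoid(W2 @ h + b2)    # e.g. a probability-like score in (0, 1)

# Toy network: 3 input features, 4 hidden nodes, 1 output node.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
y = forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2)
```

A "deep" network simply stacks more hidden layers between input and output; training consists of adjusting the weight matrices so that the outputs match known labels.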

Interest in the intersection of computational intelligence and medicine received special attention with the outbreak of the COVID-19 pandemic in late 2019, which put the entire health-care system into a frenzy. AI and automation helped in establishing contactless clinics and automated patient triage for COVID-19 symptoms. Remote check-in, contactless payment using apps, and visual monitoring of patients, when required, are ways that can propel safe health care toward the future. Otorhinolaryngologists and their surrounding staff constituted a high-risk group for COVID-19 infection due to high viral load exposure through aerosolized particles during clinical examination, outpatient procedures, and surgery. To prevent exposure, telemedicine practices are being used as integral tools for patient care to reduce hospital exposure and even for troubleshooting medical devices such as cochlear implants.[4]

Although the use of computational intelligence in the field of otorhinolaryngology is in its infancy, the research work is promising. The basic requirement for applying computational intelligence is establishing datasets of various patient characteristics, diseases, and their outcomes. The next step is to recommend specific tests and possible treatment options based on the patient's symptoms. Based on its available database, the machine learning algorithm assesses whether the prediction provided was correct, so as to offer the patient the best possible treatment.[1] In this article, a comprehensive review of the current applications, future possibilities, and limitations of AI with respect to the specialty of otorhinolaryngology is provided.


  Methods


A search of the literature was performed using the PubMed and Medline search engines. Keywords relevant to the search topics were identified as "machine learning," "artificial intelligence," and "otorhinolaryngology" or "otolaryngology" and queried to select recent and relevant articles. Citations related to the topic were identified, and the abstracts of relevant articles were scrutinized for inclusion in the review. Articles not related to the topic and articles not in the English language were excluded. Finally, a total of 32 articles relevant to the review were selected and reviewed.


  Results and Discussion


After reviewing the articles related to various applications of AI or computational intelligence in otorhinolaryngology, the following were found to be relevant:


  Role in Audiology


Machine learning has revolutionized the hearing sciences. It is used in the diagnosis of hearing loss by automatically classifying auditory brainstem responses and by estimating audiometric thresholds.[5],[6] Using computational intelligence, online hearing testing can be done where access to standard audiometry is restricted. Bing et al. used computational intelligence to predict audiologic outcomes among patients with sudden sensorineural hearing loss (SSNHL).[7] In peripheral remote areas with limited or no access to audiology facilities, online testing is a viable option for initiating treatment, especially in SSNHL patients. AI aids in telemapping and troubleshooting of cochlear implants without the need to visit the implant center. Such services also provide access to specialist providers, reducing transportation costs and loss of wages.[8] This helps motivate the family to participate actively in habilitation and supports good follow-up for the habilitationist, audiologist, and doctor.

Patients using hearing aids often complain of difficulty in understanding speech in noise. This is often referred to as the "cocktail party" problem, and recent advances in AI have the potential to solve it. AI can learn to adjust device parameters as needed in different situations by tracking the listener's preferences and analyzing soundscapes. By incorporating deep learning algorithms to separate the target sound from background noise, AI could increase the speech-to-noise ratio at the listener's tympanic membrane, effectively converting hearing in noise to hearing in quiet. AI can also help the listener listen preferentially, even to unknown talkers or to different people as the listener's attention shifts. This is made possible by deciphering the desired target sound stream and delivering it as an isolated signal while monitoring the listener's EEG potentials, based on two technologies called "source separation" and "cognitive control."[9] Effective programming for enhanced satisfaction among hearing aid users can be obtained by applying deep learning-based speech enhancement techniques, which can provide better speech intelligibility.[10] For those who preferentially lip-read for communication, digital flash cards and speech-to-text applications are handy tools in a pandemic like COVID-19, when the individuals around them wear face masks.
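The idea of raising the speech-to-noise ratio by suppressing frequency components dominated by noise can be illustrated with a crude spectral-gating sketch. This is a hand-built mask over synthetic signals, not the trained deep network a real hearing aid would use, and it assumes an idealized noise estimate:

```python
import numpy as np

def spectral_gate(noisy, noise_est, attenuation=0.1):
    """Attenuate frequency bins dominated by the estimated noise spectrum."""
    spec = np.fft.rfft(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_est))
    # Keep bins where the signal clearly exceeds the noise; damp the rest.
    mask = np.where(np.abs(spec) > 2.0 * noise_mag, 1.0, attenuation)
    return np.fft.irfft(mask * spec, n=len(noisy))

# Synthetic "target speech": a pure tone, bin-aligned to avoid leakage.
rng = np.random.default_rng(1)
t = np.arange(1024) / 8000.0                  # 1024 samples at 8 kHz
clean = np.sin(2 * np.pi * 437.5 * t)         # 437.5 Hz = exactly 56 cycles
noise = 0.5 * rng.normal(size=t.size)         # additive white noise
enhanced = spectral_gate(clean + noise, noise)
```

In this toy setup the residual error after gating is far smaller than the original noise energy; deep-learning systems learn the mask from data instead of using a fixed threshold.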

Advanced AI using deep learning has enabled end-to-end decoding approaches using multilayer neural networks. These are capable of directly translating acoustic waveforms into text without using a hidden Markov model or a separate language model, unlike traditional automatic speech recognition methods. Using automatic speech recognition and AI, decoding of spoken speech directly from brain cortical activity is possible, both as text and as synthesized speech waveforms. Based on these developments, prototypes of direct neuroprostheses that decode words and sentences in real time from brain activity as the patient attempts to speak have been developed for patients with anarthria, whether due to cerebrovascular accident or to amyotrophic lateral sclerosis impairing vocal tract control. AI, together with advances in the speech neurosciences, has the potential to restore speech in severely paralyzed individuals who cannot communicate naturally. AI also has the potential to improve speech and language outcomes, thereby transforming the habilitation of children with hearing loss. It is being explored to predict speech and language outcomes using presurgical brain imaging in young children undergoing cochlear implantation. The magnetic resonance imaging (MRI) scans of the brain used for this purpose are T1-weighted and diffusion tensor imaging. The approach is based on the fact that spoken language depends on both the peripheral auditory system and the central nervous system, so that variability in the brain contributes to variability in language outcomes after cochlear implantation. In addition, neuronal reorganization caused by auditory deprivation results in variation in the perception and understanding of spoken language. If accurate prediction could be achieved, language therapy and listening training could be individualized.[9]


  Role in Otology


In the Department of Otolaryngology-Head and Neck Surgery at The Ohio State University Wexner Medical Center, Columbus, AI is being used to develop the 'Auto-Scope', a computer-assisted image analysis software that classifies tympanic membrane images as normal or abnormal. This may aid diagnosis in primary care settings and can reduce overtreatment of ear pathologies, thereby reducing antibiotic resistance. AI also plays a role in triage, offers novel and promising strategies in the diagnosis of giddiness, and aids traditional decision-making for vertigo patients.[11] An example of otoscopic images used as a reference standard is given in [Figure 1]. Otoscopic images are fed to a convolutional neural network (CNN)-based classifier in large numbers (usually thousands for each condition), followed by training and validation on these images. After training and testing on these datasets, a comparison is made between AI-driven diagnoses and clinician-given diagnoses. [Table 1] gives a summary of various studies using AI to diagnose otologic conditions based on otoscopy images.[12],[13],[14],[15],[16],[17],[18]
Figure 1: Representative otoendoscopic images of tympanic membrane for the reference standard. (a) Normal, (b) scarred but intact, (c) myringosclerotic patches on tympanic membrane, (d) chronic otitis media mucosal type, (e) chronic otitis media squamous type

Table 1: Studies using artificial intelligence in diagnosis of otology conditions using otoscopic imaging

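The classification pipeline described above, a convolutional feature extractor feeding a classifier, can be sketched minimally in Python with NumPy. Everything here is illustrative and untrained (random filters and weights, a toy 16x16 "image", five classes standing in for the Figure 1 categories); real systems use deep networks trained on thousands of labeled otoscopic images:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of a single channel."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    # Turn raw class scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(img, kernels, W):
    """Conv -> ReLU -> global average pool -> linear layer -> softmax."""
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return softmax(W @ feats)

# Toy grayscale "otoscopic image" and random, untrained parameters.
rng = np.random.default_rng(2)
img = rng.random((16, 16))
kernels = rng.normal(size=(4, 3, 3))   # 4 filters; learned in a real CNN
W = rng.normal(size=(5, 4))            # 5 diagnostic classes
probs = classify(img, kernels, W)
```

Training would adjust `kernels` and `W` so that `probs` assigns high probability to the clinician-labeled class for each image.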


Surgical and training applications in otology

The virtual reality simulator "Cardinal Sim" can be used for training in temporal bone dissection without the need for a cadaveric temporal bone. Deep learning techniques are used here to automatically identify critical neurovascular structures in temporal bone scan images and can aid in patient-specific virtual reality otologic surgery.[6] Deep learning algorithms can even identify segmental intracochlear anatomy.[19] This aids in the selection of appropriate cochlear implant electrodes, as well as in customized cochlear implant programming.

Three-dimensional (3D)-printed temporal bones can be fabricated from high-resolution 3D computed tomography of temporal bones. These can serve as a tool for preoperative dissection practice for surgeons, as well as an excellent training tool for residents.[20]


  Role in Rhinology


Different types of machine learning algorithms are used in rhinology. An example of a supervised machine learning system is "classification," wherein the system predicts the category or class of an item. When applied, it is capable of differentiating between opacified and nonopacified sinuses on computed tomography (CT) images; an example is given in [Figure 2].
Figure 2: Example of classification of various CT images of paranasal sinuses based on opacification. (a) Well-pneumatized nonopacified PNS. (b) Unilateral opacification of PNS. (c) Partial opacification of PNS. (d) Total opacification of PNS on both sides. PNS: Paranasal sinuses



Another machine learning approach is "clustering," which uses an unsupervised algorithm to group similar items based on their attributes, for example, to identify images from endoscopic sinus surgery and label them automatically. Using CNNs, the critical anatomy of the nasal cavities can be delineated, and sinus opacification can be quantified.[1],[21],[22]
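Unsupervised grouping of this kind can be sketched with a minimal k-means implementation. The single "opacification fraction" feature below is a hypothetical stand-in for real image-derived attributes, used only to show how similar items end up in the same cluster without any labels:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: group feature vectors into k clusters."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each item to its nearest cluster center.
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of the items assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical feature: fraction of sinus volume opacified on CT.
X = np.array([[0.02], [0.05], [0.08], [0.85], [0.90], [0.95]])
labels, centers = kmeans(X, k=2)
```

With two well-separated groups, the three nearly clear scans and the three heavily opacified scans fall into different clusters, mirroring how clustering can auto-label surgical or radiological images by similarity.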

'Inception' is a convolutional neural network capable of classifying the patency of the osteomeatal complex on CT scans of the paranasal sinuses with 85% accuracy.[23] A more recent machine learning model, the 'random forest', has demonstrated that baseline olfactory function in chronic sinusitis patients can be predicted from the mucus cytokines interleukin-5 and interleukin-13.[1]


  Role in Laryngology


Computational learning is incorporated in voice analysis software to aid in the diagnosis of various laryngeal pathologies, as machine learning classifiers have been shown to perform better than rule-based algorithms.[24] When AI algorithms were used along with voice analysis and videostroboscopy images, T1a glottic cancers and other lesions of the vocal cords could be diagnosed with around 100% accuracy.[25] Ren et al. conducted a study on the automatic recognition of laryngoscopic images, using a deep learning technique to distinguish various laryngeal lesions (normal study, vocal cord nodule, vocal cord polyp, leukoplakia, and laryngeal malignancy) based on 24,667 laryngoscopy images.[26] Examples of common laryngeal pathologies as seen on laryngoscopy images are given in [Figure 3].
Figure 3: Common laryngeal pathologies as seen on laryngoscopy images. (a) Normal vocal cords. (b) Vocal cord nodule. (c) Carcinoma of vocal cord



They developed a CNN-based classifier and compared it with clinical visual assessments (CVAs). A transfer learning strategy based on the ResNet-101 model was adopted for classifying the laryngeal images. They observed an overall accuracy of 96.24%, and the CNN-based classifier outperformed CVAs for most laryngeal conditions [Table 2]. However, despite the size of this dataset, the model cannot be used to diagnose lesions such as laryngeal papillomas, because of their exceptionally variable distribution within the larynx, which leaves no set location on which the CNN can predictably focus to classify the lesion. AI-based diagnostic algorithms help clinicians make more confident diagnoses, particularly in cases of early cancers or precancerous lesions. Such a system is intended to supplement, not replace, clinical assessment by the physician. It enables faster decision-making and guides the clinician toward further investigations, such as a biopsy for a suspicious lesion. It will also benefit patients in rural and remote areas through telemedicine.[26]
Table 2: Laryngoscopic image recognition by Ren et al.[26]

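Metrics such as the overall accuracy reported by Ren et al. follow directly from a confusion matrix: overall accuracy is the fraction of images on the diagonal, and per-class recall divides each diagonal entry by its row total. The five-class counts below are hypothetical, not the study's data:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of all images that were correctly classified (diagonal)."""
    return np.trace(cm) / cm.sum()

def per_class_recall(cm):
    """Correct predictions divided by the true count of each class (rows)."""
    return np.diag(cm) / cm.sum(axis=1)

# Hypothetical confusion matrix (rows = true class, columns = prediction):
# normal, nodule, polyp, leukoplakia, malignancy; 100 images per class.
cm = np.array([
    [96,  2,  1,  1,  0],
    [ 3, 90,  5,  1,  1],
    [ 2,  4, 92,  1,  1],
    [ 1,  1,  2, 94,  2],
    [ 0,  1,  1,  3, 95],
])
acc = overall_accuracy(cm)
recalls = per_class_recall(cm)
```

Comparing such matrices between the CNN and clinicians, class by class, is what underlies statements like "the classifier outperformed CVAs for most laryngeal conditions."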



  Role in Head-and-Neck Oncology


In head-and-neck oncology, deep learning networks are being used to automatically segment anatomical structures to accelerate radiotherapy (RT) planning. This can be utilized to spare critical structures during RT.[27] Stepp et al. used machine learning to predict the presence of nodal disease in human papillomavirus (HPV)-related primary oropharyngeal squamous cell carcinomas from a 40-gene expression profile.[28] At present, the strongest prognostic biomarker in head-and-neck squamous cell carcinoma is HPV status. Carnielli et al. used AI to develop a proteomic signature to predict the presence of lymph node metastasis in oral cancer.[29] Mascharak et al. at Stanford University trained a machine learning algorithm to visually identify oropharyngeal malignancies using white light and narrow-band imaging; the goal of this technology was an automated detection system for oropharyngeal malignancies to improve early diagnosis.[30] Kann et al. trained a neural network to identify nodal metastasis and extranodal extension on preoperative CT imaging from patients with head-and-neck cancer.[31] A summary of the above studies is given in [Table 3].
Table 3: Summary of applications of artificial intelligence in various head-and-neck malignancies



"Radiomics" is a new type of tumor marker derived from the analysis of quantitative imaging. It is driven by computational advances that allow the analysis of large amounts of imaging data, such as CT, MRI, positron emission tomography, and functional imaging studies. Radiomics was developed to overcome the limitations of tumor markers derived from tissue and blood: tissue markers obtained from biopsies represent only a small subregion of the tumor at a single time point and are often not representative of the tumor's biology or of alterations in that biology during and after treatment. This makes radiomics an ideal option for treatment monitoring and assessment of response to treatment. Liquid biopsies give an overall picture of the factors secreted by a tumor but lack any spatial resolution. In radiomics, in addition to the usual radiological features, tumor tissue is described in terms of complete three-dimensional information regarding tumor makeup, tumor shape, the distribution of voxel intensities within the volume occupied by the tumor, and tumor texture. In addition, radiomics allows for repeated noninvasive analysis based on follow-up images. The spatial resolution of radiomics allows analysis of tumor tissue as well as healthy tissue to predict treatment-related side effects. A disadvantage of radiomics in the head-and-neck region is the artifacts produced by metal implants such as dental fillings.[33]
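The "distribution of voxel intensities" that radiomics quantifies corresponds to so-called first-order features. A minimal sketch computing a few of them from a toy region of interest (ROI) follows; the intensity values are simulated, and real radiomics pipelines compute many more features (shape, texture, wavelet) under standardized definitions:

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic features of voxel intensities in an ROI."""
    v = roi.ravel().astype(float)
    # Histogram-based probability distribution of intensities.
    hist, _ = np.histogram(v, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(v.mean()),                    # average intensity
        "variance": float(v.var()),                 # spread of intensities
        "energy": float(np.sum(v ** 2)),            # total squared intensity
        "entropy": float(-np.sum(p * np.log2(p))),  # histogram heterogeneity
    }

# Toy 8x8x8 ROI with Hounsfield-unit-like simulated intensities.
rng = np.random.default_rng(3)
roi = rng.normal(loc=40.0, scale=10.0, size=(8, 8, 8))
feats = first_order_features(roi)
```

Features like these, extracted repeatedly from follow-up scans, are what make radiomics suitable for noninvasive treatment monitoring.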

AI with hyperspectral imaging has been used to differentiate thyroid malignancies from normal head-and-neck tissue with 97% sensitivity, 96% specificity, and 96% accuracy. This can enable surgeons to define margins during surgical excision with more precision.[34] In the Bethesda system for reporting thyroid cytopathology, the clinical decision is straightforward for Bethesda II (benign) and VI (malignant) lesions. Bethesda III-V constitute the indeterminate groups and pose a dilemma for the clinician; the usual options are repeat fine-needle aspiration cytology (FNAC), follow-up, molecular testing, or surgery for definitive histopathological examination. Recent advances in the use of AI to diagnose thyroid malignancies on ultrasound images, along with machine learning algorithms that analyze digitized cytopathology slides to accurately diagnose the presence or absence of thyroid malignancy, help to avoid unnecessary surgery. These machine learning algorithms have a role as an adjunct tool to assist the clinician in reducing the number of indeterminate cases.[9]
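Sensitivity and specificity figures like those quoted above come directly from a classifier's confusion counts. A sketch with hypothetical counts chosen to match the reported percentages (these are not the study's raw numbers):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)  # fraction of malignant tissue detected
    specificity = tn / (tn + fp)  # fraction of normal tissue correctly cleared
    return sensitivity, specificity

# Hypothetical counts for a malignant-vs-normal tissue classifier.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=96, fp=4)
```

High sensitivity matters most when missing malignant tissue at a margin is costly; high specificity limits unnecessary resection of healthy tissue.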


  Other Uses


AI is revolutionizing health record management through the transformation toward electronic health record systems, thereby reducing the physician's burden of clinical documentation. Using natural language processing, computational intelligence can automatically record and extract content from clinical interactions between doctor and patient via virtual scribes.[35] In remote and rural areas, digital data and patient documents can be evaluated elsewhere to assist patients in decision-making and counseling.


  Limitations and Fallacies of Computational Intelligence in Otorhinolaryngology


One limitation of AI systems is that they function as 'black boxes': the data fed to the system and the predictions it generates are visible, but there is no visibility into how the computational learning algorithms arrive at a meaningful interpretation of the data.[36] Using such algorithms to make clinical decisions can be unsettling, much like prescribing a drug with no awareness of its mechanism of action. Hacking of algorithms to harm the community at large is a major threat of this technology. Computational intelligence also raises ethical, privacy, and liability concerns over the use of enormous amounts of patient data.[37] Algorithmic bias can exceed human bias, and much work is required to eradicate embedded prejudice and create a truly representative cross-section of the population.

Although these algorithms are mostly helpful for broad classification, their ability to give precise predictions for an individual remains uncertain. Data ownership is another major concern: whether the data belong to the patient, the institution, or the government is a question that must be answered before the data pass into the hands of companies with vested interests.[38] As technology changes quickly, sufficient data infrastructure, with the necessary validation and continuous recalibration, will be required.


  Conclusion


AI is shaping the future of health care, improving its quality, cost, and accessibility. Many of these applications require high-resource centers and settings; however, AI also has unique applications in even the most underdeveloped areas, where it can take over some of the diagnostic duties of a health-care worker. Nevertheless, AI serves only as an aid to the physician and is not a substitute for the medical practitioner, providing substantial guidance in challenging situations based on patient data. The unprecedented insight gained into diagnosis, patient care, and treatment outcomes ushers in a new era of quality and success in medical care. AI has the potential to transform otorhinolaryngology, and the future looks bright!

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Bur AM, Shew M, New J. Artificial intelligence for the otolaryngologist: A state of the art review. Otolaryngol Head Neck Surg 2019;160:603-11.
2. van Gerven M, Bohte S. Editorial: Artificial neural networks as models of neural information processing. Front Comput Neurosci 2017;11:114.
3. Bengio Y, Courville A, Vincent P. Representation learning: A review and new perspectives. IEEE Trans Pattern Anal Mach Intell 2013;35:1798-828.
4. Dham R, Arumugam SV, Dharmarajan S, Mathews S, Paramasivan VK, Kameswaran M. Interrupted cochlear implant habilitation due to COVID-19 pandemic - ways and means to overcome this. Int J Pediatr Otorhinolaryngol 2020;138:110327.
5. Chi Y, Wang J, Zhao Y, Noble JH, Dawant BM. A deep-learning-based method for the localization of cochlear implant electrodes in CT images. In: Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy; 2019. p. 1141-5.
6. Compton EC, Agrawal SK, Ladak HM, Chan S, Hoy M, Nakoneshny SC, et al. Assessment of a virtual reality temporal bone surgical simulator: A national face and content validity study. J Otolaryngol Head Neck Surg 2020;49:17.
7. Bing D, Ying J, Miao J, Lan L, Wang D, Zhao L, et al. Predicting the hearing outcome in sudden sensorineural hearing loss via machine learning models. Clin Otolaryngol 2018;43:868-74.
8. Slager HK, Jensen J, Kozlowski K, Teagle H, Park LR, Biever A, et al. Remote programming of cochlear implants. Otol Neurotol 2019;40:e260-6.
9. Wilson BS, Tucci DL, Moses DA, Chang EF, Young NM, Zeng FG, et al. Harnessing the power of artificial intelligence in otolaryngology and the communication sciences. J Assoc Res Otolaryngol 2022;23:319-49.
10. Lai YH, Zheng WZ, Tang ST, Fang SH, Liao WH, Tsao Y. Improving the performance of hearing aids in noisy environments based on deep learning technology. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:404-8.
11. Li ZZ, Yu DZ. Application prospect of artificial intelligence technology in vestibular disorders. Lin Chung Er Bi Yan Hou Tou Jing Wai Ke Za Zhi 2019;33:895-7.
12. Mironică I, Vertan C, Gheorghe DC. Automatic pediatric otitis detection by classification of global image features. In: Proceedings of the E-Health and Bioengineering Conference, Iasi, Romania; 2011. p. 1-4.
13. Vertan C, Gheorghe DC, Ionescu B. Eardrum color content analysis in video-otoscopy images for the diagnosis support of pediatric otitis. In: Proceedings of the International Symposium on Signals, Circuits and Systems, Iasi, Romania; 2011. p. 1-4.
14. Shie CK, Chang HT, Fan FC, Chen CJ, Fang TY, Wang PC. A hybrid feature-based segmentation and classification system for the computer aided self-diagnosis of otitis media. In: Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, USA; 2014. p. 4655-8.
15. Senaras C, Moberly AC, Teknos T, Essig G, Elmaraghy C, Taj-Schaal N, et al. Detection of eardrum abnormalities using ensemble deep learning approaches. Proc Med Imaging 2018 Comput Aided Diagn 2018;10575:105751A.
16. Myburgh HC, Jose S, Swanepoel DW, Laurent C. Towards low cost automated smartphone- and cloud-based otitis media diagnosis. Biomed Signal Process Control 2018;39:34-52.
17. Cha D, Pae C, Seong SB, Choi JY, Park HJ. Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database. EBioMedicine 2019;45:606-14.
18. Viscaino M, Maass JC, Delano PH, Torrente M, Stott C, Auat Cheein F. Computer-aided diagnosis of external and middle ear conditions: A machine learning approach. PLoS One 2020;15:e0229226.
19. Wang J, Noble JH, Dawant BM. Metal artifact reduction for the segmentation of the intra cochlear anatomy in CT images of the ear with 3D-conditional GANs. Med Image Anal 2019;58:101553.
20. Skrzat J, Zdilla MJ, Brzegowy P, Hołda M. 3D printed replica of the human temporal bone intended for teaching gross anatomy. Folia Med Cracov 2019;59:23-30.
21. Iwamoto Y, Xiong K, Kitamura T, Han XH, Matsushiro N, Nishimura H, et al. Automatic segmentation of the paranasal sinus from computer tomography images using a probabilistic atlas and a fully convolutional network. Annu Int Conf IEEE Eng Med Biol Soc 2019;2019:2789-92.
22. Humphries SM, Centeno JP, Notary AM, Gerow J, Cicchetti G, Katial RK, et al. Volumetric assessment of paranasal sinus opacification on computed tomography can be automated using a convolutional neural network. Int Forum Allergy Rhinol 2020;10:1218-25.
23. Chowdhury NI, Smith TL, Chandra RK, Turner JH. Automated classification of osteomeatal complex inflammation on computed tomography using convolutional neural networks. Int Forum Allergy Rhinol 2019;9:46-52.
24. Cesari U, De Pietro G, Marciano E, Niri C, Sannino G, Verde L. Voice disorder detection via an m-health system: Design and results of a clinical study to evaluate Vox4Health. Biomed Res Int 2018;2018:8193694.
25. Unger J, Lohscheller J, Reiter M, Eder K, Betz CS, Schuster M. A noninvasive procedure for early-stage discrimination of malignant and precancerous vocal fold lesions based on laryngeal dynamics analysis. Cancer Res 2015;75:31-9.
26. Ren J, Jing X, Wang J, Ren X, Xu Y, Yang Q, et al. Automatic recognition of laryngoscopic images using a deep-learning technique. Laryngoscope 2020;130:E686-93.
27. Tong N, Gou S, Yang S, Ruan D, Sheng K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys 2018;45:4558-67.
28. Stepp WH, Farquhar D, Sheth S, Mazul A, Mamdani M, Hackman TG, et al. RNA oncoimmune phenotyping of HPV-positive p16-positive oropharyngeal squamous cell carcinomas by nodal status. JAMA Otolaryngol Head Neck Surg 2018;144:967-75.
29. Carnielli CM, Macedo CC, De Rossi T, Granato DC, Rivera C, Domingues RR, et al. Combining discovery and targeted proteomics reveals a prognostic signature in oral cancer. Nat Commun 2018;9:3598.
30. Mascharak S, Baird BJ, Holsinger FC. Detecting oropharyngeal carcinoma using multispectral, narrow-band imaging and machine learning. Laryngoscope 2018;128:2514-20.
31. Kann BH, Aneja S, Loganadane GV, Kelly JR, Smith SM, Decker RH, et al. Pretreatment identification of head and neck cancer nodal metastasis and extranodal extension using deep learning neural networks. Sci Rep 2018;8:14036.
32. Fei B, Lu G, Wang X, Zhang H, Little JV, Patel MR, et al. Label-free reflectance hyperspectral imaging for tumor margin assessment: A pilot study on surgical specimens of cancer patients. J Biomed Opt 2017;22:1-7.
33. Tanadini-Lang S, Balermpas P, Guckenberger M, Pavic M, Riesterer O, Vuong D, et al. Radiomic biomarkers for head and neck squamous cell carcinoma. Strahlenther Onkol 2020;196:868-78.
34. Halicek M, Lu G, Little JV, Wang X, Patel M, Griffith CC, et al. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J Biomed Opt 2017;22:60503.
35. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng 2018;2:719-31.
36. Yu MK, Ma J, Fisher J, Kreisberg JF, Raphael BJ, Ideker T. Visible machine learning for biomedicine. Cell 2018;173:1562-5.
37. Topol EJ. High-performance medicine: The convergence of human and artificial intelligence. Nat Med 2019;25:44-56.
38. Kish LJ, Topol EJ. Unpatients - why patients should own their medical data. Nat Biotechnol 2015;33:921-4.
