<?xml version="1.0" encoding="UTF-8"?><?xml-model type="application/xml-dtd" href="https://jats.nlm.nih.gov/publishing/1.3/JATS-journalpublishing1-3.dtd"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "https://jats.nlm.nih.gov/publishing/1.3/JATS-journalpublishing1-3.dtd">
<article xmlns:ali="http://www.niso.org/schemas/ali/1.0/" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" specific-use="Marcalyc 1.3" dtd-version="1.3" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="index">5704</journal-id>
<journal-title-group>
<journal-title specific-use="original" xml:lang="pt">Revista de Epidemiologia e Controle de Infecção</journal-title>
<abbrev-journal-title abbrev-type="publisher" xml:lang="pt">RECI</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2238-3360</issn>
<publisher>
<publisher-name>Universidade de Santa Cruz do Sul</publisher-name>
<publisher-loc>
<country>Brasil</country>
<email>liapossuelo@unisc.br</email>
</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="art-access-id" specific-use="redalyc">570481700018</article-id>
<article-id pub-id-type="doi">10.17058/reci.v15i1.19227</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Artigos Revisão</subject>
</subj-group>
</article-categories>
<title-group>
<article-title xml:lang="en">The use of machine learning methods for computed tomography image classification in the Covid-19 pandemic: a review</article-title>
<trans-title-group>
<trans-title xml:lang="pt">O uso de métodos de aprendizado de máquina para classificação de imagens de tomografia computadorizada na pandemia da COVID-19: uma revisão</trans-title>
</trans-title-group>
<trans-title-group>
<trans-title xml:lang="es">El uso de métodos de aprendizaje de máquina para clasificación de imágenes de tomografía computarizada en la pandemia de COVID-19: una revisión</trans-title>
</trans-title-group>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="no">
<name name-style="western">
<surname>Sieredziński</surname>
<given-names>Jacek</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
</contrib>
<contrib contrib-type="author" corresp="no">
<name name-style="western">
<surname>Zaborski</surname>
<given-names>Daniel</given-names>
</name>
<xref ref-type="aff" rid="aff2"/>
<email>daniel.zaborski@zut.edu.pl</email>
</contrib>
</contrib-group>
<aff id="aff1">
<institution content-type="original">Sin institución</institution>
<country country="PL">Polonia</country>
<institution-wrap>
<institution content-type="orgname">Military Hospital and Clinic</institution>
<institution-id institution-id-type="ror">https://ror.org/03pebmm12</institution-id>
</institution-wrap>
</aff>
<aff id="aff2">
<institution content-type="original">Sin institución</institution>
<country country="PL">Polonia</country>
<institution-wrap>
<institution content-type="orgname">Laboratory of Biostatistics, West Pomeranian University of Technology</institution>
<institution-id institution-id-type="ror">https://ror.org/0596m7f19</institution-id>
</institution-wrap>
</aff>
<pub-date pub-type="epub-ppub">
<season>January-March</season>
<year>2025</year>
</pub-date>
<volume>15</volume>
<issue>1</issue>
<fpage>109</fpage>
<lpage>120</lpage>
<history>
<date date-type="received" publication-format="dd mes yyyy">
<day>12</day>
<month>03</month>
<year>2024</year>
</date>
<date date-type="accepted" publication-format="dd mes yyyy">
<day>22</day>
<month>11</month>
<year>2024</year>
</date>
</history>
<permissions>
<ali:free_to_read/>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<ali:license_ref>https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This work is licensed under a Creative Commons Attribution 4.0 International License.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p>Background and Objectives: COVID-19 has been declared a pandemic by the World Health Organization, representing a major challenge worldwide. An early diagnostic method for COVID-19 is based on CT scans, which can be analyzed using artificial intelligence to save medical, logistical, and human resources. Therefore, this study aimed to present the current state of the art in the application of machine learning to classify computed tomography images in the COVID-19 pandemic. Content: The review briefly describes the types of machine learning methods for COVID-19 detection, the stages of deep learning model construction (segmentation, augmentation), and selected aspects of explainable artificial intelligence. Finally, the application results are discussed and the most common performance indicators for individual models are given. Conclusion: Models and algorithms developed during the peak of the COVID-19 pandemic can be reused in the event of future outbreaks of this or similar infectious diseases.</p>
</abstract>
<trans-abstract xml:lang="pt">
<title>Resumo</title>
<p>Justificativa e Objetivos: A COVID-19 foi declarada uma pandemia pela Organização Mundial da Saúde, representando um grande desafio em todo o mundo. Um método de diagnóstico precoce da COVID-19 é baseado em tomografias computadorizadas, que podem ser analisadas usando inteligência artificial para economizar recursos médicos, logísticos e humanos. Portanto, o objetivo deste estudo foi apresentar o atual estado da arte na aplicação do aprendizado de máquina para classificar imagens de tomografia computadorizada na pandemia de COVID-19. Conteúdo: A revisão descreve brevemente os tipos de métodos de aprendizado de máquina para detecção de COVID-19, os estágios de construção do modelo de aprendizagem profunda (segmentação, aumento) e aspectos selecionados da inteligência artificial explicável. Finalmente, os resultados da aplicação são discutidos e os indicadores de desempenho mais comuns para modelos individuais são dados. Conclusão: Modelos e algoritmos desenvolvidos durante o pico da pandemia de COVID-19 podem ser reusados no caso de futuros surtos desta ou de doenças infecciosas semelhantes.</p>
</trans-abstract>
<trans-abstract xml:lang="es">
<title>Resumen</title>
<p>Justificación y Objetivos: La Organización Mundial de la Salud ha declarado que la COVID-19 es una pandemia, lo que ha planteado un gran desafío a nivel mundial. Un método de diagnóstico precoz para COVID-19 se basa en tomografías computarizadas, que pueden analizarse mediante inteligencia artificial para ahorrar recursos médicos, logísticos y humanos. Por lo tanto, el objetivo de este estudio fue presentar el estado actual del arte en la aplicación del aprendizaje automático para clasificar imágenes de tomografía computarizada en la pandemia de COVID-19. Contenido: La revisión describe brevemente los tipos de métodos de aprendizaje automático para la detección de COVID-19, las etapas de construcción del modelo de aprendizaje profundo (segmentación, aumento) y aspectos seleccionados de la inteligencia artificial explicable. Finalmente, se discuten los resultados de la aplicación y se presentan los indicadores de rendimiento más comunes para modelos individuales. Conclusión: Los modelos y algoritmos desarrollados durante el pico de la pandemia de COVID-19 pueden reutilizarse en caso de futuros brotes de esta o de enfermedades infecciosas similares.</p>
</trans-abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>COVID-19</kwd>
<kwd>Tomography, X-Ray Computed</kwd>
<kwd>Machine Learning</kwd>
<kwd>Deep Learning</kwd>
<kwd>Neural Networks, Computer</kwd>
</kwd-group>
<kwd-group xml:lang="pt">
<title>Palavras-chave</title>
<kwd>COVID-19</kwd>
<kwd>Tomografia Computadorizada, Raios X</kwd>
<kwd>Aprendizado de Máquina</kwd>
<kwd>Aprendizado Profundo</kwd>
<kwd>Redes Neurais de Computação</kwd>
</kwd-group>
<kwd-group xml:lang="es">
<title>Palabras clave</title>
<kwd>COVID-19</kwd>
<kwd>Tomografía Computarizada, Rayos X</kwd>
<kwd>Aprendizaje Automático</kwd>
<kwd>Aprendizaje Profundo</kwd>
<kwd>Redes Neurales de la Computación</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="1"/>
<equation-count count="3"/>
<ref-count count="40"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>redalyc-journal-id</meta-name>
<meta-value>5704</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec>
<title>
<bold>INTRODUCTION</bold>
</title>
<p>The first human cases of coronavirus disease 2019 (COVID-19) were reported in Wuhan City, China, in December 2019.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref1">1</xref>-<xref ref-type="bibr" rid="redalyc_570481700018_ref3">3</xref>
</sup> The COVID-19 pandemic was declared on March 11, 2020, by the World Health Organization.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref4">4</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>
</sup> As of November 1, 2023, 771,548,954 cases and 6,974,460 deaths had been confirmed, ranking COVID-19 fifth among the deadliest epidemics and pandemics in history.<sup><xref ref-type="bibr" rid="redalyc_570481700018_ref4">4</xref></sup>
</p>
<p>Widely accepted management strategies to restrict the spread of COVID-19 have included lockdowns, travel restrictions, quarantines, social distancing, isolation, infection control measures, and vaccination.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>-<xref ref-type="bibr" rid="redalyc_570481700018_ref7">7</xref>
</sup> Different drug types have also been developed and many substances with other indications have been “repurposed” to treat patients with COVID-19.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref4">4</xref>
</sup> However, the emergence of new worrying variants has become a major problem in the efficient prevention and treatment of the infection.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>
</sup> SARS-CoV-2 may cause no symptoms, only mild symptoms such as cramps and fever, or serious complications such as shortness of breath and kidney failure.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref3">3</xref>
</sup> The risk of severe disease is also higher for older people and for those with underlying conditions, such as diabetes and cancer.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref2">2</xref>
</sup>
</p>
<p>Real-time reverse transcription-polymerase chain reaction (rRT-PCR) is currently the diagnostic gold standard used to confirm COVID-19 infection.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>
</sup> However, the method is expensive, laborious, and time-consuming; requires well-trained personnel to perform sophisticated procedures; and has a relatively low positive detection rate in the early stage.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref1">1</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>-<xref ref-type="bibr" rid="redalyc_570481700018_ref15">15</xref>
</sup> Furthermore, new genetic variants of SARS-CoV-2 may lead to false-negative results.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> An early diagnostic method for COVID-19 is based on computed tomography (CT) scans,<sup><xref ref-type="bibr" rid="redalyc_570481700018_ref1">1</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref11">11</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref13">13</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref17">17</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup> which provide a higher sensitivity rate (88-98%) than RT-PCR (59-71%).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref19">19</xref>
</sup> Compared with X-rays, CT generates more detailed cross-sectional images without tissue overlap, has higher sensitivity and specificity, and can distinguish between COVID-19 and other conditions, such as pneumonia.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref2">2</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref12">12</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> Indeed, CT provides 3D examinations of organs from multiple angles and allows the severity of the infection to be assessed.<sup><xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref></sup> Three main types of COVID-19-related irregularities have been identified on lung CT images: ground-glass opacification, consolidation, and pleural effusion.<sup><xref ref-type="bibr" rid="redalyc_570481700018_ref1">1</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref11">11</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref12">12</xref></sup> To further improve CT analysis, artificial intelligence (AI) can be used,<sup><xref ref-type="bibr" rid="redalyc_570481700018_ref1">1</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref12">12</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref20">20</xref>
</sup> saving time as well as medical, logistical, and human resources,<sup><xref ref-type="bibr" rid="redalyc_570481700018_ref2">2</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref3">3</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref11">11</xref></sup> facilitating the detection, classification, diagnosis, segmentation, prediction, and improvement of image quality.<sup><xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref20">20</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref21">21</xref>
</sup>
</p>
<p>Therefore, our study aimed to present the current state of the art in the application of machine learning to classify computed tomography images in the COVID-19 pandemic.</p>
</sec>
<sec>
<title>
<bold>METHODS</bold>
</title>
<p>This narrative review was conducted to assess the literature on machine learning methods and their use in classifying CT images during the COVID-19 pandemic, rather than to answer a specific research question. It gathered the relevant articles in a qualitative manner; a quantitative analysis of the literature or of its quality was not the aim of this study. Articles were selected according to the following inclusion and exclusion criteria.</p>
<p>
<bold>Eligibility criteria</bold>
</p>
<p>Only full-text articles on applying machine learning methods to COVID-19 detection based on CT scans were included. The selected articles were published in English between January 1, 2021, and December 31, 2023.</p>
<p>
<bold>Exclusion criteria</bold>
</p>
<p>Preprints, conference abstracts, books, book chapters, notes, technical reports, as well as studies not addressing the scientific knowledge about applying machine learning methods to detect COVID-19 based on CT scans were excluded.</p>
<p>
<bold>Information source and search strategy</bold>
</p>
<p>The following query was used for searching PubMed (November 24th, 2023): machine learning AND computed tomography AND image classification AND COVID-19.</p>
<p>
<bold>Selection of studies</bold>
</p>
<p>Articles that appeared to meet the inclusion criteria were selected for full reading to determine their eligibility. Supplementary articles were included after checking their reference lists.</p>
<p>
<bold>Data collection</bold>
</p>
<p>The initial number of articles was 213, but it was reduced to 60 after applying the exclusion criteria. Thorough reading and critical evaluation of article content resulted in the selection of the 40 most relevant articles (Figure 1).</p>
<p>
<fig id="gf1">
<label>Figure 1</label>
<caption>
<title>The procedure of article selection (studies from around the world, 2019-2023).</title>
</caption>
<alt-text>Figure 1 The procedure of article selection (studies from around the world, 2019-2023).</alt-text>
<graphic xlink:href="570481700018_gf7.png" position="anchor" orientation="portrait">
<alt-text>Figure 1 The procedure of article selection (studies from around the world, 2019-2023).</alt-text>
</graphic>
</fig>
</p>
</sec>
<sec>
<title>
<bold>RESULTS</bold>
</title>
<p>
<bold>Segmentation and augmentation</bold>
</p>
<p>Among the five models (U-Net, LinkNet, R2U-Net, Attention U-Net, and U-Net++),<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref12">12</xref>
</sup> LinkNet achieved the highest Dice coefficient (DC) and intersection over union (IoU) values for lung segmentation (0.980 and 0.967, respectively), whereas R2U-Net showed the lowest values (0.962 and 0.928, respectively).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>
</sup> The lung area was also segmented from a small cohort of CT images with BCDU-Net,<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref22">22</xref>
</sup> which was inspired by U-Net<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref23">23</xref>
</sup> and involved bi-directional convolutional long short-term memory (ConvLSTM) with densely connected convolutions. In other studies, candidate infected regions were segmented from pulmonary CT images, using a 3D deep learning (DL) model (region proposal network)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref14">14</xref>
</sup> or a V-Net with bottleneck layers (VB-Net), followed by various classification methods [convolutional neural networks (CNN) and an inception network, or random forest (RF)].<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref13">13</xref>
</sup> The authors developed a VB-Net algorithm, which combined the V-Net model with the bottleneck layer, thus integrating the fine-grained COVID-19 image features, reducing the number of feature mapping channels, and effectively increasing the convolution speed. Dynamic fusion segmentation network (DFSN) is another image segmentation method,<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup> whose IoU and DC values were 0.800 and 0.530, respectively. The first component of this system automatically segmented infection-related pixels and served as the backbone to extract dynamically selected pixel-level information, which was used to make a final diagnosis. Other authors<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref24">24</xref>
</sup> used a semi-supervised lung infection segmentation deep network (Inf-Net) for chest CT images, including a parallel partial decoder to aggregate high-level features.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup> They obtained a slightly lower accuracy for non-infected CT regions and applied an additional classifier to improve the overall model performance.</p>
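<p>For reference, the two overlap indicators quoted throughout this section are defined, for a predicted mask A and a ground-truth mask B, as follows; both range from 0 (no overlap) to 1 (perfect agreement):</p>
<disp-formula>
<tex-math>DC(A,B)=\frac{2\,|A\cap B|}{|A|+|B|},\qquad IoU(A,B)=\frac{|A\cap B|}{|A\cup B|}</tex-math>
</disp-formula>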
<p>Lung-lesion maps were obtained from input images processed by different segmentation networks (U-net, DRUNET, FCN, SegNet, and DeepLabv3).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref25">25</xref>
</sup> Pre-trained 2D UNet,<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref26">26</xref>
</sup> unsupervised lung segmentation (Shift3D),<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref27">27</xref>
</sup> entire-lung segmentation (followed by resizing, bin discretization, and radiomic feature extraction),<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref28">28</xref>
</sup> k-means clustering with gray level co-occurrence matrices (for extracting regions of interest and textural features),<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref29">29</xref>
</sup> and a segmentation network within the DL framework (for segmenting lung and lesion areas, thus extracting spatiotemporal information from multiple CT scans to perform auxiliary diagnosis)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref30">30</xref>
</sup> were also used for image segmentation. In another study,<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref31">31</xref>
</sup> over-segmentation mean shift was followed by a superpixel-simple linear iterative clustering algorithm for pulmonary parenchyma segmentation. Each superpixel cluster was described according to its position, grey intensity, second-order texture, and spatial-context-saliency features. Subsequently, the watershed segmentation was applied to the mean-shift clusters to identify ground-glass opacity and pulmonary infiltrates only in the pulmonary parenchyma segmentation-indicated zones. Application of the EfficientNet and EfficientDet networks<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref19">19</xref>
</sup> yielded DC values of 0.980 and 0.730 for lung and COVID-19 segmentation, respectively, whereas a DC of 0.590 was reported for a U-Net-like architecture with a residual network (ResNet-34) backbone.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref32">32</xref>
</sup> Finally, a DC of 0.575 was obtained using a weakly-supervised method based on a generative adversarial network (GAN),<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref33">33</xref>
</sup> whereas a multitask model outperformed individual segmentation models for the joint segmentation of pulmonary lesions.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref34">34</xref>
</sup>
</p>
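<p>As a minimal illustrative sketch (not taken from any of the reviewed studies, and assuming NumPy and two binary masks of equal shape), the DC and IoU values compared above can be computed as follows:</p>
<preformat preformat-type="code">
import numpy as np

def dice_and_iou(pred, truth):
    """Return (DC, IoU) for two binary segmentation masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    if union == 0:  # both masks empty: define perfect agreement
        return 1.0, 1.0
    dc = 2.0 * intersection / (pred.sum() + truth.sum())
    iou = intersection / union
    return dc, iou
</preformat>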
<p>To prevent overfitting, data augmentation and transfer learning (TL) can be used. The former includes translation, horizontal (and vertical) flipping, and random rotation to enhance the accuracy of model prediction.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>
</sup> Augmentation may reduce class imbalance or data scarcity problems.<sup>5,<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup> Some authors<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref3">3</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>
</sup> applied simple image transformations (scaling, rotation, and flipping) to increase the number of records, whereas others<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref35">35</xref>
</sup> improved the representational learning capability by distortion, painting, and perspective transformation. Finally, GANs were used for data augmentation in two studies.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref33">33</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref36">36</xref>
</sup> The first involved GAN hyperparameter tuning with the whale optimization algorithm to avoid overfitting and instability, whereas the second used image-level labels to generate normal-looking CT slices (from those with COVID-19 lesions), whose realism was improved with a feature-matching strategy.</p>
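<p>A minimal sketch of the geometric augmentations mentioned above (horizontal and vertical flipping, random rotation, and translation), written here with torchvision transforms; the parameter values are illustrative assumptions, not those of the cited studies:</p>
<preformat preformat-type="code">
from torchvision import transforms

# Randomly flip, rotate, and translate each CT slice during training.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),
])
# augmented_slice = augment(ct_slice)  # ct_slice: PIL image or (C, H, W) tensor
</preformat>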
<p>
<bold>Classification</bold>
</p>
<p>An open-source framework consisting of several DL algorithms differentiated COVID-19 from community-acquired pneumonia and other lung diseases.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref22">22</xref>
</sup> It could deal with heterogeneous data and small sample sizes irrespective of the CT image source. To increase accuracy and decrease logarithmic loss and testing time, another study used augmented data to train CNN- and ConvLSTM-based DL models, which were compared with traditional machine learning (ML) models [support vector machines (SVM) and k-nearest neighbors (k-NN)] and outperformed them.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref3">3</xref>
</sup> COVID-19 probability was also predicted using a weakly supervised DL model based on 3D CT volumes from the segmented 3D lung regions.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref26">26</xref>
</sup> Lung lesions were determined from activation regions in a classification network and unsupervised connected components.</p>
<p>An infection-size-aware RF automatically assigned patients to classes with different lesion-size ranges using thin-section CT image records for COVID-19 and community-acquired pneumonia.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref13">13</xref>
</sup> Model performance was further increased by including radiomic features. Another method distinguished COVID-19 from common pneumonia based on lung vessel morphology.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>
</sup> It used maximum intensity projection to indicate small-density changes in CT scans, thus accurately reflecting blood vessel condition and calcification of their walls. The applied capsule network used the DenseNet-121 feature extractor and outperformed ResNet-50 and Inception-V3. Community-acquired pneumonia and non-pneumonic images were also analyzed with a 2D CNN (COVNet), which extracted visual features from volumetric chest CT scans.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref23">23</xref>
</sup> Input CT slices were fed to a pre-trained ResNet50 to obtain features, which were then combined and processed by a fully connected layer. To increase the contrast between the local lesion regions and the abdominal cavity, another deep CNN-based classification algorithm performed convolution and deconvolution operations.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref11">11</xref>
</sup> Moreover, discrimination between image types was improved with middle-level features, which were classified in each channel using a modified open-source COVID-CT dataset.</p>
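<p>Maximum intensity projection, used above to accentuate vessel morphology, reduces a CT volume to a 2D image by keeping the brightest voxel along the projection axis; a short NumPy sketch (with a placeholder volume) is:</p>
<preformat preformat-type="code">
import numpy as np

ct_volume = np.random.rand(64, 512, 512)  # placeholder (slices, height, width) volume
mip = ct_volume.max(axis=0)               # brightest voxel in each (y, x) column
</preformat>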
<p>One of the DL architectures (ResNet-18) distinguished among COVID-19, influenza, and normal subjects.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref14">14</xref>
</sup> Segmented images were categorized with their corresponding confidence scores using a location-attention classification model. Another ResNet-18 architecture was trained on a large CT dataset for differentiating COVID-19 and other types of viral pneumonia.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref25">25</xref>
</sup> This system involved segmentation, classification, and quantitative measurements. However, it required manually segmented images and multi-modal data that were difficult to obtain. COVID-19 was also differentiated from common pneumonia and healthy subjects by using a dynamic transfer-learning classification network in which dynamically selected pixel-level information was used for the final diagnosis.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup>
</p>
<p>Features extracted by several CNN models (AlexNet, ResNet18, ResNet50, InceptionV3, DenseNet201, InceptionResNetV2, MobileNetV2, GoogLeNet) from the images stored in the COVID-19 Radiography Database were fed to traditional ML models [SVM, k-NN, naïve Bayes (NB), and decision trees (DT)]. Their hyperparameters were determined with Bayesian optimization.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>
</sup> A pretrained InceptionV3 model was also used for feature extraction and classification on the SARS-CoV-2 CT-Scan dataset.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref36">36</xref>
</sup> Four different data sources [the University of Texas (Southwestern Medical Center), the China Consortium of Chest CT Image Investigation (CC-CCII), the COVID-CT set, and MosMedData] were used for training DL models. Their best performance was obtained with multiple 3D CT datasets; classification accuracy decreased when the models were evaluated on an external set without lung field segmentation.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref35">35</xref>
</sup> In another study,<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref12">12</xref>
</sup> CT scans of COVID-19 were distinguished from those of community-acquired pneumonia with a pipeline (including a capsule network with the DenseNet121 block) consisting of four connected modules for lung segmentation, lesion slice selection, and slice- and patient-level prediction.</p>
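<p>The pipeline described above, in which a pretrained CNN serves as a fixed feature extractor for a traditional ML classifier, can be sketched as follows (an illustrative outline with placeholder data, not the authors' code; it assumes a recent PyTorch, torchvision, and scikit-learn):</p>
<preformat preformat-type="code">
import torch
from torchvision import models
from sklearn.svm import SVC

# Pretrained ResNet50 with its classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

ct_batch = torch.randn(8, 3, 224, 224)  # placeholder batch of preprocessed CT slices
labels = [0, 1, 0, 1, 0, 1, 0, 1]       # placeholder COVID-19/non-COVID-19 labels

with torch.no_grad():
    features = backbone(ct_batch)       # (8, 2048) feature vectors

clf = SVC(kernel="rbf")  # hyperparameters could be tuned, e.g., by Bayesian optimization
clf.fit(features.numpy(), labels)
</preformat>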
<p>A multitask learning framework (involving task prioritization, convergence acceleration, and joint learning performance improvement) automatically classified CT images into COVID-19 positive or negative cases using a random-weighted loss function.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref27">27</xref>
</sup> COVID-19 was detected with a 3D CNN and an auxiliary feed-forward ANN based on chest CT scans and RT-PCR results. Clinical metadata also helped distinguish between COVID-19 and other viral pneumonia in a patient-level method (including InceptionResNetV2), which aggregated chest CT volumes into 2D representations.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref34">34</xref>
</sup> Combining features from chest CT volumes improved model performance compared with clinical data alone. Other DL models (AlexNet, ResNet50, and SqueezeNet) were also compared with traditional ML ones (NB, bagging, and REPTree); they classified CT images into two categories (COVID-19 and non-COVID-19),<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref29">29</xref>
</sup> whereas a custom 3D CNN trained on CT scans from patients with suspected or known COVID-19 assigned images to three groups (COVID-19, another type of pulmonary infection, or no signs of infection).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref32">32</xref>
</sup> More classes (severe-, moderate-, mild-, and non-pneumonic patients) were included in a multinomial logistic regression model, which was trained on the CT radiomic features selected by two feature selection algorithms (RF and multivariate adaptive regression splines).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref28">28</xref>
</sup>
</p>
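<p>The random-weighted loss function mentioned above can be illustrated with a short sketch; the weighting scheme shown (uniform random weights normalized to sum to one and resampled at each training step) is an assumption for illustration, not necessarily the exact scheme of the cited framework:</p>
<preformat preformat-type="code">
import torch

def random_weighted_loss(task_losses):
    """Combine per-task losses with random weights that sum to 1."""
    w = torch.rand(len(task_losses))  # resampled at every training step
    w = w / w.sum()
    return sum(wi * li for wi, li in zip(w, task_losses))

# Example: joint classification and severity-assessment losses.
total = random_weighted_loss([torch.tensor(0.7), torch.tensor(1.2)])
</preformat>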
<p>Automatic systems trained on multiple COVID-19 CT images were developed for COVID-19 detection (using spatiotemporal information fusion)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref30">30</xref>
</sup> or the identification of ground-glass opacity and pulmonary infiltrates to assess disease progression during the patient’s follow-up evaluation.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref31">31</xref>
</sup> In contrast, thousands of labeled CT images were used for a COVID-19 decision support and segmentation system (involving the EfficientNet and EfficientDet networks), which rejected unrelated images using header analysis and classifiers.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref19">19</xref>
</sup>
</p>
<p>Performance indicators for the models included in this review varied (for studies with two or more models, only the model with the highest sensitivity is reported) (Table 1).</p>
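<p>The indicators in Table 1 follow their standard definitions in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), while AUC denotes the area under the receiver operating characteristic curve:</p>
<disp-formula>
<tex-math>Se=\frac{TP}{TP+FN},\quad Sp=\frac{TN}{TN+FP},\quad PPV=\frac{TP}{TP+FP},\quad NPV=\frac{TN}{TN+FN},\quad Acc=\frac{TP+TN}{TP+TN+FP+FN}</tex-math>
</disp-formula>
<disp-formula>
<tex-math>F1=\frac{2\,PPV\cdot Se}{PPV+Se},\qquad MCC=\frac{TP\cdot TN-FP\cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}</tex-math>
</disp-formula>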
<p>
<table-wrap id="gt1">
<label>Table 1</label>
<caption>
<title>Performance indicators for COVID-19 detection models (studies from around the world, 2019-2023).</title>
</caption>
<alt-text>Table 1  Performance indicators for COVID-19 detection models (studies from around the world, 2019-2023).</alt-text>
<alternatives>
<graphic xlink:href="570481700018_gt2.png" position="anchor" orientation="portrait"/>
<table style="width:702.9pt;border-collapse:collapse;border:none;    " id="gt2-526564616c7963">
<thead style="display:none;">
<tr style="display:none;">
<th style="display:none;"/>
</tr>
</thead>
<tbody>
<tr>
<td style="width:48.2pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Author</bold>
<bold/>
</td>
<td style="width:62.35pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Country</bold>
</td>
<td style="width:102.05pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Objectives</bold>
<bold/>
</td>
<td style="width:102.05pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Methods</bold>
<bold/>
</td>
<td style="width:110.55pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Main</bold>
<bold> results</bold>
</td>
<td style="width:34.0pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Se (Re)</bold>
</td>
<td style="width:34.0pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Sp</bold>
<bold/>
</td>
<td style="width:34.0pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>PPV  (Pr)</bold>
</td>
<td style="width:34.0pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>NPV</bold>
</td>
<td style="width:34.0pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>Acc</bold>
<bold/>
</td>
<td style="width:34.0pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>F1</bold>
</td>
<td style="width:39.7pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>MCC</bold>
</td>
<td style="width:34.0pt;border-top:solid windowtext 1.0pt;   border-left:none;border-bottom:solid black 1.0pt;border-right:none;      padding:0cm 5.4pt 0cm 5.4pt">
<bold>AUC</bold>
</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Sedik <italic>et al</italic>. (2020)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref3">3</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Egypt</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To improve the learnability of CNN and the convolutional long short-term memory-based DL models and increase the accuracy of COVID-19 detection.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Two data-augmentation techniques based on simple image transformations and generative adversarial networks.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Acc, logarithmic loss, and testing time were improved relative to DL models without data augmentation; an increased Acc (4-11%) was observed between data-augmented DL models and other investigated ML techniques.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.997</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.987</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.987</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.997</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">1.000</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.990</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.984</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.990</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Aslan <italic>et al</italic>. (2022)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Turkey</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To classify CT chest images from the COVID-19 Radiography Database and determine hyperparameters of ML algorithms.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Automatic lung segmentation with ANN; data augmentation, feature extraction with CNN, classification with support vector machines, k-nearest neighbors, naive Bayes, and decision trees; hyperparameter determination with Bayesian optimization.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">DenseNet201 model and support vector machines showed the best predictive performance.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.964</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.981</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.964</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.963</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.945</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.964</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Wu <italic>et al</italic>. (2023)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To accurately and automatically distinguish between COVID-19 and CAP using DL.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">A DL method was based on maximum-intensity projection images (obtained from CT scans); they served as inputs into a capsule network trained and validated on 333 and 3581 CT scans, respectively.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">LinkNet achieved the highest DC; the capsule network with the DenseNet-121 feature extractor outperformed ResNet-50 and Inception-V3; Acc decreased to 0.857 and 0.818 without maximum-intensity projection or capsule network, respectively; Acc of 0.961, 0.997, and 0.949 were achieved on the external validation datasets; Se was higher than or comparable to other state-of-the-art methods.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.971</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.968</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.971</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.970</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.986</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Qi <italic>et al</italic>. (2022)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref12">12</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To improve existing ML methods for distinguishing between COVID-19 and CAP based on CT images.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">A fully automatic DL pipeline comprising four connected modules (for lung segmentation, slice selection, and slice- and patient-level prediction) was trained and tested on 326 CT scans; its generalization capability was evaluated on a public dataset of 110 patients.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">LinkNet exhibited the largest IoU and DC; the capsule network with ResNet50 achieved an Acc of 0.925 and AUC of 0.933 in the selection of slices with lesions; the capsule network with DenseNet121 showed an Acc of 0.971 and AUC of 0.992 for slice-level prediction; Acc of 1.000 was obtained for patient-level prediction.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.997</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.966</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.965</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.981</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.983</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Shi <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref13">13</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To rapidly and accurately screen patients with COVID-19 and CAP using ML.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">COVID-19 (1658) and CAP (1027) patients underwent thin-section CT; segmentation of infection and lung fields were used to extract location-specific features; a random forest categorized patients with different ranges of infected lesion sizes and classified them within each group.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Large performance margins were achieved against comparison methods, especially for medium infection size (0.01% to 10%); the inclusion of radiomic features slightly improved classification results.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.907</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.833</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.879</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.942</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Xu <italic>et al.</italic> (2020)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref14">14</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To establish an early screening model for distinguishing COVID-19 from influenza-A viral pneumonia and healthy cases based on pulmonary CT images and DL.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Different numbers of samples were used for COVID-19 (219), influenza-A viral pneumonia (224), and healthy subjects (175); infection regions were determined using a 3D DL model; separated images were categorized with the corresponding confidence scores using a location-attention classification model; infection types and confidence scores were calculated using the noisy-OR Bayesian function.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">The overall Acc on the benchmark dataset was 86.7% for all CT cases taken together.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.900</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.931</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.867</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.915</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Zhang <italic>et al</italic>. (2022)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China, UK, Belgium</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To automatically segment lesions in CT images and distinguish COVID-19 in common pneumonia patients and healthy subjects.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">A dynamic fusion segmentation network segmented infection-related pixels and aggregated low-level features that were fused to model multi-scale semantic information; COVID-19 patients were identified with a dynamic transfer-learning classification network.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Two models achieved state-of-the-art performance in segmentation and classification tasks.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.980</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.820</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.770</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Carmo <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref19">19</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Brazil</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To develop and deploy a COVID-19 decision support and segmentation system based on CT and X-ray images.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">EfficientNet and EfficientDet segmented and classified images in a real-time scalable manner in communication with a Picture Archiving and Communication System; non-related images were rejected using header analysis and classifiers.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Acc values of 0.94 and 0.98 were achieved for CT and X-ray classification, respectively, whereas those DC for lung and COVID-19 segmentation were 0.98 and 0.73, respectively; the median response times were 7 s for X-ray and 4 min for CT.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.953</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.905</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.944</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.928</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.979</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Shiri <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref20">20</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Iran, Switzerland, Canada, The Netherlands, Denmark</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To develop prognostic survival models for COVID-19 patients using clinical data and lung and/or lesion radiomic features extracted from chest CT images.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Survival modeling was based on radiomic features and clinical data (separately or in combination); the maximum-relevance minimum-redundancy method and XGBoost were used for feature selection and classification.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Cancer comorbidity, consciousness level, and radiological score were highly correlated with survival; oxygen saturation and blood urea nitrogen were important clinical features; small-area high-gray-level emphasis and high-gray level-zone emphasis from gray-level size-zone matrix, run-length non-uniformity from gray-level run-length matrix, and high-gray-level-zone emphasis from gray-level size-zone matrix yielded the highest predictive performance; the most accurate prognostic model included combined lung, lesion, and clinical features.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.880</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.890</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.880</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.950</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Javaheri <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref22">22</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Iran, USA, Canada, Vietnam</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To enhance the Acc of CT image-based COVID-19 recognition.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">CovidCTNet (an open-source framework) differentiated COVID-19 from CAP and other lung diseases.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">CovidCTNet increased the Acc of CT image-based COVID-19 detection to 95% compared with radiologist evaluation (70%) and was independent of the CT imaging hardware.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.909</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">1.000</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.933</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.940</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Li <italic>et al</italic>. (2020)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref23">23</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To develop a fully automatic framework for COVID-19 detection using chest CT scans.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">COVID-19 detection neural network (COVNet) extracted visual features from 4352 volumetric chest CT scans obtained from 3322 patients.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">COVNet showed a high predictive performance on the independent test set.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.900</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.960</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.960</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Zhang <italic>et al</italic>. (2020)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref25">25</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To develop an AI system for diagnosing COVID-19 pneumonia and differentiating it from other common types of pneumonia and normal controls.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">AI system identified clinical markers correlated with COVID-19 lesion properties and provided accurate clinical prognosis together with clinical data.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Globally available AI systems showed high predictive performance.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.949</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.911</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.925</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.980</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Wang <italic>et al</italic>. (2020)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref26">26</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To develop a DL-based model for automatic COVID-19 diagnosis based on chest CT images.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">A weakly-supervised DL framework used 3D CT volumes for COVID-19 classification and lesion localization; the UNet-segmented 3D lung regions were fed into a 3D DL network;  CT volumes were used for training (499) and testing (131).</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">The algorithm took only 1.93 s to process a single patient’s CT volume; the weakly-supervised DL model could accurately predict COVID-19 without lesion annotation.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.907</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.911</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.840</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.982</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.901</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.959</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Bao <italic>et al</italic>. (2022)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref27">27</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China, Australia</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To develop an end-to-end multitask learning framework (COVID-MTL) capable of automated and simultaneous detection and severity assessment of COVID-19.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">COVID-MTL learned different COVID-19 tasks in parallel through the random-weighted loss function; the 3D real-time augmentation algorithm (Shift3D) introduced space variances for 3D CNN components; MTL accelerated convergence and improved joint learning performance compared to single-task models; COVID-MTL was trained on 930 CT scans and tested on 399 cases.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">COVID-MTL achieved high performance in the detection of COVID-19 against radiology and nucleic acid tests, outperforming other state-of-the-art models; COVID-MTL yielded AUC of 0.800 and 0.813 for classifying control and/or suspected, mild and/or regular, and severe and/or critically-ill cases.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.902</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.912</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.902</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.905</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.939</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Guhan <italic>et al</italic>. (2022)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref29">29</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">India, Saudi Arabia</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To segment the CT images using k-means clustering, extract textural features using gray level co-occurrence matrix, and classify CT scans using ML.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">One hundred COVID-19 and non-COVID-19 images were segmented and classified with naive Bayes, bagging, and REPTree; pre-trained AlexNet, ResNet50, and SqueezeNet were used for predictive performance comparison.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Naive Bayes and ResNet50 achieved the highest Acc (97.0% and 99.0%, respectively).</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.990</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.980</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.991</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.990</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Li <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref30">30</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To automatically detect COVID-19 based on spatiotemporal information fusion.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">The spatiotemporal information features of multiple CT scans were extracted using a segmentation network to perform auxiliary diagnosis.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">High predictive performance was achieved in the classification of COVID-19 and non-COVID-19 CT scans; each scan took about 30 s for detection.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.953</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.967</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.944</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.960</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.946</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Tello-Mijares <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref31">31</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Mexico</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To automatically identify ground-glass opacity and pulmonary infiltrates in CT images from COVID-19 patients and assess disease progression during the patient’s follow-up evaluation.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Oversegmentation mean-shift followed by superpixel-simple linear iterative clustering was applied to COVID-19 CT images for pulmonary parenchyma segmentation.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Pulmonary parenchyma identification had a precision and recall of over 92.0% on twofold cross-validation; pulmonary infiltrate identification for ground-glass opacity showed a precision and recall of 96.0%.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.968</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.967</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.967</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.983</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Topff <italic>et al</italic>. (2023)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref32">32</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">The Netherlands, Spain, Belgium</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To develop a DL-based clinical decision support system for the automatic diagnosis of COVID-19 on chest CT scans and construct a complementary segmentation tool for assessing the extent of lung involvement and measuring disease severity.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Data annotation was performed by 34 radiologists and/or radiology residents including quality control measures; 2,802 CT scans were ranked with a multi-class classification model created using a 3D CNN; an UNET-like architecture with a backbone Residual Network (ResNet-34) was selected for image segmentation.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">The diagnostic multiclassification model yielded high micro-average and macro-average values for AUC (0.93 and 0.91, respectively) on the external test dataset; the segmentation performance was moderate (DC=0.59).</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.870</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.940</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.950</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.830</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.900</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.830</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Yang <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref33">33</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">China, Taiwan</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To localize COVID-19 lesions with a weakly-supervised method based on a generative adversarial network using only image-level labels.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">A generative adversarial network-based framework generated normal-looking CT slices from CT slices with COVID-19 lesions; a feature match strategy improved the quality of generated images; the localization map of lesions was obtained by subtracting the output image from its corresponding input image; a diagnostic system with improved classification Acc was obtained by adding a classifier branch to the generative adversarial network-based framework.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">The weakly-supervised learning method obtained a DC of 0.575 and exceeded other widely used weakly-supervised object localization approaches; its performance was similar to that of fully supervised learning methods in the COVID-19 lesion segmentation task (DC of 0.575); the common severity cohort had the largest sample size as well as the highest visual score.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.647</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.929</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.884</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.640</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.883</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">Ortiz <italic>et al</italic>. (2022)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref34">34</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">USA</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">To assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone.</td>
<td style="width:102.05pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">A patient-level algorithm aggregated chest CT volumes into 2D representations that were integrated with clinical metadata to distinguish COVID-19 patients from healthy participants and patients with other viral pneumonia; the multitask segmentation approach was compared to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality.</td>
<td style="width:110.55pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">A multitask model for joint segmentation of different classes of pulmonary lesions present in COVID-19-infected lungs outperformed individual segmentation models for each task; a combination of features derived from chest CT volumes improved AUC values to 0.80 from 0.52 obtained by using only patients’ clinical data.</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.590</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.690</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.920</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.750</td>
<td style="width:39.7pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;padding:0cm 5.4pt 0cm 5.4pt">0.810</td>
</tr>
<tr>
<td style="width:48.2pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">Goel <italic>et al</italic>. (2021)<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref36">36</xref>
</sup>
</td>
<td style="width:62.35pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">India, Australia, Korea</td>
<td style="width:102.05pt;border:none;border-bottom:   solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">To generate CT images using a generative adversarial network and optimize its hyperparameters using the whale optimization algorithm.</td>
<td style="width:102.05pt;border:none;border-bottom:   solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">The method was tested with different classification and meta-heuristic algorithms using the SARS-CoV-2 CT-Scan dataset, consisting of COVID-19 and non-COVID-19 images.</td>
<td style="width:110.55pt;border:none;border-bottom:   solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">The performance of the optimized model was better than that of other state-of-the-art methods.</td>
<td style="width:34.0pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">0.998</td>
<td style="width:34.0pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">0.978</td>
<td style="width:34.0pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">0.978</td>
<td style="width:34.0pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">0.998</td>
<td style="width:34.0pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">0.992</td>
<td style="width:34.0pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">0.988</td>
<td style="width:39.7pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">-</td>
<td style="width:34.0pt;border:none;border-bottom:solid windowtext 1.0pt;   padding:0cm 5.4pt 0cm 5.4pt">-</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
</p>
<p>
<bold>Abbreviations</bold>: <bold>Acc</bold>: accuracy; <bold>ANN</bold>: artificial neural network; <bold>AUC</bold>: area under the curve; <bold>CAP</bold>: community-acquired pneumonia; <bold>CNN</bold>: convolutional neural network; <bold>CT</bold>: computed tomography; <bold>DC</bold>: Dice coefficient; <bold>DL</bold>: deep learning; <bold>F1</bold>: F1-score; <bold>IoU</bold>: intersection over union; <bold>MCC</bold>: Matthews correlation coefficient; <bold>ML</bold>: machine learning; <bold>NPV</bold>: negative predictive value; <bold>PPV</bold> (Pr): positive predictive value (precision); <bold>Se</bold> (Re): sensitivity (recall); <bold>Sp</bold>: specificity.</p>
</sec>
<sec>
<title>
<bold>DISCUSSION</bold>
</title>
<p>
<bold>Types of methods</bold>
</p>
<p>Machine learning, which belongs to the AI domain, can generally be divided into “traditional methods” and deep learning (both of which can be applied for pattern recognition, regression, or classification).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup> The difference lies, among other things, in the way images are pre-processed. Whereas the first group relies on expert-derived inputs (such as the average greyscale) that require human involvement, the second uses whole images as inputs and extracts the features by itself.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>-<xref ref-type="bibr" rid="redalyc_570481700018_ref7">7</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref11">11</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref21">21</xref>
</sup> Deep learning can be successfully used for medical imaging tasks, such as image preprocessing, registration, detection, and segmentation.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>
</sup> In the context of COVID-19, DL has been applied at the molecular (e.g., protein structure prediction), patient (e.g., medical imaging for diagnosis), and population (e.g., epidemiology) scales.<sup>18</sup> Deep learning, as a data-driven approach, performs classification based on the image features learned by a model during the training stage.<sup>6,<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>
</sup>
</p>
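<p>As a minimal illustration of this distinction (not drawn from any of the reviewed studies), the Python sketch below contrasts an expert-derived input, such as the average greyscale, with feeding the whole image to a network; the image array is hypothetical.</p>
<preformat>
# Hedged contrast of the two input styles described above (hypothetical data):
# a "traditional" expert-derived feature versus the whole image as input.
import numpy as np

image = np.random.rand(512, 512)    # hypothetical CT slice

# Traditional ML: a human-chosen summary statistic is the model input
handcrafted_feature = image.mean()  # e.g., the average greyscale

# Deep learning: the raw image itself is the input; the network (not shown)
# learns its own features during training
dl_input = image[None, ..., None]   # shape (1, 512, 512, 1) for a CNN
print(handcrafted_feature, dl_input.shape)
</preformat>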
<p>Deep learning usually involves a type of artificial neural network (ANN) called a convolutional neural network (CNN). CNNs have gained much popularity due to their higher performance in automatic disease detection tasks.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref11">11</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> Other DL methods include recurrent neural networks, deep belief networks, and reinforcement learning.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup> One of the CNN architectures (named AlexNet, trained with full supervision) achieved excellent performance on highly challenging datasets and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012.<sup>6,16</sup> A wide range of ANN settings and training techniques (ReLU, dropout, pooling, and local response normalization) enabled more effective CNN training and better performance.<sup>6,<xref ref-type="bibr" rid="redalyc_570481700018_ref37">37</xref>
</sup> AlexNet has been used in many studies on COVID-19 detection that differed mainly in the feature selection method and the training of multiple classifiers. Since AlexNet was created, more advanced pre-trained networks based on this architecture (VGG, GoogLeNet, ResNet, DenseNet, MobileNet, SqueezeNet, and Network in Network) have been applied to COVID-19 detection.<sup>5,<xref ref-type="bibr" rid="redalyc_570481700018_ref7">7</xref>,16</sup> The Visual Geometry Group (VGG) network, which is simple in architecture but effective in performance, achieved top results in the ILSVRC 2014 challenge.<sup>6</sup> ResNet and DenseNet both use residual blocks and skip connections to perform image-level classification. They also employ attention mechanisms, multi-view representation learning, and semi-supervision, because high-level features tend to lose details of the input image and the above-mentioned methods may fail on complex imaging data.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup>
</p>
<p>Pretrained networks can be reused in a process called transfer learning (TL).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup> The trained model can be transferred to a new one, for which additional training data may be provided and in which modified neural layers can be incorporated.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> After automatic feature extraction (using TL with pre-trained models or a custom CNN developed from scratch), ML methods (such as k-NN, SVM, DT, or NB) can be used to classify these features as COVID-19 or non-COVID-19 (e.g., normal or viral pneumonia).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>
</sup>
</p>
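<p>A minimal sketch of this pipeline is given below, assuming TensorFlow/Keras and scikit-learn; the pre-trained backbone, classifier choice, and data arrays are illustrative assumptions rather than the setup of any particular reviewed study.</p>
<preformat>
# Hedged sketch: transfer learning for feature extraction, followed by a
# classical ML classifier (assumes TensorFlow/Keras and scikit-learn;
# the arrays stand in for a real CT dataset).
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# x: CT slices resized to 224x224 RGB; y: 1 = COVID-19, 0 = non-COVID-19
x = np.random.rand(200, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 2, size=200)

# Pre-trained backbone reused purely as a frozen feature extractor
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(preprocess_input(x), verbose=0)

# Deep features classified with an SVM (k-NN, DT, or NB would work alike)
x_tr, x_te, y_tr, y_te = train_test_split(
    features, y, test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(x_tr, y_tr)
print("Test accuracy:", clf.score(x_te, y_te))
</preformat>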
<p>
<bold>Deep learning stages</bold>
</p>
<p>The DL algorithm may include several steps, such as pre-processing, segmentation, feature extraction, classification, performance evaluation, and explainable model prediction.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup> Preprocessing<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref2">2</xref>
</sup> is the first stage in CT image analysis, for which different techniques are used. In preprocessing, raw images are converted into an appropriate format for further analysis. Medical images collected from different devices can vary in size, slice thickness, and the number of slices (e.g., 60-70 per CT scan).<sup>6</sup> During preprocessing, resizing, normalization, and sometimes conversion from RGB to grayscale are performed.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> In addition, the voxel dimension is resampled to account for the variation across datasets (resampling to an isotropic resolution). Images are also improved with smoothing to increase the signal-to-noise ratio.</p>
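<p>The sketch below illustrates these preprocessing steps (resampling to an isotropic resolution, smoothing, and normalization), assuming NumPy and SciPy; the spacing values and the volume itself are hypothetical.</p>
<preformat>
# Hedged sketch of the CT preprocessing steps described above
# (assumes NumPy/SciPy; spacing values are hypothetical).
import numpy as np
from scipy import ndimage

def preprocess(volume, spacing, target_spacing=(1.0, 1.0, 1.0)):
    # Resample the voxel grid to an isotropic resolution so that volumes
    # from different scanners become comparable
    zoom = [s / t for s, t in zip(spacing, target_spacing)]
    volume = ndimage.zoom(volume, zoom, order=1)
    # Light Gaussian smoothing to increase the signal-to-noise ratio
    volume = ndimage.gaussian_filter(volume, sigma=0.5)
    # Min-max normalization to the [0, 1] range
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + 1e-8)

scan = np.random.randn(60, 512, 512)             # hypothetical 60-slice CT
out = preprocess(scan, spacing=(5.0, 0.7, 0.7))  # 5 mm slice thickness
print(out.shape, float(out.min()), float(out.max()))
</preformat>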
<p>Segmentation is the next step of image preprocessing, for which fully convolutional networks and their variants have been used.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref1">1</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>
</sup> An image that shows only the lungs is more appropriate for infection detection, probably because it prevents the model from focusing on unwanted targets such as bone and soft tissue.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>
</sup> To achieve this, the lung region must be segmented from the raw image, which enables a more successful diagnosis. The lung area is cropped from the original image by the segmentation process.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>
</sup> Sometimes, pixel values are also limited to obtain a proper range of Hounsfield units in the lung image.<sup>6</sup> Potential challenges in segmentation include underused multi-scale context information; high variance in the texture, size, and position of infected regions; and small inter-class variance of lesions.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup> Manual lung segmentation is laborious, tedious, time-consuming, and heavily depends on the radiologists’ knowledge and experience.<sup>6</sup> However, DL-based segmentation techniques can automatically identify infected regions, thus allowing rapid screening of COVID-19 images. Classic U-Net, UNet++, and VB-Net are the popular segmentation methods.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref2">2</xref>,6,<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup>
</p>
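<p>As a rough illustration only, the following sketch limits pixel values to a lung Hounsfield-unit window and derives a crude threshold-based lung mask; it is a simple stand-in for the U-Net-style segmentation described above, and the threshold values are assumptions.</p>
<preformat>
# Hedged sketch: Hounsfield-unit windowing and a crude threshold-based lung
# mask, standing in for the U-Net-style segmentation discussed above
# (assumes NumPy/SciPy; thresholds are illustrative).
import numpy as np
from scipy import ndimage

def lung_window(volume_hu, low=-1000.0, high=400.0):
    # Limit pixel values to a proper HU range for the lung image
    return np.clip(volume_hu, low, high)

def crude_lung_mask(volume_hu, air_threshold=-320.0):
    # Air-filled lung voxels are strongly negative on the HU scale;
    # in practice the background air outside the body must also be removed
    mask = np.less(volume_hu, air_threshold)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    # Keep only the largest connected component as the lung region
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
</preformat>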
<p>Of all DL models, U-Net is the most famous segmentation architecture; its results may also be affected by the image type. For example, two different segmentation approaches were used for NIFTI and DICOM CT lung images, as no single method works for all image formats.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>
</sup>
</p>
<p>Dice coefficient (DC) and intersection over union (IoU) are the two common measures for evaluating segmentation effectiveness.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref18">18</xref>
</sup> The first one is defined as:<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref38">38</xref>
</sup>
</p>
<p>
<disp-formula id="e1">
<label/>
<graphic xlink:href="570481700018_ee2.png" position="anchor" orientation="portrait">
<alt-text/>
</graphic>
</disp-formula>
</p>
<p>where A is a set that represents the ground truth and B represents the computed segmentation.</p>
<p>IoU, also known as the Jaccard index, is the most commonly used metric for comparing the similarity between two arbitrary shapes.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref39">39</xref>
</sup> It encodes the shape properties of the objects under comparison into the region property and calculates a normalized measure with a focus on their areas (or volumes). It is given by the following formula:</p>
<p>
<disp-formula id="e2">
<label/>
<graphic xlink:href="570481700018_ee3.png" position="anchor" orientation="portrait">
<alt-text/>
</graphic>
</disp-formula>
</p>
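<p>Both measures can be computed directly from binary masks, as in the brief sketch below (the example masks are synthetic):</p>
<preformat>
# Hedged sketch: Dice coefficient and IoU for two binary masks, following
# the set definitions above (A = ground truth, B = computed segmentation).
import numpy as np

def dice(a, b, eps=1e-8):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

def iou(a, b, eps=1e-8):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / (union + eps)

a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True  # ground truth
b = np.zeros((64, 64), dtype=bool); b[20:52, 20:52] = True  # prediction
print(round(dice(a, b), 3), round(iou(a, b), 3))
</preformat>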
<p>After segmentation, augmentation is employed to increase the segmented image count, thus providing data diversity.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> Rotation, shifting in the width and height dimensions, shearing, zooming, flipping in the horizontal and vertical axes, and brightness adjustment can be used for this purpose.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup>
</p>
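<p>These operations can be expressed, for example, with the Keras ImageDataGenerator, as in the sketch below; the parameter values are illustrative assumptions.</p>
<preformat>
# Hedged sketch: the augmentation operations listed above expressed with
# Keras' ImageDataGenerator (parameter values are illustrative assumptions).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,            # rotation
    width_shift_range=0.1,        # shifting in the width dimension
    height_shift_range=0.1,       # shifting in the height dimension
    shear_range=0.1,              # shearing
    zoom_range=0.1,               # zooming
    horizontal_flip=True,         # flipping in the horizontal axis
    vertical_flip=True,           # flipping in the vertical axis
    brightness_range=(0.8, 1.2),  # brightness adjustment
)
# flow() would then yield augmented copies of the segmented images, e.g.:
# batches = augmenter.flow(images, labels, batch_size=32)
</preformat>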
<p>
<bold>Explainable artificial intelligence</bold>
</p>
<p>Deep learning black-box models provide no evidence that the features were correctly extracted. Explainable AI, by contrast, is an emerging field that assigns importance values to the image regions that lead to the predicted outcome. Thus, radiologists can locate abnormalities in the lungs and gain insight into the areas responsible for image classification.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>
</sup> According to some authors,<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref21">21</xref>
</sup> CT was the second most common (20.0%) image modality coupled with explainable AI, although other studies reported a combined application to CT and X-rays. It should be noted that the performance of COVID-19 detection models can be further improved by incorporating both kinds of images (chest X-ray and CT).<sup>6</sup> Explainable AI has most often been applied to lung examination and used different publicly available data repositories of CT images for COVID-19 diagnosis (Kaggle, Signal Processing Grand Challenge on COVID-19 dataset, COVIDx CT, COVIDx CT-2A &amp; COVIDx CT-2B, CC-CCII, MosMedData, COVID-Ctset, LTRC dataset, CT Chest Images Dataset from Mendeley, COVID pandemic, iRoads, Caltech-256, and Caltech-101). The availability of such repositories was the main reason for the advancement of COVID-19 studies among those using explainable AI.</p>
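<p>One widely used technique of this kind is a Grad-CAM-style saliency map. The sketch below, assuming TensorFlow/Keras, is a generic illustration rather than the method of any reviewed study; the model and layer names are hypothetical.</p>
<preformat>
# Hedged sketch of a Grad-CAM-style saliency map, a common explainable-AI
# technique for highlighting the regions behind a CNN prediction
# (assumes TensorFlow/Keras; model and layer names are hypothetical).
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    # Sub-model exposing the last convolutional feature maps and the output
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_score = preds[:, int(tf.argmax(preds[0]))]
    # Channel importance = global-average-pooled gradients of the class score
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # heat map in [0, 1]
</preformat>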
<p>
<bold>Supervised vs. unsupervised learning</bold>
</p>
<p>A further division of ML is based on the role of a “teacher” or “trainer”: in supervised learning, a loss function is optimized over the predicted labels and the ground truth, which requires manual annotation; in unsupervised learning, data patterns are found automatically using clustering.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref2">2</xref>
</sup> To achieve the best performance, all ML methods must be configured before the training process using hyperparameter optimization.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref5">5</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> Hyperparameters differ from model parameters: the former (such as the number of ANN layers and their size, shape, and type, the number of neurons, intermediate processing elements, etc.) are set before the training phase, whereas the latter (such as weights) are optimized during learning. There are several ways to set the hyperparameters, and different strategies can be adopted (including a manual one). Many algorithms, such as Bayesian optimization, grid search, and swarm optimization (e.g., the Sparrow optimization algorithm), can be used to search for the optimal hyperparameters.<sup>16</sup>
</p>
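<p>A minimal grid-search sketch, assuming scikit-learn, is shown below; the classifier and parameter grid are illustrative assumptions.</p>
<preformat>
# Hedged sketch: hyperparameter optimization by grid search
# (assumes scikit-learn; the grid and classifier are illustrative).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

x, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Hyperparameters (set before training), as opposed to learned parameters
grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(SVC(), grid, cv=5, scoring="accuracy").fit(x, y)
print(search.best_params_, round(search.best_score_, 3))
</preformat>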
<p>
<bold>Performance indicators</bold>
</p>
<p>The most frequently reported model performance indicators are as follows: sensitivity (or recall; Se), specificity (Sp), accuracy (Acc), positive predictive value (or precision; PPV), negative predictive value (NPV), F-measure (F1), Matthews correlation coefficient (MCC), and area under the curve (AUC).<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref2">2</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref8">8</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref16">16</xref>
</sup> They are expressed by the following equations:<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref9">9</xref>-<xref ref-type="bibr" rid="redalyc_570481700018_ref12">12</xref>,<xref ref-type="bibr" rid="redalyc_570481700018_ref40">40</xref>
</sup>
</p>
<p>
<disp-formula id="e3">
<label/>
<graphic xlink:href="570481700018_ee4.png" position="anchor" orientation="portrait">
<alt-text/>
</graphic>
</disp-formula>
</p>
<p>where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively. Area under the curve (AUC) is the area under the receiver operating characteristic curve (Figure 2).</p>
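<p>For concreteness, these indicators can be computed directly from the confusion-matrix counts, as in the sketch below (the counts themselves are invented for illustration):</p>
<preformat>
# Hedged sketch: the performance indicators above computed directly from the
# confusion-matrix counts (the counts themselves are invented).
import math

def indicators(tp, tn, fp, fn):
    se = tp / (tp + fn)                    # sensitivity (recall)
    sp = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    f1 = 2 * ppv * se / (ppv + se)         # F1-score
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(Se=se, Sp=sp, Acc=acc, PPV=ppv, NPV=npv, F1=f1, MCC=mcc)

print(indicators(tp=90, tn=85, fp=15, fn=10))
</preformat>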
<p>To evaluate the performance of a model, the dataset is usually divided into a training, validation, and test set. Training data are used to develop a model, whereas the learning process and model quality are assessed by monitoring overfitting or underfitting on the validation set. The model is finally evaluated on an independent test set, assuming that the input features are similar to those learned in the training set.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref6">6</xref>
</sup> K-fold cross-validation is an alternative approach to model testing.<sup>
<xref ref-type="bibr" rid="redalyc_570481700018_ref10">10</xref>
</sup>
</p>
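<p>A minimal sketch of stratified k-fold cross-validation, assuming scikit-learn with a synthetic dataset and an arbitrary classifier, is given below.</p>
<preformat>
# Hedged sketch: stratified k-fold cross-validation as an alternative to a
# single train/validation/test split (assumes scikit-learn; the dataset and
# classifier are illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

x, y = make_classification(n_samples=300, n_features=20, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), x, y, cv=cv)
print(np.round(scores, 3), "mean:", round(float(scores.mean()), 3))
</preformat>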
<p>
<fig id="gf5">
<label>Figure 2</label>
<caption>
<title>An example of a receiver operating characteristic (ROC) curve (studies from around the world, 2019-2023); AUC: area under the curve.</title>
</caption>
<alt-text>Figure 2  An example of a receiver operating characteristic (ROC) curve (studies from around the world, 2019-2023); AUC: area under the curve.</alt-text>
<graphic xlink:href="570481700018_gf8.png" position="anchor" orientation="portrait">
<alt-text>Figure 2  An example of a receiver operating characteristic (ROC) curve (studies from around the world, 2019-2023); AUC: area under the curve.</alt-text>
</graphic>
</fig>
</p>
<p>Finally, some limitations of the present study must be mentioned. The first is the limited number of references (40) finally included in the text. The second is the use of only one database (PubMed) for the article search. However, including additional literature sources would have increased the number of references even further; a representative subset of original studies and review articles was therefore selected from the largest biomedical bibliographic database in the world.</p>
</sec>
<sec>
<title>
<bold>CONCLUSION</bold>
</title>
<p>Most studies on the use of artificial intelligence for COVID-19 diagnosis involved deep learning and feature extraction methods. Segmentation and augmentation were also frequently applied to improve model performance and overcome data scarcity. More extensive datasets and standardized modeling procedures, including an objective evaluation of model predictive capabilities, will be required in the future to introduce these methods into common clinical practice. Models developed during the peak of the COVID-19 pandemic can be reused in future outbreaks of similar diseases.</p>
</sec>
</body>
<back>
<ref-list>
<title>
<bold>REFERENCES</bold>
</title>
<ref id="redalyc_570481700018_ref1">
<mixed-citation publication-type="journal">1. Abdel-Basset M, Chang V, Hawash H, et al. FSS-2019-nCov: A deep learning architecture for semi-supervised few-shot segmentation of COVID-19 infection. Knowl-Based Syst. 2021;212:106647. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.knosys.2020.106647">https://doi.org/10.1016/j.knosys.2020.106647</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abdel-Basset</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Hawash</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>FSS-2019-nCov: A deep learning architecture for semi-supervised few-shot segmentation of COVID-19 infection.</article-title>
<source>Knowl-Based Syst.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.knosys.2020.106647">https://doi.org/10.1016/j.knosys.2020.106647</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref2">
<mixed-citation publication-type="journal">2. Mondal MRH, Bharati S, Podder P. Diagnosis of COVID-19 using machine learning and deep learning: A review. Curr Med Imaging. 2021;17(12):1403–18. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2174/1573405617666210713113439">https://doi.org/10.2174/1573405617666210713113439</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mondal</surname>
<given-names>MRH</given-names>
</name>
<name>
<surname>Bharati</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Podder</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Diagnosis of COVID-19 using machine learning and deep learning: A review.</article-title>
<source>Curr Med Imaging.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2174/1573405617666210713113439">https://doi.org/10.2174/1573405617666210713113439</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref3">
<mixed-citation publication-type="journal">3. Sedik A, Iliyasu AM, Abd El-Rahiem B, et al. Deploying machine and deep learning models for efficient data-augmented detection of COVID-19 infections. Viruses. 2020;12(7):769. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/v12070769">https://doi.org/10.3390/v12070769</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sedik</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Iliyasu</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Abd</surname>
<given-names>El-Rahiem</given-names>
</name>
</person-group>
<article-title>Deploying machine and deep learning models for efficient data-augmented detection of COVID-19 infections.</article-title>
<source>Viruses</source>
<year>2020</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/v12070769">https://doi.org/10.3390/v12070769</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref4">
<mixed-citation publication-type="journal">4. Aboul-Fotouh S, Mahmoud AN, Elnahas EM, et al. What are the current anti-COVID-19 drugs? From traditional to smart molecular mechanisms. Virol J. 2023;20(1):241. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12985-023-02210-z">https://doi.org/10.1186/s12985-023-02210-z</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aboul-Fotouh</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mahmoud</surname>
<given-names>AN</given-names>
</name>
<name>
<surname>Elnahas</surname>
<given-names>EM</given-names>
</name>
</person-group>
<article-title>What are the current anti-COVID-19 drugs? From traditional to smart molecular mechanisms.</article-title>
<source>Virol J.</source>
<year>2023</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12985-023-02210-z">https://doi.org/10.1186/s12985-023-02210-z</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref5">
<mixed-citation publication-type="journal">5. Aslan MF, Sabanci K, Durdu A, et al. COVID-19 diagnosis using state-of-the-art CNN architecture features and Bayesian Optimization. Comput Biol Med. 2022;142:105244. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105244">https://doi.org/10.1016/j.compbiomed.2022.105244</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aslan</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Sabanci</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Durdu</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>COVID-19 diagnosis using state-of-the-art CNN architecture features and Bayesian Optimization.</article-title>
<source>Comput Biol Med.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105244">https://doi.org/10.1016/j.compbiomed.2022.105244</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref6">
<mixed-citation publication-type="journal">6. Aggarwal P, Mishra NK, Fatimah B, et al. COVID-19 image classification using deep learning: Advances, challenges and opportunities. Comput Biol Med. 2022;144:105350. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105350">https://doi.org/10.1016/j.compbiomed.2022.105350</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aggarwal</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Mishra</surname>
<given-names>NK</given-names>
</name>
<name>
<surname>Fatimah</surname>
<given-names>B</given-names>
</name>
</person-group>
<article-title>COVID-19 image classification using deep learning: Advances, challenges and opportunities.</article-title>
<source>Comput Biol Med.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105350">https://doi.org/10.1016/j.compbiomed.2022.105350</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref7">
<mixed-citation publication-type="journal">7. Jia G, Lam H-K, Xu Y. Classification of COVID-19 chest X-Ray and CT images using a type of dynamic CNN modification method. Comput Biol Med. 2021;134:104425. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.104425">https://doi.org/10.1016/j.compbiomed.2021.104425</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jia</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>Classification of COVID-19 chest X-Ray and CT images using a type of dynamic CNN modification method.</article-title>
<source>Comput Biol Med.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.104425">https://doi.org/10.1016/j.compbiomed.2021.104425</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref8">
<mixed-citation publication-type="journal">8. Fallahpoor M, Chakraborty S, Heshejin MT, et al. Generalizability assessment of COVID-19 3D CT data for deep learning-based disease detection. Comput Biol Med. 2022;145:105464. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105464">https://doi.org/10.1016/j.compbiomed.2022.105464</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fallahpoor</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chakraborty</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Heshejin</surname>
<given-names>MT</given-names>
</name>
</person-group>
<article-title>Generalizability assessment of COVID-19 3D CT data for deep learning-based disease detection.</article-title>
<source>Comput Biol Med.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105464">https://doi.org/10.1016/j.compbiomed.2022.105464</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref9">
<mixed-citation publication-type="journal">9. Wu Y, Qi Q, Qi S, et al. Classification of COVID-19 from community-acquired pneumonia: Boosting the performance with capsule network and maximum intensity projection image of CT scans. Comput Biol Med. 2023;154:106567. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2023.106567">https://doi.org/10.1016/j.compbiomed.2023.106567</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Classification of COVID-19 from community-acquired pneumonia: Boosting the performance with capsule network and maximum intensity projection image of CT scans.</article-title>
<source>Comput Biol Med.</source>
<year>2023</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2023.106567">https://doi.org/10.1016/j.compbiomed.2023.106567</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref10">
<mixed-citation publication-type="journal">10. Awassa L, Jdey I, Dhahri H, et al. Study of different deep learning methods for coronavirus (COVID-19) pandemic: taxonomy, survey and insights. Sensors (Basel). 2022;22(5):1890. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/s22051890">https://doi.org/10.3390/s22051890</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Awassa</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Jdey</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Dhahri</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Study of different deep learning methods for coronavirus (COVID-19) pandemic: taxonomy, survey and insights.</article-title>
<source>Sensors</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/s22051890">https://doi.org/10.3390/s22051890</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref11">
<mixed-citation publication-type="journal">11. Fang L, Wang X. COVID-19 deep classification network based on convolution and deconvolution local enhancement. Comput Biol Med. 2021;135:104588. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.104588">https://doi.org/10.1016/j.compbiomed.2021.104588</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X</given-names>
</name>
</person-group>
<article-title>COVID-19 deep classification network based on convolution and deconvolution local enhancement.</article-title>
<source>Comput Biol Med.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.104588">https://doi.org/10.1016/j.compbiomed.2021.104588</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref12">
<mixed-citation publication-type="journal">12. Qi Q, Qi S, Wu Y, et al. Fully automatic pipeline of convolutional neural networks and capsule networks to distinguish COVID-19 from community-acquired pneumonia via CT images. Comput Biol Med. 2022;141:105182. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.105182">https://doi.org/10.1016/j.compbiomed.2021.105182</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Qi</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>Fully automatic pipeline of convolutional neural networks and capsule networks to distinguish COVID-19 from community-acquired pneumonia via CT images.</article-title>
<source>Comput Biol Med.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.105182">https://doi.org/10.1016/j.compbiomed.2021.105182</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref13">
<mixed-citation publication-type="journal">13. Shi F, Xia L, Shan F, et al. Large-scale screening to distinguish between COVID-19 and community-acquired pneumonia using infection size-aware classification. Phys Med Biol. 2021;66(6):065031. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1088/1361-6560/abe838">https://doi.org/10.1088/1361-6560/abe838</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Xia</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Shan</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Large-scale screening to distinguish between COVID-19 and community-acquired pneumonia using infection size-aware classification.</article-title>
<source>Phys Med Biol.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1088/1361-6560/abe838">https://doi.org/10.1088/1361-6560/abe838</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref14">
<mixed-citation publication-type="journal">14. Xu X, Jiang X, Ma C, et al. A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering. 2020;6(10):1122–9. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.eng.2020.04.010">https://doi.org/10.1016/j.eng.2020.04.010</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>A deep learning system to screen novel coronavirus disease 2019 pneumonia.</article-title>
<source>Engineering</source>
<year>2020</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.eng.2020.04.010">https://doi.org/10.1016/j.eng.2020.04.010</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref15">
<mixed-citation publication-type="journal">15. Kuo K-M, Talley PC, Chang C-S. The accuracy of machine learning approaches using non-image data for the prediction of COVID-19: A meta-analysis. Int J Med Inf. 2022;164:104791. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ijmedinf.2022.104791">https://doi.org/10.1016/j.ijmedinf.2022.104791</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Talley</surname>
<given-names>PC</given-names>
</name>
</person-group>
<article-title>The accuracy of machine learning approaches using non-image data for the prediction of COVID-19: A meta-analysis.</article-title>
<source>Int J Med Inf.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ijmedinf.2022.104791">https://doi.org/10.1016/j.ijmedinf.2022.104791</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref16">
<mixed-citation publication-type="journal">16. Baghdadi NA, Malki A, Abdelaliem SF, et al. An automated diagnosis and classification of COVID-19 from chest CT images using a transfer learning-based convolutional neural network. Comput Biol Med. 2022;144:105383. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105383">https://doi.org/10.1016/j.compbiomed.2022.105383</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baghdadi</surname>
<given-names>NA</given-names>
</name>
<name>
<surname>Malki</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Abdelaliem</surname>
<given-names>SF</given-names>
</name>
</person-group>
<article-title>An automated diagnosis and classification of COVID-19 from chest CT images using a transfer learning-based convolutional neural network.</article-title>
<source>Comput Biol Med.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.105383">https://doi.org/10.1016/j.compbiomed.2022.105383</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref17">
<mixed-citation publication-type="journal">17. Dey A, Chattopadhyay S, Singh PK, et al. MRFGRO: a hybrid meta-heuristic feature selection method for screening COVID-19 using deep features. Sci Rep. 2021;11(1):24065. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-021-02731-z">https://doi.org/10.1038/s41598-021-02731-z</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dey</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Chattopadhyay</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Singh</surname>
<given-names>PK</given-names>
</name>
</person-group>
<article-title>MRFGRO: a hybrid meta-heuristic feature selection method for screening COVID-19 using deep features.</article-title>
<source>Sci Rep.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-021-02731-z">https://doi.org/10.1038/s41598-021-02731-z</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref18">
<mixed-citation publication-type="journal">18. Zhang X, Jiang R, Huang P, et al. Dynamic feature learning for COVID-19 segmentation and classification. Comput Biol Med. 2022;150:106136. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.106136">https://doi.org/10.1016/j.compbiomed.2022.106136</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Dynamic feature learning for COVID-19 segmentation and classification.</article-title>
<source>Comput Biol Med.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2022.106136">https://doi.org/10.1016/j.compbiomed.2022.106136</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref19">
<mixed-citation publication-type="journal">19. Carmo D, Campiotti I, Rodrigues L, et al. Rapidly deploying a COVID-19 decision support system in one of the largest Brazilian hospitals. Health Informatics J. 2021;27(3):14604582211033017. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/14604582211033017">https://doi.org/10.1177/14604582211033017</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carmo</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Campiotti</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Rodrigues</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Rapidly deploying a COVID-19 decision support system in one of the largest Brazilian hospitals.</article-title>
<source>Health Informatics J.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/14604582211033017">https://doi.org/10.1177/14604582211033017</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref20">
<mixed-citation publication-type="journal">20. Shiri I, Sorouri M, Geramifar P, et al. Machine learning-based prognostic modeling using clinical data and quantitative radiomic features from chest CT images in COVID-19 patients. Comput Biol Med. 2021;132:104304. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.104304">https://doi.org/10.1016/j.compbiomed.2021.104304</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shiri</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Sorouri</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Geramifar</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Machine learning-based prognostic modeling using clinical data and quantitative radiomic features from chest CT images in COVID-19 patients.</article-title>
<source>Comput Biol Med.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2021.104304">https://doi.org/10.1016/j.compbiomed.2021.104304</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref21">
<mixed-citation publication-type="journal">21. Champendal M, Müller H, Prior JO, et al. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol. 2023;169:111159. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ejrad.2023.111159">https://doi.org/10.1016/j.ejrad.2023.111159</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Champendal</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Müller</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Prior</surname>
<given-names>JO</given-names>
</name>
</person-group>
<article-title>A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging.</article-title>
<source>Eur J Radiol.</source>
<year>2023</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ejrad.2023.111159">https://doi.org/10.1016/j.ejrad.2023.111159</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref22">
<mixed-citation publication-type="journal">22. Javaheri T, Homayounfar M, Amoozgar Z, et al. CovidCTNet: an open-source deep learning approach to diagnose covid-19 using small cohort of CT images. NPJ Digit Med. 2021;4(1):29. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41746-021-00399-3">https://doi.org/10.1038/s41746-021-00399-3</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Javaheri</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Homayounfar</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Amoozgar</surname>
<given-names>Z</given-names>
</name>
</person-group>
<article-title>CovidCTNet: an open-source deep learning approach to diagnose covid-19 using small cohort of CT images.</article-title>
<source>NPJ Digit Med.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41746-021-00399-3">https://doi.org/10.1038/s41746-021-00399-3</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref23">
<mixed-citation publication-type="journal">23. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65–71. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1148/radiol.2020200905">https://doi.org/10.1148/radiol.2020200905</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Z</given-names>
</name>
</person-group>
<article-title>Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy.</article-title>
<source>Radiology.</source>
<year>2020</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1148/radiol.2020200905">https://doi.org/10.1148/radiol.2020200905</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref24">
<mixed-citation publication-type="journal">24. Fan D-P, Zhou T, Ji G-P, et al. Inf-Net: Automatic COVID-19 lung infection segmentation from CT images. IEEE Trans Med Imaging. 2020;39(8):2626–37. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/TMI.2020.2996645">https://doi.org/10.1109/TMI.2020.2996645</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Inf-Net: Automatic COVID-19 lung infection segmentation from CT images.</article-title>
<source>IEEE Trans Med Imaging.</source>
<year>2020</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/TMI.2020.2996645">https://doi.org/10.1109/TMI.2020.2996645</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref25">
<mixed-citation publication-type="journal">25. Zhang K, Liu X, Shen J, et al. Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of COVID-19 pneumonia using computed tomography. Cell. 2020;181(6):1423–33. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.cell.2020.04.045">https://doi.org/10.1016/j.cell.2020.04.045</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of COVID-19 pneumonia using computed tomography.</article-title>
<source>Cell.</source>
<year>2020</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.cell.2020.04.045">https://doi.org/10.1016/j.cell.2020.04.045</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref26">
<mixed-citation publication-type="journal">26. Wang X, Deng X, Fu Q, et al. A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT. IEEE Trans Med Imaging. 2020;39(8):2615–25. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/TMI.2020.2995965">https://doi.org/10.1109/TMI.2020.2995965</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Deng</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>Q</given-names>
</name>
</person-group>
<article-title>A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT.</article-title>
<source>IEEE Trans Med Imaging.</source>
<year>2020</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/TMI.2020.2995965">https://doi.org/10.1109/TMI.2020.2995965</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref27">
<mixed-citation publication-type="journal">27. Bao G, Chen H, Liu T, et al. COVID-MTL: Multitask learning with Shift3D and random-weighted loss for COVID-19 diagnosis and severity assessment. Pattern Recognit. 2022;124:108499. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.patcog.2021.108499">https://doi.org/10.1016/j.patcog.2021.108499</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bao</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>COVID-MTL: Multitask learning with Shift3D and random-weighted loss for COVID-19 diagnosis and severity assessment.</article-title>
<source>Pattern Recognit.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.patcog.2021.108499">https://doi.org/10.1016/j.patcog.2021.108499</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref28">
<mixed-citation publication-type="journal">28. Shiri I, Mostafaei S, Haddadi Avval A, et al. High-dimensional multinomial multiclass severity scoring of COVID-19 pneumonia using CT radiomics features and machine learning algorithms. Sci Rep. 2022;12(1):14817. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-022-18994-z">https://doi.org/10.1038/s41598-022-18994-z</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shiri</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Mostafaei</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Haddadi Avval</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>High-dimensional multinomial multiclass severity scoring of COVID-19 pneumonia using CT radiomics features and machine learning algorithms.</article-title>
<source>Sci Rep.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-022-18994-z">https://doi.org/10.1038/s41598-022-18994-z</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref29">
<mixed-citation publication-type="journal">29. Guhan B, Almutairi L, Sowmiya S, et al. Automated system for classification of COVID-19 infection from lung CT images based on machine learning and deep learning techniques. Sci Rep. 2022;12(1):17417. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-022-20804-5">https://doi.org/10.1038/s41598-022-20804-5</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guhan</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Almutairi</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Sowmiya</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Automated system for classification of COVID-19 infection from lung CT images based on machine learning and deep learning techniques.</article-title>
<source>Sci Rep.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-022-20804-5">https://doi.org/10.1038/s41598-022-20804-5</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref30">
<mixed-citation publication-type="journal">30. Li T, Wei W, Cheng L, et al. Computer-aided diagnosis of COVID-19 CT scans based on spatiotemporal information fusion. J Healthc Eng. 2021;2021:6649591. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1155/2021/6649591">https://doi.org/10.1155/2021/6649591</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Computer-aided diagnosis of COVID-19 CT scans based on spatiotemporal information fusion.</article-title>
<source>J Healthc Eng.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1155/2021/6649591">https://doi.org/10.1155/2021/6649591</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref31">
<mixed-citation publication-type="journal">31. Tello-Mijares S, Woo L. Computed tomography image processing analysis in COVID-19 patient follow-up assessment. J Healthc Eng. 2021;2021:8869372. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1155/2021/8869372">https://doi.org/10.1155/2021/8869372</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tello-Mijares</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Woo</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Computed tomography image processing analysis in COVID-19 patient follow-up assessment.</article-title>
<source>J Healthc Eng.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1155/2021/8869372">https://doi.org/10.1155/2021/8869372</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref32">
<mixed-citation publication-type="journal">32. Topff L, Sánchez-García J, López-González R, et al. A deep learning-based application for COVID-19 diagnosis on CT: The Imaging COVID-19 AI initiative. PLoS One. 2023;18(5):e0285121. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1371/journal.pone.0285121">https://doi.org/10.1371/journal.pone.0285121</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Topff</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Sánchez-García</surname>
<given-names>J</given-names>
</name>
<name>
<surname>López-González</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>A deep learning-based application for COVID-19 diagnosis on CT: The Imaging COVID-19 AI initiative.</article-title>
<source>PLoS One.</source>
<year>2023</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1371/journal.pone.0285121">https://doi.org/10.1371/journal.pone.0285121</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref33">
<mixed-citation publication-type="journal">33. Yang Z, Zhao L, Wu S, et al. Lung lesion localization of COVID-19 from chest CT image: A novel weakly supervised learning method. IEEE J Biomed Health Inform. 2021;25(6):1864–72. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/JBHI.2021.3067465">https://doi.org/10.1109/JBHI.2021.3067465</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Lung lesion localization of COVID-19 from chest CT image: A novel weakly supervised learning method.</article-title>
<source>IEEE J Biomed Health Inform.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/JBHI.2021.3067465">https://doi.org/10.1109/JBHI.2021.3067465</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref34">
<mixed-citation publication-type="journal">34. Ortiz A, Trivedi A, Desbiens J, et al. Effective deep learning approaches for predicting COVID-19 outcomes from chest computed tomography volumes. Sci Rep. 2022;12(1):1716. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-022-05532-0">https://doi.org/10.1038/s41598-022-05532-0</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ortiz</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Trivedi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Desbiens</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Effective deep learning approaches for predicting COVID-19 outcomes from chest computed tomography volumes.</article-title>
<source>Sci Rep.</source>
<year>2022</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-022-05532-0">https://doi.org/10.1038/s41598-022-05532-0</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref35">
<mixed-citation publication-type="journal">35. Nguyen D, Kay F, Tan J, et al. Deep learning–based COVID-19 pneumonia classification using chest CT images: model generalizability. Front Artif Intell. 2021;4:694875. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frai.2021.694875">https://doi.org/10.3389/frai.2021.694875</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nguyen</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Kay</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Deep learning–based COVID-19 pneumonia classification using chest CT images: model generalizability.</article-title>
<source>Front Artif Intell.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frai.2021.694875">https://doi.org/10.3389/frai.2021.694875</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref36">
<mixed-citation publication-type="journal">36. Goel T, Murugan R, Mirjalili S, et al. Automatic screening of COVID-19 using an optimized generative adversarial network. Cogn Comput. 2021. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s12559-020-09785-7">https://doi.org/10.1007/s12559-020-09785-7</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goel</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Murugan</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Mirjalili</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Automatic screening of COVID-19 using an optimized generative adversarial network.</article-title>
<source>Cogn Comput.</source>
<year>2021</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s12559-020-09785-7">https://doi.org/10.1007/s12559-020-09785-7</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref37">
<mixed-citation publication-type="journal">37. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012;25. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3065386">https://doi.org/10.1145/3065386</ext-link>
</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krizhevsky</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Sutskever</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Hinton</surname>
<given-names>GE</given-names>
</name>
</person-group>
<article-title>ImageNet classification with deep convolutional neural networks.</article-title>
<source>Adv Neural Inf Process Syst.</source>
<year>2012</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3065386">https://doi.org/10.1145/3065386</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref38">
<mixed-citation publication-type="journal">38. Kang D, Park S, Paik J. SdBAN: Salient object detection using bilateral attention network with dice coefficient loss. IEEE Access. 2020;8:104357–70. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/ACCESS.2020.2999627">https://doi.org/10.1109/ACCESS.2020.2999627</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kang</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Paik</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>SdBAN: Salient object detection using bilateral attention network with dice coefficient loss.</article-title>
<source>IEEE Access.</source>
<year>2020</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/ACCESS.2020.2999627">https://doi.org/10.1109/ACCESS.2020.2999627</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref39">
<mixed-citation publication-type="confproc">39. Rezatofighi H, Tsoi N, Gwak J, et al. Generalized intersection over union: A metric and a loss for bounding box regression. Proc. IEEECVF Conf. Comput. Vis. Pattern Recognit. 2019. p. 658–66. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.1902.09630">https://doi.org/10.48550/arXiv.1902.09630</ext-link>.</mixed-citation>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rezatofighi</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Tsoi</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Gwak</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Generalized intersection over union: A metric and a loss for bounding box regression.</article-title>
<source>Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit.</source>
<year>2019</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.1902.09630">https://doi.org/10.48550/arXiv.1902.09630</ext-link>
</comment>
</element-citation>
</ref>
<ref id="redalyc_570481700018_ref40">
<mixed-citation publication-type="journal">40. Zaborski D, Proskura WS, Grzesiak W, et al. The comparison between random forest and boosted trees for dystocia detection in dairy cows. Comput Electron Agric. 2019;163:104856. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compag.2019.104856">https://doi.org/10.1016/j.compag.2019.104856</ext-link>.</mixed-citation>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zaborski</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Proskura</surname>
<given-names>WS</given-names>
</name>
<name>
<surname>Grzesiak</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>The comparison between random forest and boosted trees for dystocia detection in dairy cows.</article-title>
<source>Comput Electron Agric.</source>
<year>2019</year>
<comment>
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compag.2019.104856">https://doi.org/10.1016/j.compag.2019.104856</ext-link>
</comment>
</element-citation>
</ref>
</ref-list>
</back>
</article>