Yüce, Fatma
Name Variants
Fatma YUCE
F., Yüce
Yuce F
YÜCE Fatma
Yüce, F.
Fatma YÜCE
Yüce Fatma
Fatma Yuce
Fatma Yüce
Fatma, Yüce
Yuce, Fatma
Yüce F
Yuce, F.
Yuce Fatma
Yüce, Fatma
YUCE Fatma
Job Title
Dr. Öğr. Üyesi (Assistant Professor)
Email Address
fatma.yuce@okan.edu.tr
ORCID ID
Scopus Author ID
Turkish CoHE Profile ID
Google Scholar ID
WoS Researcher ID
Scholarly Output
6
Articles
4
Citation Count
11
Supervised Theses
0
Scholarly Output Search Results
Now showing 1 - 6 of 6
Article (Citation Count: 0)
Performance evaluation of a deep learning model for automatic detection and localization of idiopathic osteosclerosis on dental panoramic radiographs (Nature Portfolio, 2024)
Tassoker, Melek; Yüce, Fatma; Ozic, Muhammet Usame
Ağız, Diş ve Çene Radyolojisi / Oral, Dental and Maxillofacial Radiology
Idiopathic osteosclerosis (IO) lesions are focal radiopacities of unknown etiology observed in the jaws. These radiopacities are detected incidentally on dental panoramic radiographs taken for other reasons. In this study, we investigated the performance of a deep learning model in detecting IO using a small dataset of dental panoramic radiographs with varying contrasts and features. Two radiologists collected 175 IO-diagnosed dental panoramic radiographs from the dental school database. The dataset size is limited by the rarity of IO, whose incidence in the Turkish population has been reported as 2.7%. To overcome this limitation, data augmentation was performed by horizontally flipping the images, resulting in an augmented dataset of 350 panoramic radiographs. The images were annotated by two radiologists and divided into approximately 70% for training (245 radiographs), 15% for validation (53 radiographs), and 15% for testing (52 radiographs). The study employed the YOLOv5 deep learning model and evaluated the results using precision, recall, F1-score, mAP (mean Average Precision), and average inference time. Training and testing were conducted on a Google Colab Pro virtual machine. On the test set, the model achieved a precision of 0.981, a recall of 0.929, an F1-score of 0.954, and an average inference time of 25.4 ms. Although the IO dataset is small and the radiographs exhibit varying contrasts and features, the deep learning model provided fast, accurate detection and localization. Automatic identification of IO lesions by artificial intelligence algorithms, at high success rates, can contribute to the clinical workflow of dentists by preventing unnecessary biopsy procedures.

Correction (Citation Count: 0)
Detection of pulpal calcifications on bite-wing radiographs using deep learning (vol 27, pg 2679, 2023) (Springer Heidelberg, 2023)
Yüce, Fatma; Ozic, Muhammet Usame; Tassoker, Melek
Ağız, Diş ve Çene Radyolojisi / Oral, Dental and Maxillofacial Radiology
[No Abstract Available]

Article (Citation Count: 0)
Fully Automated Detection of Osteoporosis Stage on Panoramic Radiographs Using YOLOv5 Deep Learning Model and Designing a Graphical User Interface (Springer Heidelberg, 2023)
Ozic, Muhammet Usame; Yüce, Fatma; Tassoker, Melek
Ağız, Diş ve Çene Radyolojisi / Oral, Dental and Maxillofacial Radiology
Purpose: Osteoporosis is a systemic disease that causes fracture risk and bone fragility due to decreased bone mineral density and deterioration of bone microarchitecture. Deep learning-based image analysis technologies have been used effectively as decision support systems in disease diagnosis. This study proposes a deep learning-based approach that automatically performs osteoporosis localization and stage estimation on panoramic radiographs with different contrasts.
Methods: Eight hundred forty-six panoramic radiographs were collected from the hospital database and pre-processed. Two radiologists annotated the images according to the Mandibular Cortical Index, considering the cortical region extending from the distal to the antegonial area of the foramen mentale. The model was trained and validated using the YOLOv5 deep learning algorithm in the Linux-based Colab Pro cloud environment. The Weights & Biases platform was integrated into Colab, and the training process was monitored in real time. Using the model weights obtained, test data that the system had not seen before were analyzed. Using the non-maximum suppression technique on the test data, bounding boxes around regions suspected of osteoporosis were drawn automatically. Finally, a graphical user interface was developed with the PyQt5 library.
Results: Two radiologists analyzed the data, and the performance criteria were calculated. On the test data, the model achieved an average precision of 0.994, a recall of 0.993, an F1-score of 0.993, and an inference time of 14.3 ms (0.0143 s).
Conclusion: The proposed method showed that deep learning can successfully perform automatic localization and staging of osteoporosis on panoramic radiographs without region-of-interest cropping or complex pre-processing.

Article (Citation Count: 3)
Detection of pulpal calcifications on bite-wing radiographs using deep learning (Springer Heidelberg, 2023)
Yüce, Fatma; Ozic, Muhammet Usame; Tassoker, Melek
Ağız, Diş ve Çene Radyolojisi / Oral, Dental and Maxillofacial Radiology
Objectives: Pulpal calcifications are discrete, hard, calcified masses of varying sizes in the dental pulp cavity. This study aimed to measure the performance of the YOLOv4 deep learning algorithm in automatically determining whether there is calcification in the pulp chambers on bite-wing radiographs.
Materials and methods: In this study, 2000 bite-wing radiographs were collected from the faculty database. The oral radiologists labeled the pulp chambers on the radiographs as "Present" or "Absent" according to whether calcification was observed. The data were randomly divided into 80% training, 10% validation, and 10% testing. The weight file for pulpal calcification was obtained by training the YOLOv4 algorithm with the transfer learning method. Using the weights obtained, pulp chambers and calcifications were automatically detected on test radiographs that the algorithm had never seen. Two oral radiologists evaluated the test results, and performance criteria were calculated.
Results: The results obtained on the test data were evaluated in two stages: detection of pulp chambers and detection of pulpal calcification. The detection performance for pulp chambers was as follows: recall 86.98%, precision 98.94%, F1-score 91.60%, and accuracy 86.18%. The detection performance for pulpal calcification (Absent/Present) was as follows: recall 86.39%, precision 85.23%, specificity 97.94%, F1-score 85.49%, and accuracy 96.54%.
Conclusion: The YOLOv4 algorithm trained with bite-wing radiographs detected pulp chambers and calcifications with high success rates.

Publication (Citation Count: 0)
The Frequency of Pre-Eruptive Intracoronal Resorption in Impacted Teeth with Complete Bone Retention: A Cone-Beam Computed Tomography Study: Cross-Sectional Study (2023)
Yüce, Fatma; Taşsöker, Melek
Ağız, Diş ve Çene Radyolojisi / Oral, Dental and Maxillofacial Radiology
Objective: The aim of this study was to investigate the frequency of pre-eruptive intracoronal resorption (PIR) and to determine whether PIR differs according to age and gender in impacted teeth with full bone retention.
Material and Methods: A total of 2,434 permanent teeth from 2,365 patients (1,293 females, 1,072 males) between the ages of 18 and 89 were evaluated. Semi-impacted teeth in the eruption process, impacted teeth with jaw pathologies, primary teeth with full bone retention, mesiodens, and supernumerary impacted teeth were excluded. Descriptive statistics (mean, standard deviation) were calculated for all parameters. The chi-square test was used to determine the relationships between categorical variables; p<0.05 was considered significant.
Results: A total of 276 impacted teeth with bone retention were observed in 207 of the 2,365 patients. PIR lesions (6 molars, 4 canines, 1 incisor, 1 premolar) were detected in 12 (4.3%) of the examined impacted teeth. Seven were in the maxilla and 5 in the mandible. Five of the patients with PIR were male and 7 were female (p>0.05), and their mean age was 54.3 (range 28-75) years.
Conclusion: The frequency of PIR in impacted teeth with complete bone retention was 4.3%, and PIR was most common in molar teeth. If a resorbed impacted tooth is likely to erupt, it should be followed up, managed with restorative or endodontic treatment, and kept in the mouth where possible.

Article (Citation Count: 8)
Comparison of five convolutional neural networks for predicting osteoporosis based on mandibular cortical index on panoramic radiographs (British Institute of Radiology, 2022)
Yüce, Fatma; Ozic, Muhammet Usame
Ağız, Diş ve Çene Radyolojisi / Oral, Dental and Maxillofacial Radiology
Objectives: The aim of the present study was to compare five convolutional neural networks for predicting osteoporosis based on the mandibular cortical index (MCI) on panoramic radiographs.
Methods: Panoramic radiographs of 744 female patients over 50 years of age were labeled as C1, C2, or C3 depending on the MCI. The data were reviewed in different category combinations, (C1, C2, C3), (C1, C2), (C1, C3), and (C1, (C2+C3)), as two-class and three-class prediction problems. The data were split randomly into 20% test data, and the remaining data were used for training and validation with fivefold cross-validation. AlexNet, GoogLeNet, ResNet-50, SqueezeNet, and ShuffleNet deep-learning models were trained through the transfer learning method. The results were evaluated by performance criteria including accuracy, sensitivity, specificity, F1-score, AUC, and training duration. The Gradient-weighted Class Activation Mapping (Grad-CAM) method was applied for visual interpretation of which image regions the deep-learning algorithms gather features from.
Results: The dataset (C1, C2, C3) reached an accuracy of 81.14% with AlexNet; the dataset (C1, C2) reached 88.94% with GoogLeNet; the dataset (C1, C3) reached 98.56% with AlexNet; and the dataset (C1, (C2+C3)) reached 92.79% with GoogLeNet.
Conclusion: The highest accuracy was obtained in differentiating C3 from C1, where the osseous structure characteristics change significantly. Since the C2 score represents the intermediate stage (osteopenia), the structural characteristics of the bone lie closer to both the C1 and C3 scores; therefore, datasets including the C2 score provided relatively lower accuracy.

Method sketches illustrating the techniques described in these abstracts (flip augmentation, YOLOv5 inference, detection metrics, the chi-square test, and transfer learning) follow below.
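The idiopathic osteosclerosis study above doubles a 175-image dataset by horizontal flipping before YOLOv5 training. The papers do not publish code; the following is a minimal sketch of that kind of flip augmentation, assuming plain image files and hypothetical folder names (for a detection task the bounding-box annotations would also have to be mirrored, which is omitted here).

```python
# Minimal sketch: double a small radiograph dataset by horizontal flipping.
# Folder names are hypothetical; YOLO-format box x-coordinates would also
# need to be mirrored, which is not shown.
from pathlib import Path
import cv2  # OpenCV

SRC = Path("panoramic_originals")   # hypothetical input folder
DST = Path("panoramic_augmented")   # hypothetical output folder
DST.mkdir(exist_ok=True)

for img_path in SRC.glob("*.png"):
    img = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue  # skip unreadable files
    cv2.imwrite(str(DST / img_path.name), img)                     # keep original
    flipped = cv2.flip(img, 1)                                     # 1 = horizontal flip
    cv2.imwrite(str(DST / f"{img_path.stem}_flip.png"), flipped)   # mirrored copy
```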
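The YOLOv5-based studies report detecting lesions on unseen test radiographs and drawing bounding boxes after non-maximum suppression. A minimal inference sketch using the public Ultralytics YOLOv5 torch.hub interface might look as follows; the weights path, image name, and thresholds are placeholders, not the authors' actual values.

```python
# Minimal sketch: run a custom-trained YOLOv5 model on one panoramic radiograph.
# "best.pt" and "panoramic_test.png" are placeholders; thresholds are illustrative.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25   # confidence threshold applied before NMS
model.iou = 0.45    # IoU threshold used by non-maximum suppression

results = model("panoramic_test.png")         # inference on a single image
results.print()                               # summary of detections
for *xyxy, conf, cls in results.xyxy[0]:      # one row per surviving box
    print(f"class={int(cls)} conf={float(conf):.2f} "
          f"box={[round(float(v), 1) for v in xyxy]}")
results.save()                                # writes the annotated image to a runs/ folder
```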
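The detection studies report precision, recall, and F1 on their held-out test sets. These follow directly from true-positive, false-positive, and false-negative counts; a small helper, with made-up counts for illustration only:

```python
# Minimal sketch: precision / recall / F1 from detection counts.
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Illustrative counts only, not taken from the papers:
print(detection_metrics(tp=90, fp=5, fn=10))
```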
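The pre-eruptive intracoronal resorption study compares categorical variables (for example, PIR presence by gender) with the chi-square test at p<0.05. A minimal sketch with SciPy, using a hypothetical 2x2 contingency table rather than the study's data:

```python
# Minimal sketch: chi-square test of independence on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# Rows: PIR present / absent; columns: female / male. Counts are hypothetical.
table = [[7, 5],
         [120, 75]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")
print("significant at 0.05" if p < 0.05 else "not significant at 0.05")
```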
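The five-network comparison fine-tunes ImageNet-pretrained CNNs (AlexNet, GoogLeNet, ResNet-50, SqueezeNet, ShuffleNet) for MCI classes via transfer learning. The abstract does not state the framework used; as one hedged illustration, the standard head-swap pattern in PyTorch/torchvision for a three-class (C1/C2/C3) problem looks roughly like this, with dummy data and illustrative hyperparameters rather than the paper's setup.

```python
# Minimal sketch: transfer learning by replacing the classifier head of a
# pretrained ResNet-50 for a 3-class MCI problem (C1 / C2 / C3).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                                # freeze pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)    # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (real data: labeled radiograph crops).
x = torch.randn(8, 3, 224, 224)                # batch of 8 RGB 224x224 images
y = torch.randint(0, NUM_CLASSES, (8,))        # integer class labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```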