Authors: Mikaeili, M.; Bilge, H.S.
Date accessioned: 2026-03-15
Date available: 2026-03-15
Date issued: 2025
ISBN: 9798331555658; 9798331555665
ISSN: 2687-7775
DOI: 10.1109/TIPTEKNO68206.2025.11270131
Scopus ID: 2-s2.0-105030538213
URI: https://doi.org/10.1109/TIPTEKNO68206.2025.11270131
Handle: https://hdl.handle.net/20.500.14517/8945

Abstract: We present a U-Net-based pipeline for B-mode ultrasound image reconstruction that ingests post-processed RF data, predicts a log-compressed image, and then performs an interpolation for display. To identify effective design choices, we compare six configurations formed by two loss functions (mean-squared error and a compound MMUAE+TV+gradient loss) and three output activation functions (linear, ReLU, and tanh). Evaluation with PSNR, SSIM, and visual inspection of scan-line profiles and B-mode images indicates that the activation function is the dominant factor. The tanh activation consistently preserves lesion boundaries, maintains realistic speckle, and avoids dynamic-range saturation; the linear activation is acceptable but yields softer edges; ReLU degrades contrast owing to negative-value clipping. Switching from MSE to the compound loss produces only modest changes, suggesting that regularization is secondary to activation choice in this setting. Overall, a tanh-activated U-Net with a simple MSE objective offers a strong accuracy-complexity trade-off for reconstruction. © 2025 IEEE.

Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Deep Neural Network; Raw RF Data; Reconstruction; Ultrasound Image
Title: Deep Learning Framework for B-Mode Ultrasound Image Reconstruction
Type: Conference Object
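The abstract attributes ReLU's contrast degradation to negative-value clipping at the output layer. A minimal NumPy sketch (hypothetical pre-activation values, not the paper's data) illustrates why: a log-compressed image normalized to [-1, 1] carries its darker half in negative values, which ReLU collapses to zero while tanh preserves their sign within a bounded range.

```python
import numpy as np

# Hypothetical final-layer pre-activation values for a log-compressed
# B-mode image normalized to [-1, 1]; negatives encode the dark half
# of the dynamic range.
pre_activation = np.array([-0.9, -0.4, 0.0, 0.5, 0.8])

linear = pre_activation                     # identity: negatives pass through
relu = np.maximum(pre_activation, 0.0)      # negatives clipped to 0
tanh = np.tanh(pre_activation)              # bounded in (-1, 1), sign preserved

# ReLU maps every dark-range sample to 0, flattening contrast there;
# tanh keeps them distinct and negative.
print(relu)
print(tanh)
```

This is only an illustration of the clipping mechanism the abstract describes, not a reconstruction of the paper's U-Net.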