[1] SUN Y, CAO B, ZHU P, et al. Drone-based RGB-infrared cross-modality vehicle detection via uncertainty-aware learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6700-6713.
[2] DING J, XUE N, LONG Y, et al. Learning RoI transformer for oriented object detection in aerial images[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2849-2858.
[3] WANG Y B, GUO Y R, LIU J, et al. Low illumination image enhancement algorithm for underground coal mines based on CycleGAN with attention mechanism and dilated convolution[J/OL]. Coal Science and Technology [2024-03-07]. http://kns.cnki.net/kcms/detail/11.2402.TD.20240119.1728.013.html.
[4] DU X G, LU W J, LEI T, et al. Low-light image enhancement using brightness and signal-to-noise ratio guided Transformer[J/OL]. Computer Engineering and Applications [2024-03-12]. http://kns.cnki.net/kcms/detail/11.2127.TP.20240308.1612.002.html.
[5] HUANG S Y, LI W, YANG Y, et al. Low-light image enhancement network guided by illuminance map[J]. Journal of Computer-Aided Design & Computer Graphics, 2024, 36(1): 92-101.
[6] XU S J, YANG H, LI M H, et al. Low-light image enhancement based on dual-frequency domain feature aggregation[J]. Opto-Electronic Engineering, 2023, 50(12): 32-49.
[7] CUI Z, LI K, GU L, et al. You only need 90K parameters to adapt light: a light weight transformer for image enhancement and exposure correction[C]//Proceedings of the 33rd British Machine Vision Conference 2022, London, Nov 21-24, 2022: 238.
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30, Long Beach, Dec 4-9, 2017: 5998-6008.
[9] ZHAO W D, WANG H, LIU X H. Salient object detection network with edge information enhancement[J]. Journal of Tongji University (Natural Science), 2024, 52(2): 293-302.
[10] ZHAO J D, ZHEN G Y, CHU C Q. Light adaptive pre-processing algorithm for UAV target detection[J/OL]. Radio Engineering [2024-03-08]. http://kns.cnki.net/kcms/detail/13.1097.TN.20240202.1718.012.html.
[11] YANG K F, CHENG C, ZHAO S X, et al. Learning to adapt to light[J]. International Journal of Computer Vision, 2023, 131(4): 1022-1041.
[12] LI X, WANG W, HU X, et al. Selective kernel networks[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 510-519.
[13] DAI J, QI H, XIONG Y, et al. Deformable convolutional networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Washington: IEEE Computer Society, 2017: 764-773.
[14] HAN K, WANG Y, TIAN Q, et al. GhostNet: more features from cheap operations[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1580-1589.
[15] QIU J, CHEN C, LIU S, et al. SlimConv: reducing channel redundancy in convolutional neural networks by features recombining[J]. IEEE Transactions on Image Processing, 2021, 30: 6434-6445.
[16] CHEN Y, FAN H, XU B, et al. Drop an octave: reducing spatial redundancy in convolutional neural networks with octave convolution[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 3435-3444.
[17] ZHANG J X, YANG H. Enhancement of low illumination color image based on homomorphic high-low pass filtering and multiscale Retinex[J]. Computer Applications and Software, 2021, 38(1): 232-237.
[18] ZHANG X, ZENG H, GUO S, et al. Efficient long-range attention network for image super-resolution[C]//Proceedings of the 17th European Conference on Computer Vision. Cham: Springer, 2022: 649-667.
[19] ZHENG Z, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[C]//Proceedings of the 2020 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2020: 12993-13000.
[20] TONG Z, CHEN Y, XU Z, et al. Wise-IoU: bounding box regression loss with dynamic focusing mechanism[J]. arXiv:2301.10051, 2023.
[21] ZHANG L F, TIAN Y. Improved YOLOv8 multi-scale and lightweight vehicle object detection algorithm[J]. Computer Engineering and Applications, 2024, 60(3): 129-137.
[22] ZHANG J R, WEI X, ZHANG L X, et al. Insulator detection and positioning based on improved YOLO v7[J]. Computer Engineering and Applications, 2024, 60(4): 183-191.
[23] GIRSHICK R. Fast R-CNN[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Washington: IEEE Computer Society, 2015: 1440-1448.
[24] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Washington: IEEE Computer Society, 2017: 2980-2988.
[25] ZHU X, SU W, LU L, et al. Deformable DETR: deformable transformers for end-to-end object detection[J]. arXiv:2010.04159, 2020.