[1] DAVIDSON M, KARPLUS V J, ZHANG D, et al. Policies and institutions to support carbon neutrality in China by 2060[J]. Economics of Energy & Environmental Policy, 2021, 10(2): 7-25.
[2] VARGAS S A, ESTEVES G R T, MAÇAIRA P M, et al. Wind power generation: a review and a research agenda[J]. Journal of Cleaner Production, 2019, 218: 850-870.
[3] GLOBAL WIND ENERGY COUNCIL. Global offshore wind report 2020[R]. Brussels: Global Wind Energy Council, 2020: 10-12.
[4] CIVERA M, SURACE C. Non-destructive techniques for the condition and structural health monitoring of wind turbines: a literature review of the last 20 years[J]. Sensors, 2022, 22(4): 1627-1679.
[5] YANG R, HE Y, ZHANG H. Progress and trends in nondestructive testing and evaluation for wind turbine composite blade[J]. Renewable and Sustainable Energy Reviews, 2016, 60: 1225-1250.
[6] LI C L, WANG H J, YIN C Y, et al. Research progress of fault diagnosis technology for wind turbine blades[J]. Journal of Shenyang Institute of Engineering (Natural Science Edition), 2022, 18(3): 1-5, 19. (in Chinese)
[7] ZHOU J F, SHI T, XU B F. Research progress of damage detection technology for wind turbine blades[J]. Advances in New and Renewable Energy, 2023, 11(6): 556-563. (in Chinese)
[8] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[9] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1-9.
[10] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[11] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7132-7141.
[12] LIU Z, LIN Y, CAO Y, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 10012-10022.
[13] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: transformers for image recognition at scale[EB/OL]. (2020-10-22)[2023-11-13]. https://arxiv.org/abs/2010.11929.
[14] HOWARD A G, ZHU M, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. (2017-04-17)[2023-11-13]. https://arxiv.org/abs/1704.04861.
[15] ZHANG X, ZHOU X, LIN M, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 6848-6856.
[16] TAN M, LE Q. EfficientNet: rethinking model scaling for convolutional neural networks[EB/OL]. (2019-05-28)[2023-11-13]. https://arxiv.org/abs/1905.11946.
[17] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 3-19.