Investigating the ecological effects of offshore wind farms requires comprehensive surveys of marine ecosystems. Recently, marine soundscape monitoring has been incorporated into rapid appraisals of geophysical events, marine fauna, and human activities. Machine learning is widely applied in acoustic research to improve the efficiency of audio processing. However, its use in analyzing marine soundscapes remains limited due to a general lack of human-annotated databases. In this study, we used unsupervised learning to recognize different underwater sound sources and quantified the temporal, spatial, and spectral variability of long-term underwater recordings collected near Phase I of the Formosa I wind farm. One source separation model was developed to recognize choruses made by fish and snapping shrimp, as well as shipping noise. Another model was developed to identify transient fish calls and the echolocation clicks of marine mammals. Models were trained in an unsupervised manner using periodicity-coded non-negative matrix factorization. After the sound sources were separated, events were identified using Gaussian mixture models. Our information retrieval techniques facilitate future investigations of spatiotemporal changes in marine soundscapes and enable the efficient construction of an annotated database. The soundscape information can be used to evaluate the potential impacts of noise-generating activities on soniferous marine animals and their acoustic behavior before, during, and after the development of offshore wind farms.
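The two-stage pipeline described above (unsupervised source separation followed by mixture-model event identification) can be sketched as follows. This is a minimal illustration, not the study's implementation: it substitutes scikit-learn's standard NMF for the periodicity-coded NMF actually used, and runs on a synthetic spectrogram rather than real recordings; all array shapes and variable names are assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic non-negative magnitude spectrogram (frequency bins x time frames),
# standing in for a long-term spectrogram of underwater recordings.
V = rng.random((64, 500))

# Stage 1 -- unsupervised source separation. Plain NMF is used here as a
# stand-in for the periodicity-coded NMF (PC-NMF) described in the text.
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(V)   # spectral bases of the separated sources
H = nmf.components_        # time activations of each source

# Stage 2 -- event identification. Fit a two-component Gaussian mixture to
# one source's activation series and treat frames assigned to the
# higher-mean component as candidate acoustic events.
activation = H[0].reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(activation)
labels = gmm.predict(activation)
event_component = int(np.argmax(gmm.means_))
event_frames = np.flatnonzero(labels == event_component)
print(W.shape, H.shape, event_frames.size)
```

In practice the separated activations for each model (choruses, shipping noise, transient calls, clicks) would each be screened this way, and the resulting frame indices mapped back to timestamps to build the annotated database.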