Acoustic recorders are commonly used to remotely monitor and collect data on bats (Order Chiroptera). These efforts produce large numbers of recordings that must be classified by a biologist with expertise in call identification before useful information can be extracted. The rarity of this expertise, together with time constraints, has prompted efforts to automatically classify bat species in acoustic recordings using a variety of machine learning methods. Several software programs are available for this purpose, but they are imperfect, and the United States Fish and Wildlife Service often recommends that a qualified acoustic analyst review bat call identifications even when these programs are used. We sought to build a model that classifies bat species using modern computer vision techniques. We used images of bat echolocation calls (i.e., plots of the pulses) to train deep learning computer vision models that automatically classify bat calls to species. Our model classifies 10 species, five of which are protected under the Endangered Species Act. We evaluated our models using standard validation procedures and performed two external tests; for each test, an entire dataset was withheld before the remaining data were split into training and validation sets. Our validation accuracy (92%) and testing accuracy (90%) were higher than the accuracies obtained with Kaleidoscope Pro and BCID software (65% and 61%, respectively) on the same calls. These results suggest that our approach is effective at classifying bat species from acoustic recordings, and our trained model will be incorporated into new bat call identification software: WEST-EchoVision.
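The pipeline summarized above first converts acoustic recordings into images of echolocation pulses, which are then fed to a computer vision classifier. The abstract does not specify how those images are produced, so the following is only an illustrative sketch of one common choice: a short-time-FFT magnitude spectrogram, applied here to a synthetic downward frequency sweep standing in for a real bat pulse (the `spectrogram` function, window sizes, and the 250 kHz sampling rate are assumptions, not details from the paper).

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a short-time FFT with a Hann window.

    Returns an array of shape (freq_bins, time_frames) that can be
    rendered or saved as an image for a vision model.
    """
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        segment = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(segment)))
    return np.array(frames).T

# Synthetic stand-in for one echolocation pulse: many bat calls sweep
# downward in frequency, here from roughly 90 kHz to 10 kHz over 5 ms.
sr = 250_000                      # assumed ultrasound-capable sampling rate
t = np.arange(0, 0.005, 1 / sr)   # 5 ms pulse
pulse = np.sin(2 * np.pi * (90_000 - 8e6 * t) * t)

img = spectrogram(pulse)
print(img.shape)  # (freq_bins, time_frames) for this pulse
```

In a real pipeline, each detected pulse would be cropped and rendered this way, and the resulting images used to train and evaluate the deep learning model; the peak energy in each time frame traces the pulse's frequency sweep across the image.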