A brief overview of the new spiders that have been created:
Through GovDelivery we were able to subscribe to all of the email updates from the city of Minneapolis and, eventually, St. Paul. These emails are directed to a dedicated email address, and the relevant parts are extracted programmatically. Recently, mailing lists for the local legislature have also been added.
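The extraction step can be sketched with Python's standard-library `email` module. The message below is a made-up, GovDelivery-style example, and the field names in the returned dict are illustrative, not the actual schema used:

```python
import email
from email import policy

# Hypothetical GovDelivery-style notice; real messages arrive in the
# dedicated inbox and are richer than this minimal example.
RAW_MESSAGE = """\
From: minneapolis@public.govdelivery.com
Subject: City Council Agenda - May 6
Content-Type: text/plain; charset="utf-8"

The City Council will meet Friday at 9:30 a.m.
"""

def extract_parts(raw):
    """Pull the fields worth recording from a raw RFC 5322 message."""
    msg = email.message_from_string(raw, policy=policy.default)
    return {
        "sender": msg["From"],
        "subject": msg["Subject"],
        "body": msg.get_content().strip(),
    }

print(extract_parts(RAW_MESSAGE)["subject"])  # City Council Agenda - May 6
```

In practice the raw text would come from an IMAP fetch against the dedicated mailbox rather than a string literal.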
In addition to scraping the legislative email lists, a scraper for the legislative calendar has been written, as has one that obtains the most recent articles from the Legislature's newsletter, which required an enjoyable return to Selenium.
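Setting aside the Selenium driver setup, the parsing half of a calendar scraper can be sketched with the standard-library `HTMLParser`. The markup and class names below are hypothetical stand-ins for the real calendar page:

```python
from html.parser import HTMLParser

# Hypothetical calendar markup; the real page's structure differs.
SAMPLE_HTML = """
<ul class="calendar">
  <li><span class="date">2020-03-02</span> House Floor Session</li>
  <li><span class="date">2020-03-04</span> Senate Tax Committee</li>
</ul>
"""

class CalendarParser(HTMLParser):
    """Collect (date, title) pairs from <li> calendar entries."""
    def __init__(self):
        super().__init__()
        self.events = []
        self._in_li = False
        self._in_date = False
        self._date = ""
        self._title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._in_li, self._date, self._title = True, "", ""
        elif tag == "span" and ("class", "date") in attrs:
            self._in_date = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_date = False
        elif tag == "li" and self._in_li:
            self.events.append((self._date.strip(), self._title.strip()))
            self._in_li = False

    def handle_data(self, data):
        if self._in_date:
            self._date += data
        elif self._in_li:
            self._title += data

parser = CalendarParser()
parser.feed(SAMPLE_HTML)
print(parser.events)
```

When the page renders its content with JavaScript, as the newsletter site does, the HTML would first be obtained through Selenium's `driver.page_source` instead of a plain HTTP fetch.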
Using the Twitter and Facebook APIs, a quick survey of the UofM College of Liberal Arts social media accounts was made, or at least of those formally attached to the cla.umn.edu domain. Certain metrics were recorded, though much of the work was simply to test the mechanics of the APIs and the scripts. This lays the groundwork for a more comprehensive structure in the future.
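The metric-recording step, once an API response is in hand, amounts to reducing a payload to a few numbers per account. The payload shape and field names below are invented for illustration; the real Twitter and Facebook responses differ and require authenticated clients:

```python
import json

# Hypothetical payload shaped loosely like a social-media API response.
SAMPLE_RESPONSE = json.dumps({
    "account": "UMNCLA",
    "followers_count": 12500,
    "posts": [
        {"id": "1", "likes": 40, "shares": 7},
        {"id": "2", "likes": 12, "shares": 1},
    ],
})

def record_metrics(raw):
    """Reduce one account's payload to the metrics worth storing."""
    data = json.loads(raw)
    posts = data["posts"]
    return {
        "account": data["account"],
        "followers": data["followers_count"],
        "post_count": len(posts),
        "avg_likes": sum(p["likes"] for p in posts) / len(posts),
    }

print(record_metrics(SAMPLE_RESPONSE))
```

A fuller structure would run this over every account on a schedule and append the results to a time series, which is the comprehensive setup the survey was groundwork for.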