Some students may find it useful to download the video / audio stream of a single lecture, rather than all lectures. The Python dependency of the bulk downloader is also a pain, as is creating the cookies.txt file - particularly if only one or two lectures are required.
To use the bookmarklet, navigate to a recorded lecture on Media Hopper Replay (a.k.a. Echo360) and open the saved bookmark. A popup should appear on the page allowing the video / audio source and quality to be selected, so that an MP4 / MP3 download link can be generated. The X in the top-left closes the popup.
The bookmarklet link: Download Echo360
This bookmarklet has been tested on Firefox and Chrome, and also on Firefox for Android (as downloading audio recordings of lectures onto a smartphone is quite a convenient way to catch up / revise!).
In 2019 the University changed its lecture recording policy. Recordings will now be kept for 18 months, rather than two years. However, there was a grace period for the first two years of lectures. The upshot of this is that:
The first round of lecture recording deletions will start in September 2019, covering recordings made between 2017 and 2019 under the previous policy and guidance, which had a 24-month retention period.
The second round of deletions, under the 18-month retention period detailed in the new university policy which commenced in January 2019, will start in September 2020.
I wanted to keep a backup of all the lectures (both the recordings and any slides), so I wrote a simple script to archive them off Echo360 (the external provider that hosts the recordings). The script requires Python 2.7 and the requests module. You can download the script here.
Unfortunately, it is not as simple as just running the script. A cookies file must first be created. Probably the easiest way to do this is to install a browser extension (for Chrome the cookies.txt (closed source) extension works; for Firefox, Export Cookies (MIT/X11) works). Then navigate to the Echo360 homepage at https://echo360.org.uk, where you will be asked to sign in (enter UUN@ed.ac.uk, then sign in through EASE). You'll be greeted by a page listing all the courses that Echo360 has recordings for (if a course doesn't show up, you may need to go into Learn and click the 'Media Replay' link to make its recordings appear in Echo360). Once you're on this page, use the browser extension to download a cookies.txt file with the Echo360 cookies, saving it as 'cookies.txt' in the same directory as the script.
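For the curious, the cookies.txt file produced by these extensions uses the standard Netscape format: one cookie per line, seven tab-separated fields. Here's a minimal sketch of how such a file can be parsed (the cookie name and value shown are purely illustrative, not necessarily what Echo360 actually sets):

```python
# Minimal sketch: parse Netscape-format cookies.txt text into a dict.
# The seven tab-separated columns are:
#   domain, include_subdomains, path, secure, expiry, name, value

def parse_cookies_txt(text):
    """Return {name: value} for each cookie line in Netscape format."""
    cookies = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and comments (but keep #HttpOnly_-prefixed domains)
        if not line or (line.startswith('#') and not line.startswith('#HttpOnly_')):
            continue
        fields = line.split('\t')
        if len(fields) == 7:
            cookies[fields[5]] = fields[6]
    return cookies

# Illustrative example line (not a real Echo360 cookie):
sample = ("# Netscape HTTP Cookie File\n"
          ".echo360.org.uk\tTRUE\t/\tTRUE\t0\tPLAY_SESSION\tabc123\n")
print(parse_cookies_txt(sample))  # {'PLAY_SESSION': 'abc123'}
```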
To download all the recordings from all the courses, run the script with no arguments:
python2 download.py
To limit the download to specific courses, you'll need the course IDs that Echo360 uses. You can extract these from the course links on the Echo360 homepage; an ID looks something like '4ad4021a-c429-468c-a37c-ec26bf31bcd1'. Then run the script with the IDs as additional arguments, e.g.:
python2 download.py 4ad4021a-c429-468c-a37c-ec26bf31bcd1 2f6a2330-2181-4a78-b04b-4390609b6c13
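Since the IDs are standard UUIDs, they can be picked out of a copied link with a simple regular expression rather than by hand. A small illustrative helper (the URL shape below is an assumption; only the UUID pattern matters):

```python
import re

# Standard 8-4-4-4-12 hex UUID pattern, as used for Echo360 course IDs.
UUID_RE = re.compile(
    r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}',
    re.IGNORECASE)

def extract_course_id(url):
    """Return the first UUID found in a course link, or None."""
    match = UUID_RE.search(url)
    return match.group(0) if match else None

# The '/section/<id>/home' path is an assumption for illustration only.
print(extract_course_id(
    'https://echo360.org.uk/section/4ad4021a-c429-468c-a37c-ec26bf31bcd1/home'))
# 4ad4021a-c429-468c-a37c-ec26bf31bcd1
```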
Oh, and if you have a partial download from running this script before, re-running it will download everything new. All the lectures that were downloaded last time will be kept and not re-downloaded (this saves on bandwidth too).
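The resume behaviour described above boils down to a skip-if-exists check before each download. A rough sketch (download_one here is a stand-in for the real download logic, not the script's actual function name):

```python
import os
import tempfile

def sync(lectures, dest_dir, download_one):
    """Fetch only the lectures not already present in dest_dir."""
    fetched = []
    for name in lectures:
        path = os.path.join(dest_dir, name)
        if os.path.exists(path):
            continue  # already downloaded on a previous run
        download_one(name, path)
        fetched.append(name)
    return fetched

# Demo with a dummy downloader that just creates empty files.
tmp = tempfile.mkdtemp()
def fake_download(name, path):
    open(path, 'w').close()

print(sync(['lec1.mp4', 'lec2.mp4'], tmp, fake_download))
# ['lec1.mp4', 'lec2.mp4']
print(sync(['lec1.mp4', 'lec2.mp4', 'lec3.mp4'], tmp, fake_download))
# ['lec3.mp4']  -- only the new recording is fetched
```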
Unfortunately Echo360 regularly updates the cookies it uses for access to the recordings, so if you're downloading more than one course it is quite likely the existing cookies will expire and you'll need new ones. When this happens, the script will stop and inform you that a new cookies.txt file is needed. Simply download a new cookies.txt file and run the script again. It will continue where it left off (this also means that if new lecture recordings are uploaded, running the script will fetch only the new ones - provided the files downloaded previously are kept in the same location).
It is recommended to run the script on a filesystem that supports large files. There have been cases where recordings are larger than 4GB, which causes the script to crash on FAT32 filesystems; using a filesystem without this limit prevents the problem.
The files downloaded by the script include several .json files. It is highly recommended that these are not deleted: I plan to write a further script that will create a locally-browsable offline website of the downloaded recordings, and the information contained in the .json files is required to do so.
Bug reports, enhancement requests, questions, comments and VC-funding invitations can be directed to #compsoc on IRC. (This isn't developed or endorsed by CompSoc; rather, it's somewhere I'll actually check on a regular basis.)
Oh, and this is provided AS IS, without warranty of any kind, to be used at your own risk.
This tool is under active development. In its current state it is a bit of a mess - that's simply because it was written with a deadline in mind. Future versions will include more options, such as the generation of a locally-browsable offline website of the downloaded recordings. Oh, and the code won't be a mess and will support Python 3. Every endeavour has been made to ensure that these planned enhancements will work with downloads made by the current version, but only if the downloaded .json files remain in place.