The Lecture Recording Programme is running for three years. MLE004 covered the second phase of our work, which focused on the growth of the service. This consisted of:
Rooms: A doubling of the number of equipped rooms.
Scheduling: The implementation of an automated scheduling application.
Data Management: Ensuring GDPR compliance through manual administration processes.
Service Structure: Establishing the link between our work and the Timetabling Spaces Review, which is being driven by the Timetabling service.
Reporting: Building appropriate reports for an agreed audience from our Scheduling application and from the service itself.
Training: The launch of a third, more advanced training package and the handover of training into business as usual.
Migration: The development (LTW developed and supported) of a web-based application that uploads recordings to Media Hopper Create.
Analysis of Resource Usage:
Staff Usage Estimate: 500 days
Staff Usage Actual: 601.5 days
Staff Usage Variance: 20%
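As a check on the figures above, the variance can be recomputed from the estimate and actual. This is a minimal illustrative sketch; rounding to a whole percentage is an assumption about how the reported figure was derived.

```python
# Variance between estimated and actual staff usage (days),
# using the figures reported above.
estimate = 500.0
actual = 601.5

# Percentage overrun relative to the original estimate.
variance_pct = (actual - estimate) / estimate * 100

print(f"{variance_pct:.0f}%")  # prints "20%"
```

The unrounded figure is 20.3%, which the report states as 20%.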
The outcomes have been very positive as the team delivered satisfactorily across all workstreams.
Rooms: As planned, an additional 140 rooms were equipped. A highly successful Chalkboard Pilot was completed with colleagues within CSE. The roll-out of dual microphones has also continued.
Scheduling: The implementation of our scheduling application was completed, delivering automated scheduling of lecture recording and a web interface that allows academics to 'Opt-Out' of recordings. Highly positive feedback has been received from the Academic User Group, from our academic colleagues on the Programme Board and through a feedback form issued to all Course Organisers and Course Secretaries. Combining such large data sets from Timetabling and EUGEX into a simple-to-use application that had no performance impact on those core services has been very well received.
Data Management: We ensured GDPR compliance through manual administration processes. API deletion of data cannot yet be completed, as Echo360 have not built the appropriate API.
Service Structure: Our work was linked to the Timetabling Spaces Review, which is being driven by the Timetabling service; Euan Murray, Head of Learning Spaces Technology (LST), represented our programme of work. That review continues, but the main change brought into service has been that LST now control room data within Scientia, which should begin to improve the quality and consistency of the data and images.
Reporting: The team completed an analysis of our requirements and concluded it was not necessary to include other LTW data sets into our reporting. Our scheduler has a Timetabling feed, a EUGEX feed and a reconciliation of data with Echo360. This data provided us with the ability to create the reports that could satisfy the leadership teams and the Learning and Teaching Committee. The consultation with Apps Service Management was key and the choice of a Power BI read-only link to our Replay Scheduler database has delivered immediate and easily digestible reports. We will continue this work in the final phase.
Training: A third, more advanced training package was launched and the handover of training into business as usual was successfully completed. Training will not form part of Phase 3.
Migration: The development (LTW developed and supported) of a web-based application that uploads recordings to Media Hopper Create (MHC) was completed. The migration tool is required because the policy has an 18-month retention period. Migrating recordings to MHC allows staff to retain their recordings beyond the 18-month period while still being able to choose how that content is shared and licensed.
Explanation for Variance
The additional days are explained by the delay in the policy being confirmed. The lack of clarity on policy created a difficult development environment for the team, as we had to design and build a system without knowing whether Opt-Out or Opt-In would be the policy decision.
At the initiation of the project we had planned to go live in the summer months, but this was pushed back because of ongoing policy deliberations. In truth this suited the project team, as we had also suffered delays in development. Limited resource availability, substantial absence and specific skills requirements meant we were reliant solely on the services of Martin Jones; this had an impact, alongside the pressure of the unknown but impending policy decision. That said, Martin and the team have developed a system that has been very well received, which is an achievement when dealing with a subject as sensitive as Opt-Out of lecture recording.
Policy was finally confirmed during October and we began our launch at the end of that month. With the support of Production Management we launched with a series of maintenance releases to LIVE, which allowed staff to register their Opt-Outs at the earliest possible date; the earliest possible date was a requirement given to the team during policy discussions between the unions and the Learning and Teaching Committee. As we completed testing of the connection between our scheduler and the Echo360 lecture recording platform, additional features were released. The launch was completed in December.
The Maintenance Period extended to cover the start of semester 2 as explained in the Deployment milestone.
Key Learning Points
The team conducted a Glad, Sad and Mad workshop at the end of our developments to make sure we took key lessons forward and (of course) to celebrate success.
- Use of Capital Budget over multiple years gave the team easy flexibility when policy delays impacted developments.
- Use of a part ‘Agile’ approach worked well.
- As anticipated, securing blocks of developer time when we could worked better than making smaller bookings.
- The team worked well under pressure in key delivery periods.
- The system was delivered successfully and with no impact on the BAU service. The logic was complicated, but the processing was delivered in a simple, intuitive front end. This has been universally well received.
- Demos to schools were productive and very well received. Visits to colleges and schools throughout the process of development of Replay Scheduler gave us a clear set of requirements (other than policy) and resulted in expectations of the user experience that could be well matched by the final product.
- Use of new technologies such as Celery and Celery Beat with RabbitMQ to manage processes and load worked well.
- A number of valuable features were discovered during the course of the work stream. For example, a ‘Manage Data’ page giving the service team flexibility during testing was not part of the initial design but became key.
- Performance concerns around the timetabling RDB were well understood by development services. We had excellent design advice from development services within the project team.
- The Maintenance Period was both necessary and very successful. Thanks to Production for the suggestion and to the team for the flexibility to plan in this manner. We would not have achieved the unions' and Learning and Teaching Committee's requirements without it. Our risk was greatly reduced.
- We have had excellent feedback from our partner Echo360, who want to use our scheduling work stream as a Case Study. No other institution has done as comprehensive a roll-out of automated scheduling with them.
- We were not Agile enough in our approach and may have benefited from full adoption throughout the development process. For example, adopting a Sprint-based agile model would have allowed us to deal more effectively with the number of changes introduced by the changing policy expectations. In addition, our user stories were not detailed enough in certain cases; JIRA descriptions and UAT criteria needed to be broken down further in a small number of cases.
- This then affected the estimation process, with increases in the actual time taken to complete. A lack of multiple or senior development resource may have compounded the difficulty of creating accurate estimates for user stories. However, estimation was always going to be difficult, as this had not been done before on this scale and the data sets are enormous.
- Some amount of co-location for the duration of the project may have helped to alleviate some unnecessary confusion and stress during the summer over resourcing and policy.
- A lack of focus on student needs from a small number of schools.
- Poor-quality test data caused issues that were only encountered when moving from TEST to LIVE. There was a significant gulf between the standard of data in the TEST and LIVE environments, which was not anticipated when colleagues in Timetabling and SSP set up our test data sets. It is very possible that nobody had used this data in this way before, and so the gaps and anomalies in the data were not understood by the institution. Again, this made the Maintenance Period absolutely essential, as we had to firefight a number of schedule rejections before the start of term.
- Policy changes and delays meant having to design and develop a system while the key policy was going through exactly the same process. This resulted in stress for the team, as changes were being introduced close to launch.
- The amount of Project Management time spent competing for developer resource. Only one developer being appropriate for our skills requirements meant a single point of failure throughout the project. This was high risk for such an important student- and staff-facing initiative. We also had no consistent senior development support.
- The number of occasions when other Project Managers would interrupt and take up the time of our developer. This put terrible pressure on Martin.
- Staff would accept meetings and fail to show. As communication between the team was vital to success this caused difficulties for the project team during development and deployment of Replay Scheduler. It was said that the number of projects allocated to staff was causing time conflicts.
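The Celery and Celery Beat approach noted in the learning points can be sketched as below. This is an illustrative configuration only: the broker URL, task name and schedule interval are assumptions for the sketch, not the Replay Scheduler's actual code.

```python
# Illustrative sketch: periodic task scheduling with Celery Beat,
# using RabbitMQ as the message broker. All names and intervals
# are assumptions, not the Replay Scheduler's real configuration.
from celery import Celery

app = Celery("replay_scheduler", broker="amqp://guest@localhost//")

@app.task
def reconcile_schedules():
    """Hypothetical task: reconcile timetable data with Echo360."""
    ...

# Celery Beat dispatches registered tasks on a fixed schedule,
# which the worker pool then executes, spreading the load.
app.conf.beat_schedule = {
    "reconcile-every-hour": {
        "task": "replay_scheduler.reconcile_schedules",  # module-qualified name assumed
        "schedule": 3600.0,  # seconds
    },
}
```

A worker and the Beat scheduler can be started together with `celery -A replay_scheduler worker --beat` during development; production deployments typically run Beat as a separate process.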
There are no outstanding issues.