Completion Report

Project Summary:

The objectives for this project were identified as follows:

  1. To implement a fully supported new database infrastructure
  2. To increase transactional performance for appropriate applications
  3. To put in place a scalable solution that offers horizontal scaling, thus reducing future costs when growth is deemed necessary
  4. To ensure we fully utilise both data centres at King's Buildings and Appleton Tower

The deliverables for the project were:

  1. A new database infrastructure based on x86 and Linux
  2. New operating procedures and processes to integrate Active Data Guard
  3. Improved recovery times with clearer recovery models for services
  4. An exemplar migration demonstrating the new benefits of the new infrastructure and automatic failover

All objectives were met and each of the deliverables is in place; nothing remains outstanding.

Analysis of Resource Usage:

Staff Usage Estimate: 200 days

Staff Usage Actual: 288 days

Staff Usage Variance: 44%

Other Resource Estimate: 0 days

Other Resource Actual: 0 days

Other Resource Variance: 0%
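For clarity, the variance figures above are the difference between actual and estimated effort, expressed as a percentage of the original estimate. A minimal sketch of the calculation (function name is illustrative, not from the project documentation):

```python
def effort_variance(estimate_days: float, actual_days: float) -> float:
    """Variance as a percentage of the original estimate."""
    return (actual_days - estimate_days) / estimate_days * 100

# Staff usage: 200 days estimated vs 288 days actual
print(f"{effort_variance(200, 288):.0f}%")  # prints 44%
```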

Explanation for variance:

a) Effort required

The original estimate for this project was 262 days of effort, but this was reduced to 200 days in January when the decision was taken not to virtualise the new environment, which removed some of the contingency incorporated into the original estimates. However, the project ultimately used a total of 288 days, a significant increase on the revised figure. The table below shows where effort differed from the estimates:

Work                                Estimate  Actual  Diff.  Notes
Plan/Analysis                             18      18      -
Training                                  16      16      -
Developing Procedures & Standards         15      25    +10
Build of servers                          20      21     +1
Build stage                               36      54    +18
Acceptance stage                          59      29    -30
Deployment                                 -       6     +6
Project Management                        28      59    +31   Attributable to extended timeframe
QA/Meetings                                5      20    +15   Attributable to timeframe & number of discussions required
Senior Supplier                            -      20    +20   Not estimated
Unplanned activity                         -      17    +17   Not estimated
Closure                                    3       3      -
Total                                    200     288    +88

The main differences, as can be seen, are in work that was not originally estimated (Senior Supplier; Unplanned activity) and work that grew as the project extended in duration (Project Management; QA & Meetings). Effort expended on technical tasks actually nets out to just 5 days over the total estimated; the differences across the individual stages arise simply because work originally identified and estimated for the Acceptance stage ended up being done under the Build stage as the project progressed.
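The netting-out of technical effort can be checked directly against the effort table. A quick sketch (the choice of which rows count as 'technical tasks' is an assumption, based on excluding the management, QA, supplier, unplanned, and closure lines):

```python
# Diff (actual minus estimate) per technical task, in days, from the effort table
technical_diffs = {
    "Plan/Analysis": 0,
    "Training": 0,
    "Developing Procedures & Standards": +10,
    "Build of servers": +1,
    "Build stage": +18,
    "Acceptance stage": -30,
    "Deployment": +6,
}
print(sum(technical_diffs.values()))  # prints 5
```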

As for the extra project management time, this is a direct consequence of the project doubling in duration while still needing roughly one day a week from a PM. Similarly, as the project lengthened and more discussion was required on questions arising during it, the time spent in meetings also increased. These discussions covered topics such as database sizes and locations, patching, backup policy, and tns names, and also addressed sign-offs, policy decisions, approaches to testing, and issue resolution. They included weekly project team meetings as well as meetings with ITI Unix and with the project sponsor, PM, and senior supplier.

The two items that were not estimated were senior supplier time and unplanned activity. The role of senior supplier was not identified until several weeks into the project, and the eventual holder of the role played a very active part, being involved in most meetings and discussions. The unplanned activity covers a variety of tasks, including resource allocation, management updates, additional help with the tnsnames issue, reviewing and updating JIRAs, a recovery workshop (5 participants), and ad hoc meetings. The senior supplier effort could have been identified when the project was re-estimated in January, and contingency might have been made for unplanned activity, though it would always be difficult to predict what this might involve.

 

b) Timescales

The first plan for this project had a delivery date at the end of January 2013, but delivery was finally achieved just over six months later. There were a number of delays; these were reported to WIS via the issues log, with milestones moved out when necessary. The main reasons for the extended timescale were as follows:

October 2012 - resources were unavailable to start the early stages of the project because they were pulled onto production issues that had to take precedence. Milestones revised.

December 2012 - milestones had to be revised again because of delays in getting hardware delivered. 

January 2013 - resource conflicts with ASTA bookings because of the delay in getting the servers built.

February 2013 - milestones revised again - a knock-on effect of the above resource conflicts.

April 2013 - missed milestone for the Build stage. Many of the development tasks took longer than originally estimated (because so much of the work was being done for the first time), and the duration also had to increase because of illness absence and resources being diverted to production issues, which took precedence over project work.

May 2013 - PPBR, replanned for 17/05, was actually achieved on 24/05 because final tasks (e.g. the test failover) were not completed by 17/05.

June 2013 - the delayed sign-off of the Build stage subsequently delayed the remaining milestones, with the ASOR achieved at the end of June. This was delayed by the tnsnames.ora issue and the Archibus client problem that hindered testing of the Estates & Buildings applications in TEST.

July 2013 - further delays to the LIVE deployment because of the Archibus issue. As a result, EBIS was dropped as the exemplar migration in LIVE and SOA was adopted instead, with the DSOR eventually achieved in August.

 

c) Comments from Senior Supplier

I believe that we should have been more definitive at the start of the project about what was actually critical and what could be considered 'should have' or 'could have'. Not reaching a determination on this early meant that resources could easily be distracted by work that was not critical to the delivery of the key enabling technologies.

The project suffered badly from resource contention, specifically around getting the infrastructure prepared and ready to use, owing to a problem securing sufficient expertise in the ITI Unix team. This impacted the project and resulted in further complications and inefficiencies.

We did not involve everyone who might be classed as a stakeholder in the project. While it can be argued that you can only "take a horse to water", this translated into further confusion and impacted other projects. We need to be very careful when doing infrastructure projects: for technologies that cut across services, such as database upgrades, it is imperative that we make sure people have no chance of missing what we are doing, and that we help them understand what they need to do to help themselves.

We also suffered at the end of the project trying to secure commitments from the Desktop Services Team and Estates & Buildings. We managed to get past the initial problem of getting the Archibus client built, but then found ourselves in the unenviable position of being unable to get a sign-off to go live from Estates. This, I feel, was something we should not have experienced; it undermines us as a department in the eyes of our colleagues and partners, and I think we should be able to avoid this kind of damage.

This may all sound negative, but it should be emphasised that the project has delivered on its key deliverables, and we should reflect on that in this report and recognise that we have done something very large and complex. Above all, we should be pleased to have reached this point.

 

 

Key Learning Points:

This project has highlighted a number of learning points for all involved, aside from the 'big lessons' on the new technology put in place and the new procedures and processes that have evolved from the development and build work carried out by the technical teams. These are:

  • An understanding that not all deliverables can be 'must haves' unless there is an acceptance that some deliverables may be dropped from scope when development effort starts to take longer. There should be greater contingency for additional effort if deliverables cannot easily be graded on a sliding scale of necessity, especially when working with new products and technologies.
  • The difficulty of planning around internal or external partners over whom we have no direct control or influence. On this project this included Dell, who supplied the servers but whose delivery was delayed twice; and, internally, Desktop Services, who were responsible for the upgrade of the Archibus client, and Estates & Buildings, whose involvement was needed for user testing and sign-off. With more knowledge of workloads and leave in both these areas, these factors could have been taken into consideration during planning.
  • Similarly, the virtualised client (Archibus) caught staff in IS Applications off guard, as no-one at first had experience of how it was established or how it could be modified or manipulated. Desktop Services have advised that this approach is becoming more popular, so this experience has already been shared with colleagues in Project Services.
  • Regular reviews of the project risk register, and recording of these reviews.
  • Maintaining a rolling Lessons Learned log during the project. Especially important during 'big' projects.
  • Finally, it was emphasised that even a top priority project, with fully booked resources, can encounter delays because staff are understandably required to attend to production issues.

Outstanding issues:

Several actions (that do not fall under the remit of the project) have emanated from the patching policy and need to be followed up. These are as follows:

  1. Incorporate new database servers into current patching schedule for Linux servers
  2. Set up a Patch Review Group and diarise meetings in line with Oracle releases
  3. Establish a process to assess each patch and whether it is applicable

Project Info

Project
Implement New Database Tier
Code
ITS100
Programme
ISG - IS Applications Infrastructure (INF)
Project Manager
David Watters
Project Sponsor
David Smyth
Current Stage
Close
Status
Closed
Start Date
16-Aug-2012
Planning Date
n/a
Delivery Date
n/a
Close Date
13-Sep-2013
Category
Compliance

Documentation

Close