Creating *Valid* Dummy Data

As this project is ‘piggybacking’ off another project being carried out at the university, we will be taking most of our data from another system that is currently being implemented. It has therefore been necessary to create some dummy datasets so that I can continue with this project while that system is put in place. Using data taken from various sources, I have been able to create dummy data that resembles what will eventually be made available through KIS and the XCRI-CAP feed.

At this stage, the data I am dealing with focuses on five awards offered at the university, rather than trying to encompass every award currently on offer. To get a grasp of how this information is presented across different departments and areas of the university, I have selected a range of awards rather than focusing on a specific school or college. Hopefully this will mean there won’t be too many surprises when all of the ‘real’ data becomes available.
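To give a feel for what this looks like in practice, here is a minimal sketch of the kind of dummy dataset I mean. The award codes, titles, and field names here are entirely made up for illustration and are not taken from the KIS or XCRI-CAP schemas:

```python
import csv
import io

# Hypothetical award records; codes, titles, and field names are invented
# for illustration, not taken from any real university system.
AWARDS = [
    {"code": "CS101", "title": "BSc Computer Science", "school": "School of Computing"},
    {"code": "HI201", "title": "BA History", "school": "School of Humanities"},
    {"code": "NU301", "title": "BSc Nursing", "school": "School of Health"},
    {"code": "LA401", "title": "LLB Law", "school": "School of Law"},
    {"code": "EN501", "title": "BEng Mechanical Engineering", "school": "School of Engineering"},
]

def awards_to_csv(awards):
    """Serialise the dummy awards to CSV so other tools can consume them."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["code", "title", "school"])
    writer.writeheader()
    writer.writerows(awards)
    return buf.getvalue()

print(awards_to_csv(AWARDS))
```

Keeping the dummy data in a plain, tabular form like this makes it easy to swap in the real feeds later without changing the code that consumes it.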

As part of the process of validating the data I have collected, I’ve been using Craig Hawker’s XCRI-CAP 1.2 validator, which has proved invaluable in ensuring that the ‘test’ XCRI feed I’ve been working with is actually valid, again reducing the number of surprises I should get when the university’s full XCRI feed becomes available.
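Alongside the validator, a quick way to sanity-check a test feed is to parse it and pull out the course titles. The sketch below does this with Python’s standard-library XML parser; note that the namespace URI is my assumption for XCRI-CAP 1.2 catalogs (and real feeds typically mix in other namespaces such as Dublin Core), so check it against the spec and the actual feed before relying on it:

```python
import xml.etree.ElementTree as ET

# Namespace URI is my assumption for the XCRI-CAP 1.2 catalog; verify it
# against the specification and the real feed before relying on it.
XCRI_NS = "http://xcri.org/profiles/1.2/catalog"

def course_titles(xml_text):
    """Pull the <title> of every <course> out of an XCRI-CAP catalog."""
    root = ET.fromstring(xml_text)
    titles = []
    for course in root.iter(f"{{{XCRI_NS}}}course"):
        title = course.find(f"{{{XCRI_NS}}}title")
        if title is not None and title.text:
            titles.append(title.text.strip())
    return titles

# A tiny hand-rolled catalog standing in for the real feed.
SAMPLE = f"""<catalog xmlns="{XCRI_NS}">
  <course><title>BSc Computer Science</title></course>
  <course><title>BA History</title></course>
</catalog>"""

print(course_titles(SAMPLE))  # → ['BSc Computer Science', 'BA History']
```

A check like this won’t replace full schema validation, but it catches obvious breakage (empty feeds, missing titles) early.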

On top of the data made available through the KIS and XCRI feeds, the implementation of a new ‘Academic Programme Management System’ means that I should be able to easily get data about the modules/units that make up each of the courses offered by the university. This, combined with the KIS and XCRI data, should be more than enough to produce services that are useful to students and present the information in meaningful ways.

Next step: the APIs to get at the data, and documentation!