Round Trip Covering Note
Authors: Keith Halsted, Matt Lane @ Mott MacDonald
Introduction
This document was written by Mott MacDonald for WinCan as an introduction to the significant changes in GDMS compared with DDMS, and was never envisaged to be distributed to contractors. However, the information contained in this short read is critical to contractors moving from DDMS to GDMS, and it is presented here as received.
Format Notes
Despite significant changes in the GDMS data structure, we have tried to minimise changes to the SHP/DBF data round-trip format.
The concept of a “scheme” does not exist in GDMS. Data is now downloaded and uploaded as 1 or more complete asset systems. Because connectivity was only possible within a scheme, this aspect hasn’t really changed, but there is now much more flexibility in which assets can be downloaded together, and there is no need to merge schemes, move assets between schemes, etc. Once uploaded, GDMS will still associate the assets that were uploaded together with the same “Activity Set”, but this does not in any way restrict future downloads or uploads.
A necessary change is to have a new file for “components”, as there can be more than 1 of these per continuous asset. This file is intended to work in a similar way to observations in continuous assets, in terms of referencing to the asset and start/end chainages. It mostly contains inventory fields moved out of continuous assets. There is more info on components below.
We have taken the opportunity to rename some fields to improve consistency and make them more meaningful (as far as possible within 10 characters).
There are a small number of new and deleted fields. Aside from necessary cross-referencing (e.g. components) a lot of the new fields are optional or system populated.
In GDMS the concept of an “Activity” has been added, which includes fields such as date, name of inspector, method, etc. As the data round-trip format only supports 1 activity per asset, these fields have been left within the asset data. There are some other changes to how this data needs to be populated, which I’ve separated out below (also covered in the spreadsheet).
When data is uploaded, GDMS will continue to work on the basis that not all fields may be present, and that the files may contain additional fields that will be ignored. However, we will use the presence of a “SUPP_SCH” field as a marker that the data is likely to be in the old format, and this will cause immediate rejection of the data. We have to do this otherwise fields with an old name would not be recognised and would be dropped.
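The rejection rule above can be sketched as a simple field-name check. This is an illustration of the described behaviour, not the actual GDMS implementation; the function name is hypothetical.

```python
def check_not_old_format(field_names):
    """Reject uploads that still carry the DDMS-era 'SUPP_SCH' field.

    GDMS treats the presence of SUPP_SCH as a marker that the data is
    likely in the old format; renamed fields would otherwise not be
    recognised and would be silently dropped.
    """
    if "SUPP_SCH" in field_names:
        raise ValueError(
            "Upload rejected: 'SUPP_SCH' field present - data appears "
            "to be in the old DDMS round-trip format."
        )

# Unknown extra fields are simply ignored, so this passes without error:
check_not_old_format(["ASSET_REF", "SUPP_REF", "MY_EXTRA_FIELD"])
```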
When data is downloaded, all fields in the spec will be included (as at present). In addition, we may include empty template files, e.g. if there are no region assets or observations.
Attached files handling is basically unchanged, but you can also now attach files to components. The attached files field in Observations.dbf is now called “ATT_DOCS” for consistency.
There are some changes to the additional metadata files that accompany downloaded data, but these remain for info only, and are not part of the upload.
During the upload, all of the data that is uploaded becomes the current data for those assets. Just as with HADDMS, observations etc for round-tripped assets must be included as downloaded.
As with HADDMS, GDMS will score and grade any assets that have at least 1 observation, overwriting any provided scores and grades for those assets.
Assets can be archived by not including them in the upload. They will be archived by GDMS provided they were in the same asset system as at least 1 asset that is reuploaded. However, assets should only be removed from the downloaded data if they do not exist – don’t remove them just because you’ve changed connectivity so they are now in a different asset system, because that would cause them to be archived. Entire asset systems can be archived on GDMS, but not as part of data round-tripping.
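The archiving rule can be illustrated as follows: an existing asset is archived when it is omitted from the upload but shares an asset system with at least one asset that was re-uploaded. The data shapes and function name here are assumptions for the sketch.

```python
def assets_to_archive(existing, uploaded_refs):
    """existing: dict mapping asset ref -> system id (current GDMS state).
    uploaded_refs: set of asset refs present in the upload.
    Returns the refs GDMS would archive."""
    # Systems "touched" by the upload: those containing a re-uploaded asset.
    touched_systems = {existing[r] for r in uploaded_refs if r in existing}
    return {
        ref for ref, system in existing.items()
        if ref not in uploaded_refs and system in touched_systems
    }

existing = {"A1": "S1", "A2": "S1", "A3": "S2"}
# A2 is omitted but its system S1 was touched, so it would be archived;
# A3's system S2 was not touched at all, so A3 is left alone.
print(assets_to_archive(existing, {"A1"}))  # {'A2'}
```

This is why connectivity changes alone are not a reason to drop an asset from the upload: omitting it while its old system is re-uploaded would archive it.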
The following sections mention some of the most prominent changes to field names, but see the spreadsheet for the full list.
ASSET_REF / SUPP_REF
We have changed “PIPE_REFER” to “ASSET_REF” for consistency across all 3 asset classes, and to reflect that not all continuous assets are pipes.
Data is now downloaded as 1 or more complete asset systems, and therefore old scheme boundaries are removed. This means that more than 1 asset in the data may have the same SUPP_REF.
New assets added to the data will continue to only have a “SUPP_REF”. For all new assets in a dataset the SUPP_REF must be unique within those new assets. A new asset’s SUPP_REF may, however, be allowed to match that of another asset that already has an ASSET_REF.
The more general rule is that each asset is uniquely identified by a combination of its ASSET_REF (which may be blank) and its SUPP_REF. As with HADDMS, that combination must be used when recording connectivity between assets, and also for components and observations.
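As a minimal sketch of the identity rule, an asset's key is the pair (ASSET_REF, SUPP_REF), with ASSET_REF possibly blank for new assets. Records are shown as plain dicts here; in practice they come from the DBF files.

```python
def asset_key(record):
    """Unique identity of an asset: (ASSET_REF, SUPP_REF).
    ASSET_REF may be blank for assets new to GDMS."""
    return (record.get("ASSET_REF", ""), record["SUPP_REF"])

def check_new_supp_refs(records):
    """New assets (blank ASSET_REF) must have a SUPP_REF that is
    unique among the new assets in the dataset."""
    new_refs = [r["SUPP_REF"] for r in records if not r.get("ASSET_REF")]
    return len(new_refs) == len(set(new_refs))

records = [
    {"ASSET_REF": "GDMS001", "SUPP_REF": "MH1"},  # existing asset
    {"ASSET_REF": "", "SUPP_REF": "MH1"},          # new asset; same SUPP_REF is allowed
]
assert len({asset_key(r) for r in records}) == 2  # two distinct assets
assert check_new_supp_refs(records)
```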
Sub-Catchments / Systems
Assets are now linked by GDMS to a sub-catchment, which is defined as a linear segment of the road. Data will be downloaded with a sub-catchment ID for information, but this will be ignored on upload as GDMS will re-link it. You should not change the contents of this field, as we would just ignore any changes.
Data must be uploaded as 1 or more complete asset systems (you can’t connect an asset to an asset that isn’t in the data). GDMS will recalculate these systems when data is uploaded and assign an arbitrary System ID to all of the assets that are in the same system as each other. Again this field will be populated when data is downloaded, but will be ignored on upload, so you should not change its contents. As with HADDMS, you are free to change the connectivity between assets in a dataset as required, but please do not attempt to redefine this ID when you do – GDMS will just ignore it and it will change to a new ID.
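The recalculation described above amounts to finding connected components of the asset graph and giving each one a fresh, arbitrary ID. The following is an illustrative sketch of that behaviour, not the actual GDMS implementation.

```python
from collections import defaultdict

def assign_systems(assets, connections):
    """assets: iterable of asset refs; connections: iterable of (a, b) pairs.
    Returns a dict mapping each asset to an arbitrary new system ID."""
    graph = defaultdict(set)
    for a, b in connections:
        graph[a].add(b)
        graph[b].add(a)
    system_of, next_id = {}, 1
    for asset in assets:
        if asset in system_of:
            continue
        stack = [asset]            # flood-fill one connected component
        while stack:
            node = stack.pop()
            if node in system_of:
                continue
            system_of[node] = f"SYS{next_id}"
            stack.extend(graph[node])
        next_id += 1
    return system_of

print(assign_systems(["P1", "P2", "G1"], [("P1", "P2")]))
# {'P1': 'SYS1', 'P2': 'SYS1', 'G1': 'SYS2'}
```

Because the IDs are regenerated every upload, any value you write into the field would simply be discarded.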
Asset Types
Please see the spreadsheet for a list of all changes to codes, but in particular there are a few changes to asset types.
The “CF” asset type has been dropped, and combined pipe and filter drain assets (of any sort) now need to be provided as 2 (or more, if appropriate) continuous assets:
The pipe must be provided as an “FP – Filter Pipe” rather than PW etc., and the filter drain medium as FD, counterfort, soakaway trench, etc.
The 2 assets must have the same upstream and downstream point assets as each other (which can be the opposite way around) but otherwise follow all the normal rules as individual assets, e.g. own references, components etc – and most importantly can now have their own condition grades.
GDMS will detect this arrangement on upload and associate them as an “Asset Set”, but this will not occur if there is no FP between the pair of points. The concept of an “Asset Set” is not part of the data round-trip format.
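The detection rule can be sketched as grouping continuous assets by their (unordered) pair of end points and keeping only groups that contain an FP. The data shape and function name are assumptions for this illustration.

```python
def detect_asset_sets(assets):
    """assets: list of dicts with ASSET_TYPE, US_REF, DS_REF fields.
    Returns groups of assets sharing the same pair of end points,
    where the group contains at least one FP (Filter Pipe)."""
    groups = {}
    for a in assets:
        # The ends may be recorded the opposite way around, so normalise
        # the pair to be order-independent.
        key = frozenset((a["US_REF"], a["DS_REF"]))
        groups.setdefault(key, []).append(a)
    return [
        g for g in groups.values()
        if len(g) > 1 and any(a["ASSET_TYPE"] == "FP" for a in g)
    ]

pair = [
    {"ASSET_TYPE": "FP", "US_REF": "CH1", "DS_REF": "CH2"},
    {"ASSET_TYPE": "FD", "US_REF": "CH2", "DS_REF": "CH1"},  # reversed ends
]
assert len(detect_asset_sets(pair)) == 1  # associated as one Asset Set
```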
Instrumented Gully has been dropped. Now recorded as a Gully (GU), and there is a new optional Y/N “INSTRUMENT” field (which is also available for continuous and region assets).
We have also dropped all of the old duplicate codes (e.g. GY is no longer supported for Gully).
Gravity Drain only had a single letter code (A), which we have needed to change to two characters (GD) for consistency with wider NH standards. Every asset type now has one 2-letter code only.
Activities
The activity related data remains in the asset files, as the data round-trip format only supports 1 activity per asset per round-trip.
Unlike HADDMS, all previous activity related data will be downloaded blank (just like the ATT_DOCS fields). This means when the data is uploaded, it is obvious which assets have had updates and which have simply been round-tripped and are theoretically unchanged, so GDMS can then store a better quality history for each asset.
Although we are blanking out that temporal data, inventory and condition data will still be included in full.
There is a new field “ACTIV_TYPE” which will be blank on download, but is mandatory for all assets on upload. This can contain one of three values:
N = a new asset that isn’t already on GDMS. The ASSET_REF field must also be blank. Errors will be fed back if there isn’t a perfect alignment of “N” and blank ASSET_REFs.
U = an asset that is on GDMS and has been updated. The ASSET_REF field must be populated, and GDMS will check the asset actually exists, etc. We don’t intend to check if anything has actually changed, we will just take the new data as it is and say it has been updated.
RT = an asset that is on GDMS and has not been updated. As with “U”, ASSET_REF must be populated and the asset must exist. In general we will actually handle these as if they had been updated (just as HADDMS would), so whatever inventory and condition data is uploaded for them will become the current data. However, we will likely have some simple checks to detect whether the asset has actually been changed, e.g. different asset type, different condition grades or number of observations, and there should not be any new files attached to such assets.
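The ACTIV_TYPE / ASSET_REF alignment rules can be sketched as below. The record shape and error wording are illustrative only.

```python
def check_activ_type(record):
    """Return an error message if ACTIV_TYPE and ASSET_REF are
    inconsistent, or None if the record passes."""
    activ = record.get("ACTIV_TYPE", "")
    ref = record.get("ASSET_REF", "")
    if activ == "N":
        if ref:
            return "N (new) asset must have a blank ASSET_REF"
    elif activ in ("U", "RT"):
        if not ref:
            return f"{activ} asset must have a populated ASSET_REF"
    else:
        return "ACTIV_TYPE is mandatory and must be N, U or RT"
    return None

# A new asset with a blank ASSET_REF is valid:
assert check_activ_type({"ACTIV_TYPE": "N", "ASSET_REF": ""}) is None
# An update must reference an existing asset:
assert check_activ_type({"ACTIV_TYPE": "U", "ASSET_REF": ""}) is not None
```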
The “DATE_OF_SU” field has been renamed to “ACTIV_DATE” to reflect that not all activities are surveys. That field is now mandatory for any asset where the ACTIV_TYPE is “N” or “U”. The “ACTIV_TIME” field can be used to optionally augment the date with the time (we store this as a combined date+time field in GDMS).
We have removed this field for region assets, as all other activity information is recorded in the linked point assets.
For “RT” assets, the ACTIV_DATE field and all other activity fields must be left blank. During import GDMS will still create a new activity for each of these assets, and set some of the info as-of the time it is imported.
There are a few other fields in the “inspection related” category that are treated as asset data, and aren’t included in the above, e.g. CONSTRAINT, ORIGIN_DAT. In the spreadsheet I’ve changed the category of the activity related fields to clarify this.
“Maintenance” is not being treated as an activity at present, although if the last maintenance date was changed this would be an “Update”. The Method field is being used to define the physical nature of the activity (or if it is just a data update), so might be extended to achieve this in future, leaving the Activity Type to relate more to an action on the data rather than the asset itself.
Components
These only apply to continuous assets and every continuous asset must have at least 1 of them. The component(s) should cover the full length of the asset without gaps or overlaps, e.g. a 100m long asset might have components from 0-50m and 50-100m. As for observations, chainage is from the upstream end of the continuous asset.
For data migration we are only creating 1 component per continuous asset, covering its whole length.
Observation codes for changes in materials, diameter, etc remain available, but we would expect that these would become co-located with the boundaries of components that each have the appropriate data. I don’t think we’ll start throwing out errors straight away, but perhaps initially we will warn if a continuous asset contains “change” observations, but only has 1 component.
We recognise there are reasons for a difference between the geometric length of an asset as shown on the map and the measured “LENGTH”, and we’ll probably carry over the existing warning if there is a 10% difference. As with observations, we expect the component chainages to relate to the measured “LENGTH”. Because “LENGTH” is often blank or 0, we are likely to populate this from the geometric length if so, because it is now more important for this to be populated.
We haven’t yet determined how strict we will be initially on component coverage of the asset’s length, gaps, overlaps, etc. As an absolute minimum we will check there is 1 component, if not this will be an error. Next level could be that a component starts at 0, and a component ends within a tolerance of the asset’s “LENGTH”.
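A sketch of those coverage checks, at the stricter level described: at least one component, one starting at chainage 0, and one ending within a tolerance of the asset's LENGTH. The tolerance value and function name are assumptions.

```python
def check_components(components, asset_length, tol=0.5):
    """components: list of (start_chainage, end_chainage) tuples.
    Returns a list of error messages (empty if the asset passes)."""
    if not components:
        return ["asset has no components"]   # absolute minimum check
    errors = []
    if min(start for start, _ in components) != 0:
        errors.append("no component starts at chainage 0")
    if abs(max(end for _, end in components) - asset_length) > tol:
        errors.append("no component ends near the asset's LENGTH")
    return errors

# A 100m asset split into 0-50m and 50-100m components passes:
assert check_components([(0, 50), (50, 100)], 100) == []
# No components at all is always an error:
assert check_components([], 100) == ["asset has no components"]
```

Gap and overlap detection between consecutive components could be layered on top of this in the same way.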
Components have a reference (COMP_REF) but this only needs to be unique within an asset. For migration, we have simply called them all “C1”.
Observations in continuous assets now also need to be linked to a component. The chainage of that component needs to be compatible with the chainage of the observation. Observation chainages are still measured relative to the whole asset, there is only ever one “0” location per asset. Again, we haven’t currently decided how strictly to validate this, or whether an observation could overlap component boundaries. As a minimum it would be fair to say either the start or end chainage of the observation must fall within or on the component start/end, and without any tolerance to it being entirely outside (an observation at 50.1-51.0m on a component that ends at 50.0 should not be permitted).
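The minimum compatibility rule stated above can be expressed in a few lines. This is a sketch of the rule as described, not a confirmed validation routine.

```python
def observation_fits_component(obs_start, obs_end, comp_start, comp_end):
    """Either the start or end chainage of the observation must fall
    within or on the component's start/end; an observation entirely
    outside the component is not permitted, with no tolerance."""
    return (comp_start <= obs_start <= comp_end
            or comp_start <= obs_end <= comp_end)

# An observation at 49.5-50.5m overlaps a 0-50m component boundary, so it fits:
assert observation_fits_component(49.5, 50.5, 0.0, 50.0)
# An observation at 50.1-51.0m is entirely beyond the component, so it does not:
assert not observation_fits_component(50.1, 51.0, 0.0, 50.0)
```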
Data Checking
We have not yet defined all of the checks that GDMS will carry out but many of them will be the same as HADDMS. In general we don’t want to loosen or tighten existing rules, bearing in mind a lot of data that is uploaded will be data we have migrated from HADDMS, but some changes are required simply due to the new data structure.
In the spreadsheet, I’ve noted many cross-field data checks in the “Required for import” column.
Any checks related to schemes are obviously removed, with the exception that we will reject any data containing a “SUPP_SCH” field.
There will be new checks related to activities and components, which I’ve described above.
There will be a restriction on how far assets may be from the network. Provided at least 1 asset in each asset system is within 500m of the network, this should be fine – this means we can still take assets that are a long distance from the network, as long as they are actually connected to an asset that is relatively near the road. It is possible that some existing data we will migrate into GDMS will not meet this rule, but we need to migrate all of the existing data.
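The proximity rule reduces to a per-system check: the system is acceptable if at least one of its assets lies within 500m of the network. The sketch below uses precomputed distances; the real check would be a spatial query against the network geometry.

```python
def system_near_network(distances_to_network_m, limit_m=500):
    """distances_to_network_m: distance from each asset in one asset
    system to the nearest point on the road network, in metres."""
    return any(d <= limit_m for d in distances_to_network_m)

# A pond 2km from the road is fine if its system also contains a
# gully 30m from the road:
assert system_near_network([2000, 30])
# A system whose nearest asset is 800m away would fail the check:
assert not system_near_network([800, 950])
```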
We may introduce some new warning-level checks, particularly where we have found common issues when migrating the existing data.