Duplicated DateTime values within ID are adjusted forward (recursively) by one second until no duplicates are present. This is considered a reasonable way of avoiding the nonsensical problem of duplicate times.
```r
adjust.duplicateTimes(time, id)
```
Argument | Description
---|---
time | vector of DateTime values
id | vector of ID values, matching DateTimes that are assumed sorted within ID
The adjusted DateTime vector is returned.
This function is used to resolve duplicate time records in animal track data by adjusting them, rather than removing the records completely.
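For illustration only, the documented behaviour can be sketched in a few lines of base R. This is not the package's actual implementation, and the helper name `adjust_duplicates_sketch` is made up here: duplicated times within each ID are pushed forward one second, and the pass repeats until no within-ID duplicates remain.

```r
## Minimal sketch of the documented behaviour (not the package's code):
## repeatedly push duplicated times within each ID forward by one second
## until no within-ID duplicates remain. Assumes POSIXct times, so + 1
## adds one second.
adjust_duplicates_sketch <- function(time, id) {
  repeat {
    dups <- unlist(lapply(split(seq_along(time), id),
                          function(g) g[duplicated(time[g])]))
    if (length(dups) == 0) break
    time[dups] <- time[dups] + 1
  }
  time
}
```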
I have no idea what goes on at CLS when they output data that are either not ordered by time or have duplicates. If this problem exists in your data it's probably worth finding out why.
```r
## DateTimes with a duplicate within ID
tms <- Sys.time() + c(1:6, 6, 7:10) * 10
id <- rep("a", length(tms))
range(diff(tms))
#> Time differences in secs
#> [1] 0 10

## duplicate record is now moved one second forward
tms.adj <- adjust.duplicateTimes(tms, id)
range(diff(tms.adj))
#> Time differences in secs
#> [1] 1 10
```
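As a further illustration of the within-ID scope (inferred from the documented behaviour; no printed output is claimed here), identical times that belong to different IDs should be left alone:

```r
## identical times in *different* IDs should be left untouched, since
## adjustment is applied within each ID separately (illustrative only)
tms2 <- Sys.time() + c(0, 0)
id2 <- c("a", "b")
adjust.duplicateTimes(tms2, id2)
```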