TL;DR
Are there edge cases where its behavior is incorrect?
Yes, a couple:
- The two are only equivalent for timestamps from 1970 onward; for 1969 and earlier they give different results.
- The result of the former depends on the time zone, while the result of the latter does not, which makes a difference in some cases.
The 1970 limit
Your current version, the one that sets the seconds and nanoseconds to 0, rounds down (toward the beginning of time). The optimized version with division and multiplication rounds toward zero. In this case "zero" is the epoch, the first moment of January 1, 1970 in UTC.
long exampleTimestamp = Instant.parse("1969-12-15T21:34:56.789Z").toEpochMilli();
long with0Seconds = Instant.ofEpochMilli(exampleTimestamp)
        .atZone(ZoneId.systemDefault())
        .withNano(0)
        .withSecond(0)
        .toInstant()
        .toEpochMilli();
System.out.println("Set seconds to 0: " + with0Seconds);
long dividedAndMultiplied = exampleTimestamp / 1000 / 60 * 1000 * 60;
System.out.println("Divided and multiplied: " + dividedAndMultiplied);
The output from this snippet is (in my time zone and most time zones):
Set seconds to 0: -1391160000
Divided and multiplied: -1391100000
There's a difference of 60,000 milliseconds (a whole minute) between the two outputs.
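If you want the optimized version to round down the way the set-to-zero version does, even for pre-1970 timestamps, Math.floorDiv rounds toward negative infinity rather than toward zero and closes this particular gap. A minimal sketch; the method name toMinuteResolutionFloor is mine, and note that this still truncates in UTC, so the time zone dependency remains:

```java
import java.time.Instant;

public class FloorDivDemo {
    private static final long MILLIS_PER_MINUTE = 60_000L;

    // Math.floorDiv rounds toward negative infinity, so pre-1970 timestamps
    // are rounded down just like the withSecond(0)/withNano(0) version
    // (assuming a time zone at a whole-minute offset from UTC).
    public static long toMinuteResolutionFloor(long timestamp) {
        return Math.floorDiv(timestamp, MILLIS_PER_MINUTE) * MILLIS_PER_MINUTE;
    }

    public static void main(String[] args) {
        long exampleTimestamp = Instant.parse("1969-12-15T21:34:56.789Z").toEpochMilli();
        // Agrees with the set-to-zero version from above:
        System.out.println(toMinuteResolutionFloor(exampleTimestamp)); // prints -1391160000
    }
}
```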
Time zone dependency
You may also question the very definition of removing the seconds: the seconds are not always the same in all time zones. For example:
ZoneId zone = ZoneId.of("Asia/Kuala_Lumpur");
ZonedDateTime exampleTime = ZonedDateTime.of(1905, 5, 15, 10, 34, 56, 789_000_000, zone);
// Truncation in time zone
long longTzTimestamp = exampleTime.truncatedTo(ChronoUnit.MINUTES)
        .toInstant()
        .toEpochMilli();
System.out.println("After truncation in " + zone + ": " + longTzTimestamp);
// Truncation in UTC
long longUtcTimestamp = exampleTime.toInstant()
        .truncatedTo(ChronoUnit.MINUTES)
        .toEpochMilli();
System.out.println("After truncation in UTC: " + longUtcTimestamp);
After truncation in Asia/Kuala_Lumpur: -2039631685000
After truncation in UTC: -2039631660000
There's a difference of 25 seconds (25,000 milliseconds) between the two timestamps. The only thing I did differently was the order of the two operations: truncating to whole minutes and converting to UTC. How can the results differ? Until June 1, 1905, Malaysia was at offset +06:55:25 from GMT. So when the second of minute was 56 in Malaysia, it was 31 at GMT. Therefore the two operations do not remove the same number of seconds.
Again, I don't think this is an issue for timestamps after 1973. Nowadays time zones tend to use offsets from UTC that are a whole number of minutes.
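You need not take my word for that claim: you can scan the tzdata installed with your JDK and check every zone's current offset. A sketch (the class and method names are mine):

```java
import java.time.Instant;
import java.time.ZoneId;

public class WholeMinuteCheck {
    // True if any installed zone has an offset at the given instant
    // that is not a whole number of minutes from UTC.
    public static boolean anyFractionalMinuteOffset(Instant when) {
        return ZoneId.getAvailableZoneIds().stream()
                .map(ZoneId::of)
                .anyMatch(z -> z.getRules().getOffset(when).getTotalSeconds() % 60 != 0);
    }

    public static void main(String[] args) {
        // With current tzdata this prints false: all zones are at whole-minute offsets today.
        System.out.println(anyFractionalMinuteOffset(Instant.now()));
    }
}
```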
Edit:
(Has this ever happened after 1970?)
A little. For example, Liberia was at offset -0:44:30 until January 6, 1972. It's anyone's guess what the politicians of some country will decide next year or the year after.
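The Liberia case is visible in the tzdata shipped with the JDK, so you can look it up yourself. A sketch; I picked dates safely on either side of the change, since the exact transition date (and whether the offset includes the 30 seconds) depends on your tzdata version:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class MonroviaOffset {
    // Looks up the UTC offset of Africa/Monrovia at a given instant.
    public static ZoneOffset offsetOn(String isoInstant) {
        return ZoneId.of("Africa/Monrovia")
                .getRules()
                .getOffset(Instant.parse(isoInstant));
    }

    public static void main(String[] args) {
        System.out.println(offsetOn("1971-01-01T00:00:00Z")); // before the change, e.g. -00:44:30
        System.out.println(offsetOn("1973-01-01T00:00:00Z")); // after the change: Z (UTC)
    }
}
```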
Checking for the edge cases
One way to check whether you are hitting one of the cases mentioned in the foregoing is to use assert:
public static long toMinuteResolution(long timestamp) {
    assert timestamp >= 0 : "This optimized method doesn’t work for negative timestamps.";
    assert Duration.ofSeconds(Instant.ofEpochMilli(timestamp).atZone(ZoneId.systemDefault()).getOffset().getTotalSeconds())
                    .toSecondsPart() == 0
            : "This optimized method doesn’t work for an offset of "
                    + Instant.ofEpochMilli(timestamp).atZone(ZoneId.systemDefault()).getOffset();
    return TimeUnit.MINUTES.toMillis(TimeUnit.MILLISECONDS.toMinutes(timestamp));
}
Since you wanted to optimize, I expect these checks to be too expensive for your production environment. You know better than I whether enabling them in your test environments will give you some assurance.
Further suggestions
As Andreas said in the comments, the truncatedTo method makes the non-optimized version a bit simpler and clearer:
public static long toMinuteResolution(long timestamp) {
    return Instant.ofEpochMilli(timestamp)
            .atZone(ZoneId.systemDefault())
            .truncatedTo(ChronoUnit.MINUTES)
            .toInstant()
            .toEpochMilli();
}
You can use truncatedTo directly on the Instant too if you want, as in Andreas’ comment.
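For completeness, a sketch of that variant. Truncating the Instant works in UTC, so it is time-zone-independent like the division version, while it rounds down (also before 1970) like the set-to-zero version:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class InstantTruncation {
    // Truncates to whole minutes in UTC, directly on the Instant;
    // rounds toward negative infinity, also for pre-1970 instants.
    public static long toMinuteResolution(long timestamp) {
        return Instant.ofEpochMilli(timestamp)
                .truncatedTo(ChronoUnit.MINUTES)
                .toEpochMilli();
    }

    public static void main(String[] args) {
        System.out.println(toMinuteResolution(1_234_567_890_123L)); // prints 1234567860000
    }
}
```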
If you want to go with your optimization anyway, for slightly better readability my optimized version would be:
private static final long MILLIS_PER_MINUTE = TimeUnit.MINUTES.toMillis(1);
public static long toMinuteResolution(long timestamp) {
    return timestamp / MILLIS_PER_MINUTE * MILLIS_PER_MINUTE;
}
I might even try the following and see whether it is efficient enough. I expect no noticeable difference.
public static long toMinuteResolution(long timestamp) {
    return TimeUnit.MINUTES.toMillis(TimeUnit.MILLISECONDS.toMinutes(timestamp));
}
Link
Time Changes in Monrovia Over the Years