Description
The software 128-bit division algorithm added in #69204 (@tannergooding @bartonjs) does not work correctly on big-endian platforms, leading to failed assertions when running the test suite on s390x.
Reproduction Steps
Run the System.Tests.Int128Tests_GenericMath.op_DivisionTest test on linux-s390x.
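Alternatively, the failure can be reproduced without the test suite. The following is a minimal illustrative sketch (the values are not taken from the failing test); it assumes the software division path is taken whenever the divisor is wider than 64 bits, so that a single hardware divide cannot be used:

using System;

class Repro
{
    static void Main()
    {
        // divisor = 2^64 is wider than 64 bits, so the division falls
        // into the software 128-bit division algorithm.
        Int128 dividend = new Int128(upper: 7, lower: 42); // 7 * 2^64 + 42
        Int128 divisor = new Int128(upper: 1, lower: 0);   // 2^64

        // Expected output: 7. On linux-s390x the software algorithm
        // produces an incorrect result instead.
        Console.WriteLine(dividend / divisor);
    }
}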
Expected behavior
Test passes.
Actual behavior
Unhandled Exception:
System.Diagnostics.DebugProvider+DebugAssertException: at System.Diagnostics.DebugProvider.Fail(String message, String detailMessage) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Diagnostics/DebugProvider.cs:line 22
at System.Diagnostics.Debug.Fail(String message, String detailMessage) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Debug.cs:line 133
at System.Diagnostics.Debug.Assert(Boolean condition, String message, String detailMessage) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Debug.cs:line 97
at System.Diagnostics.Debug.Assert(Boolean condition) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Debug.cs:line 82
at System.Number.UInt128ToDecStr(UInt128 value) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Number.Formatting.cs:line 2236
at System.Number.UInt128ToDecStr(UInt128 value, Int32 digits) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Number.Formatting.cs:line 2244
at System.Number.FormatInt128(Int128 value, String format, IFormatProvider provider) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Number.Formatting.cs:line 1179
at System.Int128.ToString(String format, IFormatProvider provider) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Int128.cs:line 111
at System.Convert.ToString(Object value, IFormatProvider provider) in /home/uweigand/runtime/src/libraries/System.Private.CoreLib/src/System/Convert.cs:line 1941
at Xunit.Sdk.ArgumentFormatter.Format(Object value, Int32 depth, Nullable`1& pointerPostion, Nullable`1 errorIndex, Boolean isDictionaryEntry) in /_/src/xunit.assert/Asserts/Sdk/ArgumentFormatter.cs:line 187
at Xunit.Sdk.ArgumentFormatter.Format(Object value, Nullable`1 errorIndex) in /_/src/xunit.assert/Asserts/Sdk/ArgumentFormatter.cs:line 79
at Xunit.Sdk.AssertActualExpectedException.ConvertToString(Object value) in /_/src/xunit.assert/Asserts/Sdk/Exceptions/AssertActualExpectedException.cs:line 176
at Xunit.Sdk.AssertActualExpectedException..ctor(Object expected, Object actual, String userMessage, String expectedTitle, String actualTitle, Exception innerException) in /_/src/xunit.assert/Asserts/Sdk/Exceptions/AssertActualExpectedException.cs:line 79
at Xunit.Sdk.AssertActualExpectedException..ctor(Object expected, Object actual, String userMessage, String expectedTitle, String actualTitle) in /_/src/xunit.assert/Asserts/Sdk/Exceptions/AssertActualExpectedException.cs:line 48
at Xunit.Sdk.EqualException..ctor(Object expected, Object actual) in /_/src/xunit.assert/Asserts/Sdk/Exceptions/EqualException.cs:line 47
at Xunit.Assert.Equal[Int128](Int128 expected, Int128 actual, IEqualityComparer`1 comparer) in /_/src/xunit.assert/Asserts/EqualityAsserts.cs:line 101
at Xunit.Assert.Equal[Int128](Int128 expected, Int128 actual) in /_/src/xunit.assert/Asserts/EqualityAsserts.cs:line 63
at System.Tests.Int128Tests_GenericMath.op_DivisionTest() in /home/uweigand/runtime/src/libraries/System.Runtime/tests/System/Int128Tests.GenericMath.cs:line 389
Regression?
A newly added test case now causes the test suite to fail.
Known Workarounds
n/a
Configuration
Current runtime sources, on linux-s390x.
Other information
From an initial investigation, the problem appears to be located in DivideSlow (src/libraries/System.Private.CoreLib/src/System/UInt128.cs:1038):
// This is the same algorithm currently used by BigInteger so
// we need to get a Span<uint> containing the value represented
// in the least number of elements possible.
uint* pLeft = stackalloc uint[Size / sizeof(uint)];
quotient.WriteLittleEndianUnsafe(new Span<byte>(pLeft, Size));
Span<uint> left = new Span<uint>(pLeft, (Size / sizeof(uint)) - (BitOperations.LeadingZeroCount(quotient) / 32));
uint* pRight = stackalloc uint[Size / sizeof(uint)];
divisor.WriteLittleEndianUnsafe(new Span<byte>(pRight, Size));
Span<uint> right = new Span<uint>(pRight, (Size / sizeof(uint)) - (BitOperations.LeadingZeroCount(divisor) / 32));
WriteLittleEndianUnsafe writes out the 128-bit integer as a little-endian byte sequence, and that byte sequence is then reinterpreted as an array of uint. This is only correct on little-endian systems, where the in-memory layout of each uint matches the little-endian byte order; on a big-endian system such as s390x, every uint element ends up byte-swapped, so the division algorithm operates on corrupted limbs.
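One byte-order-agnostic way to build those spans would be to extract the 32-bit limbs arithmetically instead of reinterpreting raw bytes. This is only a sketch of the idea, not necessarily the fix applied upstream, and WriteUInt32LimbsLittleEndian is a hypothetical helper name:

// Hypothetical helper: stores the 32-bit limbs of the value in
// least-significant-first order. Shifting and truncating are
// arithmetic operations, so the result is independent of host
// byte order.
static void WriteUInt32LimbsLittleEndian(UInt128 value, Span<uint> destination)
{
    for (int i = 0; i < destination.Length; i++)
    {
        destination[i] = (uint)value; // low 32 bits of the remaining value
        value >>= 32;
    }
}

// The spans in DivideSlow could then be filled like this, with the
// length calculation unchanged:
uint* pLeft = stackalloc uint[Size / sizeof(uint)];
WriteUInt32LimbsLittleEndian(quotient, new Span<uint>(pLeft, Size / sizeof(uint)));

Element 0 then holds the least significant 32 bits on every platform, which is the limb order the BigInteger-style algorithm expects.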