diff --git a/docs/design/coreclr/botr/vectors-and-intrinsics.md b/docs/design/coreclr/botr/vectors-and-intrinsics.md index fac5a0800c9bee..2fb16e17da6037 100644 --- a/docs/design/coreclr/botr/vectors-and-intrinsics.md +++ b/docs/design/coreclr/botr/vectors-and-intrinsics.md @@ -38,148 +38,123 @@ For AOT compilation, the situation is far more complex. This is due to the follo 2. If AOT code is generated, it should be used unless there is an overriding reason to avoid using it. 3. It must be exceedingly difficult to misuse the AOT compilation tool to violate principle 1. -There are 2 different implementations of AOT compilation under development at this time. The crossgen1 model (which is currently supported on all platforms and architectures), and the crossgen2 model, which is under active development. Any developer wishing to use hardware intrinsics in the runtime or libraries should be aware of the restrictions imposed by the crossgen1 model. Crossgen2, which we expect will replace crossgen1 at some point in the future, has strictly fewer restrictions. +## Crossgen2 model of hardware intrinsic usage +There are 2 sets of instruction sets known to the compiler. +- The baseline instruction set which defaults to (Sse, Sse2), but may be adjusted via compiler option. +- The optimistic instruction set which defaults to (Sse3, Ssse3, Sse41, Sse42, Popcnt, Pclmulqdq, and Lzcnt). -## Crossgen1 model of hardware intrinsic usage +Code will be compiled using the optimistic instruction set to drive compilation, but any use of an instruction set beyond the baseline instruction set will be recorded, as will any attempt to use an instruction set beyond the optimistic set if that attempted use has a semantic effect. If the baseline instruction set includes `Avx2` then the size and characteristics of `Vector` are known. Any other decisions about ABI may also be encoded.
For instance, it is likely that the ABI of `Vector256` and `Vector512` will vary based on the presence/absence of `Avx` support. -###Code written in System.Private.CoreLib.dll -#### Crossgen implementation rules -- Any code which uses `Vector` will not be compiled AOT. (See code which throws a TypeLoadException using `IDS_EE_SIMD_NGEN_DISALLOWED`) -- Code which uses Sse and Sse2 platform hardware intrinsics is always generated as it would be at jit time. -- Code which uses Sse3, Ssse3, Sse41, Sse42, Popcnt, Pclmulqdq, and Lzcnt instruction sets will be generated, but the associated IsSupported check will be a runtime check. See `FilterNamedIntrinsicMethodAttribs` for details on how this is done. -- Code which uses other instruction sets will be generated as if the processor does not support that instruction set. (For instance, a usage of Avx2.IsSupported in CoreLib will generate native code where it unconditionally returns false, and then if and when tiered compilation occurs, the function may be rejitted and have code where the property returns true.) -- Non-platform intrinsics which require more hardware support than the minimum supported hardware capability will not take advantage of that capability. In particular the code generated for `Vector2/3/4.Dot`, and `Math.Round`, and `MathF.Round`. See `FilterNamedIntrinsicMethodAttribs` for details. MethodImplOptions.AggressiveOptimization may be used to disable precompilation compilation of this sub-par code. +- Any code which uses `Vector` will not be compiled AOT unless the size of `Vector` is known. +- Any code which passes a `Vector256` or `Vector512` as a parameter on a Linux or Mac machine will not be compiled AOT unless the support for the `Avx` instruction set is known. +- Non-platform intrinsics which require more hardware support than the optimistic supported hardware capability will not take advantage of that capability. 
MethodImplOptions.AggressiveOptimization may be used to disable compilation of this sub-par code. +- Code which takes advantage of instruction sets in the optimistic set will not be used on a machine which only supports the baseline instruction set. +- Code which attempts to use instruction sets outside of the optimistic set will generate code that will not be used on machines with support for the instruction set. #### Characteristics which result from rules -The rules here provide the following characteristics. -- Some platform specific hardware intrinsics can be used in CoreLib without encountering a startup time penalty -- Some uses of platform specific hardware intrinsics will force the compiler to be unable to AOT compile the code. However, if care is taken to only use intrinsics from the Sse, Sse2, Sse3, Ssse3, Sse41, Sse42, Popcnt, Pclmulqdq, or Lzcnt instruction sets, then the code may be AOT compiled. Preventing AOT compilation may cause a startup time penalty for important scenarios. -- Use of `Vector` causes runtime jit and startup time concerns because it is never precompiled. Current analysis indicates this is acceptable, but it is a perennial concern for applications with tight startup time requirements. -- AOT generated code which could take advantage of more advanced hardware support experiences a performance penalty until rejitted. (If a customer chooses to disable tiered compilation, then customer code may always run slowly). - -#### Code review rules for code written in System.Private.CoreLib.dll -- Any use of a platform intrinsic in the codebase MUST be wrapped with a call to the associated IsSupported property. This wrapping MUST be done within the same function that uses the hardware intrinsic, and MUST NOT be in a wrapper function unless it is one of the intrinsics that are enabled by default for crossgen compilation of System.Private.CoreLib (See list above in the implementation rules section).
-- Within a single function that uses platform intrinsics, it must behave identically regardless of whether IsSupported returns true or not. This rule is required as code inside of an IsSupported check that calls a helper function cannot assume that the helper function will itself see its use of the same IsSupported check return true. This is due to the impact of tiered compilation on code execution within the process. -- Excessive use of intrinsics may cause startup performance problems due to additional jitting, or may not achieve desired performance characteristics due to suboptimal codegen. - -ACCEPTABLE Code -```csharp -using System.Runtime.Intrinsics.X86; - -public class BitOperations -{ - public static int PopCount(uint value) - { - if (Avx2.IsSupported) - { - Some series of Avx2 instructions that performs the popcount operation. - } - else - return FallbackPath(input); - } +- Code which uses platform intrinsics within the optimistic instruction set will generate good code. +- Code which relies on platform intrinsics not within the baseline or optimistic set will cause runtime jit and startup time concerns if used on hardware which does support the instruction set. +- `Vector` code has runtime jit and startup time concerns unless the baseline is raised to include `Avx2`. - private static int FallbackPath(uint) - { - const uint c1 = 0x_55555555u; - const uint c2 = 0x_33333333u; - const uint c3 = 0x_0F0F0F0Fu; - const uint c4 = 0x_01010101u; +#### Code review rules for use of platform intrinsics +- Any use of a platform intrinsic in the codebase SHOULD be wrapped with a call to the associated IsSupported property. This wrapping may be done within the same function that uses the hardware intrinsic, but this is not required as long as the programmer can control all entrypoints to a function that uses the hardware intrinsic. 
+- If an application developer is highly concerned about startup performance, developers should avoid using intrinsics beyond Sse42, or should use Crossgen with an updated baseline instruction set support. - value -= (value >> 1) & c1; - value = (value & c2) + ((value >> 2) & c2); - value = (((value + (value >> 4)) & c3) * c4) >> 24; +### Crossgen2 adjustment to rules for System.Private.CoreLib.dll +Since System.Private.CoreLib.dll is known to be code reviewed with the code review rules as written below, it is possible to relax rule "Code which attempts to use instruction sets outside of the optimistic set will generate code that will not be used on machines with support for the instruction set." What this will do is allow the generation of non-optimal code for these situations, but through the magic of code review and analyzers, the generated logic will still work correctly. - return (int)value; - } +#### Code review and analyzer rules for code written in System.Private.CoreLib.dll +- Any use of a platform intrinsic in the codebase MUST be wrapped with a call to an associated IsSupported property. This wrapping MUST be done within the same function that uses the hardware intrinsic, OR the function which uses the platform intrinsic must have the `CompExactlyDependsOn` attribute used to indicate that this function will unconditionally call platform intrinsics from some type. +- Within a single function that uses platform intrinsics, unless marked with the `CompExactlyDependsOn` attribute it must behave identically regardless of whether IsSupported returns true or not. This allows the R2R compiler to compile with a lower set of intrinsics support, and yet expect that the behavior of the function will remain unchanged in the presence of tiered compilation. +- Excessive use of intrinsics may cause startup performance problems due to additional jitting, or may not achieve desired performance characteristics due to suboptimal codegen.
To fix this, we may, in the future, change the compilation rules to compile the methods marked with `CompExactlyDependsOn` with the appropriate platform intrinsics enabled. + +Correct use of the `IsSupported` properties and `CompExactlyDependsOn` attribute is checked by an analyzer during build of `System.Private.CoreLib`. This analyzer requires that all usage of `IsSupported` properties conform to a few specific patterns. These patterns are supported via either if statements or the ternary operator. + +The supported conditional checks are + +1. Simple if statement checking IsSupported flag surrounding usage +``` +if (PlatformIntrinsicType.IsSupported) +{ + PlatformIntrinsicType.IntrinsicMethod(); +} +``` + +2. If statement checking a platform intrinsic type which implies +that the intrinsic used is supported. + +``` +if (Avx2.X64.IsSupported) +{ + Avx2.IntrinsicMethod(); } ``` -UNACCEPTABLE code -```csharp -using System.Runtime.Intrinsics.X86; +3. Nested if statement where there is an outer condition which is an +OR'd together series of IsSupported checks for mutually exclusive +conditions and where the inner check is an else clause where some checks +are excluded from applying. -public class BitOperations +``` +if (Avx2.IsSupported || ArmBase.IsSupported) { - public static int PopCount(uint value) + if (Avx2.IsSupported) { - if (Avx2.IsSupported) - return UseAvx2(value); - else - return FallbackPath(input); + // Do something } - - private static int FallbackPath(uint) + else { - const uint c1 = 0x_55555555u; - const uint c2 = 0x_33333333u; - const uint c3 = 0x_0F0F0F0Fu; - const uint c4 = 0x_01010101u; + ArmBase.IntrinsicMethod(); + } +} +``` - value -= (value >> 1) & c1; - value = (value & c2) + ((value >> 2) & c2); - value = (((value + (value >> 4)) & c3) * c4) >> 24; +4. Within a method marked with `CompExactlyDependsOn` for a less advanced instruction set, there may be a use of an explicit IsSupported check for a more advanced cpu feature.
If so, the behavior of the overall function must remain the same regardless of whether or not the CPU feature is enabled. The analyzer will detect this usage as a warning, so that any use of IsSupported in a helper method is examined to verify that the use follows the rule of preserving exactly equivalent behavior. - return (int)value; +``` +[CompExactlyDependsOn(typeof(Sse41))] +int DoSomethingHelper() +{ +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The else clause is semantically equivalent + if (Avx2.IsSupported) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough + { + Avx2.IntrinsicThatDoesTheSameThingAsSse41IntrinsicAndSse41.Intrinsic2(); } - - private static int UseAvx2(uint value) + else { - // THIS IS A BUG!!!!! - Some series of Avx2 instructions that performs the popcount operation. - The bug here is triggered by the presence of tiered compilation and R2R. The R2R version - of this method may be compiled as if the Avx2 feature is not available, and is not reliably rejitted - at the same time as the PopCount function. - - As a special note, on the x86 and x64 platforms, this generally unsafe pattern may be used - with the Sse, Sse2, Sse3, Sssse3, Ssse41 and Sse42 instruction sets as those instruction sets - are treated specially by both crossgen1 and crossgen2 when compiling System.Private.CoreLib.dll. + Sse41.Intrinsic(); + Sse41.Intrinsic2(); } } ``` -### Code written in other assemblies (both first and third party) +- NOTE: If the helper needs to be used AND behave differently with different instruction sets enabled, correct logic requires spreading the `CompExactlyDependsOn` attribute to all callers such that no caller could be compiled expecting the wrong behavior. See the `Vector128.ShuffleUnsafe` method, and various uses.
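The caller-attribution pattern described in that NOTE can be sketched as follows. This is an illustrative reconstruction, not CoreLib source: the attribute declaration below is a stand-in for the internal `CompExactlyDependsOn` attribute (which is not public API), and the helper and caller names are hypothetical. The helper's two paths agree for in-range shuffle indices but differ for out-of-range ones, which is exactly why every caller must repeat the attributes.

```csharp
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.Arm;
using System.Runtime.Intrinsics.X86;

// Stand-in for the internal System.Private.CoreLib attribute (illustration only).
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
internal sealed class CompExactlyDependsOnAttribute : Attribute
{
    public CompExactlyDependsOnAttribute(Type intrinsicsType) { }
}

internal static class ShuffleExample
{
    // Behaves differently per instruction set for out-of-range indices:
    // Ssse3.Shuffle keys off the high bit of each index byte, while
    // VectorTableLookup zeroes any element whose index is >= 16.
    [CompExactlyDependsOn(typeof(Ssse3))]
    [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
    internal static Vector128<byte> ShuffleUnsafe(Vector128<byte> value, Vector128<byte> indices)
    {
        if (Ssse3.IsSupported)
            return Ssse3.Shuffle(value, indices);
        else
            return AdvSimd.Arm64.VectorTableLookup(value, indices);
    }

    // The caller carries the same attributes, so the R2R compiler can never
    // compile it with a baked-in assumption about which branch the helper takes.
    [CompExactlyDependsOn(typeof(Ssse3))]
    [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
    internal static Vector128<byte> ReverseBytes(Vector128<byte> value)
    {
        Vector128<byte> reversed = Vector128.Create(
            (byte)15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0);
        return ShuffleUnsafe(value, reversed);
    }
}
```

Callers that cannot carry the attributes would instead have to guard the call with an IsSupported check covering every attributed instruction set, as in the gating-property pattern shown later.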
-#### Crossgen implementation rules -- Any code which uses an intrinsic from the `System.Runtime.Intrinsics.Arm` or `System.Runtime.Intrinsics.X86` namespace will not be compiled AOT. (See code which throws a TypeLoadException using `IDS_EE_HWINTRINSIC_NGEN_DISALLOWED`) -- Any code which uses `Vector` will not be compiled AOT. (See code which throws a TypeLoadException using `IDS_EE_SIMD_NGEN_DISALLOWED`) -- Any code which uses `Vector64`, `Vector128`, `Vector256`, or `Vector512` will not be compiled AOT. (See code which throws a TypeLoadException using `IDS_EE_HWINTRINSIC_NGEN_DISALLOWED`) -- Non-platform intrinsics which require more hardware support than the minimum supported hardware capability will not take advantage of that capability. In particular the code generated for Vector2/3/4 is sub-optimal. MethodImplOptions.AggressiveOptimization may be used to disable compilation of this sub-par code. #### Characteristics which result from rules -The rules here provide the following characteristics. -- Use of platform specific hardware intrinsics causes runtime jit and startup time concerns. -- Use of `Vector` causes runtime jit and startup time concerns -- AOT generated code which could take advantage of more advanced hardware support experiences a performance penalty until rejitted. (If a customer chooses to disable tiered compilation, then customer code may always run slowly). +The behavior of the `CompExactlyDependsOn` attribute is that one or more attributes may be applied to a given method. If any of the types specified via the attribute will not have an invariant result for its associated `IsSupported` property at runtime, then the method will not be compiled or inlined into another function during R2R compilation. If no type so described will have a true result for the `IsSupported` property, then the method will not be compiled or inlined into another function during R2R compilation.
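By contrast, a helper whose intrinsic and fallback paths produce identical results for every input needs no `CompExactlyDependsOn` at all; this is the shape the plain IsSupported patterns are meant for. A minimal sketch (the class name is illustrative; the fallback is the standard SWAR popcount that also appears in this document's earlier examples):

```csharp
using System.Runtime.Intrinsics.X86;

internal static class BitOps
{
    // Both paths return the same value for every input, so it is safe for the
    // R2R image to contain the fallback while a rejitted body uses Popcnt.
    internal static uint PopCount(uint value)
    {
        if (Popcnt.IsSupported)
            return Popcnt.PopCount(value);

        // SWAR fallback: identical result on every platform.
        value -= (value >> 1) & 0x_55555555u;
        value = (value & 0x_33333333u) + ((value >> 2) & 0x_33333333u);
        return (((value + (value >> 4)) & 0x_0F0F0F0Fu) * 0x_01010101u) >> 24;
    }
}
```

Because the observable behavior is invariant, tiered compilation may freely mix precompiled and rejitted bodies of this method without any attribute bookkeeping.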
-#### Code review rules for use of platform intrinsics -- Any use of a platform intrinsic in the codebase SHOULD be wrapped with a call to the associated IsSupported property. This wrapping may be done within the same function that uses the hardware intrinsic, but this is not required as long as the programmer can control all entrypoints to a function that uses the hardware intrinsic. -- If an application developer is highly concerned about startup performance, developers should avoid use of all platform specific hardware intrinsics on startup paths. +5. In addition to directly using the IsSupported properties to enable/disable support for intrinsics, simple static properties written in the following style may be used to reduce code duplication. -## Crossgen2 model of hardware intrinsic usage -There are 2 sets of instruction sets known to the compiler. -- The baseline instruction set which defaults to (Sse, Sse2), but may be adjusted via compiler option. -- The optimistic instruction set which defaults to (Sse3, Ssse3, Sse41, Sse42, Popcnt, Pclmulqdq, and Lzcnt). - -Code will be compiled using the optimistic instruction set to drive compilation, but any use of an instruction set beyond the baseline instruction set will be recorded, as will any attempt to use an instruction set beyond the optimistic set if that attempted use has a semantic effect. If the baseline instruction set includes `Avx2` then the size and characteristics of of `Vector` is known. Any other decisions about ABI may also be encoded. For instance, it is likely that the ABI of `Vector256` and `Vector512` will vary based on the presence/absence of `Avx` support. - -- Any code which uses `Vector` will not be compiled AOT unless the size of `Vector` is known. -- Any code which passes a `Vector256` or `Vector512` as a parameter on a Linux or Mac machine will not be compiled AOT unless the support for the `Avx` instruction set is known. 
-- Non-platform intrinsics which require more hardware support than the optimistic supported hardware capability will not take advantage of that capability. MethodImplOptions.AggressiveOptimization may be used to disable compilation of this sub-par code. -- Code which takes advantage of instructions sets in the optimistic set will not be used on a machine which only supports the baseline instruction set. -- Code which attempts to use instruction sets outside of the optimistic set will generate code that will not be used on machines with support for the instruction set. - -#### Characteristics which result from rules -- Code which uses platform intrinsics within the optimistic instruction set will generate good code. -- Code which relies on platform intrinsics not within the baseline or optimistic set will cause runtime jit and startup time concerns if used on hardware which does support the instruction set. -- `Vector` code has runtime jit and startup time concerns unless the baseline is raised to include `Avx2`. - -#### Code review rules for use of platform intrinsics -- Any use of a platform intrinsic in the codebase SHOULD be wrapped with a call to the associated IsSupported property. This wrapping may be done within the same function that uses the hardware intrinsic, but this is not required as long as the programmer can control all entrypoints to a function that uses the hardware intrinsic. -- If an application developer is highly concerned about startup performance, developers should avoid use intrinsics beyond Sse42, or should use Crossgen with an updated baseline instruction set support. 
+``` +static bool IsVectorizationSupported => Avx2.IsSupported || PackedSimd.IsSupported; -### Crossgen2 adjustment to rules for System.Private.CoreLib.dll -Since System.Private.CoreLib.dll is known to be code reviewed with the code review rules as written above for crossgen1 with System.Private.CoreLib.dll, it is possible to relax rule "Code which attempts to use instruction sets outside of the optimistic set will generate code that will not be used on machines with support for the instruction set." What this will do is allow the generation of non-optimal code for these situations, but through the magic of code review, the generated logic will still work correctly. +public void SomePublicApi() +{ + if (IsVectorizationSupported) + SomeVectorizationHelper(); + else + { + // Non-Vectorized implementation + } +} +[CompExactlyDependsOn(typeof(Avx2))] +[CompExactlyDependsOn(typeof(PackedSimd))] +private void SomeVectorizationHelper() +{ +} +``` # Mechanisms in the JIT to generate correct code to handle varied instruction set support diff --git a/src/coreclr/tools/Common/JitInterface/CorInfoInstructionSet.cs b/src/coreclr/tools/Common/JitInterface/CorInfoInstructionSet.cs index 73538c68172891..df6c06206bf84d 100644 --- a/src/coreclr/tools/Common/JitInterface/CorInfoInstructionSet.cs +++ b/src/coreclr/tools/Common/JitInterface/CorInfoInstructionSet.cs @@ -1334,4 +1334,369 @@ public void Set64BitInstructionSetVariantsUnconditionally(TargetArchitecture arc } } } + public static class InstructionSetParser + { + public static InstructionSet LookupPlatformIntrinsicInstructionSet(TargetArchitecture targetArch, TypeDesc intrinsicType) + { + MetadataType metadataType = intrinsicType.GetTypeDefinition() as MetadataType; + if (metadataType == null) + return InstructionSet.ILLEGAL; + + string namespaceName; + string typeName = metadataType.Name; + string nestedTypeName = null; + if (metadataType.ContainingType != null) + { + var enclosingType =
(MetadataType)metadataType.ContainingType; + namespaceName = enclosingType.Namespace; + nestedTypeName = metadataType.Name; + typeName = enclosingType.Name; + } + else + { + namespaceName = metadataType.Namespace; + } + + string platformIntrinsicNamespace; + + switch (targetArch) + { + case TargetArchitecture.ARM64: + platformIntrinsicNamespace = "System.Runtime.Intrinsics.Arm"; + break; + + case TargetArchitecture.X64: + case TargetArchitecture.X86: + platformIntrinsicNamespace = "System.Runtime.Intrinsics.X86"; + break; + + default: + return InstructionSet.ILLEGAL; + } + + if (namespaceName != platformIntrinsicNamespace) + return InstructionSet.ILLEGAL; + + switch (targetArch) + { + + case TargetArchitecture.ARM64: + switch (typeName) + { + + case "ArmBase": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_ArmBase_Arm64; } + else + { return InstructionSet.ARM64_ArmBase; } + + case "AdvSimd": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_AdvSimd_Arm64; } + else + { return InstructionSet.ARM64_AdvSimd; } + + case "Aes": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_Aes_Arm64; } + else + { return InstructionSet.ARM64_Aes; } + + case "Crc32": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_Crc32_Arm64; } + else + { return InstructionSet.ARM64_Crc32; } + + case "Dp": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_Dp_Arm64; } + else + { return InstructionSet.ARM64_Dp; } + + case "Rdm": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_Rdm_Arm64; } + else + { return InstructionSet.ARM64_Rdm; } + + case "Sha1": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_Sha1_Arm64; } + else + { return InstructionSet.ARM64_Sha1; } + + case "Sha256": + if (nestedTypeName == "Arm64") + { return InstructionSet.ARM64_Sha256_Arm64; } + else + { return InstructionSet.ARM64_Sha256; } + + } + break; + + case TargetArchitecture.X64: + switch (typeName) + { + + case 
"X86Base": + if (nestedTypeName == "X64") + { return InstructionSet.X64_X86Base_X64; } + else + { return InstructionSet.X64_X86Base; } + + case "Sse": + if (nestedTypeName == "X64") + { return InstructionSet.X64_SSE_X64; } + else + { return InstructionSet.X64_SSE; } + + case "Sse2": + if (nestedTypeName == "X64") + { return InstructionSet.X64_SSE2_X64; } + else + { return InstructionSet.X64_SSE2; } + + case "Sse3": + if (nestedTypeName == "X64") + { return InstructionSet.X64_SSE3_X64; } + else + { return InstructionSet.X64_SSE3; } + + case "Ssse3": + if (nestedTypeName == "X64") + { return InstructionSet.X64_SSSE3_X64; } + else + { return InstructionSet.X64_SSSE3; } + + case "Sse41": + if (nestedTypeName == "X64") + { return InstructionSet.X64_SSE41_X64; } + else + { return InstructionSet.X64_SSE41; } + + case "Sse42": + if (nestedTypeName == "X64") + { return InstructionSet.X64_SSE42_X64; } + else + { return InstructionSet.X64_SSE42; } + + case "Avx": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVX_X64; } + else + { return InstructionSet.X64_AVX; } + + case "Avx2": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVX2_X64; } + else + { return InstructionSet.X64_AVX2; } + + case "Aes": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AES_X64; } + else + { return InstructionSet.X64_AES; } + + case "Bmi1": + if (nestedTypeName == "X64") + { return InstructionSet.X64_BMI1_X64; } + else + { return InstructionSet.X64_BMI1; } + + case "Bmi2": + if (nestedTypeName == "X64") + { return InstructionSet.X64_BMI2_X64; } + else + { return InstructionSet.X64_BMI2; } + + case "Fma": + if (nestedTypeName == "X64") + { return InstructionSet.X64_FMA_X64; } + else + { return InstructionSet.X64_FMA; } + + case "Lzcnt": + if (nestedTypeName == "X64") + { return InstructionSet.X64_LZCNT_X64; } + else + { return InstructionSet.X64_LZCNT; } + + case "Pclmulqdq": + if (nestedTypeName == "X64") + { return InstructionSet.X64_PCLMULQDQ_X64; } + else + 
{ return InstructionSet.X64_PCLMULQDQ; } + + case "Popcnt": + if (nestedTypeName == "X64") + { return InstructionSet.X64_POPCNT_X64; } + else + { return InstructionSet.X64_POPCNT; } + + case "AvxVnni": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVXVNNI_X64; } + else + { return InstructionSet.X64_AVXVNNI; } + + case "Movbe": + if (nestedTypeName == "X64") + { return InstructionSet.X64_MOVBE_X64; } + else + { return InstructionSet.X64_MOVBE; } + + case "X86Serialize": + if (nestedTypeName == "X64") + { return InstructionSet.X64_X86Serialize_X64; } + else + { return InstructionSet.X64_X86Serialize; } + + case "Avx512F": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVX512F_X64; } + else + if (nestedTypeName == "VL") + { return InstructionSet.X64_AVX512F_VL; } + else + { return InstructionSet.X64_AVX512F; } + + case "Avx512BW": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVX512BW_X64; } + else + if (nestedTypeName == "VL") + { return InstructionSet.X64_AVX512BW_VL; } + else + { return InstructionSet.X64_AVX512BW; } + + case "Avx512CD": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVX512CD_X64; } + else + if (nestedTypeName == "VL") + { return InstructionSet.X64_AVX512CD_VL; } + else + { return InstructionSet.X64_AVX512CD; } + + case "Avx512DQ": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVX512DQ_X64; } + else + if (nestedTypeName == "VL") + { return InstructionSet.X64_AVX512DQ_VL; } + else + { return InstructionSet.X64_AVX512DQ; } + + case "Avx512Vbmi": + if (nestedTypeName == "X64") + { return InstructionSet.X64_AVX512VBMI_X64; } + else + if (nestedTypeName == "VL") + { return InstructionSet.X64_AVX512VBMI_VL; } + else + { return InstructionSet.X64_AVX512VBMI; } + + } + break; + + case TargetArchitecture.X86: + switch (typeName) + { + + case "X86Base": + { return InstructionSet.X86_X86Base; } + + case "Sse": + { return InstructionSet.X86_SSE; } + + case "Sse2": + { return 
InstructionSet.X86_SSE2; } + + case "Sse3": + { return InstructionSet.X86_SSE3; } + + case "Ssse3": + { return InstructionSet.X86_SSSE3; } + + case "Sse41": + { return InstructionSet.X86_SSE41; } + + case "Sse42": + { return InstructionSet.X86_SSE42; } + + case "Avx": + { return InstructionSet.X86_AVX; } + + case "Avx2": + { return InstructionSet.X86_AVX2; } + + case "Aes": + { return InstructionSet.X86_AES; } + + case "Bmi1": + { return InstructionSet.X86_BMI1; } + + case "Bmi2": + { return InstructionSet.X86_BMI2; } + + case "Fma": + { return InstructionSet.X86_FMA; } + + case "Lzcnt": + { return InstructionSet.X86_LZCNT; } + + case "Pclmulqdq": + { return InstructionSet.X86_PCLMULQDQ; } + + case "Popcnt": + { return InstructionSet.X86_POPCNT; } + + case "AvxVnni": + { return InstructionSet.X86_AVXVNNI; } + + case "Movbe": + { return InstructionSet.X86_MOVBE; } + + case "X86Serialize": + { return InstructionSet.X86_X86Serialize; } + + case "Avx512F": + if (nestedTypeName == "VL") + { return InstructionSet.X86_AVX512F_VL; } + else + { return InstructionSet.X86_AVX512F; } + + case "Avx512BW": + if (nestedTypeName == "VL") + { return InstructionSet.X86_AVX512BW_VL; } + else + { return InstructionSet.X86_AVX512BW; } + + case "Avx512CD": + if (nestedTypeName == "VL") + { return InstructionSet.X86_AVX512CD_VL; } + else + { return InstructionSet.X86_AVX512CD; } + + case "Avx512DQ": + if (nestedTypeName == "VL") + { return InstructionSet.X86_AVX512DQ_VL; } + else + { return InstructionSet.X86_AVX512DQ; } + + case "Avx512Vbmi": + if (nestedTypeName == "VL") + { return InstructionSet.X86_AVX512VBMI_VL; } + else + { return InstructionSet.X86_AVX512VBMI; } + + } + break; + + } + return InstructionSet.ILLEGAL; + } + } } diff --git a/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetDesc.txt b/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetDesc.txt index fa4107640b494a..d5f74271ed13fb 100644 --- 
a/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetDesc.txt +++ b/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetDesc.txt @@ -23,7 +23,7 @@ ; DO NOT CHANGE R2R NUMERIC VALUES OF THE EXISTING SETS. Changing R2R numeric values definitions would be R2R format breaking change. ; Definition of X86 instruction sets -definearch ,X86 ,32Bit ,X64 +definearch ,X86 ,32Bit ,X64, X64 instructionset ,X86 ,X86Base , ,22 ,X86Base ,base instructionset ,X86 ,Sse , ,1 ,SSE ,sse @@ -126,12 +126,12 @@ implication ,X86 ,AVX512VBMI ,AVX512BW implication ,X86 ,AVX512VBMI_VL ,AVX512BW_VL ; Definition of X64 instruction sets -definearch ,X64 ,64Bit ,X64 +definearch ,X64 ,64Bit ,X64, X64 copyinstructionsets,X86 ,X64 ; Definition of Arm64 instruction sets -definearch ,ARM64 ,64Bit ,Arm64 +definearch ,ARM64 ,64Bit ,Arm64, Arm64 instructionset ,ARM64 ,ArmBase , ,16 ,ArmBase ,base instructionset ,ARM64 ,AdvSimd , ,17 ,AdvSimd ,neon diff --git a/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetGenerator.cs b/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetGenerator.cs index bc218f33d34b59..4547b91e2fdc70 100644 --- a/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetGenerator.cs +++ b/src/coreclr/tools/Common/JitInterface/ThunkGenerator/InstructionSetGenerator.cs @@ -92,6 +92,7 @@ public InstructionSetImplication(string architecture, InstructionSetImplication private Dictionary> _architectureVectorInstructionSetJitNames = new Dictionary>(); private HashSet _64BitArchitectures = new HashSet(); private Dictionary _64BitVariantArchitectureJitNameSuffix = new Dictionary(); + private Dictionary _64BitVariantArchitectureManagedNameSuffix = new Dictionary(); // This represents the number of flags fields we currently track private const int FlagsFieldCount = 1; @@ -126,6 +127,11 @@ private string ArchToInstructionSetSuffixArch(string arch) return _64BitVariantArchitectureJitNameSuffix[arch]; } + private string 
ArchToManagedInstructionSetSuffixArch(string arch) + { + return _64BitVariantArchitectureManagedNameSuffix[arch]; + } + public bool ParseInput(TextReader tr) { int currentLineIndex = 1; @@ -151,7 +157,7 @@ public bool ParseInput(TextReader tr) switch (command[0]) { case "definearch": - if (command.Length != 4) + if (command.Length != 5) throw new Exception($"Incorrect number of args for definearch {command.Length}"); ArchitectureEncountered(command[1]); if (command[2] == "64Bit") @@ -163,6 +169,7 @@ public bool ParseInput(TextReader tr) throw new Exception("Architecture must be 32Bit or 64Bit"); } _64BitVariantArchitectureJitNameSuffix[command[1]] = command[3]; + _64BitVariantArchitectureManagedNameSuffix[command[1]] = command[4]; break; case "instructionset": if (command.Length != 7) @@ -769,6 +776,119 @@ public void Set64BitInstructionSetVariantsUnconditionally(TargetArchitecture arc tr.Write(@" } } } + public static class InstructionSetParser + { + public static InstructionSet LookupPlatformIntrinsicInstructionSet(TargetArchitecture targetArch, TypeDesc intrinsicType) + { + MetadataType metadataType = intrinsicType.GetTypeDefinition() as MetadataType; + if (metadataType == null) + return InstructionSet.ILLEGAL; + + string namespaceName; + string typeName = metadataType.Name; + string nestedTypeName = null; + if (metadataType.ContainingType != null) + { + var enclosingType = (MetadataType)metadataType.ContainingType; + namespaceName = enclosingType.Namespace; + nestedTypeName = metadataType.Name; + typeName = enclosingType.Name; + } + else + { + namespaceName = metadataType.Namespace; + } + + string platformIntrinsicNamespace; + + switch (targetArch) + { + case TargetArchitecture.ARM64: + platformIntrinsicNamespace = ""System.Runtime.Intrinsics.Arm""; + break; + + case TargetArchitecture.X64: + case TargetArchitecture.X86: + platformIntrinsicNamespace = ""System.Runtime.Intrinsics.X86""; + break; + + default: + return InstructionSet.ILLEGAL; + } + + if 
(namespaceName != platformIntrinsicNamespace)
+                return InstructionSet.ILLEGAL;
+
+            switch (targetArch)
+            {
+");
+            foreach (string architecture in _architectures)
+            {
+                tr.Write($@"
+                case TargetArchitecture.{architecture}:
+                    switch (typeName)
+                    {{
+");
+                foreach (var instructionSet in _instructionSets)
+                {
+                    if (instructionSet.Architecture != architecture) continue;
+                    // VL instructionSets are handled as part of their master instruction set.
+                    if (instructionSet.ManagedName.EndsWith("_VL"))
+                        continue;
+
+                    // Instruction sets without a managed name are not handled here.
+                    if (string.IsNullOrEmpty(instructionSet.ManagedName))
+                        continue;
+
+                    InstructionSetInfo vlInstructionSet = null;
+                    foreach (var potentialVLinstructionSet in _instructionSets)
+                    {
+                        if (potentialVLinstructionSet.Architecture != architecture) continue;
+                        string managedName = potentialVLinstructionSet.ManagedName;
+                        if (managedName.EndsWith("_VL") && instructionSet.ManagedName == managedName.Substring(0, managedName.Length - 3))
+                        {
+                            vlInstructionSet = potentialVLinstructionSet; break;
+                        }
+                    }
+
+                    string hasSixtyFourBitInstructionSet = null;
+                    if (_64bitVariants[architecture].Contains(instructionSet.JitName) && _64BitArchitectures.Contains(architecture))
+                    {
+                        hasSixtyFourBitInstructionSet = ArchToManagedInstructionSetSuffixArch(architecture);
+                    }
+
+                    tr.Write(@$"
+                    case ""{instructionSet.ManagedName}"":");
+
+                    if (hasSixtyFourBitInstructionSet != null)
+                    {
+                        tr.Write($@"
+                        if (nestedTypeName == ""{hasSixtyFourBitInstructionSet}"")
+                        {{ return InstructionSet.{architecture}_{instructionSet.JitName}_{ArchToInstructionSetSuffixArch(architecture)}; }}
+                        else");
+                    }
+                    if (vlInstructionSet != null)
+                    {
+                        tr.Write($@"
+                        if (nestedTypeName == ""VL"")
+                        {{ return InstructionSet.{architecture}_{vlInstructionSet.JitName}; }}
+                        else");
+                    }
+                    tr.Write($@"
+                    {{ return InstructionSet.{architecture}_{instructionSet.JitName}; }}
+");
+                }
+                tr.Write($@"
+                    }}
+                    break;
+");
+            }
+
+            tr.Write(@"
+            }
+            return
InstructionSet.ILLEGAL; + } + } } "); return; diff --git a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilationModuleGroupBase.cs b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilationModuleGroupBase.cs index cb4532ac16705e..27d0a9798b0d61 100644 --- a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilationModuleGroupBase.cs +++ b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilationModuleGroupBase.cs @@ -15,6 +15,7 @@ using Internal.TypeSystem.Interop; using Debug = System.Diagnostics.Debug; using Internal.ReadyToRunConstants; +using Internal.JitInterface; namespace ILCompiler { @@ -30,6 +31,7 @@ public class ReadyToRunCompilationModuleGroupConfig public IEnumerable CrossModuleInlineable; public bool CompileGenericDependenciesFromVersionBubbleModuleSet; public bool CompileAllPossibleCrossModuleCode; + public InstructionSetSupport InstructionSetSupport; } public abstract class ReadyToRunCompilationModuleGroupBase : CompilationModuleGroup @@ -64,9 +66,11 @@ public abstract class ReadyToRunCompilationModuleGroupBase : CompilationModuleGr private ConcurrentDictionary _tokenTranslationFreeNonVersionable = new ConcurrentDictionary(); private readonly Func _tokenTranslationFreeNonVersionableUncached; private bool CompileAllPossibleCrossModuleCode = false; + private InstructionSetSupport _instructionSetSupport; public ReadyToRunCompilationModuleGroupBase(ReadyToRunCompilationModuleGroupConfig config) { + _instructionSetSupport = config.InstructionSetSupport; _compilationModuleSet = new HashSet(config.CompilationModuleSet); _isCompositeBuildMode = config.IsCompositeBuildMode; _isInputBubble = config.IsInputBubble; @@ -409,6 +413,11 @@ public sealed override bool CanInline(MethodDesc callerMethod, MethodDesc callee bool canInline = (VersionsWithMethodBody(callerMethod) || CrossModuleInlineable(callerMethod)) && (VersionsWithMethodBody(calleeMethod) || CrossModuleInlineable(calleeMethod) || 
IsNonVersionableWithILTokensThatDoNotNeedTranslation(calleeMethod)); + if (canInline) + { + if (CorInfoImpl.ShouldCodeNotBeCompiledIntoFinalImage(_instructionSetSupport, calleeMethod)) + canInline = false; + } return canInline; } diff --git a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilerContext.cs b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilerContext.cs index 88a74a9ebc2f00..6eed36223b0992 100644 --- a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilerContext.cs +++ b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunCompilerContext.cs @@ -38,9 +38,10 @@ public partial class ReadyToRunCompilerContext : CompilerTypeSystemContext private VectorFieldLayoutAlgorithm _vectorFieldLayoutAlgorithm; private Int128FieldLayoutAlgorithm _int128FieldLayoutAlgorithm; - public ReadyToRunCompilerContext(TargetDetails details, SharedGenericsMode genericsMode, bool bubbleIncludesCorelib, CompilerTypeSystemContext oldTypeSystemContext = null) + public ReadyToRunCompilerContext(TargetDetails details, SharedGenericsMode genericsMode, bool bubbleIncludesCorelib, InstructionSetSupport instructionSetSupport, CompilerTypeSystemContext oldTypeSystemContext = null) : base(details, genericsMode) { + InstructionSetSupport = instructionSetSupport; _r2rFieldLayoutAlgorithm = new ReadyToRunMetadataFieldLayoutAlgorithm(); _systemObjectFieldLayoutAlgorithm = new SystemObjectFieldLayoutAlgorithm(_r2rFieldLayoutAlgorithm); @@ -67,6 +68,8 @@ public ReadyToRunCompilerContext(TargetDetails details, SharedGenericsMode gener } } + public InstructionSetSupport InstructionSetSupport { get; } + public override FieldLayoutAlgorithm GetLayoutAlgorithmForType(DefType type) { if (type.IsObject) diff --git a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunLibraryRootProvider.cs b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunLibraryRootProvider.cs index 838cf6e9df6c00..dd977c2e437bd7 100644 --- 
a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunLibraryRootProvider.cs +++ b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunLibraryRootProvider.cs @@ -16,10 +16,12 @@ namespace ILCompiler public class ReadyToRunLibraryRootProvider : ICompilationRootProvider { private EcmaModule _module; + private InstructionSetSupport _instructionSetSupport; public ReadyToRunLibraryRootProvider(EcmaModule module) { _module = module; + _instructionSetSupport = ((ReadyToRunCompilerContext)module.Context).InstructionSetSupport; } public void AddCompilationRoots(IRootingServiceProvider rootProvider) @@ -60,7 +62,7 @@ private void RootMethods(MetadataType type, string reason, IRootingServiceProvid try { - if (!CorInfoImpl.ShouldSkipCompilation(method)) + if (!CorInfoImpl.ShouldSkipCompilation(_instructionSetSupport, method)) { CheckCanGenerateMethod(methodToRoot); rootProvider.AddCompilationRoot(methodToRoot, rootMinimalDependencies: false, reason: reason); diff --git a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunProfilingRootProvider.cs b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunProfilingRootProvider.cs index f7175cf6ddead4..f586d8ab7e3f43 100644 --- a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunProfilingRootProvider.cs +++ b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunProfilingRootProvider.cs @@ -19,11 +19,13 @@ public class ReadyToRunProfilingRootProvider : ICompilationRootProvider { private EcmaModule _module; private IEnumerable _profileData; + private InstructionSetSupport _instructionSetSupport; public ReadyToRunProfilingRootProvider(EcmaModule module, ProfileDataManager profileDataManager) { _module = module; _profileData = profileDataManager.GetInputProfileDataMethodsForModule(module); + _instructionSetSupport = ((ReadyToRunCompilerContext)module.Context).InstructionSetSupport; } public void AddCompilationRoots(IRootingServiceProvider rootProvider) @@ -61,7 +63,7 @@ 
public void AddCompilationRoots(IRootingServiceProvider rootProvider) if (containsSignatureVariables) continue; - if (!CorInfoImpl.ShouldSkipCompilation(method)) + if (!CorInfoImpl.ShouldSkipCompilation(_instructionSetSupport, method)) { ReadyToRunLibraryRootProvider.CheckCanGenerateMethod(method); rootProvider.AddCompilationRoot(method, rootMinimalDependencies: true, reason: "Profile triggered method"); diff --git a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunVisibilityRootProvider.cs b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunVisibilityRootProvider.cs index 4766937ce694b7..6772f366e22fbc 100644 --- a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunVisibilityRootProvider.cs +++ b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunVisibilityRootProvider.cs @@ -16,10 +16,12 @@ namespace ILCompiler public class ReadyToRunVisibilityRootProvider : ICompilationRootProvider { private EcmaModule _module; + private InstructionSetSupport _instructionSetSupport; public ReadyToRunVisibilityRootProvider(EcmaModule module) { _module = module; + _instructionSetSupport = ((ReadyToRunCompilerContext)module.Context).InstructionSetSupport; } public void AddCompilationRoots(IRootingServiceProvider rootProvider) @@ -102,7 +104,7 @@ private void RootMethods(MetadataType type, string reason, IRootingServiceProvid try { - if (!CorInfoImpl.ShouldSkipCompilation(method)) + if (!CorInfoImpl.ShouldSkipCompilation(_instructionSetSupport, method)) { ReadyToRunLibraryRootProvider.CheckCanGenerateMethod(methodToRoot); rootProvider.AddCompilationRoot(methodToRoot, rootMinimalDependencies: false, reason: reason); diff --git a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunXmlRootProvider.cs b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunXmlRootProvider.cs index 8d168a3a39f2d0..4ea3ab2b3b5ca6 100644 --- a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunXmlRootProvider.cs +++ 
b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/Compiler/ReadyToRunXmlRootProvider.cs @@ -80,11 +80,13 @@ private class CompilationRootProvider : ProcessLinkerXmlBase private const string NamespaceElementName = "namespace"; private const string _preserve = "preserve"; private readonly IRootingServiceProvider _rootingServiceProvider; + private InstructionSetSupport _instructionSetSupport; public CompilationRootProvider(IRootingServiceProvider provider, TypeSystemContext context, Stream documentStream, ManifestResource resource, ModuleDesc owningModule, string xmlDocumentLocation) : base(null , context, documentStream, resource, owningModule, xmlDocumentLocation, ImmutableDictionary.Empty) { _rootingServiceProvider = provider; + _instructionSetSupport = ((ReadyToRunCompilerContext)owningModule.Context).InstructionSetSupport; } public void ProcessXml() => ProcessXml(false); @@ -141,7 +143,7 @@ private void RootMethod(MethodDesc method) try { - if (!CorInfoImpl.ShouldSkipCompilation(method)) + if (!CorInfoImpl.ShouldSkipCompilation(_instructionSetSupport, method)) { ReadyToRunLibraryRootProvider.CheckCanGenerateMethod(methodToRoot); _rootingServiceProvider.AddCompilationRoot(methodToRoot, rootMinimalDependencies: false, reason: "Linker XML descriptor"); diff --git a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/JitInterface/CorInfoImpl.ReadyToRun.cs b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/JitInterface/CorInfoImpl.ReadyToRun.cs index ac1f19e5f80f0c..bb23738a4dc621 100644 --- a/src/coreclr/tools/aot/ILCompiler.ReadyToRun/JitInterface/CorInfoImpl.ReadyToRun.cs +++ b/src/coreclr/tools/aot/ILCompiler.ReadyToRun/JitInterface/CorInfoImpl.ReadyToRun.cs @@ -493,7 +493,7 @@ private static mdToken FindGenericMethodArgTypeSpec(EcmaModule module) throw new NotSupportedException(); } - public static bool ShouldSkipCompilation(MethodDesc methodNeedingCode) + public static bool ShouldSkipCompilation(InstructionSetSupport instructionSetSupport, MethodDesc methodNeedingCode) { if 
(methodNeedingCode.IsAggressiveOptimization)
         {
@@ -520,15 +520,92 @@ public static bool ShouldSkipCompilation(MethodDesc methodNeedingCode)
                 // Special methods on delegate types
                 return true;
             }
-            if (methodNeedingCode.HasCustomAttribute("System.Runtime", "BypassReadyToRunAttribute"))
+            if (ShouldCodeNotBeCompiledIntoFinalImage(instructionSetSupport, methodNeedingCode))
             {
-                // This is a quick workaround to opt specific methods out of ReadyToRun compilation to work around bugs.
                 return true;
             }

             return false;
         }

+        public static bool ShouldCodeNotBeCompiledIntoFinalImage(InstructionSetSupport instructionSetSupport, MethodDesc method)
+        {
+            EcmaMethod ecmaMethod = method.GetTypicalMethodDefinition() as EcmaMethod;
+            if (ecmaMethod == null)
+            {
+                // Only metadata-backed (Ecma) methods can carry the attributes checked below.
+                return false;
+            }
+
+            var metadataReader = ecmaMethod.MetadataReader;
+            var stringComparer = metadataReader.StringComparer;
+
+            var handle = ecmaMethod.Handle;
+
+            List<TypeDesc> compExactlyDependsOnList = null;
+
+            foreach (var attributeHandle in metadataReader.GetMethodDefinition(handle).GetCustomAttributes())
+            {
+                StringHandle namespaceHandle, nameHandle;
+                if (!metadataReader.GetAttributeNamespaceAndName(attributeHandle, out namespaceHandle, out nameHandle))
+                    continue;
+
+                if (stringComparer.Equals(namespaceHandle, "System.Runtime"))
+                {
+                    if (stringComparer.Equals(nameHandle, "BypassReadyToRunAttribute"))
+                    {
+                        return true;
+                    }
+                }
+                else if (stringComparer.Equals(namespaceHandle, "System.Runtime.CompilerServices"))
+                {
+                    if (stringComparer.Equals(nameHandle, "CompExactlyDependsOnAttribute"))
+                    {
+                        var customAttribute = metadataReader.GetCustomAttribute(attributeHandle);
+                        var typeProvider = new CustomAttributeTypeProvider(ecmaMethod.Module);
+                        var fixedArguments = customAttribute.DecodeValue(typeProvider).FixedArguments;
+                        if (fixedArguments.Length < 1)
+                            continue;
+
+                        TypeDesc typeForBypass = fixedArguments[0].Value as TypeDesc;
+                        if (typeForBypass != null)
+                        {
+                            if (compExactlyDependsOnList == null)
+                                compExactlyDependsOnList = new List<TypeDesc>();
+
+                            compExactlyDependsOnList.Add(typeForBypass);
+                        }
+                    }
+                }
+            }
+
+            if (compExactlyDependsOnList != null && compExactlyDependsOnList.Count > 0)
+            {
+                // Default to true, and set to false if at least one of the types is actually supported in the current environment, and none of the
+                // intrinsic types are in an opportunistic state.
+                bool doBypass = true;

+                foreach (var intrinsicType in compExactlyDependsOnList)
+                {
+                    InstructionSet instructionSet = InstructionSetParser.LookupPlatformIntrinsicInstructionSet(intrinsicType.Context.Target.Architecture, intrinsicType);
+                    if (instructionSet == InstructionSet.ILLEGAL)
+                    {
+                        // This instruction set isn't supported on the current platform at all.
+                        continue;
+                    }
+                    if (instructionSetSupport.IsInstructionSetSupported(instructionSet) || instructionSetSupport.IsInstructionSetExplicitlyUnsupported(instructionSet))
+                    {
+                        doBypass = false;
+                    }
+                    else
+                    {
+                        // If we reach here, this is an instruction set generally supported on this platform, but we don't know what the behavior will be at runtime.
+                        return true;
+                    }
+                }
+
+                return doBypass;
+            }
+
+            // No reason to bypass compilation and code generation.
+ return false; + } + private static bool FunctionJustThrows(MethodIL ilBody) { try @@ -582,7 +659,7 @@ private static bool FunctionHasNonReferenceableTypedILCatchClause(MethodIL metho public static bool IsMethodCompilable(Compilation compilation, MethodDesc method) { // This logic must mirror the logic in CompileMethod used to get to the point of calling CompileMethodInternal - if (ShouldSkipCompilation(method) || MethodSignatureIsUnstable(method.Signature, out var _)) + if (ShouldSkipCompilation(compilation.InstructionSetSupport, method) || MethodSignatureIsUnstable(method.Signature, out var _)) return false; MethodIL methodIL = compilation.GetMethodIL(method); @@ -605,7 +682,7 @@ public void CompileMethod(MethodWithGCInfo methodCodeNodeNeedingCode, Logger log try { - if (ShouldSkipCompilation(MethodBeingCompiled)) + if (ShouldSkipCompilation(_compilation.InstructionSetSupport, MethodBeingCompiled)) { if (logger.IsVerbose) logger.Writer.WriteLine($"Info: Method `{MethodBeingCompiled}` was not compiled because it is skipped."); diff --git a/src/coreclr/tools/aot/crossgen2/Program.cs b/src/coreclr/tools/aot/crossgen2/Program.cs index b044eea0b47438..639e5551e96940 100644 --- a/src/coreclr/tools/aot/crossgen2/Program.cs +++ b/src/coreclr/tools/aot/crossgen2/Program.cs @@ -120,7 +120,7 @@ public int Run() // // Initialize type system context // - _typeSystemContext = new ReadyToRunCompilerContext(targetDetails, genericsMode, versionBubbleIncludesCoreLib); + _typeSystemContext = new ReadyToRunCompilerContext(targetDetails, genericsMode, versionBubbleIncludesCoreLib, instructionSetSupport); string compositeRootPath = Get(_command.CompositeRootPath); @@ -262,7 +262,7 @@ public int Run() { bool singleCompilationVersionBubbleIncludesCoreLib = versionBubbleIncludesCoreLib || (String.Compare(inputFile.Key, "System.Private.CoreLib", StringComparison.OrdinalIgnoreCase) == 0); - typeSystemContext = new ReadyToRunCompilerContext(targetDetails, genericsMode, 
singleCompilationVersionBubbleIncludesCoreLib, _typeSystemContext);
+                    typeSystemContext = new ReadyToRunCompilerContext(targetDetails, genericsMode, singleCompilationVersionBubbleIncludesCoreLib, _typeSystemContext.InstructionSetSupport, _typeSystemContext);
                     typeSystemContext.InputFilePaths = singleCompilationInputFilePaths;
                     typeSystemContext.ReferenceFilePaths = referenceFilePaths;
                     typeSystemContext.SetSystemModule((EcmaModule)typeSystemContext.GetModuleForSimpleName(systemModuleName));
@@ -402,6 +402,7 @@ private void RunSingleCompilation(Dictionary inFilePaths, Instru
             groupConfig.CrossModuleInlining = groupConfig.CrossModuleGenericCompilation; // Currently we set these flags to the same values
             groupConfig.CrossModuleInlineable = crossModuleInlineableCode;
             groupConfig.CompileAllPossibleCrossModuleCode = false;
+            groupConfig.InstructionSetSupport = instructionSetSupport;

             // Handle non-local generics command line option
             ModuleDesc nonLocalGenericsHome = compileBubbleGenerics ? inputModules[0] : null;
diff --git a/src/libraries/Common/src/System/HexConverter.cs b/src/libraries/Common/src/System/HexConverter.cs
index b80e404442b02c..ccce1cb691f104 100644
--- a/src/libraries/Common/src/System/HexConverter.cs
+++ b/src/libraries/Common/src/System/HexConverter.cs
@@ -91,6 +91,8 @@ public static void ToCharsBuffer(byte value, Span buffer, int startingInde
 #if SYSTEM_PRIVATE_CORELIB
         // Converts Vector128 into 2xVector128 ASCII Hex representation
         [MethodImpl(MethodImplOptions.AggressiveInlining)]
+        [CompExactlyDependsOn(typeof(Ssse3))]
+        [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
         internal static (Vector128<byte>, Vector128<byte>) AsciiToHexVector128(Vector128<byte> src, Vector128<byte> hexMap)
         {
             Debug.Assert(Ssse3.IsSupported || AdvSimd.Arm64.IsSupported);
@@ -105,6 +107,8 @@ internal static (Vector128, Vector128) AsciiToHexVector128(Vector128
                     Vector128.ShuffleUnsafe(hexMap, highNibbles & Vector128.Create((byte)0xF)));
         }

+        [CompExactlyDependsOn(typeof(Ssse3))]
+        [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
         private static void EncodeToUtf16_Vector128(ReadOnlySpan<byte> bytes, Span<char> chars, Casing casing)
         {
             Debug.Assert(bytes.Length >= Vector128<byte>.Count);
@@ -236,6 +240,8 @@ public static bool TryDecodeFromUtf16(ReadOnlySpan chars, Span bytes
         }

 #if SYSTEM_PRIVATE_CORELIB
+        [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
+        [CompExactlyDependsOn(typeof(Ssse3))]
         public static bool TryDecodeFromUtf16_Vector128(ReadOnlySpan<char> chars, Span<byte> bytes)
         {
             Debug.Assert(Ssse3.IsSupported || AdvSimd.Arm64.IsSupported);
diff --git a/src/libraries/System.Private.CoreLib/gen/IntrinsicsInSystemPrivateCoreLibAnalyzer.cs b/src/libraries/System.Private.CoreLib/gen/IntrinsicsInSystemPrivateCoreLibAnalyzer.cs
new file mode 100644
index 00000000000000..75f702eaca8edf
--- /dev/null
+++ b/src/libraries/System.Private.CoreLib/gen/IntrinsicsInSystemPrivateCoreLibAnalyzer.cs
@@ -0,0 +1,685 @@
+// Licensed to the .NET Foundation under one or more agreements.
+// The .NET Foundation licenses this file to you under the MIT license.
+ +using System; +using System.Collections.Generic; +using System.Collections.Immutable; +using System.Data.Common; +using System.Diagnostics; +using System.Linq; +using Microsoft.CodeAnalysis; +using Microsoft.CodeAnalysis.CSharp; +using Microsoft.CodeAnalysis.CSharp.Syntax; +using Microsoft.CodeAnalysis.Diagnostics; +using Microsoft.CodeAnalysis.Operations; + +// This isn't a shipping analyzer, so we don't need release tracking +#pragma warning disable RS2008 + +#nullable enable + +namespace IntrinsicsInSystemPrivateCoreLib +{ + [DiagnosticAnalyzer(LanguageNames.CSharp)] + [CLSCompliant(false)] + public class IntrinsicsInSystemPrivateCoreLibAnalyzer : DiagnosticAnalyzer + { + public const string DiagnosticId = "IntrinsicsInSystemPrivateCoreLib"; + + private const string Title = "System.Private.CoreLib ReadyToRun Intrinsics"; + private const string MessageFormat = "Intrinsics from class '{0}' used without the protection of an explicit if statement checking the correct IsSupported flag or CompExactlyDependsOn"; + private const string Description = "ReadyToRun Intrinsic Safety For System.Private.CoreLib."; + private const string Category = "IntrinsicsCorrectness"; + + private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(DiagnosticId, Title, MessageFormat, Category, DiagnosticSeverity.Error, isEnabledByDefault: true, description: Description); + + public const string DiagnosticIdHelper = "IntrinsicsInSystemPrivateCoreLibHelper"; + private const string MessageHelperFormat = "Helper '{0}' used without the protection of an explicit if statement checking the correct IsSupported flag or CompExactlyDependsOn"; + private static readonly DiagnosticDescriptor RuleHelper = new DiagnosticDescriptor(DiagnosticIdHelper, Title, MessageHelperFormat, Category, DiagnosticSeverity.Error, isEnabledByDefault: true, description: Description); + + public const string DiagnosticIdConditionParsing = "IntrinsicsInSystemPrivateCoreLibConditionParsing"; + private const 
string MessageNonParseableConditionFormat = "Unable to parse condition to determine if intrinsics are correctly used";
+        private static readonly DiagnosticDescriptor RuleCantParse = new DiagnosticDescriptor(DiagnosticIdConditionParsing, Title, MessageNonParseableConditionFormat, Category, DiagnosticSeverity.Error, isEnabledByDefault: true, description: Description);
+
+        public const string DiagnosticIdAttributeNotSpecificEnough = "IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough";
+        private const string MessageAttributeNotSpecificEnoughFormat = "CompExactlyDependsOn({0}) attribute found which relates to this IsSupported check, but is not specific enough. Suppress this error if this function has an appropriate if condition such that the meaning of the function is invariant regardless of the result of the call to IsSupported.";
+        private static readonly DiagnosticDescriptor RuleAttributeNotSpecificEnough = new DiagnosticDescriptor(DiagnosticIdAttributeNotSpecificEnough, Title, MessageAttributeNotSpecificEnoughFormat, Category, DiagnosticSeverity.Error, isEnabledByDefault: true, description: Description);
+
+        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics { get { return ImmutableArray.Create(Rule, RuleHelper, RuleCantParse, RuleAttributeNotSpecificEnough); } }
+
+        private static INamespaceSymbol GetNamespace(IAssemblySymbol assembly, params string[] namespaceNames)
+        {
+            INamespaceSymbol outerNamespace = assembly.GlobalNamespace;
+
+            string newFullNamespaceName = "";
+            INamespaceSymbol? foundNamespace = null;
+
+            foreach (var namespaceName in namespaceNames)
+            {
+                if (newFullNamespaceName == "")
+                    newFullNamespaceName = namespaceName;
+                else
+                    newFullNamespaceName = newFullNamespaceName + "."
+ namespaceName; + + foundNamespace = null; + + foreach (var innerNamespace in outerNamespace.GetNamespaceMembers()) + { + if (innerNamespace.Name == namespaceName) + { + foundNamespace = innerNamespace; + break; + } + } + + if (foundNamespace == null) + { + throw new Exception($"Not able to find {newFullNamespaceName} namespace"); + } + + outerNamespace = foundNamespace; + } + + return foundNamespace!; + } + + private static IEnumerable GetNestedTypes(INamedTypeSymbol type) + { + foreach (var typeSymbol in type.GetTypeMembers()) + { + yield return typeSymbol; + foreach (var nestedTypeSymbol in GetNestedTypes(typeSymbol)) + { + yield return nestedTypeSymbol; + } + } + } + + private static IEnumerable GetSubtypes(INamespaceSymbol namespaceSymbol) + { + foreach (var typeSymbol in namespaceSymbol.GetTypeMembers()) + { + yield return typeSymbol; + foreach (var nestedTypeSymbol in GetNestedTypes(typeSymbol)) + { + yield return nestedTypeSymbol; + } + } + + foreach (var namespaceMember in namespaceSymbol.GetNamespaceMembers()) + { + foreach (var typeSymbol in GetSubtypes(namespaceMember)) + { + yield return typeSymbol; + } + } + } + + private sealed class IntrinsicsAnalyzerOnLoadData + { + public IntrinsicsAnalyzerOnLoadData(HashSet namedTypesToBeProtected, + INamedTypeSymbol? bypassReadyToRunAttribute, + INamedTypeSymbol? compExactlyDependsOn) + { + NamedTypesToBeProtected = namedTypesToBeProtected; + BypassReadyToRunAttribute = bypassReadyToRunAttribute; + CompExactlyDependsOn = compExactlyDependsOn; + } + public readonly HashSet NamedTypesToBeProtected; + public readonly INamedTypeSymbol? BypassReadyToRunAttribute; + public readonly INamedTypeSymbol? 
CompExactlyDependsOn; + } + + public override void Initialize(AnalysisContext context) + { + context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None); + context.EnableConcurrentExecution(); + context.RegisterCompilationStartAction(context => + { + HashSet namedTypesToBeProtected = new HashSet(SymbolEqualityComparer.Default); + INamespaceSymbol systemRuntimeIntrinsicsNamespace = GetNamespace(context.Compilation.Assembly, "System", "Runtime", "Intrinsics"); + INamedTypeSymbol? bypassReadyToRunAttribute = context.Compilation.Assembly.GetTypeByMetadataName("System.Runtime.BypassReadyToRunAttribute"); + INamedTypeSymbol? compExactlyDependsOn = context.Compilation.Assembly.GetTypeByMetadataName("System.Runtime.CompilerServices.CompExactlyDependsOnAttribute"); + + IntrinsicsAnalyzerOnLoadData onLoadData = new IntrinsicsAnalyzerOnLoadData(namedTypesToBeProtected, bypassReadyToRunAttribute, compExactlyDependsOn); + + // Find all types in the System.Runtime.Intrinsics namespace that have an IsSupported property that are NOT + // directly in the System.Runtime.Intrinsics namespace + foreach (var architectureSpecificNamespace in systemRuntimeIntrinsicsNamespace.GetNamespaceMembers()) + { + foreach (var typeSymbol in GetSubtypes(architectureSpecificNamespace)) + { + foreach (var member in typeSymbol.GetMembers()) + { + if (member.Kind == SymbolKind.Property) + { + if (member.Name == "IsSupported") + { + namedTypesToBeProtected.Add(typeSymbol); + } + } + } + } + } + + context.RegisterSymbolStartAction(context => + { + var methodSymbol = (IMethodSymbol)context.Symbol; + + foreach (var attributeData in methodSymbol.GetAttributes()) + { + if (bypassReadyToRunAttribute != null) + { + if (attributeData.AttributeClass.Equals(bypassReadyToRunAttribute, SymbolEqualityComparer.Default)) + { + // This method isn't involved in ReadyToRun, and so doesn't need analysis + return; + } + } + } + + context.RegisterOperationAction(context => + { + 
AnalyzeOperation(context.Operation, methodSymbol, context, onLoadData); + }, + OperationKind.Invocation, OperationKind.PropertyReference); + }, SymbolKind.Method); + }); + } + + private static ISymbol? GetOperationSymbol(IOperation operation) + => operation switch + { + IInvocationOperation iOperation => iOperation.TargetMethod, + IMemberReferenceOperation mOperation => mOperation.Member, + _ => null, + }; + + private static INamedTypeSymbol? GetIsSupportedTypeSymbol(SemanticModel model, MemberAccessExpressionSyntax memberAccessExpression) + { + if (memberAccessExpression.Name is IdentifierNameSyntax identifierName && identifierName.Identifier.Text == "IsSupported") + { + var symbolInfo = model.GetSymbolInfo(memberAccessExpression); + return symbolInfo.Symbol.ContainingSymbol as INamedTypeSymbol; + } + else + { + return null; + } + } + + private static INamedTypeSymbol? GetIsSupportedTypeSymbol(SemanticModel model, IdentifierNameSyntax identifierName) + { + var symbolInfo = model.GetSymbolInfo(identifierName); + + if (identifierName.Identifier.Text == "IsSupported") + return symbolInfo.Symbol.ContainingSymbol as INamedTypeSymbol; + else + return null; + } + + private static INamedTypeSymbol[] GatherAndConditions(SemanticModel model, ExpressionSyntax expressionToDecompose) + { + if (expressionToDecompose is ParenthesizedExpressionSyntax parenthesizedExpression) + { + return GatherAndConditions(model, parenthesizedExpression.Expression); + } + + if (expressionToDecompose is MemberAccessExpressionSyntax memberAccessExpression) + { + var isSupportedType = GetIsSupportedTypeSymbol(model, memberAccessExpression); + if (isSupportedType == null) + { + return Array.Empty(); + } + else + return new INamedTypeSymbol[] { isSupportedType }; + } + else if (expressionToDecompose is IdentifierNameSyntax identifier) + { + var isSupportedType = GetIsSupportedTypeSymbol(model, identifier); + if (isSupportedType == null) + { + return Array.Empty(); + } + else + return new 
INamedTypeSymbol[] { isSupportedType }; + } + else if (expressionToDecompose is BinaryExpressionSyntax binaryExpression) + { + if (binaryExpression.OperatorToken is SyntaxToken operatorToken && operatorToken.ValueText == "&&") + { + var decomposedLeft = GatherAndConditions(model, binaryExpression.Left); + var decomposedRight = GatherAndConditions(model, binaryExpression.Right); + int arrayLen = decomposedLeft.Length + decomposedRight.Length; + + if (arrayLen != 0) + { + var retVal = new INamedTypeSymbol[decomposedLeft.Length + decomposedRight.Length]; + Array.Copy(decomposedLeft, retVal, decomposedLeft.Length); + Array.Copy(decomposedRight, 0, retVal, decomposedLeft.Length, decomposedRight.Length); + return retVal; + } + else + { + return Array.Empty(); + } + } + } + + return Array.Empty(); + } + + private static INamedTypeSymbol[][] DecomposePropertySymbolForIsSupportedGroups_Property(OperationAnalysisContext context, SemanticModel model, ExpressionSyntax expressionToDecompose) + { + var symbolInfo = model.GetSymbolInfo(expressionToDecompose); + if (symbolInfo.Symbol.Kind != SymbolKind.Property) + { + return Array.Empty(); + } + + if (symbolInfo.Symbol.Name == "IsSupported") + { + var typeSymbol = symbolInfo.Symbol.ContainingSymbol as INamedTypeSymbol; + if (typeSymbol != null) + { + return new INamedTypeSymbol[][] { new INamedTypeSymbol[] { typeSymbol } }; + } + } + + var propertyDefiningSyntax = symbolInfo.Symbol.DeclaringSyntaxReferences[0].GetSyntax(); + if (propertyDefiningSyntax != null) + { + if (propertyDefiningSyntax is PropertyDeclarationSyntax propertyDeclaration + && propertyDeclaration.ExpressionBody is ArrowExpressionClauseSyntax arrowExpression) + { + return DecomposeConditionForIsSupportedGroups(context, model, arrowExpression.Expression); + } + } + + return Array.Empty(); + } + + private static INamedTypeSymbol[][] DecomposeConditionForIsSupportedGroups(OperationAnalysisContext context, SemanticModel model, ExpressionSyntax expressionToDecompose) 
+ { + if (expressionToDecompose is ParenthesizedExpressionSyntax parenthesizedExpression) + { + return DecomposeConditionForIsSupportedGroups(context, model, parenthesizedExpression.Expression); + } + if (expressionToDecompose is MemberAccessExpressionSyntax || expressionToDecompose is IdentifierNameSyntax) + { + return DecomposePropertySymbolForIsSupportedGroups_Property(context, model, expressionToDecompose); + } + else if (expressionToDecompose is BinaryExpressionSyntax binaryExpression) + { + var decomposedLeft = DecomposeConditionForIsSupportedGroups(context, model, binaryExpression.Left); + var decomposedRight = DecomposeConditionForIsSupportedGroups(context, model, binaryExpression.Right); + if (binaryExpression.OperatorToken is SyntaxToken operatorToken && operatorToken.ValueText == "&&") + { + if (decomposedLeft.Length == 0) + return decomposedRight; + else if (decomposedRight.Length == 0) + return decomposedLeft; + + if ((decomposedLeft.Length > 1) || (decomposedRight.Length > 1)) + { + context.ReportDiagnostic(Diagnostic.Create(RuleCantParse, expressionToDecompose.GetLocation())); + } + + return new INamedTypeSymbol[][] { GatherAndConditions(model, binaryExpression) }; + } + else if (binaryExpression.OperatorToken is SyntaxToken operatorToken2 && operatorToken2.ValueText == "||") + { + if (decomposedLeft.Length == 0 || decomposedRight.Length == 0) + { + if (decomposedLeft.Length != 0 || decomposedRight.Length != 0) + { + context.ReportDiagnostic(Diagnostic.Create(RuleCantParse, expressionToDecompose.GetLocation())); + } + return Array.Empty(); + } + var retVal = new INamedTypeSymbol[decomposedLeft.Length + decomposedRight.Length][]; + Array.Copy(decomposedLeft, retVal, decomposedLeft.Length); + Array.Copy(decomposedRight, 0, retVal, decomposedLeft.Length, decomposedRight.Length); + return retVal; + } + else + { + if (decomposedLeft.Length != 0 || decomposedRight.Length != 0) + { + context.ReportDiagnostic(Diagnostic.Create(RuleCantParse, 
expressionToDecompose.GetLocation())); + } + } + } + else if (expressionToDecompose is PrefixUnaryExpressionSyntax prefixUnaryExpression) + { + var decomposedOperand = DecomposeConditionForIsSupportedGroups(context, model, prefixUnaryExpression.Operand); + + if (decomposedOperand.Length != 0) + context.ReportDiagnostic(Diagnostic.Create(RuleCantParse, expressionToDecompose.GetLocation())); + } + else if (expressionToDecompose is ConditionalExpressionSyntax conditionalExpressionSyntax) + { + var decomposedTrue = DecomposeConditionForIsSupportedGroups(context, model, conditionalExpressionSyntax.WhenTrue); + var decomposedFalse = DecomposeConditionForIsSupportedGroups(context, model, conditionalExpressionSyntax.WhenFalse); + if (decomposedTrue.Length != 0 || decomposedFalse.Length != 0) + { + context.ReportDiagnostic(Diagnostic.Create(RuleCantParse, expressionToDecompose.GetLocation())); + } + } + return Array.Empty(); + } + + private static IEnumerable GetCompExactlyDependsOnUseList(ISymbol symbol, IntrinsicsAnalyzerOnLoadData onLoadData) + { + var compExactlyDependsOn = onLoadData.CompExactlyDependsOn; + if (compExactlyDependsOn != null) + { + foreach (var attributeData in symbol.GetAttributes()) + { + if (attributeData.AttributeClass.Equals(compExactlyDependsOn, SymbolEqualityComparer.Default)) + { + if (attributeData.ConstructorArguments[0].Value is INamedTypeSymbol attributeTypeSymbol) + { + yield return attributeTypeSymbol; + } + } + } + } + } + + private static bool ConditionAllowsSymbol(ISymbol symbolOfInvokeTarget, INamedTypeSymbol namedTypeThatIsSafeToUse, IntrinsicsAnalyzerOnLoadData onLoadData) + { + HashSet examinedSymbols = new HashSet(SymbolEqualityComparer.Default); + Stack symbolsToExamine = new Stack(); + symbolsToExamine.Push(namedTypeThatIsSafeToUse); + + while (symbolsToExamine.Count > 0) + { + INamedTypeSymbol symbol = symbolsToExamine.Pop(); + if (symbolOfInvokeTarget.ContainingSymbol.Equals(symbol, SymbolEqualityComparer.Default)) + return 
true; + + foreach (var helperForType in GetCompExactlyDependsOnUseList(symbolOfInvokeTarget, onLoadData)) + { + if (helperForType.Equals(symbol, SymbolEqualityComparer.Default)) + return true; + } + + examinedSymbols.Add(symbol); + if (symbol.ContainingType != null && !examinedSymbols.Contains(symbol.ContainingType)) + symbolsToExamine.Push(symbol.ContainingType); + if (symbol.BaseType != null && !examinedSymbols.Contains(symbol.BaseType)) + symbolsToExamine.Push(symbol.BaseType); + } + + return false; + } + + private static bool TypeSymbolAllowsTypeSymbol(INamedTypeSymbol namedTypeToCheckForSupport, INamedTypeSymbol namedTypeThatProvidesSupport) + { + HashSet<INamedTypeSymbol> examinedSymbols = new HashSet<INamedTypeSymbol>(SymbolEqualityComparer.Default); + Stack<INamedTypeSymbol> symbolsToExamine = new Stack<INamedTypeSymbol>(); + symbolsToExamine.Push(namedTypeThatProvidesSupport); + + while (symbolsToExamine.Count > 0) + { + INamedTypeSymbol symbol = symbolsToExamine.Pop(); + if (namedTypeToCheckForSupport.Equals(symbol, SymbolEqualityComparer.Default)) + return true; + + examinedSymbols.Add(symbol); + if (symbol.ContainingType != null && !examinedSymbols.Contains(symbol.ContainingType)) + symbolsToExamine.Push(symbol.ContainingType); + if (symbol.BaseType != null && !examinedSymbols.Contains(symbol.BaseType)) + symbolsToExamine.Push(symbol.BaseType); + } + + return false; + } + + private static INamespaceSymbol? 
SymbolToNamespaceSymbol(ISymbol symbol) + { + return symbol.ContainingNamespace; + } + private static void AnalyzeOperation(IOperation operation, IMethodSymbol methodSymbol, OperationAnalysisContext context, IntrinsicsAnalyzerOnLoadData onLoadData) + { + var symbol = GetOperationSymbol(operation); + + if (symbol == null || symbol is ITypeSymbol type && type.SpecialType != SpecialType.None) + { + return; + } + + bool methodNeedsProtectionWithIsSupported = false; +#pragma warning disable RS1024 // The hashset is constructed with the correct comparer + if (onLoadData.NamedTypesToBeProtected.Contains(symbol.ContainingSymbol)) + { + methodNeedsProtectionWithIsSupported = true; + } +#pragma warning restore RS1024 + + // A method on an intrinsic type can call other methods on the intrinsic type safely, as well as methods on the type that contains the method + if (methodNeedsProtectionWithIsSupported && + (methodSymbol.ContainingType.Equals(symbol.ContainingSymbol, SymbolEqualityComparer.Default) + || (methodSymbol.ContainingType.ContainingType != null && methodSymbol.ContainingType.ContainingType.Equals(symbol.ContainingType, SymbolEqualityComparer.Default)))) + { + return; // Intrinsic functions on their containing type can call themselves + } + + if (!methodNeedsProtectionWithIsSupported) + { + if (GetCompExactlyDependsOnUseList(symbol, onLoadData).Any()) + methodNeedsProtectionWithIsSupported = true; + } + + if (!methodNeedsProtectionWithIsSupported) + { + return; + } + + var compExactlyDependsOn = onLoadData.CompExactlyDependsOn; + + ISymbol? 
symbolThatMightHaveCompExactlyDependsOnAttribute = methodSymbol; + IOperation operationSearch = operation; + while (operationSearch != null) + { + if (operationSearch.Kind == OperationKind.AnonymousFunction) + { + symbolThatMightHaveCompExactlyDependsOnAttribute = null; + break; + } + if (operationSearch.Kind == OperationKind.LocalFunction) + { + // Assign symbolThatMightHaveCompExactlyDependsOnAttribute to the symbol for the LocalFunction + ILocalFunctionOperation localFunctionOperation = (ILocalFunctionOperation)operationSearch; + symbolThatMightHaveCompExactlyDependsOnAttribute = localFunctionOperation.Symbol; + break; + } + + operationSearch = operationSearch.Parent; + } + + if (symbol is IPropertySymbol propertySymbol) + { + if (propertySymbol.Name == "IsSupported") + { + ISymbol? attributeExplicitlyAllowsRelatedSymbol = null; + ISymbol? attributeExplicitlyAllowsExactSymbol = null; + if ((compExactlyDependsOn != null) && symbolThatMightHaveCompExactlyDependsOnAttribute != null) + { + foreach (var attributeData in symbolThatMightHaveCompExactlyDependsOnAttribute.GetAttributes()) + { + if (attributeData.AttributeClass.Equals(compExactlyDependsOn, SymbolEqualityComparer.Default)) + { + if (attributeData.ConstructorArguments[0].Value is INamedTypeSymbol attributeTypeSymbol) + { + var namespaceAttributeTypeSymbol = SymbolToNamespaceSymbol(attributeTypeSymbol); + var namespaceSymbol = SymbolToNamespaceSymbol(symbol); + if ((namespaceAttributeTypeSymbol != null) && (namespaceSymbol != null)) + { + if (namespaceAttributeTypeSymbol.Equals(namespaceSymbol, SymbolEqualityComparer.Default)) + { + if (ConditionAllowsSymbol(symbol, attributeTypeSymbol, onLoadData)) + { + attributeExplicitlyAllowsExactSymbol = attributeTypeSymbol; + } + else + { + attributeExplicitlyAllowsRelatedSymbol = attributeTypeSymbol; + } + } + } + } + } + } + } + + if ((attributeExplicitlyAllowsRelatedSymbol != null) && (attributeExplicitlyAllowsExactSymbol == null)) + { + 
context.ReportDiagnostic(Diagnostic.Create(RuleAttributeNotSpecificEnough, operation.Syntax.GetLocation(), attributeExplicitlyAllowsRelatedSymbol.ToDisplayString())); + } + + return; + } + } + + if (symbolThatMightHaveCompExactlyDependsOnAttribute != null) + { + foreach (var attributeTypeSymbol in GetCompExactlyDependsOnUseList(symbolThatMightHaveCompExactlyDependsOnAttribute, onLoadData)) + { + if (ConditionAllowsSymbol(symbol, attributeTypeSymbol, onLoadData)) + { + // This attribute indicates that this method will only be compiled into a ReadyToRun image if the behavior + // of the associated IsSupported method is defined to a constant value during ReadyToRun compilation that cannot change at runtime + return; + } + } + } + + var ancestorNodes = operation.Syntax.AncestorsAndSelf(true); + SyntaxNode? previousNode = null; + HashSet notTypes = new HashSet(SymbolEqualityComparer.Default); + + foreach (var ancestorNode in ancestorNodes) + { + if (previousNode != null) + { + if (ancestorNode is LocalFunctionStatementSyntax) + { + // Local functions are not the same ECMA 335 function as the outer function, so don't continue searching for an if statement. + break; + } + if (ancestorNode is LambdaExpressionSyntax) + { + // Lambda functions are not the same ECMA 335 function as the outer function, so don't continue searching for an if statement. + break; + } + if (ancestorNode is IfStatementSyntax ifStatement) + { + if (HandleConditionalCase(ifStatement.Condition, ifStatement.Statement, ifStatement.Else)) + return; + } + if (ancestorNode is ConditionalExpressionSyntax conditionalExpression) + { + if (HandleConditionalCase(conditionalExpression.Condition, conditionalExpression.WhenTrue, conditionalExpression.WhenFalse)) + return; + } + + // Returns true to indicate the wrapping method should return + bool HandleConditionalCase(ExpressionSyntax condition, SyntaxNode? syntaxOnPositiveCondition, SyntaxNode? 
syntaxOnNegativeCondition) + { + if (previousNode == syntaxOnPositiveCondition) + { + var decomposedCondition = DecomposeConditionForIsSupportedGroups(context, operation.SemanticModel, condition); + + if (decomposedCondition.Length == 0) + return false; + + // Ensure every symbol found in the condition is only in 1 OR clause + HashSet foundSymbols = new HashSet(SymbolEqualityComparer.Default); + foreach (var andClause in decomposedCondition) + { + foreach (var symbolInOrClause in andClause) + { + if (!foundSymbols.Add(symbolInOrClause)) + { + context.ReportDiagnostic(Diagnostic.Create(RuleCantParse, operation.Syntax.GetLocation())); + return true; + } + } + } + + // Determine which sets of conditions have been excluded + List includedClauses = new List(); + for (int andClauseIndex = 0; andClauseIndex < decomposedCondition.Length; andClauseIndex++) + { + bool foundMatchInAndClause = false; + foreach (var symbolInAndClause in decomposedCondition[andClauseIndex]) + { + foreach (var notType in notTypes) + { + if (TypeSymbolAllowsTypeSymbol(notType, symbolInAndClause)) + { + foundMatchInAndClause = true; + break; + } + } + if (foundMatchInAndClause) + break; + } + + if (!foundMatchInAndClause) + { + includedClauses.Add(andClauseIndex); + } + } + + // Each one of these clauses must be supported by the function being called + // or there is a lack of safety + + foreach (var clauseIndex in includedClauses) + { + bool clauseAllowsSymbol = false; + + var andClause = decomposedCondition[clauseIndex]; + foreach (var symbolFromCondition in andClause) + { + if (ConditionAllowsSymbol(symbol, symbolFromCondition, onLoadData)) + { + // There is a good IsSupported check with a positive check for the IsSupported call involved. Do not report. 
+ clauseAllowsSymbol = true; + } + } + + if (!clauseAllowsSymbol) + return false; + } + + return true; + } + else if (previousNode == syntaxOnNegativeCondition) + { + var decomposedCondition = DecomposeConditionForIsSupportedGroups(context, operation.SemanticModel, condition); + if (decomposedCondition.Length == 1) + { + foreach (var symbolFromCondition in decomposedCondition[0]) + { + notTypes.Add(symbolFromCondition); + } + } + } + + return false; + } + } + previousNode = ancestorNode; + } + + if (onLoadData.NamedTypesToBeProtected.Contains(symbol.ContainingType)) + context.ReportDiagnostic(Diagnostic.Create(Rule, operation.Syntax.GetLocation(), symbol.ContainingSymbol.ToDisplayString())); + else + context.ReportDiagnostic(Diagnostic.Create(RuleHelper, operation.Syntax.GetLocation(), symbol.ToDisplayString())); + } + } +} diff --git a/src/libraries/System.Private.CoreLib/gen/System.Private.CoreLib.Generators.csproj b/src/libraries/System.Private.CoreLib/gen/System.Private.CoreLib.Generators.csproj index 34e09d27657253..fa1603a0091095 100644 --- a/src/libraries/System.Private.CoreLib/gen/System.Private.CoreLib.Generators.csproj +++ b/src/libraries/System.Private.CoreLib/gen/System.Private.CoreLib.Generators.csproj @@ -8,6 +8,7 @@ + diff --git a/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems b/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems index c19e0f2a827d84..e98d8b03412eb5 100644 --- a/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems +++ b/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems @@ -751,7 +751,6 @@ - @@ -769,6 +768,7 @@ + diff --git a/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Decoder.cs b/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Decoder.cs index 5f49e544faf991..6a6c2f018d8c06 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Decoder.cs +++ 
b/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Decoder.cs @@ -624,8 +624,8 @@ private static OperationStatus DecodeWithWhiteSpaceFromUtf8InPlace(Span ut return status; } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static unsafe void Avx2Decode(ref byte* srcBytes, ref byte* destBytes, byte* srcEnd, int sourceLength, int destLength, byte* srcStart, byte* destStart) { // If we have AVX2 support, pick off 32 bytes at a time for as long as we can, @@ -755,6 +755,8 @@ private static unsafe void Avx2Decode(ref byte* srcBytes, ref byte* destBytes, b } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] private static Vector128 SimdShuffle(Vector128 left, Vector128 right, Vector128 mask8F) { Debug.Assert((Ssse3.IsSupported || AdvSimd.Arm64.IsSupported) && BitConverter.IsLittleEndian); @@ -768,6 +770,8 @@ private static Vector128 SimdShuffle(Vector128 left, Vector128 } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(Ssse3))] private static unsafe void Vector128Decode(ref byte* srcBytes, ref byte* destBytes, byte* srcEnd, int sourceLength, int destLength, byte* srcStart, byte* destStart) { Debug.Assert((Ssse3.IsSupported || AdvSimd.Arm64.IsSupported) && BitConverter.IsLittleEndian); diff --git a/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Encoder.cs b/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Encoder.cs index e051bbe932cb07..c0d03fa2d867dc 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Encoder.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Buffers/Text/Base64Encoder.cs @@ -226,8 +226,8 @@ public static unsafe OperationStatus EncodeToUtf8InPlace(Span buffer, int } } - [BypassReadyToRun] 
[MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static unsafe void Avx2Encode(ref byte* srcBytes, ref byte* destBytes, byte* srcEnd, int sourceLength, int destLength, byte* srcStart, byte* destStart) { // If we have AVX2 support, pick off 24 bytes at a time for as long as we can. @@ -398,6 +398,8 @@ private static unsafe void Avx2Encode(ref byte* srcBytes, ref byte* destBytes, b } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] private static unsafe void Vector128Encode(ref byte* srcBytes, ref byte* destBytes, byte* srcEnd, int sourceLength, int destLength, byte* srcStart, byte* destStart) { // If we have SSSE3 support, pick off 12 bytes at a time for as long as we can. diff --git a/src/libraries/System.Private.CoreLib/src/System/Guid.cs b/src/libraries/System.Private.CoreLib/src/System/Guid.cs index a42bca4c73dc07..8fd0da14bd6c62 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Guid.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Guid.cs @@ -1381,6 +1381,8 @@ private unsafe bool TryFormatX(Span destination, out int charsWrit } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] private static (Vector128, Vector128, Vector128) FormatGuidVector128Utf8(Guid value, bool useDashes) { Debug.Assert((Ssse3.IsSupported || AdvSimd.Arm64.IsSupported) && BitConverter.IsLittleEndian); diff --git a/src/libraries/System.Private.CoreLib/src/System/Numerics/Matrix4x4.Impl.cs b/src/libraries/System.Private.CoreLib/src/System/Numerics/Matrix4x4.Impl.cs index f0a3b4f2128a57..bf692d02f73d22 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Numerics/Matrix4x4.Impl.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Numerics/Matrix4x4.Impl.cs @@ -1055,6 +1055,7 @@ public static bool Invert(in Impl matrix, out Impl result) 
return SoftwareFallback(in matrix, out result); + [CompExactlyDependsOn(typeof(Sse))] static bool SseImpl(in Impl matrix, out Impl result) { if (!Sse.IsSupported) diff --git a/src/libraries/System.Private.CoreLib/src/System/Numerics/VectorMath.cs b/src/libraries/System.Private.CoreLib/src/System/Numerics/VectorMath.cs index c8b4be93ac6ea4..7c00f683c5d0a8 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Numerics/VectorMath.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Numerics/VectorMath.cs @@ -113,7 +113,7 @@ public static bool NotEqual(Vector128 vector1, Vector128 vector2) // This implementation is based on the DirectX Math Library XMVector4NotEqual method // https://github.com/microsoft/DirectXMath/blob/master/Inc/DirectXMathVector.inl - if (AdvSimd.IsSupported) + if (AdvSimd.Arm64.IsSupported) { Vector128 vResult = AdvSimd.CompareEqual(vector1, vector2).AsUInt32(); diff --git a/src/libraries/System.Private.CoreLib/src/System/Runtime/BypassReadyToRunAttribute.cs b/src/libraries/System.Private.CoreLib/src/System/Runtime/BypassReadyToRunAttribute.cs deleted file mode 100644 index d4def4f9eadb8b..00000000000000 --- a/src/libraries/System.Private.CoreLib/src/System/Runtime/BypassReadyToRunAttribute.cs +++ /dev/null @@ -1,10 +0,0 @@ -// Licensed to the .NET Foundation under one or more agreements. -// The .NET Foundation licenses this file to you under the MIT license. 
- -namespace System.Runtime -{ - [AttributeUsage(AttributeTargets.Method | AttributeTargets.Constructor, Inherited = false)] - internal sealed class BypassReadyToRunAttribute : Attribute - { - } -} diff --git a/src/libraries/System.Private.CoreLib/src/System/Runtime/CompilerServices/CompExactlyDependsOnAttribute.cs b/src/libraries/System.Private.CoreLib/src/System/Runtime/CompilerServices/CompExactlyDependsOnAttribute.cs new file mode 100644 index 00000000000000..3169bd22b42d18 --- /dev/null +++ b/src/libraries/System.Private.CoreLib/src/System/Runtime/CompilerServices/CompExactlyDependsOnAttribute.cs @@ -0,0 +1,18 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. 
+ +namespace System.Runtime.CompilerServices +{ + // Use this attribute to indicate that a function should only be compiled into a Ready2Run + // binary if the associated type will always have a well defined value for its IsSupported property + [AttributeUsage(AttributeTargets.Method | AttributeTargets.Constructor, AllowMultiple = true, Inherited = false)] + internal sealed class CompExactlyDependsOnAttribute : Attribute + { + public CompExactlyDependsOnAttribute(Type intrinsicsTypeUsedInHelperFunction) + { + IntrinsicsTypeUsedInHelperFunction = intrinsicsTypeUsedInHelperFunction; + } + + public Type IntrinsicsTypeUsedInHelperFunction { get; } + } +} diff --git a/src/libraries/System.Private.CoreLib/src/System/Runtime/Intrinsics/Vector128.cs b/src/libraries/System.Private.CoreLib/src/System/Runtime/Intrinsics/Vector128.cs index 31aac455cd071d..15285f8c9a2bbd 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Runtime/Intrinsics/Vector128.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Runtime/Intrinsics/Vector128.cs @@ -2444,6 +2444,10 @@ public static Vector128 Shuffle(Vector128 vector, Vector128 /// On hardware with support, indices are treated as modulo 16, and if the high bit is set, the result will be set to 0 for that element. /// On hardware with or support, this method behaves the same as Shuffle. 
[MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(PackedSimd))] internal static Vector128 ShuffleUnsafe(Vector128 vector, Vector128 indices) { if (Ssse3.IsSupported) @@ -3236,6 +3240,8 @@ internal static void SetUpperUnsafe(in this Vector128 vector, Vector64 } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(Sse2))] internal static Vector128 UnpackLow(Vector128 left, Vector128 right) { if (Sse2.IsSupported) @@ -3250,6 +3256,8 @@ internal static Vector128 UnpackLow(Vector128 left, Vector128 } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(Sse2))] internal static Vector128 UnpackHigh(Vector128 left, Vector128 right) { if (Sse2.IsSupported) @@ -3266,6 +3274,8 @@ internal static Vector128 UnpackHigh(Vector128 left, Vector128 // TODO: Make generic versions of these public, see https://github.com/dotnet/runtime/issues/82559 [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(Sse2))] internal static Vector128 AddSaturate(Vector128 left, Vector128 right) { if (Sse2.IsSupported) @@ -3280,6 +3290,8 @@ internal static Vector128 AddSaturate(Vector128 left, Vector128 SubtractSaturate(Vector128 left, Vector128 right) { if (Sse2.IsSupported) diff --git a/src/libraries/System.Private.CoreLib/src/System/SearchValues/IndexOfAnyAsciiSearcher.cs b/src/libraries/System.Private.CoreLib/src/System/SearchValues/IndexOfAnyAsciiSearcher.cs index 3ffa18800262d3..87de3a404b84e7 100644 --- a/src/libraries/System.Private.CoreLib/src/System/SearchValues/IndexOfAnyAsciiSearcher.cs +++ b/src/libraries/System.Private.CoreLib/src/System/SearchValues/IndexOfAnyAsciiSearcher.cs @@ -170,13 +170,18 
@@ private static unsafe bool TryLastIndexOfAny(ref short searchSpace, in return false; } + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] internal static int IndexOfAnyVectorized(ref short searchSpace, int searchSpaceLength, ref Vector256 bitmapRef) where TNegator : struct, INegator where TOptimizations : struct, IOptimizations { ref short currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (Avx2.IsSupported && searchSpaceLength > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 bitmap256 = bitmapRef; @@ -231,7 +236,9 @@ internal static int IndexOfAnyVectorized(ref short sea Vector128 bitmap = bitmapRef._lower; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (!Avx2.IsSupported && searchSpaceLength > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { // Process the input in chunks of 16 characters (2 * Vector128). // We're mainly interested in a single byte of each character, and the core lookup operates on a Vector128. 
@@ -280,13 +287,18 @@ internal static int IndexOfAnyVectorized(ref short sea return -1; } + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] internal static int LastIndexOfAnyVectorized(ref short searchSpace, int searchSpaceLength, ref Vector256 bitmapRef) where TNegator : struct, INegator where TOptimizations : struct, IOptimizations { ref short currentSearchSpace = ref Unsafe.Add(ref searchSpace, searchSpaceLength); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The else clause is semantically equivalent if (Avx2.IsSupported && searchSpaceLength > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 bitmap256 = bitmapRef; @@ -390,12 +402,17 @@ internal static int LastIndexOfAnyVectorized(ref short return -1; } + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] internal static int IndexOfAnyVectorized(ref byte searchSpace, int searchSpaceLength, ref Vector256 bitmapRef) where TNegator : struct, INegator { ref byte currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (Avx2.IsSupported && searchSpaceLength > Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 bitmap256 = bitmapRef; @@ -495,12 +512,17 @@ internal static int IndexOfAnyVectorized(ref byte searchSpace, int sea return -1; } + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] internal static int LastIndexOfAnyVectorized(ref byte searchSpace, int searchSpaceLength, ref Vector256 bitmapRef) where TNegator : struct, INegator { ref byte currentSearchSpace = ref 
Unsafe.Add(ref searchSpace, searchSpaceLength); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (Avx2.IsSupported && searchSpaceLength > Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 bitmap256 = bitmapRef; @@ -600,12 +622,17 @@ internal static int LastIndexOfAnyVectorized(ref byte searchSpace, int return -1; } + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] internal static int IndexOfAnyVectorizedAnyByte(ref byte searchSpace, int searchSpaceLength, ref Vector512 bitmapsRef) where TNegator : struct, INegator { ref byte currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (Avx2.IsSupported && searchSpaceLength > Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 bitmap256_0 = bitmapsRef._lower; Vector256 bitmap256_1 = bitmapsRef._upper; @@ -660,7 +687,9 @@ internal static int IndexOfAnyVectorizedAnyByte(ref byte searchSpace, Vector128 bitmap0 = bitmapsRef._lower._lower; Vector128 bitmap1 = bitmapsRef._upper._lower; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (!Avx2.IsSupported && searchSpaceLength > Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { // Process the input in chunks of 16 bytes. // If the input length is a multiple of 16, don't consume the last 16 characters in this loop. 
@@ -707,13 +736,18 @@ internal static int IndexOfAnyVectorizedAnyByte(ref byte searchSpace, return -1; } + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] internal static int LastIndexOfAnyVectorizedAnyByte(ref byte searchSpace, int searchSpaceLength, ref Vector512 bitmapsRef) where TNegator : struct, INegator { ref byte currentSearchSpace = ref Unsafe.Add(ref searchSpace, searchSpaceLength); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (Avx2.IsSupported && searchSpaceLength > Vector128.Count) { +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough Vector256 bitmap256_0 = bitmapsRef._lower; Vector256 bitmap256_1 = bitmapsRef._upper; @@ -767,7 +801,9 @@ internal static int LastIndexOfAnyVectorizedAnyByte(ref byte searchSpa Vector128 bitmap0 = bitmapsRef._lower._lower; Vector128 bitmap1 = bitmapsRef._upper._lower; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The behavior of the rest of the function remains the same if Avx2.IsSupported is false if (!Avx2.IsSupported && searchSpaceLength > Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { // Process the input in chunks of 16 bytes. // If the input length is a multiple of 16, don't consume the last 16 characters in this loop. 
@@ -815,6 +851,9 @@ internal static int LastIndexOfAnyVectorizedAnyByte(ref byte searchSpa } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] private static Vector128 IndexOfAnyLookup(Vector128 source0, Vector128 source1, Vector128 bitmapLookup) where TNegator : struct, INegator where TOptimizations : struct, IOptimizations @@ -827,6 +866,9 @@ private static Vector128 IndexOfAnyLookup(Vector } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] private static Vector128 IndexOfAnyLookupCore(Vector128 source, Vector128 bitmapLookup) { // On X86, the Ssse3.Shuffle instruction will already perform an implicit 'AND 0xF' on the indices, so we can skip it. @@ -854,6 +896,7 @@ private static Vector128 IndexOfAnyLookupCore(Vector128 source, Vect } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static Vector256 IndexOfAnyLookup(Vector256 source0, Vector256 source1, Vector256 bitmapLookup) where TNegator : struct, INegator where TOptimizations : struct, IOptimizations @@ -865,8 +908,8 @@ private static Vector256 IndexOfAnyLookup(Vector return TNegator.NegateIfNeeded(result); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static Vector256 IndexOfAnyLookupCore(Vector256 source, Vector256 bitmapLookup) { // See comments in IndexOfAnyLookupCore(Vector128) above for more details. 
@@ -878,6 +921,9 @@ private static Vector256 IndexOfAnyLookupCore(Vector256 source, Vect } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(PackedSimd))] private static Vector128 IndexOfAnyLookup(Vector128 source, Vector128 bitmapLookup0, Vector128 bitmapLookup1) where TNegator : struct, INegator { @@ -899,8 +945,8 @@ private static Vector128 IndexOfAnyLookup(Vector128 source return TNegator.NegateIfNeeded(result); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static Vector256 IndexOfAnyLookup(Vector256 source, Vector256 bitmapLookup0, Vector256 bitmapLookup1) where TNegator : struct, INegator { @@ -970,8 +1016,8 @@ private static unsafe int ComputeLastIndexOverlapped(ref T searchSp return offsetInVector - Vector128.Count + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref secondVector) / (nuint)sizeof(T)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static unsafe int ComputeFirstIndex(ref T searchSpace, ref T current, Vector256 result) where TNegator : struct, INegator { @@ -986,8 +1032,8 @@ private static unsafe int ComputeFirstIndex(ref T searchSpace, ref return offsetInVector + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref current) / (nuint)sizeof(T)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static unsafe int ComputeFirstIndexOverlapped(ref T searchSpace, ref T current0, ref T current1, Vector256 result) where TNegator : struct, INegator { @@ -1008,8 +1054,8 @@ private static unsafe int ComputeFirstIndexOverlapped(ref T searchS return offsetInVector + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref current0) / (nuint)sizeof(T)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + 
[CompExactlyDependsOn(typeof(Avx2))] private static unsafe int ComputeLastIndex(ref T searchSpace, ref T current, Vector256 result) where TNegator : struct, INegator { @@ -1024,8 +1070,8 @@ private static unsafe int ComputeLastIndex(ref T searchSpace, ref T return offsetInVector + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref current) / (nuint)sizeof(T)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static unsafe int ComputeLastIndexOverlapped(ref T searchSpace, ref T secondVector, Vector256 result) where TNegator : struct, INegator { @@ -1046,8 +1092,8 @@ private static unsafe int ComputeLastIndexOverlapped(ref T searchSp return offsetInVector - Vector256.Count + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref secondVector) / (nuint)sizeof(T)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static Vector256 FixUpPackedVector256Result(Vector256 result) { Debug.Assert(Avx2.IsSupported); @@ -1103,6 +1149,8 @@ internal interface IOptimizations { // Replace with Vector128.NarrowWithSaturation once https://github.com/dotnet/runtime/issues/75724 is implemented. 
[MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] + [CompExactlyDependsOn(typeof(PackedSimd))] public static Vector128 PackSources(Vector128 lower, Vector128 upper) { Vector128 lowerMin = Vector128.Min(lower, Vector128.Create((ushort)255)).AsInt16(); @@ -1114,7 +1162,7 @@ public static Vector128 PackSources(Vector128 lower, Vector128 PackSources(Vector256 lower, Vector256 upper) { @@ -1127,6 +1175,9 @@ public static Vector256 PackSources(Vector256 lower, Vector256 PackSources(Vector128 lower, Vector128 upper) { return @@ -1135,7 +1186,7 @@ public static Vector128 PackSources(Vector128 lower, Vector128 PackSources(Vector256 lower, Vector256 upper) { diff --git a/src/libraries/System.Private.CoreLib/src/System/SearchValues/ProbabilisticMap.cs b/src/libraries/System.Private.CoreLib/src/System/SearchValues/ProbabilisticMap.cs index efd348aa99ae11..564e65f4d0dc11 100644 --- a/src/libraries/System.Private.CoreLib/src/System/SearchValues/ProbabilisticMap.cs +++ b/src/libraries/System.Private.CoreLib/src/System/SearchValues/ProbabilisticMap.cs @@ -8,6 +8,7 @@ using System.Runtime.InteropServices; using System.Runtime.Intrinsics; using System.Runtime.Intrinsics.Arm; +using System.Runtime.Intrinsics.Wasm; using System.Runtime.Intrinsics.X86; #pragma warning disable IDE0060 // https://github.com/dotnet/roslyn-analyzers/issues/6228 @@ -105,8 +106,8 @@ ref Unsafe.As(ref MemoryMarshal.GetReference(values)), (short)ch, values.Length); - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static Vector256 ContainsMask32CharsAvx2(Vector256 charMapLower, Vector256 charMapUpper, ref char searchSpace) { Vector256 source0 = Vector256.LoadUnsafe(ref searchSpace); @@ -126,8 +127,8 @@ private static Vector256 ContainsMask32CharsAvx2(Vector256 charMapLo return resultLower & resultUpper; } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + 
[CompExactlyDependsOn(typeof(Avx2))] private static Vector256 IsCharBitSetAvx2(Vector256 charMapLower, Vector256 charMapUpper, Vector256 values) { // X86 doesn't have a logical right shift intrinsic for bytes: https://github.com/dotnet/runtime/issues/82564 @@ -145,6 +146,8 @@ private static Vector256 IsCharBitSetAvx2(Vector256 charMapLower, Ve } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(Sse2))] private static Vector128 ContainsMask16Chars(Vector128 charMapLower, Vector128 charMapUpper, ref char searchSpace) { Vector128 source0 = Vector128.LoadUnsafe(ref searchSpace); @@ -165,6 +168,11 @@ private static Vector128 ContainsMask16Chars(Vector128 charMapLower, } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] + [CompExactlyDependsOn(typeof(Ssse3))] + [CompExactlyDependsOn(typeof(AdvSimd))] + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(PackedSimd))] private static Vector128 IsCharBitSet(Vector128 charMapLower, Vector128 charMapUpper, Vector128 values) { // X86 doesn't have a logical right shift intrinsic for bytes: https://github.com/dotnet/runtime/issues/82564 @@ -354,6 +362,8 @@ internal static int LastIndexOfAny(ref uint charMap, ref char searchSp return -1; } + [CompExactlyDependsOn(typeof(AdvSimd.Arm64))] + [CompExactlyDependsOn(typeof(Sse41))] private static int IndexOfAnyVectorized(ref uint charMap, ref char searchSpace, int searchSpaceLength, ReadOnlySpan values) { Debug.Assert(Sse41.IsSupported || AdvSimd.Arm64.IsSupported); @@ -365,7 +375,9 @@ private static int IndexOfAnyVectorized(ref uint charMap, ref char searchSpace, Vector128 charMapLower = Vector128.LoadUnsafe(ref Unsafe.As(ref charMap)); Vector128 charMapUpper = Vector128.LoadUnsafe(ref Unsafe.As(ref charMap), (nuint)Vector128.Count); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In 
this case, we have an else clause which has the same semantic meaning whether or not Avx2 is considered supported or unsupported if (Avx2.IsSupported && searchSpaceLength >= 32) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 charMapLower256 = Vector256.Create(charMapLower, charMapLower); Vector256 charMapUpper256 = Vector256.Create(charMapUpper, charMapUpper); diff --git a/src/libraries/System.Private.CoreLib/src/System/SpanHelpers.Packed.cs b/src/libraries/System.Private.CoreLib/src/System/SpanHelpers.Packed.cs index c48622ca293550..1851d1e26ffefa 100644 --- a/src/libraries/System.Private.CoreLib/src/System/SpanHelpers.Packed.cs +++ b/src/libraries/System.Private.CoreLib/src/System/SpanHelpers.Packed.cs @@ -35,37 +35,46 @@ public static unsafe bool CanUsePackedIndexOf(T value) } [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOf(ref char searchSpace, char value, int length) => IndexOf>(ref Unsafe.As(ref searchSpace), (short)value, length); [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOfAnyExcept(ref char searchSpace, char value, int length) => IndexOf>(ref Unsafe.As(ref searchSpace), (short)value, length); [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOfAny(ref char searchSpace, char value0, char value1, int length) => IndexOfAny>(ref Unsafe.As(ref searchSpace), (short)value0, (short)value1, length); [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOfAnyExcept(ref char searchSpace, char value0, char value1, int length) => IndexOfAny>(ref Unsafe.As(ref searchSpace), (short)value0, (short)value1, length); [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOfAny(ref char searchSpace, char value0, 
char value1, char value2, int length) => IndexOfAny>(ref Unsafe.As(ref searchSpace), (short)value0, (short)value1, (short)value2, length); [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOfAnyExcept(ref char searchSpace, char value0, char value1, char value2, int length) => IndexOfAny>(ref Unsafe.As(ref searchSpace), (short)value0, (short)value1, (short)value2, length); [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOfAnyInRange(ref char searchSpace, char lowInclusive, char rangeInclusive, int length) => IndexOfAnyInRange>(ref Unsafe.As(ref searchSpace), (short)lowInclusive, (short)rangeInclusive, length); [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Sse2))] public static int IndexOfAnyExceptInRange(ref char searchSpace, char lowInclusive, char rangeInclusive, int length) => IndexOfAnyInRange>(ref Unsafe.As(ref searchSpace), (short)lowInclusive, (short)rangeInclusive, length); + [CompExactlyDependsOn(typeof(Sse2))] public static bool Contains(ref short searchSpace, short value, int length) { Debug.Assert(CanUsePackedIndexOf(value)); @@ -105,7 +114,9 @@ public static bool Contains(ref short searchSpace, short value, int length) { ref short currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The else condition for this if statement is identical in semantics to Avx2 specific code if (Avx2.IsSupported && length > Vector256.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 packedValue = Vector256.Create((byte)value); @@ -158,7 +169,16 @@ public static bool Contains(ref short searchSpace, short value, int length) { Vector128 packedValue = Vector128.Create((byte)value); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibConditionParsing // A negated IsSupported condition isn't 
parseable by the intrinsics analyzer, but in this case, it is only used in combination + // with the check above of Avx2.IsSupported && length > Vector256.Count which makes the logic + // in this if statement dead code when Avx2.IsSupported. Presumably this negated IsSupported check is to assist the JIT in + // not generating dead code. +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // This is paired with the check above, and since these if statements are contained in 1 function, the code + // may take a dependence on the JIT compiler producing a consistent value for the result of a call to IsSupported + // This logic MUST NOT be extracted to a helper function if (!Avx2.IsSupported && length > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough +#pragma warning restore IntrinsicsInSystemPrivateCoreLibConditionParsing { // Process the input in chunks of 16 characters (2 * Vector128). // If the input length is a multiple of 16, don't consume the last 16 characters in this loop. 
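The pragma comments above annotate a deliberate paired-check pattern: one branch tests `Avx2.IsSupported` and a later branch tests `!Avx2.IsSupported`, inside the same method. A minimal sketch of the shape being protected (names and thresholds are placeholders, not the real helpers):

```csharp
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static int WidthForSearch(int length)
{
    // The JIT evaluates Avx2.IsSupported as a constant, so exactly one of
    // these branches survives as live code in any given compilation.
    if (Avx2.IsSupported && length > Vector256<short>.Count)
    {
        return 256; // AVX2 path: 32-byte vectors
    }

    // Dead code when Avx2.IsSupported is true; the negated check exists so
    // the JIT emits no 128-bit fallback alongside the 256-bit path.
    if (!Avx2.IsSupported && length > 2 * Vector128<short>.Count)
    {
        return 128; // pre-AVX2 path: 16-byte vectors
    }

    return 0; // scalar fallback
}
```

The "MUST NOT be extracted to a helper function" warning in the diff exists because the soundness of this pattern rests on both checks being folded to the same constant within a single compiled method body.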
@@ -208,6 +228,7 @@ public static bool Contains(ref short searchSpace, short value, int length) return false; } + [CompExactlyDependsOn(typeof(Sse2))] private static int IndexOf(ref short searchSpace, short value, int length) where TNegator : struct, SpanHelpers.INegator { @@ -242,7 +263,9 @@ private static int IndexOf(ref short searchSpace, short value, int len { ref short currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The else condition for this if statement is identical in semantics to Avx2 specific code if (Avx2.IsSupported && length > Vector256.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 packedValue = Vector256.Create((byte)value); @@ -297,7 +320,16 @@ private static int IndexOf(ref short searchSpace, short value, int len { Vector128 packedValue = Vector128.Create((byte)value); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibConditionParsing // A negated IsSupported condition isn't parseable by the intrinsics analyzer, but in this case, it is only used in combination + // with the check above of Avx2.IsSupported && length > Vector256.Count which makes the logic + // in this if statement dead code when Avx2.IsSupported. Presumably this negated IsSupported check is to assist the JIT in + // not generating dead code. 
+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // This is paired with the check above, and since these if statements are contained in 1 function, the code + // may take a dependence on the JIT compiler producing a consistent value for the result of a call to IsSupported + // This logic MUST NOT be extracted to a helper function if (!Avx2.IsSupported && length > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough +#pragma warning restore IntrinsicsInSystemPrivateCoreLibConditionParsing { // Process the input in chunks of 16 characters (2 * Vector128). // If the input length is a multiple of 16, don't consume the last 16 characters in this loop. @@ -349,6 +381,7 @@ private static int IndexOf(ref short searchSpace, short value, int len return -1; } + [CompExactlyDependsOn(typeof(Sse2))] private static int IndexOfAny(ref short searchSpace, short value0, short value1, int length) where TNegator : struct, SpanHelpers.INegator { @@ -390,7 +423,9 @@ private static int IndexOfAny(ref short searchSpace, short value0, sho { ref short currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The else condition for this if statement is identical in semantics to Avx2 specific code if (Avx2.IsSupported && length > Vector256.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 packedValue0 = Vector256.Create((byte)value0); Vector256 packedValue1 = Vector256.Create((byte)value1); @@ -447,7 +482,16 @@ private static int IndexOfAny(ref short searchSpace, short value0, sho Vector128 packedValue0 = Vector128.Create((byte)value0); Vector128 packedValue1 = Vector128.Create((byte)value1); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibConditionParsing // A negated IsSupported condition isn't parseable by the intrinsics analyzer, but in this case, it is only used in 
combination + // with the check above of Avx2.IsSupported && length > Vector256.Count which makes the logic + // in this if statement dead code when Avx2.IsSupported. Presumably this negated IsSupported check is to assist the JIT in + // not generating dead code. +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // This is paired with the check above, and since these if statements are contained in 1 function, the code + // may take a dependence on the JIT compiler producing a consistent value for the result of a call to IsSupported + // This logic MUST NOT be extracted to a helper function if (!Avx2.IsSupported && length > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough +#pragma warning restore IntrinsicsInSystemPrivateCoreLibConditionParsing { // Process the input in chunks of 16 characters (2 * Vector128). // If the input length is a multiple of 16, don't consume the last 16 characters in this loop. 
@@ -499,6 +543,7 @@ private static int IndexOfAny(ref short searchSpace, short value0, sho return -1; } + [CompExactlyDependsOn(typeof(Sse2))] private static int IndexOfAny(ref short searchSpace, short value0, short value1, short value2, int length) where TNegator : struct, SpanHelpers.INegator { @@ -541,7 +586,9 @@ private static int IndexOfAny(ref short searchSpace, short value0, sho { ref short currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The else condition for this if statement is identical in semantics to Avx2 specific code if (Avx2.IsSupported && length > Vector256.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 packedValue0 = Vector256.Create((byte)value0); Vector256 packedValue1 = Vector256.Create((byte)value1); @@ -600,7 +647,16 @@ private static int IndexOfAny(ref short searchSpace, short value0, sho Vector128 packedValue1 = Vector128.Create((byte)value1); Vector128 packedValue2 = Vector128.Create((byte)value2); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibConditionParsing // A negated IsSupported condition isn't parseable by the intrinsics analyzer, but in this case, it is only used in combination + // with the check above of Avx2.IsSupported && length > Vector256.Count which makes the logic + // in this if statement dead code when Avx2.IsSupported. Presumably this negated IsSupported check is to assist the JIT in + // not generating dead code. 
+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // This is paired with the check above, and since these if statements are contained in 1 function, the code + // may take a dependence on the JIT compiler producing a consistent value for the result of a call to IsSupported + // This logic MUST NOT be extracted to a helper function if (!Avx2.IsSupported && length > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough +#pragma warning restore IntrinsicsInSystemPrivateCoreLibConditionParsing { // Process the input in chunks of 16 characters (2 * Vector128). // If the input length is a multiple of 16, don't consume the last 16 characters in this loop. @@ -652,6 +708,7 @@ private static int IndexOfAny(ref short searchSpace, short value0, sho return -1; } + [CompExactlyDependsOn(typeof(Sse2))] private static int IndexOfAnyInRange(ref short searchSpace, short lowInclusive, short rangeInclusive, int length) where TNegator : struct, SpanHelpers.INegator { @@ -676,7 +733,9 @@ private static int IndexOfAnyInRange(ref short searchSpace, short lowI { ref short currentSearchSpace = ref searchSpace; +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // The else condition for this if statement is identical in semantics to Avx2 specific code if (Avx2.IsSupported && length > Vector256.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough { Vector256 lowVector = Vector256.Create((byte)lowInclusive); Vector256 rangeVector = Vector256.Create((byte)rangeInclusive); @@ -733,7 +792,16 @@ private static int IndexOfAnyInRange(ref short searchSpace, short lowI Vector128 lowVector = Vector128.Create((byte)lowInclusive); Vector128 rangeVector = Vector128.Create((byte)rangeInclusive); +#pragma warning disable IntrinsicsInSystemPrivateCoreLibConditionParsing // A negated IsSupported condition isn't parseable by the intrinsics analyzer, but in 
this case, it is only used in combination + // with the check above of Avx2.IsSupported && length > Vector256.Count which makes the logic + // in this if statement dead code when Avx2.IsSupported. Presumably this negated IsSupported check is to assist the JIT in + // not generating dead code. +#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // This is paired with the check above, and since these if statements are contained in 1 function, the code + // may take a dependence on the JIT compiler producing a consistent value for the result of a call to IsSupported + // This logic MUST NOT be extracted to a helper function if (!Avx2.IsSupported && length > 2 * Vector128.Count) +#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough +#pragma warning restore IntrinsicsInSystemPrivateCoreLibConditionParsing { // Process the input in chunks of 16 characters (2 * Vector128). // If the input length is a multiple of 16, don't consume the last 16 characters in this loop. 
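The `PackSources` helpers that these hunks annotate implement the core "packed" trick in SpanHelpers.Packed: two vectors of 16-bit characters are narrowed into one vector of bytes with unsigned saturation, so each byte-wise comparison covers twice as many characters per iteration. A simplified sketch of the x86 variant (the diff guards the real one with `[CompExactlyDependsOn(typeof(Sse2))]`):

```csharp
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

// Values above 255 saturate to 255 instead of wrapping around; that
// saturation is what keeps the narrowed byte-wise comparison sound for
// char values that do not fit in a byte.
static Vector128<byte> PackSources(Vector128<short> source0, Vector128<short> source1)
    => Sse2.PackUnsignedSaturate(source0, source1);
```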
@@ -785,8 +853,8 @@ private static int IndexOfAnyInRange(ref short searchSpace, short lowI return -1; } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static Vector256 PackSources(Vector256 source0, Vector256 source1) { Debug.Assert(Avx2.IsSupported); @@ -798,6 +866,7 @@ private static Vector256 PackSources(Vector256 source0, Vector256 PackSources(Vector128 source0, Vector128 source1) { Debug.Assert(Sse2.IsSupported); @@ -826,8 +895,8 @@ private static int ComputeFirstIndex(ref short searchSpace, ref short current, V return index + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref current) / sizeof(short)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static int ComputeFirstIndex(ref short searchSpace, ref short current, Vector256 equals) { uint notEqualsElements = FixUpPackedVector256Result(equals).ExtractMostSignificantBits(); @@ -849,8 +918,8 @@ private static int ComputeFirstIndexOverlapped(ref short searchSpace, ref short return offsetInVector + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref current0) / sizeof(short)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static int ComputeFirstIndexOverlapped(ref short searchSpace, ref short current0, ref short current1, Vector256 equals) { uint notEqualsElements = FixUpPackedVector256Result(equals).ExtractMostSignificantBits(); @@ -864,8 +933,8 @@ private static int ComputeFirstIndexOverlapped(ref short searchSpace, ref short return offsetInVector + (int)((nuint)Unsafe.ByteOffset(ref searchSpace, ref current0) / sizeof(short)); } - [BypassReadyToRun] [MethodImpl(MethodImplOptions.AggressiveInlining)] + [CompExactlyDependsOn(typeof(Avx2))] private static Vector256 FixUpPackedVector256Result(Vector256 result) { Debug.Assert(Avx2.IsSupported); diff --git 
a/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Equality.cs b/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Equality.cs index d857b1236dd0e4..5a9e9ef09cfb14 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Equality.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Equality.cs @@ -61,38 +61,37 @@ private static bool Equals(ref TLeft left, ref TRight ri } } } - else if (!Vector256.IsHardwareAccelerated || length < (uint)Vector256.Count) + else if (Avx.IsSupported && length >= (uint)Vector256.Count) { ref TLeft currentLeftSearchSpace = ref left; - ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count128); + ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count256); ref TRight currentRightSearchSpace = ref right; - ref TRight oneVectorAwayFromRightEnd = ref Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector128.Count); + ref TRight oneVectorAwayFromRightEnd = ref Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector256.Count); - Vector128 leftValues; - Vector128 rightValues; + Vector256 leftValues; + Vector256 rightValues; // Loop until either we've finished all elements or there's less than a vector's-worth remaining. 
do { - // it's OK to widen the bytes, it's NOT OK to narrow the chars (we could loose some information) - leftValues = TLoader.Load128(ref currentLeftSearchSpace); - rightValues = Vector128.LoadUnsafe(ref currentRightSearchSpace); + leftValues = TLoader.Load256(ref currentLeftSearchSpace); + rightValues = Vector256.LoadUnsafe(ref currentRightSearchSpace); if (leftValues != rightValues || !AllCharsInVectorAreAscii(leftValues | rightValues)) { return false; } - currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, (uint)Vector128.Count); - currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count128); + currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, Vector256.Count); + currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count256); } while (!Unsafe.IsAddressGreaterThan(ref currentRightSearchSpace, ref oneVectorAwayFromRightEnd)); // If any elements remain, process the last vector in the search space. - if (length % (uint)Vector128.Count != 0) + if (length % (uint)Vector256.Count != 0) { - leftValues = TLoader.Load128(ref oneVectorAwayFromLeftEnd); - rightValues = Vector128.LoadUnsafe(ref oneVectorAwayFromRightEnd); + leftValues = TLoader.Load256(ref oneVectorAwayFromLeftEnd); + rightValues = Vector256.LoadUnsafe(ref oneVectorAwayFromRightEnd); if (leftValues != rightValues || !AllCharsInVectorAreAscii(leftValues | rightValues)) { @@ -103,34 +102,35 @@ private static bool Equals(ref TLeft left, ref TRight ri else { ref TLeft currentLeftSearchSpace = ref left; - ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count256); + ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count128); ref TRight currentRightSearchSpace = ref right; - ref TRight oneVectorAwayFromRightEnd = ref Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector256.Count); + ref TRight oneVectorAwayFromRightEnd = ref 
Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector128.Count); - Vector256 leftValues; - Vector256 rightValues; + Vector128 leftValues; + Vector128 rightValues; // Loop until either we've finished all elements or there's less than a vector's-worth remaining. do { - leftValues = TLoader.Load256(ref currentLeftSearchSpace); - rightValues = Vector256.LoadUnsafe(ref currentRightSearchSpace); + // it's OK to widen the bytes, it's NOT OK to narrow the chars (we could loose some information) + leftValues = TLoader.Load128(ref currentLeftSearchSpace); + rightValues = Vector128.LoadUnsafe(ref currentRightSearchSpace); if (leftValues != rightValues || !AllCharsInVectorAreAscii(leftValues | rightValues)) { return false; } - currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, Vector256.Count); - currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count256); + currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, (uint)Vector128.Count); + currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count128); } while (!Unsafe.IsAddressGreaterThan(ref currentRightSearchSpace, ref oneVectorAwayFromRightEnd)); // If any elements remain, process the last vector in the search space. 
- if (length % (uint)Vector256.Count != 0) + if (length % (uint)Vector128.Count != 0) { - leftValues = TLoader.Load256(ref oneVectorAwayFromLeftEnd); - rightValues = Vector256.LoadUnsafe(ref oneVectorAwayFromRightEnd); + leftValues = TLoader.Load128(ref oneVectorAwayFromLeftEnd); + rightValues = Vector128.LoadUnsafe(ref oneVectorAwayFromRightEnd); if (leftValues != rightValues || !AllCharsInVectorAreAscii(leftValues | rightValues)) { @@ -206,73 +206,72 @@ private static bool EqualsIgnoreCase(ref TLeft left, ref } } } - else if (!Vector256.IsHardwareAccelerated || length < (uint)Vector256.Count) + else if (Avx.IsSupported && length >= (uint)Vector256.Count) { ref TLeft currentLeftSearchSpace = ref left; - ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count128); + ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count256); ref TRight currentRightSearchSpace = ref right; - ref TRight oneVectorAwayFromRightEnd = ref Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector128.Count); + ref TRight oneVectorAwayFromRightEnd = ref Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector256.Count); - Vector128 leftValues; - Vector128 rightValues; + Vector256 leftValues; + Vector256 rightValues; - Vector128 loweringMask = Vector128.Create(TRight.CreateTruncating(0x20)); - Vector128 vecA = Vector128.Create(TRight.CreateTruncating('a')); - Vector128 vecZMinusA = Vector128.Create(TRight.CreateTruncating(('z' - 'a'))); + Vector256 loweringMask = Vector256.Create(TRight.CreateTruncating(0x20)); + Vector256 vecA = Vector256.Create(TRight.CreateTruncating('a')); + Vector256 vecZMinusA = Vector256.Create(TRight.CreateTruncating(('z' - 'a'))); // Loop until either we've finished all elements or there's less than a vector's-worth remaining. 
do { - // it's OK to widen the bytes, it's NOT OK to narrow the chars (we could loose some information) - leftValues = TLoader.Load128(ref currentLeftSearchSpace); - rightValues = Vector128.LoadUnsafe(ref currentRightSearchSpace); + leftValues = TLoader.Load256(ref currentLeftSearchSpace); + rightValues = Vector256.LoadUnsafe(ref currentRightSearchSpace); if (!AllCharsInVectorAreAscii(leftValues | rightValues)) { return false; } - Vector128 notEquals = ~Vector128.Equals(leftValues, rightValues); + Vector256 notEquals = ~Vector256.Equals(leftValues, rightValues); - if (notEquals != Vector128.Zero) + if (notEquals != Vector256.Zero) { // not exact match leftValues |= loweringMask; rightValues |= loweringMask; - if (Vector128.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues) + if (Vector256.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues) { return false; // first input isn't in [A-Za-z], and not exact match of lowered } } - currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, (uint)Vector128.Count); - currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count128); + currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, (uint)Vector256.Count); + currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count256); } while (!Unsafe.IsAddressGreaterThan(ref currentRightSearchSpace, ref oneVectorAwayFromRightEnd)); // If any elements remain, process the last vector in the search space. 
- if (length % (uint)Vector128.Count != 0) + if (length % (uint)Vector256.Count != 0) { - leftValues = TLoader.Load128(ref oneVectorAwayFromLeftEnd); - rightValues = Vector128.LoadUnsafe(ref oneVectorAwayFromRightEnd); + leftValues = TLoader.Load256(ref oneVectorAwayFromLeftEnd); + rightValues = Vector256.LoadUnsafe(ref oneVectorAwayFromRightEnd); if (!AllCharsInVectorAreAscii(leftValues | rightValues)) { return false; } - Vector128 notEquals = ~Vector128.Equals(leftValues, rightValues); + Vector256 notEquals = ~Vector256.Equals(leftValues, rightValues); - if (notEquals != Vector128.Zero) + if (notEquals != Vector256.Zero) { // not exact match leftValues |= loweringMask; rightValues |= loweringMask; - if (Vector128.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues) + if (Vector256.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues) { return false; // first input isn't in [A-Za-z], and not exact match of lowered } @@ -282,69 +281,70 @@ private static bool EqualsIgnoreCase(ref TLeft left, ref else { ref TLeft currentLeftSearchSpace = ref left; - ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count256); + ref TLeft oneVectorAwayFromLeftEnd = ref Unsafe.Add(ref currentLeftSearchSpace, length - TLoader.Count128); ref TRight currentRightSearchSpace = ref right; - ref TRight oneVectorAwayFromRightEnd = ref Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector256.Count); + ref TRight oneVectorAwayFromRightEnd = ref Unsafe.Add(ref currentRightSearchSpace, length - (uint)Vector128.Count); - Vector256 leftValues; - Vector256 rightValues; + Vector128 leftValues; + Vector128 rightValues; - Vector256 loweringMask = Vector256.Create(TRight.CreateTruncating(0x20)); - Vector256 vecA = Vector256.Create(TRight.CreateTruncating('a')); - Vector256 vecZMinusA = Vector256.Create(TRight.CreateTruncating(('z' - 'a'))); + Vector128 loweringMask = 
Vector128.Create(TRight.CreateTruncating(0x20)); + Vector128 vecA = Vector128.Create(TRight.CreateTruncating('a')); + Vector128 vecZMinusA = Vector128.Create(TRight.CreateTruncating(('z' - 'a'))); // Loop until either we've finished all elements or there's less than a vector's-worth remaining. do { - leftValues = TLoader.Load256(ref currentLeftSearchSpace); - rightValues = Vector256.LoadUnsafe(ref currentRightSearchSpace); + // it's OK to widen the bytes, it's NOT OK to narrow the chars (we could loose some information) + leftValues = TLoader.Load128(ref currentLeftSearchSpace); + rightValues = Vector128.LoadUnsafe(ref currentRightSearchSpace); if (!AllCharsInVectorAreAscii(leftValues | rightValues)) { return false; } - Vector256 notEquals = ~Vector256.Equals(leftValues, rightValues); + Vector128 notEquals = ~Vector128.Equals(leftValues, rightValues); - if (notEquals != Vector256.Zero) + if (notEquals != Vector128.Zero) { // not exact match leftValues |= loweringMask; rightValues |= loweringMask; - if (Vector256.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues) + if (Vector128.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues) { return false; // first input isn't in [A-Za-z], and not exact match of lowered } } - currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, (uint)Vector256.Count); - currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count256); + currentRightSearchSpace = ref Unsafe.Add(ref currentRightSearchSpace, (uint)Vector128.Count); + currentLeftSearchSpace = ref Unsafe.Add(ref currentLeftSearchSpace, TLoader.Count128); } while (!Unsafe.IsAddressGreaterThan(ref currentRightSearchSpace, ref oneVectorAwayFromRightEnd)); // If any elements remain, process the last vector in the search space. 
-                if (length % (uint)Vector256<TRight>.Count != 0)
+                if (length % (uint)Vector128<TRight>.Count != 0)
                 {
-                    leftValues = TLoader.Load256(ref oneVectorAwayFromLeftEnd);
-                    rightValues = Vector256.LoadUnsafe(ref oneVectorAwayFromRightEnd);
+                    leftValues = TLoader.Load128(ref oneVectorAwayFromLeftEnd);
+                    rightValues = Vector128.LoadUnsafe(ref oneVectorAwayFromRightEnd);

                     if (!AllCharsInVectorAreAscii(leftValues | rightValues))
                     {
                         return false;
                     }

-                    Vector256<TRight> notEquals = ~Vector256.Equals(leftValues, rightValues);
+                    Vector128<TRight> notEquals = ~Vector128.Equals(leftValues, rightValues);

-                    if (notEquals != Vector256<TRight>.Zero)
+                    if (notEquals != Vector128<TRight>.Zero)
                     {
                         // not exact match
                         leftValues |= loweringMask;
                         rightValues |= loweringMask;

-                        if (Vector256.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues)
+                        if (Vector128.GreaterThanAny((leftValues - vecA) & notEquals, vecZMinusA) || leftValues != rightValues)
                         {
                             return false; // first input isn't in [A-Za-z], and not exact match of lowered
                         }
diff --git a/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Utility.cs b/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Utility.cs
index 2537f9c770ef8e..cbec6d4acdb3f2 100644
--- a/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Utility.cs
+++ b/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.Utility.cs
@@ -53,6 +53,7 @@ private static bool AllCharsInUInt64AreAscii(ulong value)
         }

         [MethodImpl(MethodImplOptions.AggressiveInlining)]
+        [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
         private static int GetIndexOfFirstNonAsciiByteInLane_AdvSimd(Vector128<byte> value, Vector128<byte> bitmask)
         {
             if (!AdvSimd.Arm64.IsSupported || !BitConverter.IsLittleEndian)
@@ -1478,6 +1479,7 @@ private static bool AllCharsInVectorAreAscii<T>(Vector128<T> vector)
         }

         [MethodImpl(MethodImplOptions.AggressiveInlining)]
+        [CompExactlyDependsOn(typeof(Avx))]
         private static bool AllCharsInVectorAreAscii<T>(Vector256<T> vector)
             where T : unmanaged
         {
diff --git
a/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.cs b/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.cs
index 0b9af8cf0c6ae9..85801e101a1735 100644
--- a/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.cs
+++ b/src/libraries/System.Private.CoreLib/src/System/Text/Ascii.cs
@@ -5,6 +5,7 @@
 using System.Runtime.CompilerServices;
 using System.Runtime.InteropServices;
 using System.Runtime.Intrinsics;
+using System.Runtime.Intrinsics.X86;

 namespace System.Text
 {
@@ -112,7 +113,7 @@ private static unsafe bool IsValidCore<T>(ref T searchSpace, int length) where T
                     Vector128.LoadUnsafe(ref Unsafe.Subtract(ref searchSpaceEnd, Vector128<T>.Count)));
             }

-            if (Vector256.IsHardwareAccelerated)
+            if (Avx.IsSupported)
             {
                 // Process inputs with lengths [33, 64] bytes.
                 if (length <= 2 * Vector256<T>.Count)
diff --git a/src/libraries/System.Private.CoreLib/src/System/Text/Latin1Utility.cs b/src/libraries/System.Private.CoreLib/src/System/Text/Latin1Utility.cs
index e579f6fa9d56b1..92f4c259c9f976 100644
--- a/src/libraries/System.Private.CoreLib/src/System/Text/Latin1Utility.cs
+++ b/src/libraries/System.Private.CoreLib/src/System/Text/Latin1Utility.cs
@@ -163,6 +163,7 @@ private static unsafe nuint GetIndexOfFirstNonLatin1Char_Default(char* pBuffer,
             goto Finish;
         }

+        [CompExactlyDependsOn(typeof(Sse2))]
         private static unsafe nuint GetIndexOfFirstNonLatin1Char_Sse2(char* pBuffer, nuint bufferLength /* in chars */)
         {
             // This method contains logic optimized for both SSE2 and SSE41.
Much of the logic in this method
@@ -260,7 +261,9 @@ private static unsafe nuint GetIndexOfFirstNonLatin1Char_Sse2(char* pBuffer, nui
                 secondVector = Sse2.LoadAlignedVector128((ushort*)pBuffer + SizeOfVector128InChars);
                 Vector128<ushort> combinedVector = Sse2.Or(firstVector, secondVector);

+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
                 if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
                 {
                     // If a non-Latin-1 bit is set in any WORD of the combined vector, we have seen non-Latin-1 data.
                     // Jump to the non-Latin-1 handler to figure out which particular vector contained non-Latin-1 data.
@@ -303,7 +306,9 @@ private static unsafe nuint GetIndexOfFirstNonLatin1Char_Sse2(char* pBuffer, nui
                 firstVector = Sse2.LoadAlignedVector128((ushort*)pBuffer);

+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
                 if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
                 {
                     // If a non-Latin-1 bit is set in any WORD of the combined vector, we have seen non-Latin-1 data.
                     // Jump to the non-Latin-1 handler to figure out which particular vector contained non-Latin-1 data.
@@ -336,7 +341,9 @@ private static unsafe nuint GetIndexOfFirstNonLatin1Char_Sse2(char* pBuffer, nui
             pBuffer = (char*)((byte*)pBuffer + (bufferLength & (SizeOfVector128InBytes - 1)) - SizeOfVector128InBytes);
             firstVector = Sse2.LoadVector128((ushort*)pBuffer); // unaligned load

+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
             if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
             {
                 // If a non-Latin-1 bit is set in any WORD of the combined vector, we have seen non-Latin-1 data.
                 // Jump to the non-Latin-1 handler to figure out which particular vector contained non-Latin-1 data.
@@ -370,7 +377,9 @@ private static unsafe nuint GetIndexOfFirstNonLatin1Char_Sse2(char* pBuffer, nui
             // we'll make sure the first vector local is the one that contains the non-Latin-1 data.
             // See comment earlier in the method for an explanation of how the below logic works.

+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
             if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
             {
                 if (!Sse41.TestZ(firstVector, latin1MaskForTestZ))
                 {
@@ -445,7 +454,9 @@ private static unsafe nuint GetIndexOfFirstNonLatin1Char_Sse2(char* pBuffer, nui
             if ((bufferLength & 4) != 0)
             {
+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Bmi1.X64 is considered supported or unsupported
                 if (Bmi1.X64.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
                 {
                     // If we can use 64-bit tzcnt to count the number of leading Latin-1 chars, prefer it.
@@ -750,6 +761,7 @@ public static unsafe nuint NarrowUtf16ToLatin1(char* pUtf16Buffer, byte* pLatin1
             goto Finish;
         }

+        [CompExactlyDependsOn(typeof(Sse2))]
         private static unsafe nuint NarrowUtf16ToLatin1_Sse2(char* pUtf16Buffer, byte* pLatin1Buffer, nuint elementCount)
         {
             // This method contains logic optimized for both SSE2 and SSE41. Much of the logic in this method
@@ -779,7 +791,9 @@ private static unsafe nuint NarrowUtf16ToLatin1_Sse2(char* pUtf16Buffer, byte* p
             // If there's non-Latin-1 data in the first 8 elements of the vector, there's nothing we can do.
             // See comments in GetIndexOfFirstNonLatin1Char_Sse2 for information about how this works.

+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
             if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
             {
                 if (!Sse41.TestZ(utf16VectorFirst, latin1MaskForTestZ))
                 {
@@ -819,7 +833,9 @@ private static unsafe nuint NarrowUtf16ToLatin1_Sse2(char* pUtf16Buffer, byte* p
             utf16VectorFirst = Sse2.LoadVector128((short*)pUtf16Buffer + currentOffsetInElements); // unaligned load

             // See comments earlier in this method for information about how this works.
+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
             if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
             {
                 if (!Sse41.TestZ(utf16VectorFirst, latin1MaskForTestZ))
                 {
@@ -858,7 +874,9 @@ private static unsafe nuint NarrowUtf16ToLatin1_Sse2(char* pUtf16Buffer, byte* p
             Vector128<short> combinedVector = Sse2.Or(utf16VectorFirst, utf16VectorSecond);

             // See comments in GetIndexOfFirstNonLatin1Char_Sse2 for information about how this works.
+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
             if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
             {
                 if (!Sse41.TestZ(combinedVector, latin1MaskForTestZ))
                 {
@@ -892,7 +910,9 @@ private static unsafe nuint NarrowUtf16ToLatin1_Sse2(char* pUtf16Buffer, byte* p
             // Can we at least narrow the high vector?
             // See comments in GetIndexOfFirstNonLatin1Char_Sse2 for information about how this works.

+#pragma warning disable IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough // In this case, we have an else clause which has the same semantic meaning whether or not Sse41 is considered supported or unsupported
             if (Sse41.IsSupported)
+#pragma warning restore IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough
             {
                 if (!Sse41.TestZ(utf16VectorFirst, latin1MaskForTestZ))
                 {
@@ -940,6 +960,7 @@ public static unsafe void WidenLatin1ToUtf16(byte* pLatin1Buffer, char* pUtf16Bu
             }
         }

+        [CompExactlyDependsOn(typeof(Sse2))]
         private static unsafe void WidenLatin1ToUtf16_Sse2(byte* pLatin1Buffer, char* pUtf16Buffer, nuint elementCount)
         {
             // JIT turns the below into constants
diff --git a/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf16Utility.Validation.cs b/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf16Utility.Validation.cs
index 199d494f6e174a..8818a96d16f107 100644
--- a/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf16Utility.Validation.cs
+++ b/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf16Utility.Validation.cs
@@ -130,7 +130,7 @@ internal static unsafe partial class Utf16Utility
             // bit for 1-byte or 2-byte elements. The 0x0080 bit will already have been set for non-ASCII (2-byte
             // and 3-byte) elements.
-            if (AdvSimd.IsSupported)
+            if (AdvSimd.Arm64.IsSupported)
             {
                 charIsThreeByteUtf8Encoded = AdvSimd.AddSaturate(utf16Data, vector7800);
                 mask = GetNonAsciiBytes(AdvSimd.Or(charIsNonAscii, charIsThreeByteUtf8Encoded).AsByte(), bitMask128);
@@ -489,6 +489,7 @@ internal static unsafe partial class Utf16Utility
         }

         [MethodImpl(MethodImplOptions.AggressiveInlining)]
+        [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
         private static uint GetNonAsciiBytes(Vector128<byte> value, Vector128<byte> bitMask128)
         {
             Debug.Assert(AdvSimd.Arm64.IsSupported);
diff --git a/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Transcoding.cs b/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Transcoding.cs
index d0b31f57072067..76c4bed4d9a769 100644
--- a/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Transcoding.cs
+++ b/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Transcoding.cs
@@ -953,7 +953,7 @@ public static OperationStatus TranscodeToUtf8(char* pInputBuffer, int inputLengt
                     utf16Data = Unsafe.ReadUnaligned<Vector128<ushort>>(pInputBuffer);

-                    if (AdvSimd.IsSupported)
+                    if (AdvSimd.Arm64.IsSupported)
                     {
                         Vector128<ushort> isUtf16DataNonAscii = AdvSimd.CompareTest(utf16Data, nonAsciiUtf16DataMask);
                         bool hasNonAsciiDataInVector = AdvSimd.Arm64.MinPairwise(isUtf16DataNonAscii, isUtf16DataNonAscii).AsUInt64().ToScalar() != 0;
diff --git a/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Validation.cs b/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Validation.cs
index 91bd846b41e091..a542dad72b5c33 100644
--- a/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Validation.cs
+++ b/src/libraries/System.Private.CoreLib/src/System/Text/Unicode/Utf8Utility.Validation.cs
@@ -740,6 +740,7 @@ internal static unsafe partial class Utf8Utility
         }

         [MethodImpl(MethodImplOptions.AggressiveInlining)]
+        [CompExactlyDependsOn(typeof(AdvSimd.Arm64))]
         private static ulong GetNonAsciiBytes(Vector128<byte> value, Vector128<byte> bitMask128)
         {
             if (!AdvSimd.Arm64.IsSupported || !BitConverter.IsLittleEndian)
diff --git a/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/CSharpAnalyzerVerifier`1+Test.cs b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/CSharpAnalyzerVerifier`1+Test.cs
new file mode 100644
index 00000000000000..1372bc02caacde
--- /dev/null
+++ b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/CSharpAnalyzerVerifier`1+Test.cs
@@ -0,0 +1,24 @@
+using Microsoft.CodeAnalysis.CSharp.Testing;
+using Microsoft.CodeAnalysis.Diagnostics;
+using Microsoft.CodeAnalysis.Testing.Verifiers;
+
+namespace IntrinsicsInSystemPrivateCoreLib.Test
+{
+    public static partial class CSharpAnalyzerVerifier<TAnalyzer>
+        where TAnalyzer : DiagnosticAnalyzer, new()
+    {
+        public class Test : CSharpAnalyzerTest<TAnalyzer, XUnitVerifier>
+        {
+            public Test()
+            {
+                SolutionTransforms.Add((solution, projectId) =>
+                {
+                    var compilationOptions = solution.GetProject(projectId).CompilationOptions;
+                    solution = solution.WithProjectCompilationOptions(projectId, compilationOptions);
+
+                    return solution;
+                });
+            }
+        }
+    }
+}
diff --git a/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/CSharpAnalyzerVerifier`1.cs b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/CSharpAnalyzerVerifier`1.cs
new file mode 100644
index 00000000000000..6677ef554399b9
--- /dev/null
+++ b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/CSharpAnalyzerVerifier`1.cs
@@ -0,0 +1,38 @@
+using Microsoft.CodeAnalysis;
+using Microsoft.CodeAnalysis.CSharp.Testing;
+using Microsoft.CodeAnalysis.Diagnostics;
+using Microsoft.CodeAnalysis.Testing;
+using Microsoft.CodeAnalysis.Testing.Verifiers;
+using System.Threading;
+using System.Threading.Tasks;
+
+namespace IntrinsicsInSystemPrivateCoreLib.Test
+{ + public static partial class CSharpAnalyzerVerifier + where TAnalyzer : DiagnosticAnalyzer, new() + { + /// + public static DiagnosticResult Diagnostic() + => CSharpAnalyzerVerifier.Diagnostic(); + + /// + public static DiagnosticResult Diagnostic(string diagnosticId) + => CSharpAnalyzerVerifier.Diagnostic(diagnosticId); + + /// + public static DiagnosticResult Diagnostic(DiagnosticDescriptor descriptor) + => CSharpAnalyzerVerifier.Diagnostic(descriptor); + + /// + public static async Task VerifyAnalyzerAsync(string source, params DiagnosticResult[] expected) + { + var test = new Test + { + TestCode = source, + }; + + test.ExpectedDiagnostics.AddRange(expected); + await test.RunAsync(CancellationToken.None); + } + } +} diff --git a/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/IntrinsicsInSystemPrivateCoreLib.Tests.csproj b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/IntrinsicsInSystemPrivateCoreLib.Tests.csproj new file mode 100644 index 00000000000000..790a0c81a36356 --- /dev/null +++ b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/IntrinsicsInSystemPrivateCoreLib.Tests.csproj @@ -0,0 +1,21 @@ + + + + $(NetCoreAppCurrent) + + + + + + + + + + + + + + + + + diff --git a/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/IntrinsicsInSystemPrivateCoreLibUnitTests.cs b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/IntrinsicsInSystemPrivateCoreLibUnitTests.cs new file mode 100644 index 00000000000000..6c783c7ae3aeb4 --- /dev/null +++ b/src/libraries/System.Private.CoreLib/tests/IntrinsicsInSystemPrivatecoreLibAnalyzer.Tests/IntrinsicsInSystemPrivateCoreLibUnitTests.cs @@ -0,0 +1,634 @@ +using Xunit; +using System.Threading.Tasks; +using VerifyCS = IntrinsicsInSystemPrivateCoreLib.Test.CSharpAnalyzerVerifier< + 
IntrinsicsInSystemPrivateCoreLib.IntrinsicsInSystemPrivateCoreLibAnalyzer>; + +namespace IntrinsicsInSystemPrivateCoreLib.Test +{ + [ActiveIssue("https://github.com/dotnet/runtime/issues/60650", TestRuntimes.Mono)] + public class IntrinsicsInSystemPrivateCoreLibUnitTest + { + string BoilerPlate = @" +using System; +using System.Collections.Generic; +using System.Linq; +using System.Text; +using System.Threading.Tasks; +using System.Diagnostics; +using System.Runtime; +using System.Runtime.CompilerServices; +using System.Runtime.Intrinsics.X86; +using System.Runtime.Intrinsics.Arm; +using System.Runtime.Intrinsics.Wasm; + +namespace System.Runtime.CompilerServices +{ + [AttributeUsage(AttributeTargets.Method | AttributeTargets.Constructor, Inherited = false, AllowMultiple = true)] + internal sealed class CompExactlyDependsOnAttribute : Attribute + { + public CompExactlyDependsOnAttribute(Type intrinsicsTypeUsedInHelperFunction) + { + } + } +} + +namespace System.Runtime.Intrinsics.X86 +{ + class Sse + { + public static bool IsSupported => true; + public static bool DoSomething() { return true; } + public class X64 + { + public static bool IsSupported => true; + public static bool DoSomethingX64() { return true; } + } + } + class Avx : Sse + { + public static bool IsSupported => true; + public static bool DoSomething() { return true; } + public class X64 + { + public static bool IsSupported => true; + public static bool DoSomethingX64() { return true; } + } + } + class Avx2 : Avx + { + public static bool IsSupported => true; + public static bool DoSomething() { return true; } + public class X64 + { + public static bool IsSupported => true; + public static bool DoSomethingX64() { return true; } + } + } +} +namespace System.Runtime.Intrinsics.Arm +{ + class ArmBase + { + public static bool IsSupported => true; + public static bool DoSomething() { return true; } + public class Arm64 + { + public static bool IsSupported => true; + public static bool 
DoSomethingArm64() { return true; }
+        }
+    }
+}
+
+namespace System.Runtime.Intrinsics.Wasm
+{
+    class PackedSimd
+    {
+        public static bool IsSupported => true;
+        public static bool DoSomething() { return true; }
+    }
+}
+
+";
+        [Fact]
+        public async Task TestMethodUnprotectedUse()
+        {
+            var test = BoilerPlate + @"
+    namespace ConsoleApplication1
+    {
+        class TypeName
+        {
+            static void FuncBad()
+            {
+                {|#0:Avx2.DoSomething()|};
+            }
+        }
+    }";
+
+            var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLib").WithLocation(0).WithArguments("System.Runtime.Intrinsics.X86.Avx2");
+            await VerifyCS.VerifyAnalyzerAsync(test, expected);
+        }
+
+        [Fact]
+        public async Task TestMethodUnprotectedUseWithIntrinsicsHelperAttribute()
+        {
+            var test = BoilerPlate + @"
+
+namespace ConsoleApplication1
+{
+    class TypeName
+    {
+        [CompExactlyDependsOn(typeof(Avx2))]
+        static void FuncGood()
+        {
+            Avx2.DoSomething();
+        }
+    }
+}";
+
+            await VerifyCS.VerifyAnalyzerAsync(test);
+        }
+
+        [Fact]
+        public async Task TestMethodUnprotectedUseWithIntrinsicsHelperAttributeComplex()
+        {
+            var test = BoilerPlate + @"
+
+namespace ConsoleApplication1
+{
+    class TypeName
+    {
+        [CompExactlyDependsOn(typeof(Avx))]
+        [CompExactlyDependsOn(typeof(Avx2))]
+        static void FuncGood()
+        {
+            // This tests the behavior of a function which behaves differently when Avx2 is supported (Something like Vector128.ShuffleUnsafe)
+            if (Avx2.IsSupported)
+                Avx2.DoSomething();
+            else
+                Avx.DoSomething();
+        }
+    }
+}";
+
+            await VerifyCS.VerifyAnalyzerAsync(test);
+        }
+
+        [Fact]
+        public async Task TestMethodUnprotectedUseInLocalFunctionWithIntrinsicsHelperAttributeNotOnLocalFunction()
+        {
+            var test = BoilerPlate + @"
+namespace ConsoleApplication1
+{
+    class TypeName
+    {
+        [CompExactlyDependsOn(typeof(Avx2))]
+        static void FuncBad()
+        {
+            LocalFunc();
+
+            static void LocalFunc()
+            {
+                {|#0:Avx2.DoSomething()|};
+            }
+        }
+    }
+}";
+
+            var expected =
VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLib").WithLocation(0).WithArguments("System.Runtime.Intrinsics.X86.Avx2"); + await VerifyCS.VerifyAnalyzerAsync(test, expected); + } + + [Fact] + public async Task TestMethodUnprotectedUseInLambdaWithIntrinsicsHelperAttributeOnOuterFunction() + { + var test = BoilerPlate + @" + +namespace ConsoleApplication1 +{ + class TypeName + { + [CompExactlyDependsOn(typeof(Avx2))] + static void FuncBad() + { + Action act = () => + { + {|#0:Avx2.DoSomething()|}; + }; + act(); + } + } +}"; + + var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLib").WithLocation(0).WithArguments("System.Runtime.Intrinsics.X86.Avx2"); + await VerifyCS.VerifyAnalyzerAsync(test, expected); + } + + [Fact] + public async Task TestMethodUnprotectedUseInLocalFunctionWithIntrinsicsHelperAttributeOnLocalFunction() + { + var test = BoilerPlate + @" +namespace ConsoleApplication1 +{ + class TypeName + { + static void FuncBad() + { + [CompExactlyDependsOn(typeof(Avx2))] + static void LocalFunc() + { + Avx2.DoSomething(); + } + + if (Avx2.IsSupported) + LocalFunc(); + } + } +}"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + + [Fact] + public async Task TestMethodUnprotectedNestedTypeUse() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + static void FuncBad() + { + {|#0:Avx2.X64.DoSomethingX64()|}; + } + } + }"; + + var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLib").WithLocation(0).WithArguments("System.Runtime.Intrinsics.X86.Avx2.X64"); + await VerifyCS.VerifyAnalyzerAsync(test, expected); + } + + [Fact] + public async Task TestMethodWithIfStatement() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + static void FuncGood() + { + if (Avx2.IsSupported) + Avx2.DoSomething(); + } + } + }"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + + + [Fact] + public async Task TestMethodWithIfStatementButWithInadequateHelperMethodAttribute() + 
{
+            var test = BoilerPlate + @"
+    namespace ConsoleApplication1
+    {
+        class TypeName
+        {
+            [CompExactlyDependsOn(typeof(Avx))]
+            static void FuncBad()
+            {
+                if ({|#0:Avx2.IsSupported|})
+                    Avx2.DoSomething();
+            }
+        }
+    }";
+
+            var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLibAttributeNotSpecificEnough").WithLocation(0).WithArguments("System.Runtime.Intrinsics.X86.Avx");
+            await VerifyCS.VerifyAnalyzerAsync(test, expected);
+        }
+
+        [Fact]
+        public async Task TestMethodWithIfStatementButWithAdequateHelperMethodAttribute()
+        {
+            var test = BoilerPlate + @"
+    namespace ConsoleApplication1
+    {
+        class TypeName
+        {
+            [CompExactlyDependsOn(typeof(Avx2))]
+            static void FuncGood()
+            {
+                if (Avx2.IsSupported)
+                    Avx2.DoSomething();
+            }
+        }
+    }";
+
+            await VerifyCS.VerifyAnalyzerAsync(test);
+        }
+
+        [Fact]
+        public async Task TestMethodWithIfStatementWithNestedAndBaseTypeLookupRequired()
+        {
+            var test = BoilerPlate + @"
+    namespace ConsoleApplication1
+    {
+        class TypeName
+        {
+            static void FuncGood()
+            {
+                if (Avx2.X64.IsSupported)
+                    Sse.DoSomething();
+            }
+        }
+    }";
+
+            await VerifyCS.VerifyAnalyzerAsync(test);
+        }
+
+        [Fact]
+        public async Task TestMethodWithTernaryOperator()
+        {
+            var test = BoilerPlate + @"
+    namespace ConsoleApplication1
+    {
+        class TypeName
+        {
+            static bool FuncGood()
+            {
+                return Avx2.IsSupported ?
Avx2.DoSomething() : false; + } + } + }"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + + [Fact] + public async Task TestMethodWithIfStatementWithOrOperationCase() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + static void FuncGood() + { + if (ArmBase.IsSupported || (Avx2.IsSupported && BitConverter.IsLittleEndian)) + { + if (ArmBase.IsSupported) + ArmBase.DoSomething(); + else + Avx2.DoSomething(); + + if (Avx2.IsSupported) + Avx2.DoSomething(); + else + ArmBase.DoSomething(); + } + } + } + }"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + + [Fact] + public async Task TestMethodWithIfStatementWithOrOperationCaseWithImplicationProcessingRequired() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + static void FuncGood() + { + if (ArmBase.Arm64.IsSupported || (Avx2.IsSupported && BitConverter.IsLittleEndian)) + { + if (ArmBase.IsSupported) + ArmBase.DoSomething(); + else + Avx2.DoSomething(); + + if (Avx2.IsSupported) + Avx2.DoSomething(); + else + ArmBase.DoSomething(); + } + } + } + }"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + + [Fact] + public async Task TestMethodWithIfStatementAroundLocalFunctionDefinition() + { + var test = BoilerPlate + @" + + namespace ConsoleApplication1 + { + class TypeName + { + static void FuncGood() + { + if (Avx2.IsSupported) + { + LocalFunction(); + + // Local functions should cause an error to be reported, as they are NOT the same function from a runtime point of view + void LocalFunction() + { + {|#0:Avx2.DoSomething()|}; + } + } + } + } + }"; + + var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLib").WithLocation(0).WithArguments("System.Runtime.Intrinsics.X86.Avx2"); + await VerifyCS.VerifyAnalyzerAsync(test, expected); + } + + [Fact] + public async Task TestMethodWithIfStatementAroundLambdaFunctionDefinition() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + 
static void FuncGood() + { + if (Avx2.IsSupported) + { + // Lambda functions should cause an error to be reported, as they are NOT the same function from a runtime point of view + Action a = () => + { + {|#0:Avx2.DoSomething()|}; + }; + } + } + } + }"; + + var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLib").WithLocation(0).WithArguments("System.Runtime.Intrinsics.X86.Avx2"); + await VerifyCS.VerifyAnalyzerAsync(test, expected); + } + + [Fact] + public async Task TestHelperMethodsCanOnlyBeCalledWithAppropriateIsSupportedChecksError() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + [CompExactlyDependsOn(typeof(Avx))] + static void FuncHelper() + { + } + + [CompExactlyDependsOn(typeof(Avx))] + [CompExactlyDependsOn(typeof(ArmBase))] + static void FuncHelper2() + { + } + + static bool SomeIrrelevantProperty => true; + + static void FuncBad() + { + {|#0:FuncHelper()|}; + if (Avx2.IsSupported || ArmBase.IsSupported) + { + {|#1:FuncHelper()|}; + } + + if ({|#3:(Avx.IsSupported || ArmBase.IsSupported) && PackedSimd.IsSupported|}) + { + {|#2:FuncHelper2()|}; + } + + + if (Avx.IsSupported || (SomeIrrelevantProperty && ArmBase.IsSupported)) + { + {|#4:FuncHelper()|}; + } + } + } + }"; + + var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLibHelper").WithLocation(0).WithArguments("ConsoleApplication1.TypeName.FuncHelper()"); + var expected2 = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLibHelper").WithLocation(1).WithArguments("ConsoleApplication1.TypeName.FuncHelper()"); + var expected3 = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLibHelper").WithLocation(2).WithArguments("ConsoleApplication1.TypeName.FuncHelper2()"); + var expected4 = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLibConditionParsing").WithLocation(3); + var expected5 = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLibHelper").WithLocation(4).WithArguments("ConsoleApplication1.TypeName.FuncHelper()"); + await 
VerifyCS.VerifyAnalyzerAsync(test, expected, expected2, expected3, expected4, expected5); + } + [Fact] + public async Task TestHelperMethodsCanOnlyBeCalledWithAppropriateIsSupportedChecksSuccess() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + [CompExactlyDependsOn(typeof(Avx))] + [CompExactlyDependsOn(typeof(ArmBase))] + static void FuncHelper() + { + } + + static bool SomeIrrelevantProperty => true; + static void FuncGood() + { + if (Avx2.IsSupported) + { + FuncHelper(); + } + if (ArmBase.IsSupported) + { + FuncHelper(); + } + if (Avx2.IsSupported || ArmBase.IsSupported) + { + FuncHelper(); + } + if ((Avx2.IsSupported || ArmBase.IsSupported) && SomeIrrelevantProperty) + { + FuncHelper(); + } + } + } + }"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + + [Fact] + public async Task TestHelperMethodsUnrelatedPropertyDoesntHelp() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + [CompExactlyDependsOn(typeof(Avx))] + [CompExactlyDependsOn(typeof(ArmBase))] + static void FuncHelper() + { + } + + static bool HelperIsSupported => true; + + static void FuncBad() + { + if (HelperIsSupported) + { + {|#0:FuncHelper()|}; + } + } + } + }"; + + var expected = VerifyCS.Diagnostic("IntrinsicsInSystemPrivateCoreLibHelper").WithLocation(0).WithArguments("ConsoleApplication1.TypeName.FuncHelper()"); + await VerifyCS.VerifyAnalyzerAsync(test, expected); + } + + [Fact] + public async Task TestHelperMethodsWithHelperProperty() + { + var test = BoilerPlate + @" + namespace ConsoleApplication1 + { + class TypeName + { + [CompExactlyDependsOn(typeof(Avx))] + [CompExactlyDependsOn(typeof(ArmBase))] + static void FuncHelper() + { + } + + static bool HelperIsSupported => Avx.IsSupported || ArmBase.IsSupported; + + static void FuncGood() + { + if (HelperIsSupported) + { + FuncHelper(); + } + } + } + }"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + + + [Fact] + public async Task 
TestMethodUseOfIntrinsicsFromWithinOtherMethodOnIntrinsicType() + { + var test = @" +namespace System.Runtime.Intrinsics.X86 +{ + class Sse + { + public static bool IsSupported => true; + public static bool DoSomething() { return true; } + public static bool DoSomethingElse() { return !Sse.DoSomething(); } + public class X64 + { + public static bool IsSupported => true; + public static bool DoSomethingX64() { return !Sse.DoSomething(); } + } + } +} +"; + + await VerifyCS.VerifyAnalyzerAsync(test); + } + } +}