175 changes: 166 additions & 9 deletions README.md
@@ -1,14 +1,171 @@
CUDA Stream Compaction
======================
<p align="center">
<h1 align="center">Prefix Sum and Stream Compaction</h1>
<h2 align="center">Author: (Charles) Zixin Zhang</h2>
<p align="center">
CPU and GPU implementations of the exclusive prefix sum (scan) algorithm and stream compaction in CUDA C
</p>
</p>

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**
---

* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
## Features

- CPU Scan & Stream Compaction
- Recursive Naive GPU Scan Algorithm Using Shared Memory
- Work-Efficient GPU Scan Using Shared Memory & Stream Compaction
- Thrust's Scan Algorithm

For all GPU scan algorithms, I chose to implement an inclusive scan first and then convert the result to an exclusive scan. This conversion can be done in parallel with minimal code.

## Performance Analysis

![scan](images/scan.png)

When the array size is under 20,000 elements, the CPU scan outperforms the GPU implementations, since kernel launch and memory-transfer overhead dominates at small sizes. As the array size increases, the naive GPU scan pulls ahead of the other algorithms. The Thrust implementation shows the most stable performance across array sizes.

Output when array size is 65536:

```
[SM 8.6 NVIDIA GeForce RTX 3080]
Max threads per block: 1024
Shared memory per block: 49152 bytes
Max threads per SM: 1536
Max blocks per SM: 16
Max grid size: 2147483647, 65535, 65535
****************
** SCAN TESTS **
****************
[ 27 40 6 30 21 41 41 26 20 5 6 29 41 ... 32 0 ]
==== cpu scan, power-of-two ====
elapsed time: 0.0972ms (std::chrono Measured)
[ 0 27 67 73 103 124 165 206 232 252 257 263 292 ... 1599954 1599986 ]

==== cpu scan, non-power-of-two ====
elapsed time: 0.085ms (std::chrono Measured)
[ 0 27 67 73 103 124 165 206 232 252 257 263 292 ... 1599856 1599858 ]
passed

==== work-efficient scan, power-of-two ====
elapsed time: 0.178144ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 0.096544ms (CUDA Measured)
passed
==== naive scan, power-of-two ====
elapsed time: 0.091232ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 0.182464ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 0.10432ms (CUDA Measured)
[ 0 27 67 73 103 124 165 206 232 252 257 263 292 ... 1599954 1599986 ]
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.075776ms (CUDA Measured)
[ 0 27 67 73 103 124 165 206 232 252 257 263 292 ... 1599856 1599858 ]
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 0 1 0 1 3 3 2 1 0 1 2 1 2 ... 3 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 0.1293ms (std::chrono Measured)
[ 1 1 3 3 2 1 1 2 1 2 2 1 3 ... 3 2 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 0.1319ms (std::chrono Measured)
[ 1 1 3 3 2 1 1 2 1 2 2 1 3 ... 3 3 ]
passed
==== cpu compact with scan ====
elapsed time: 0.6768ms (std::chrono Measured)
[ 1 1 3 3 2 1 1 2 1 2 2 1 3 ... 3 2 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 0.096544ms (CUDA Measured)
[ 1 1 3 3 2 1 1 2 1 2 2 1 3 ... 3 2 ]
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 0.096544ms (CUDA Measured)
[ 1 1 3 3 2 1 1 2 1 2 2 1 3 ... 3 3 ]
passed
Press any key to continue . . .
```



### Block Size

RTX 3080 Stats:

```
[SM 8.6 NVIDIA GeForce RTX 3080]
Max threads per block: 1024
Shared memory per block: 49152 bytes
Max threads per SM: 1536
Max blocks per SM: 16
Max grid size: 2147483647, 65535, 65535
```

I want to choose a block size that results in the largest number of resident threads per SM.

:heavy_check_mark: 512 threads per block

- 1536 / 512 = 3 blocks are needed to fully occupy the SM, and since the SM allows up to 16 resident blocks, all 3 fit. The number of resident threads is therefore 3 * 512 = 1536, i.e. 1536 / 1536 = 100% occupancy.
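The occupancy arithmetic above can be written as a small helper (a host-side illustration using the limits reported for this RTX 3080; the function name and defaults are illustrative):

```cpp
#include <algorithm>
#include <cassert>

// Given a candidate block size and the SM limits (1536 threads/SM and
// 16 blocks/SM on this GPU), compute how many threads actually end up
// resident on one SM: blocks are limited both by the thread budget and
// by the per-SM block cap, whichever is hit first.
int residentThreadsPerSM(int blockSize,
                         int maxThreadsPerSM = 1536,
                         int maxBlocksPerSM = 16) {
    int blocksByThreads = maxThreadsPerSM / blockSize;
    int blocks = std::min(blocksByThreads, maxBlocksPerSM);
    return blocks * blockSize;
}
```

Note that 1024 threads per block would leave the SM at 1024 / 1536 = 67% occupancy, since only one such block fits per SM.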

## Naive Scan

- Implemented ```NaiveGPUScan``` using shared memory.
- Each thread is responsible for one element of the input array.
- This is largely a four-step process:
  - compute the scan result for each section independently, then store each block's sum in ```sumArray```
  - scan the block sums
  - add scanned block sum ```i``` to every element of scanned block ```i + 1```
  - convert the result from an inclusive to an exclusive scan

In my implementation, the naive kernel can process up to 128 elements in each section by using 128 threads in each block. If the input data consists of 1,000,000 elements, we can use ceil(1,000,000 / 128) = 7813 thread blocks. With up to 2147483647 thread blocks in the x-dimension of the grid, the naive kernel can process up to 2147483647 * 128 = around 274 billion elements.
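The four-step process above can be sketched on the host like this. This is a sequential analog of the hierarchical scan (the real kernels perform each step in parallel, and the final inclusive-to-exclusive shift is omitted here); the section size stands in for the 128-thread blocks:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hierarchical inclusive scan: scan fixed-size sections independently,
// record each section's total, then propagate scanned section totals
// into every following section.
std::vector<int> hierarchicalInclusiveScan(const std::vector<int>& in,
                                           int sectionSize) {
    std::vector<int> out(in.size());
    std::vector<int> sumArray;  // one block sum per section
    // Step 1: scan each section independently and record its total.
    for (size_t start = 0; start < in.size(); start += sectionSize) {
        size_t end = std::min(in.size(), start + sectionSize);
        int running = 0;
        for (size_t i = start; i < end; ++i) {
            running += in[i];
            out[i] = running;
        }
        sumArray.push_back(running);
    }
    // Steps 2 and 3: exclusively scan the block sums and add scanned
    // block sum s (the total of all preceding sections) to section s.
    int offset = 0;
    for (size_t s = 0; s < sumArray.size(); ++s) {
        size_t start = s * sectionSize;
        size_t end = std::min(in.size(), start + sectionSize);
        for (size_t i = start; i < end; ++i) out[i] += offset;
        offset += sumArray[s];
    }
    return out;  // Step 4 (inclusive -> exclusive shift) is omitted.
}
```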

## Work-Efficient Scan

Understanding the thread-to-data mapping:

```int index = (threadIdx.x + 1) * stride * 2 - 1;```

- (threadIdx.x + 1) shifts thread indices from 0, 1, 2, 3, ... to 1, 2, 3, 4, ..., so all indices become positive integers.
- (threadIdx.x + 1) * stride * 2 - 1
  - For example, when stride = 1, we want thread 0 to map to data index [1], thread 1 to data index [3], etc.
    - (threadIdx.x + 1) * stride * 2 - 1 = (0 + 1) * 1 * 2 - 1 = 1
    - (threadIdx.x + 1) * stride * 2 - 1 = (1 + 1) * 1 * 2 - 1 = 3
  - For example, when stride = 2, we want thread 0 to map to data index [3], thread 1 to data index [7], etc.
    - (threadIdx.x + 1) * stride * 2 - 1 = (0 + 1) * 2 * 2 - 1 = 3
    - (threadIdx.x + 1) * stride * 2 - 1 = (1 + 1) * 2 * 2 - 1 = 7
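The mapping can be checked as a plain function (a host-side illustration; `threadIdxX` stands in for the kernel's `threadIdx.x`):

```cpp
#include <cassert>

// Map a thread index and stride to the data index that thread updates
// during the reduction (up-sweep) step.
int reductionIndex(int threadIdxX, int stride) {
    return (threadIdxX + 1) * stride * 2 - 1;
}
```

The worked examples above correspond to `reductionIndex(0, 1)`, `reductionIndex(1, 1)`, `reductionIndex(0, 2)`, and `reductionIndex(1, 2)`.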



## Bloopers

### #1

```
CUDA error (d:\dev\565\project2-stream-compaction\stream_compaction\naive.cu:84): memCpy back failed!: an illegal memory access was encountered

83 cudaMemcpy(odata, d_OutputData, size, cudaMemcpyDeviceToHost);
84 checkCUDAError("memCpy back failed!");
```

- I encountered this error while implementing the naive version of the scan algorithm (before handling arbitrary-length inputs). At first, I suspected the culprit was on line 83 (because line 84 reports the error). However, the culprit actually resided in my ```kernNaiveGPUScan``` kernel, where I accessed ```XY[-1]``` inside the for loop.
- Fix: add an if-statement to ensure we never access ```XY[-1]```. Also make sure the ```__syncthreads()``` calls are **not** inside the if-statement.

> When a ```__syncthreads()``` statement is placed in an if-statement, either all or none of the threads in a block must execute the path that includes the ```__syncthreads()```. (PMPP, p. 59)

## Note

- The CPU sequential scan is a linear algorithm and is extremely work-efficient.
- Expected speed: Thrust > work-efficient GPU (Brent-Kung) >= CPU > naive GPU (Kogge-Stone)
- Why is the naive GPU scan slower than the CPU?
  - The naive GPU scan has control divergence in the first warp; the performance hit is worse for smaller block sizes.
  - The naive GPU scan is not work-efficient: it performs N*log2(N) - (N - 1) additions, whereas the CPU performs only N - 1.
- Why is the work-efficient GPU scan quicker?
  - The reduction phase takes N - 1 operations and the distribution phase takes on the order of N operations, so the algorithm performs O(N) work overall.
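The operation counts behind these notes can be tabulated directly (a minimal sketch; N is assumed to be a power of two so that log2(N) is exact):

```cpp
#include <cassert>
#include <cmath>

// Additions performed by the naive (Kogge-Stone) scan: N*log2(N) - (N - 1).
long long naiveScanAdds(long long n) {
    long long lg = (long long)std::log2((double)n);  // exact for powers of two
    return n * lg - (n - 1);
}

// Additions performed by the sequential CPU scan: N - 1.
long long cpuScanAdds(long long n) { return n - 1; }
```

For N = 1024 the naive scan does 9217 additions versus the CPU's 1023, which is why raw parallelism alone does not guarantee a speedup.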

189 changes: 189 additions & 0 deletions images/plotting/.ipynb_checkpoints/CUDA Flocking-checkpoint.ipynb


91 changes: 91 additions & 0 deletions images/plotting/CUDA Flocking.ipynb


Binary file added images/scan.png
100 changes: 80 additions & 20 deletions src/main.cpp
Expand Up @@ -7,19 +7,68 @@
*/

#include <cstdio>
#include <sstream>
#include <fstream>
#include <stream_compaction/cpu.h>
#include <stream_compaction/naive.h>
#include <stream_compaction/efficient.h>
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
// The tests default to an array of size 1 << 8 = 256
const int SIZE = 1 << 25; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
int *c = new int[SIZE];

int* bookArraya = new int[8]{ 3, 1, 7, 0, 4, 1, 6, 3 };
int* bookArrayb = new int[8]{};
const int BOOK_SIZE = 8;

std::string deviceName;
int deviceMaxThreadsPerBlock;
int deviceSharedMemPerBlock;
int deviceMaxThreadsPerSM;
int deviceMaxBlocksPerSM;

int main(int argc, char* argv[]) {
cudaDeviceProp deviceProp;
int gpuDevice = 0;
int device_count = 0;
cudaGetDeviceCount(&device_count);
if (gpuDevice >= device_count) {
std::cout
<< "Error: GPU device number is greater than the number of devices!"
<< " Perhaps a CUDA-capable GPU is not installed?"
<< std::endl;
return 1;
}
cudaGetDeviceProperties(&deviceProp, gpuDevice);
int major = deviceProp.major;
int minor = deviceProp.minor;
deviceMaxThreadsPerBlock = deviceProp.maxThreadsPerBlock;
deviceSharedMemPerBlock = deviceProp.sharedMemPerBlock;
deviceMaxThreadsPerSM = deviceProp.maxThreadsPerMultiProcessor;
deviceMaxBlocksPerSM = deviceProp.maxBlocksPerMultiProcessor;



std::ostringstream ss;
ss << " [SM " << major << "." << minor << " " << deviceProp.name << "]"
<< "\n Max threads per block: " << deviceMaxThreadsPerBlock
<< "\n Shared memory per block: " << deviceSharedMemPerBlock << " bytes"
// << "\n Shared memory in each block can fit " << deviceSharedMemPerBlock / sizeof(int) << " number of integers"
<< "\n Max threads per SM: " << deviceMaxThreadsPerSM
<< "\n Max blocks per SM: " << deviceMaxBlocksPerSM
<< "\n Max grid size: " << deviceProp.maxGridSize[0] << ", "
<< deviceProp.maxGridSize[1] << ", " << deviceProp.maxGridSize[2];


deviceName = ss.str();

std::cout << deviceName << '\n';

// Scan tests

printf("\n");
Expand All @@ -31,27 +80,47 @@ int main(int argc, char* argv[]) {
a[SIZE - 1] = 0;
printArray(SIZE, a, true);


// initialize b using the StreamCompaction::CPU::scan you implement
// We use b for all further comparisons, so make sure StreamCompaction::CPU::scan is correct.
// (At first, all cases passed only because b and c were both all zeroes.)
zeroArray(SIZE, b);
// Here, power-of-two refers to the length of the input array
printDesc("cpu scan, power-of-two");
StreamCompaction::CPU::scan(SIZE, b, a);
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
printArray(SIZE, b, true);

printf("\n");

zeroArray(SIZE, c);
printDesc("cpu scan, non-power-of-two");
StreamCompaction::CPU::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);

printf("\n");
#if 0
zeroArray(SIZE, c);
printDesc("work-efficient scan, power-of-two");
StreamCompaction::Efficient::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient scan, non-power-of-two");
StreamCompaction::Efficient::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);
#endif
zeroArray(SIZE, c);
printDesc("naive scan, power-of-two");
StreamCompaction::Naive::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

/* For bug-finding only: Array of 1s to help find bugs in stream compaction or scan
Expand All @@ -64,37 +133,25 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, non-power-of-two");
StreamCompaction::Naive::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient scan, power-of-two");
StreamCompaction::Efficient::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient scan, non-power-of-two");
StreamCompaction::Efficient::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);

zeroArray(SIZE, c);
printDesc("thrust scan, power-of-two");
StreamCompaction::Thrust::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
// printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("thrust scan, non-power-of-two");
StreamCompaction::Thrust::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(NPOT, c, true);
// printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);


printf("\n");
printf("*****************************\n");
printf("** STREAM COMPACTION TESTS **\n");
Expand Down Expand Up @@ -137,18 +194,21 @@ int main(int argc, char* argv[]) {
printDesc("work-efficient compact, power-of-two");
count = StreamCompaction::Efficient::compact(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(count, c, true);
printArray(count, c, true);
printCmpLenResult(count, expectedCount, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient compact, non-power-of-two");
count = StreamCompaction::Efficient::compact(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(count, c, true);
printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);

system("pause"); // stop Win32 console from closing on exit
delete[] a;
delete[] b;
delete[] c;

delete[] bookArraya;
delete[] bookArrayb;
}
2 changes: 2 additions & 0 deletions src/testing_helpers.hpp
Expand Up @@ -49,6 +49,8 @@ void onesArray(int n, int *a) {
}
}

// This function populates n elements of array a with values
// between 0 and maxval - 1
void genArray(int n, int *a, int maxval) {
srand(time(nullptr));

Expand Down
10 changes: 10 additions & 0 deletions stream_compaction/common.h
Expand Up @@ -10,7 +10,16 @@
#include <chrono>
#include <stdexcept>

#if 0
/*! Block size used for CUDA kernel launch. */
#define blockSize 512
#define sectionSize 512

#define MAX_SUM_ARRAY_SIZE 1024
#endif

#define FILENAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__)
// usage: checkCUDAError("a descriptive name of this error")
#define checkCUDAError(msg) checkCUDAErrorFn(msg, FILENAME, __LINE__)

/**
Expand All @@ -26,6 +35,7 @@ inline int ilog2(int x) {
return lg;
}

// computes the ceiling of log2(x), as an integer.
inline int ilog2ceil(int x) {
return x == 1 ? 0 : ilog2(x - 1) + 1;
}
Expand Down