A while back we implemented a heuristic: if a chunk was large, it was assumed to have been produced by the render and was therefore safe to stream by transferring the underlying object memory. Later we ran into an issue where a precomputed chunk grew large enough to trigger this heuristic, and it started causing renders to fail: once a second render had occurred, the precomputed chunk no longer had an underlying buffer of bytes to send, and those bytes were omitted from the stream. We added a technique to detect large precomputed chunks and enforced that they always be cloned before writing. Unfortunately our test coverage was not perfect, and for a very long time there has been a usage pattern where, if you complete a boundary in one flush and then complete a boundary that has stylesheet dependencies in another flush, a large precomputed chunk that was not being cloned could be sent twice, causing streaming errors.
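The failure mode above can be illustrated with a minimal sketch, assuming a 512-byte view size and hypothetical helper names (this is not React's actual source). Chunks larger than the view are enqueued directly, transferring ownership of their bytes, so writing the same precomputed chunk a second time sends nothing:

```javascript
const VIEW_SIZE = 512; // assumed pre-change Edge view size

function writeChunkSketch(chunk) {
  const sent = chunk.byteLength; // reads as 0 once the buffer is detached
  if (sent > VIEW_SIZE) {
    // Streaming sinks take ownership of large chunks; simulate the
    // transfer by detaching the chunk's underlying ArrayBuffer.
    structuredClone(chunk.buffer, {transfer: [chunk.buffer]});
  }
  // Small chunks would instead be copied into the current view, which is
  // why the bug only appears once a precomputed chunk grows "large".
  return sent;
}

const precomputedChunk = new Uint8Array(1024); // exceeds VIEW_SIZE

const firstWrite = writeChunkSketch(precomputedChunk);
const secondWrite = writeChunkSketch(precomputedChunk); // buffer is gone

console.log(firstWrite, secondWrite); // 1024 0
```

The second write silently produces zero bytes rather than throwing, which is exactly why the missing bytes only surfaced as downstream streaming errors.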
I've thought about why we even went with this solution in the first place, and I think it was a mistake. It relies on a dev-only check paired with a potentially version-specific order of operations on the streaming side, which is too unreliable. Additionally, the lower view-size limit for Edge is not used in Node.js, and there is no real justification for the difference.
In this change I updated the view size for Edge streaming to match Node.js at 2048 bytes, which is still relatively small; we have no data one way or the other to prefer 512 over it. Then I updated the assertion logic to error any time a precomputed chunk exceeds the view size. This eliminates the need to clone these chunks by ensuring our view size is always larger than the largest precomputed chunk we can possibly write. I'm generally in favor of this for a few reasons.
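A hedged sketch of the new invariant (the function and error message here are illustrative, not the exact implementation). Because precomputed chunks are created in module scope, throwing at creation time surfaces a violation the first time any test exercises the stream config:

```javascript
const VIEW_SIZE = 2048; // updated Edge view size, matching Node.js

const textEncoder = new TextEncoder();

function stringToPrecomputedChunk(content) {
  const chunk = textEncoder.encode(content);
  if (chunk.byteLength > VIEW_SIZE) {
    // Fail loudly at module-initialization time instead of relying on a
    // dev-only runtime check during streaming.
    throw new Error(
      'Precomputed chunk exceeds the view size. Split it into smaller chunks.',
    );
  }
  return chunk;
}

const doctype = stringToPrecomputedChunk('<!DOCTYPE html>');
console.log(doctype.byteLength); // 15
```

With this invariant, every precomputed chunk fits inside a single view, so it is always copied on write and never has its buffer transferred away.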
First, we'll always know during testing whether we've violated the limit, as long as we exercise each stream config, because the precomputed chunks are created in module scope.
Second, we can always split up large chunks, so keeping each precomputed chunk smaller than whatever view size we actually want is relatively trivial.
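The splitting mentioned above can be sketched as follows (an assumed helper, not a React API): a precomputed chunk that would exceed the view size is divided ahead of time into several smaller chunks, each of which stays under the limit on its own:

```javascript
// Split a byte array into view-sized slices. subarray() creates views over
// the same buffer, so no bytes are copied here.
function splitPrecomputed(bytes, viewSize) {
  const parts = [];
  for (let offset = 0; offset < bytes.byteLength; offset += viewSize) {
    parts.push(bytes.subarray(offset, offset + viewSize));
  }
  return parts;
}

const big = new Uint8Array(5000);
const parts = splitPrecomputed(big, 2048);
console.log(parts.map(p => p.byteLength)); // [ 2048, 2048, 904 ]
```

Each slice can then be written as an ordinary small chunk, so the "always larger than the largest precomputed chunk" invariant is easy to uphold.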
packages/react-server/src/ReactServerStreamConfigBrowser.js (6 additions, 25 deletions)
@@ -40,15 +40,6 @@ export function writeChunk(
   }
   if (chunk.byteLength > VIEW_SIZE) {
-    if (__DEV__) {
-      if (precomputedChunkSet.has(chunk)) {
-        console.error(
-          'A large precomputed chunk was passed to writeChunk without being copied.' +
-            ' Large chunks get enqueued directly and are not copied. This is incompatible with precomputed chunks because you cannot enqueue the same precomputed chunk twice.' +
-            ' Use "cloneChunk" to make a copy of this large precomputed chunk before writing it. This is a bug in React.',
-        );
-      }
-    }
     // this chunk may overflow a single view which implies it was not
     // one that is cached by the streaming renderer. We will enqueu
     // it directly and expect it is not re-used
@@ -120,18 +111,16 @@ export function stringToChunk(content: string): Chunk {
'A large precomputed chunk was passed to writeChunk without being copied.'+
47
-
' Large chunks get enqueued directly and are not copied. This is incompatible with precomputed chunks because you cannot enqueue the same precomputed chunk twice.'+
48
-
' Use "cloneChunk" to make a copy of this large precomputed chunk before writing it. This is a bug in React.',
49
-
);
50
-
}
51
-
}
52
43
// this chunk may overflow a single view which implies it was not
53
44
// one that is cached by the streaming renderer. We will enqueu
54
45
// it directly and expect it is not re-used
@@ -120,18 +111,16 @@ export function stringToChunk(content: string): Chunk {
'A large precomputed chunk was passed to writeChunk without being copied.'+
106
-
' Large chunks get enqueued directly and are not copied. This is incompatible with precomputed chunks because you cannot enqueue the same precomputed chunk twice.'+
107
-
' Use "cloneChunk" to make a copy of this large precomputed chunk before writing it. This is a bug in React.',
108
-
);
109
-
}
110
-
}
111
102
// this chunk may overflow a single view which implies it was not
112
103
// one that is cached by the streaming renderer. We will enqueu
113
104
// it directly and expect it is not re-used
@@ -201,18 +192,16 @@ export function stringToChunk(content: string): Chunk {