Refactor AI Generation with AI SDK v5 #210
Conversation
@llanesluis is attempting to deploy a commit to the tweakcn OSS program Team on Vercel. A member of the Team first needs to authorize it.
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI; this status message can be disabled via the CodeRabbit settings.
Walkthrough
Refactors AI theme generation into a chat-provider and streaming pipeline: adds ChatProvider/useChatContext, streaming API/tools, new hooks and UI primitives, converts messages to a parts/metadata model, removes legacy one-shot hooks/utilities, and updates many component props to onThemeGeneration/isGeneratingTheme/onCancelThemeGeneration.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor User
participant UI as Client UI
participant Ctx as ChatProvider
participant API as /api/generate-theme
participant Tool as THEME_GENERATION_TOOLS
participant Stream as UI Message Stream
participant Editor as Editor (applyGeneratedTheme)
User->>UI: onThemeGeneration(promptData)
UI->>Ctx: sendMessage(text + metadata: promptData)
Ctx->>API: stream messages to endpoint
API->>Tool: invoke generateTheme tool (streamObject)
Tool-->>Stream: write "data-generated-theme-styles" (status: streaming, chunk)
Stream-->>Ctx: deliver UI event
Ctx->>Editor: applyGeneratedTheme(chunk) (transient)
Tool-->>Stream: write "data-generated-theme-styles" (status: ready, themeStyles)
Stream-->>Ctx: final UI event
Ctx->>Editor: applyGeneratedTheme(final)
API-->>Ctx: finish (usage recorded)
Ctx-->>UI: update state -> ready
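To make the parts/metadata model referenced above concrete, here is a minimal TypeScript sketch of the message shape the diagram implies. The names (ChatMessage, GeneratedThemeDataPart, promptData) and exact fields are assumptions for illustration, not necessarily the types used in this PR.
// Hypothetical shapes inferred from the PR summary; field names are assumptions.
type TextPart = { type: "text"; text: string };
type GeneratedThemeDataPart = {
  type: "data-generated-theme-styles";
  status: "streaming" | "ready";
  // Partial styles while streaming; the full object once status is "ready".
  themeStyles?: Record<string, string>;
};
type ChatMessage = {
  id: string;
  role: "user" | "assistant";
  parts: Array<TextPart | GeneratedThemeDataPart>;
  // User messages carry the original prompt (text, mentions, images) as metadata.
  metadata?: {
    promptData?: { content: string; mentions: string[]; images: { url: string }[] };
  };
};
On each data-generated-theme-styles part the client would call applyGeneratedTheme with the partial or final themeStyles, matching the streaming and final apply steps in the diagram.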
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Possibly related PRs
Suggested reviewers
Poem
✨ Finishing touches
🧪 Generate unit tests
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 42
🔭 Outside diff range comments (6)
utils/ai/ai-prompt.tsx (1)
21-45
: Type-only React import is missing for the React.ReactNode return type
buildAIPromptRender returns React.ReactNode but React isn't imported in this module, which will fail type-checking unless you rely on a global React namespace.
Apply this diff to use a type-only import and a direct ReactNode reference:
-import { useEditorStore } from "@/store/editor-store"; +import { useEditorStore } from "@/store/editor-store"; import { useThemePresetStore } from "@/store/theme-preset-store"; import { AIPromptData, MentionReference, PromptImage } from "@/types/ai"; import { JSONContent } from "@tiptap/react"; +import type { ReactNode } from "react"; ... -export const buildAIPromptRender = (promptData: AIPromptData): React.ReactNode => { +export const buildAIPromptRender = (promptData: AIPromptData): ReactNode => {app/ai/components/suggested-pill-actions.tsx (1)
53-59
: Reset file input value to allow re-uploading the same file.
Without resetting the input value, onChange may not fire when the same file is selected again.
const handleImageUpload = (event: React.ChangeEvent<HTMLInputElement>) => { const fileList = event.target.files; if (!fileList) return; const files = Array.from(fileList); handleImagesUpload(files); + // Allow selecting the same file again + event.currentTarget.value = ""; };app/api/generate-theme/route.ts (1)
31-36
: Parse x-forwarded-for correctly and drop unnecessary await on headers()
headers() is synchronous; also, x-forwarded-for may contain multiple IPs — use the first, and fall back to x-real-ip.
Apply this diff:
- const headersList = await headers(); + const headersList = headers(); @@ - const ip = headersList.get("x-forwarded-for") ?? "anonymous"; + const forwardedFor = headersList.get("x-forwarded-for") ?? ""; + const ip = + forwardedFor.split(",")[0]?.trim() || + headersList.get("x-real-ip") || + "anonymous";components/editor/ai/message-edit-form.tsx (2)
83-86
: Honor external disabled state in DragAndDropImageUploader
Currently only blocks while uploading; should also block when disabled is true.
Apply this diff:
- <DragAndDropImageUploader - onDrop={handleImagesUpload} - disabled={uploadedImages.some((img) => img.loading)} - /> + <DragAndDropImageUploader + onDrop={handleImagesUpload} + disabled={disabled || uploadedImages.some((img) => img.loading)} + />
116-121
: Also disable ImageUploader when the form is disabled
Prevents new uploads during a disabled state.
Apply this diff:
<ImageUploader fileInputRef={fileInputRef} onImagesUpload={handleImagesUpload} onClick={() => fileInputRef.current?.click()} - disabled={!canUploadMore} + disabled={disabled || !canUploadMore} />hooks/use-dialog-actions.tsx (1)
189-216
: Critical: Context provider used incorrectly — must use .Provider
Rendering will not provide the value; consumers will receive null and throw. Use the Provider component.
Apply this diff:
- return ( - <DialogActionsContext value={store}> - {children} - - {/* Global Dialogs */} - <CssImportDialog - open={store.cssImportOpen} - onOpenChange={store.setCssImportOpen} - onImport={store.handleCssImport} - /> - <CodePanelDialog - open={store.codePanelOpen} - onOpenChange={store.setCodePanelOpen} - themeEditorState={themeState} - /> - <ThemeSaveDialog - open={store.saveDialogOpen} - onOpenChange={store.setSaveDialogOpen} - onSave={store.saveTheme} - isSaving={store.isCreatingTheme} - /> - <ShareDialog - open={store.shareDialogOpen} - onOpenChange={store.setShareDialogOpen} - url={store.shareUrl} - /> - </DialogActionsContext> - ); + return ( + <DialogActionsContext.Provider value={store}> + {children} + + {/* Global Dialogs */} + <CssImportDialog + open={store.cssImportOpen} + onOpenChange={store.setCssImportOpen} + onImport={store.handleCssImport} + /> + <CodePanelDialog + open={store.codePanelOpen} + onOpenChange={store.setCodePanelOpen} + themeEditorState={themeState} + /> + <ThemeSaveDialog + open={store.saveDialogOpen} + onOpenChange={store.setSaveDialogOpen} + onSave={store.saveTheme} + isSaving={store.isCreatingTheme} + /> + <ShareDialog + open={store.shareDialogOpen} + onOpenChange={store.setShareDialogOpen} + url={store.shareUrl} + /> + </DialogActionsContext.Provider> + );
🧹 Nitpick comments (43)
lib/constants.ts (1)
5-5
: Consider making this env-configurable for safer rollouts.
Reading the limit from an env var (with a sane default) allows staged rollouts and quick rollback without redeploys.
Apply this minimal change and document the env var:
-export const AI_REQUEST_FREE_TIER_LIMIT = 500; +export const AI_REQUEST_FREE_TIER_LIMIT = Number(process.env.NEXT_PUBLIC_AI_REQUEST_FREE_TIER_LIMIT ?? 500);hooks/use-ai-chat-form.ts (2)
62-65
: Remove redundant content length check.
The trim() emptiness check already covers both empty and whitespace-only prompts; the length === 0 check is redundant.
Apply:
- const isEmptyPrompt = - uploadedImages.length === 0 && - (!promptData?.content?.trim() || promptData.content.length === 0); + const isEmptyPrompt = + uploadedImages.length === 0 && !promptData?.content?.trim();
58-60
: Optional: memoize promptData to avoid recomputation on unrelated renders.
If convertJSONContentToPromptData is non-trivial, memoization will prevent unnecessary work.
-import { useReducer, useEffect, useRef, useTransition } from "react"; +import { useReducer, useEffect, useRef, useTransition, useMemo } from "react"; @@ - const promptData = convertJSONContentToPromptData( - editorContentDraft || { type: "doc", content: [] } - ); + const promptData = useMemo( + () => + convertJSONContentToPromptData( + editorContentDraft || { type: "doc", content: [] } + ), + [editorContentDraft] + );types/errors.ts (2)
62-67
: Rename generic schema/type to a domain-appropriate name.
"MyErrorResponse…" is ambiguous. Prefer a consistent domain name to improve readability.
-export const MyErrorResponseSchema = z.object({ +export const ApiErrorResponseSchema = z.object({ code: z.string().optional(), message: z.string().optional(), data: z.unknown().optional(), status: z.number().optional(), }); -export type MyErrorResponseType = z.infer<typeof MyErrorResponseSchema>; +export type ApiErrorResponseType = z.infer<typeof ApiErrorResponseSchema>;Update imports in dependents accordingly:
// lib/error-response.ts, utils/ai/parse-ai-sdk-transport-error.ts import { ApiErrorResponseType } from "@/types/errors";
62-67
: Optional: constrain code to known values while remaining forward-compatible.
If you control the API error codes, use z.enum to validate them; if not, keep string. You can also union with string to allow unknown codes.
-export const ApiErrorResponseSchema = z.object({ - code: z.string().optional(), +export const ApiErrorResponseSchema = z.object({ + code: z.enum(["SUBSCRIPTION_REQUIRED", "VALIDATION_ERROR", "UNAUTHORIZED", "UNKNOWN_ERROR"]).optional(), message: z.string().optional(), data: z.unknown().optional(), status: z.number().optional(), });If you expect external codes too, relax as:
code: z.union([z.enum(["SUBSCRIPTION_REQUIRED","VALIDATION_ERROR","UNAUTHORIZED","UNKNOWN_ERROR"]), z.string()]).optional(),components/editor/custom-textarea.tsx (1)
55-56
: Tailwind class seems malformed: move "!" before the utility
max-sm:text-[16px]! won't be recognized by Tailwind. Use the important modifier before the utility: max-sm:!text-[16px].
Apply this diff:
- "min-w-0 min-h-[50px] max-h-[120px] wrap-anywhere text-foreground/90 scrollbar-thin overflow-y-auto w-full bg-background p-1 text-sm focus-visible:outline-none disabled:opacity-50 max-sm:text-[16px]!", + "min-w-0 min-h-[50px] max-h-[120px] wrap-anywhere text-foreground/90 scrollbar-thin overflow-y-auto w-full bg-background p-1 text-sm focus-visible:outline-none disabled:opacity-50 max-sm:!text-[16px]",components/editor/ai/no-messages-placeholder.tsx (1)
53-59
: Add aria-busy for better a11y feedback during generation
Buttons are disabled; also mark the container as busy to improve screen-reader feedback when isGeneratingTheme is true.
Apply this diff to the wrapper (outside the selected ranges):
- <div className="mx-auto flex max-w-lg flex-col gap-4"> + <div + className="mx-auto flex max-w-lg flex-col gap-4" + aria-busy={isGeneratingTheme || undefined} + aria-live="polite" + >Also applies to: 72-75, 88-90
components/editor/ai/alert-banner.tsx (1)
90-112
: Make wrapper state explicit and motion-safe
Expose state to CSS and assistive tech, and prefer motion-reduced fallbacks for users who prefer less animation.
Apply this diff:
-export function BannerWrapper({ children, show }: { children: React.ReactNode; show: boolean }) { +export function BannerWrapper({ children, show }: { children: React.ReactNode; show: boolean }) { return ( - <div className={cn("@container/alert-banner")}> + <div className={cn("@container/alert-banner")} data-state={show ? "open" : "closed"}> <div className={cn( - "relative w-full origin-bottom transition-all duration-300 ease-in-out", + "relative w-full origin-bottom overflow-hidden transition-all duration-300 ease-in-out motion-reduce:transition-none motion-reduce:transform-none", show ? "pointer-events-auto max-h-6 @2xl/alert-banner:max-h-7.5" : "pointer-events-none max-h-0 translate-y-full" )} + aria-hidden={!show} style={{ willChange: "transform, max-height", }} > <div className="bg-muted text-muted-foreground flex h-6 items-center rounded-t-lg px-3 text-xs @2xl/alert-banner:h-7.5"> {children} </div> <div className="bg-muted h-4 w-full"></div> </div> </div> ); }components/prompt-kit/loader.tsx (1)
6-23
: Avoid repeated literal unions and centralize shared types
size and variant literal unions are repeated across components, increasing maintenance cost and risk of drift.
Apply this diff to centralize types and reuse them:
-export interface LoaderProps { - variant?: - | "circular" - | "classic" - | "pulse" - | "pulse-dot" - | "dots" - | "typing" - | "wave" - | "bars" - | "terminal" - | "text-blink" - | "text-shimmer" - | "loading-dots" - size?: "sm" | "md" | "lg" - text?: string - className?: string -} +export type LoaderSize = "sm" | "md" | "lg" +export type LoaderVariant = + | "circular" + | "classic" + | "pulse" + | "pulse-dot" + | "dots" + | "typing" + | "wave" + | "bars" + | "terminal" + | "text-blink" + | "text-shimmer" + | "loading-dots" + +export interface LoaderProps { + variant?: LoaderVariant + size?: LoaderSize + text?: string + className?: string +}Then adjust the individual component props to use LoaderSize for size.
app/globals.css (1)
5-5
: Potential collision with Tailwind's built-in dark variant
Defining a custom variant named "dark" risks shadowing/overriding Tailwind's built-in dark: variant semantics. This can lead to confusing cascades or unexpected specificity.
- Prefer renaming to avoid ambiguity, e.g., dark-descendant.
- Also consider a more permissive selector: &:where(.dark, .dark *).
Example:
-@custom-variant dark (&:is(.dark *)); +@custom-variant dark-descendant (&:where(.dark, .dark *));app/ai/components/chat-heading.tsx (1)
6-8
: No-op conditional for --gradient-accent; simplify
The value for --gradient-accent is the same regardless of isGeneratingTheme. Either remove the conditional or switch to an intended different color for the generating state.
Suggested fix:
- "--gradient-accent": isGeneratingTheme ? "var(--foreground)" : "var(--foreground)", + "--gradient-accent": "var(--foreground)",Optional typing hardening to avoid depending on the React namespace:
Add this import at the top of the file:
import type { CSSProperties } from "react";Then update the assertion:
- } as React.CSSProperties + } as CSSPropertiescomponents/examples/ai-chat-demo.tsx (3)
59-61
: Message flags wired correctly; consider deriving isLastMessageStreaming contextually
Hardcoding both flags to false is fine for static placeholders. For a slightly richer demo, you could set isLastMessageStreaming to true for the last assistant message to showcase the streaming UI state.
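A minimal sketch of that derivation, assuming the demo exposes a messages array and a status string (the names here are illustrative, not the demo's actual variables):
// Hypothetical helper; role/status values are assumptions, not the demo's API.
type DemoMessage = { id: string; role: "user" | "assistant" };
function isLastMessageStreaming(
  messages: DemoMessage[],
  index: number,
  status: "ready" | "streaming"
): boolean {
  const isLast = index === messages.length - 1;
  const msg = messages[index];
  return isLast && msg?.role === "assistant" && status === "streaming";
}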
100-109
: Avoid redundant ThemeStyles casts on metadata.themeStyles
If defaultPresets[...].styles is already typed as ThemeStyles, the explicit cast is unnecessary and hides potential type mismatches. Prefer letting the compiler infer the type.
Apply this diff at each occurrence:
- themeStyles: defaultPresets["midnight-bloom"].styles as ThemeStyles, + themeStyles: defaultPresets["midnight-bloom"].styles,- themeStyles: defaultPresets["twitter"].styles as ThemeStyles, + themeStyles: defaultPresets["twitter"].styles,- themeStyles: defaultPresets["supabase"].styles as ThemeStyles, + themeStyles: defaultPresets["supabase"].styles,Also applies to: 123-132, 146-155
113-119
: Optional: include promptData metadata for consistency
For user messages that conceptually carry a prompt (mentions like @twitter), consider adding metadata.promptData to mirror the first message. This keeps the placeholder data uniformly shaped for downstream conversions.
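A small sketch of such a placeholder, assuming the metadata.promptData shape described elsewhere in this review (the mention shape and field names are assumptions):
// Hypothetical placeholder; align it with the demo's actual ChatMessage/AIPromptData types.
const placeholderUserMessage = {
  id: "demo-user-2",
  role: "user" as const,
  parts: [{ type: "text" as const, text: "Make it look like @twitter" }],
  metadata: {
    promptData: {
      content: "Make it look like @twitter",
      mentions: [{ id: "twitter", label: "@twitter" }], // assumed mention shape
      images: [],
    },
  },
};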
lib/error-response.ts (1)
1-6
: Use import type for type-only imports
Avoids unnecessary runtime imports and keeps the bundle leaner.
Apply:
-import { - MyErrorResponseType, - SubscriptionRequiredError, - UnauthorizedError, - ValidationError, -} from "@/types/errors"; +import type { MyErrorResponseType } from "@/types/errors"; +import { SubscriptionRequiredError, UnauthorizedError, ValidationError } from "@/types/errors";utils/ai/model.ts (2)
14-15
: Make model id configurable via env with sensible default
Allows easy switching between models without code changes.
Apply:
-export const baseModel = google("gemini-2.5-pro"); +export const baseModel = google(process.env.GOOGLE_MODEL_ID ?? "gemini-2.5-pro");
16-24
: Provider options look correct; consider freezing and documenting
- Options satisfy the provider types; good.
- Optional: export as const or Object.freeze to prevent accidental mutation.
- If OpenAI provider isn’t used yet, consider a comment explaining why options are present (helps future maintainers).
Example:
-export const baseProviderOptions = { +export const baseProviderOptions = Object.freeze({ google: { thinkingConfig: { includeThoughts: false, thinkingBudget: 128, }, } satisfies GoogleGenerativeAIProviderOptions, openai: { reasoningEffort: "low" } satisfies OpenAIResponsesProviderOptions, -}; +});utils/ai/messages.ts (1)
5-9
: Ensure boolean coercion for attachment check.
hasAttachments can be undefined due to short-circuiting. Coerce to boolean to avoid returning undefined from the filter predicate.
- const hasAttachments = images && images.length > 0; + const hasAttachments = Boolean(images?.length);app/ai/components/suggested-pill-actions.tsx (1)
76-84
: Prevent prompt buttons during uploads to avoid mixed-generation states.
Mirror the disabled condition so users can't trigger prompts mid-upload.
<PillActionButton key={key} onClick={() => handleSetPrompt(prompt)} - disabled={isGeneratingTheme} + disabled={isGeneratingTheme || isSomeImageUploading} >utils/ai/generate-theme.ts (2)
36-38
: Clarify WCAG contrast requirement (AA) to avoid ambiguous "adequate" wording
Being explicit prevents under-contrasted pairs slipping through. Suggest updating the rule to reference AA thresholds.
Apply this diff:
- - Ensure adequate contrast for each base/foreground pair. + - Ensure WCAG AA contrast for each base/foreground pair (≥ 4.5:1 for normal text, ≥ 3:1 for large text).
67-69
: Add guidance to prefer a single tool call per turn to avoid looping
Minor guardrail to reduce unnecessary multi-step chains when not needed.
Apply this diff:
- Always announce in one sentence what you'll do before calling the tool. -- Never output the JSON in your message — the UI will show it. +- Prefer a single tool call per user turn whenever possible; only ask clarifying questions once before calling a tool. +- Never output the JSON in your message — the UI will show it.hooks/use-dialog-actions.tsx (2)
48-48
: Setter renamed to _setDialogKey but never used — consider removing or exposing intent
If it's intentionally private and unused, remove it to reduce noise. If you plan to bump dialogKey externally, expose a method.
Apply this diff to remove it:
- const [dialogKey, _setDialogKey] = useState(0); + const [dialogKey] = useState(0);
41-58
: Optional: expose cancel action for theme generation via context
Many UIs pair a generating flag with a cancel callback. Consider surfacing cancelThemeGeneration from useAIThemeGenerationCore here.
If desired, I can draft the changes in a follow-up.
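A rough, self-contained sketch of that idea, assuming the provider can build the value from useAIThemeGenerationCore (the context and hook names below are illustrative, not the project's real API):
import { createContext, useContext, type ReactNode } from "react";
// Hypothetical context pairing the generating flag with a cancel callback.
interface ThemeGenerationActions {
  isGeneratingTheme: boolean;
  cancelThemeGeneration: () => void;
}
const ThemeGenerationContext = createContext<ThemeGenerationActions | null>(null);
export function ThemeGenerationProvider({
  value, // e.g. derived from useAIThemeGenerationCore()
  children,
}: {
  value: ThemeGenerationActions;
  children: ReactNode;
}) {
  return <ThemeGenerationContext.Provider value={value}>{children}</ThemeGenerationContext.Provider>;
}
export function useThemeGenerationActions() {
  const ctx = useContext(ThemeGenerationContext);
  if (!ctx) throw new Error("useThemeGenerationActions must be used inside its provider");
  return ctx;
}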
components/editor/ai/message-actions.tsx (1)
27-43
: Harden copy-content extraction and clean up helper.
- Remove the unreachable nullish coalescing on join.
- Avoid shadowing the outer message variable in the helper for readability.
- Consider falling back to parts when promptData.content is an empty string (today it returns an empty copy buffer).
Apply:
- const getCopyContent = () => { - const convertTextPartsToString = (message: ChatMessage) => { - return ( - message.parts - .filter((part) => part.type === "text") - .map((part) => part.text) - .join("\n") ?? "" - ); - }; + const getCopyContent = () => { + const convertTextPartsToString = (msg: ChatMessage) => { + return msg.parts + .filter((part) => part.type === "text") + .map((part) => part.text) + .join("\n"); + }; - if (isUser && message.metadata) { - return message.metadata.promptData?.content ?? convertTextPartsToString(message); - } + if (isUser && message.metadata) { + const content = message.metadata.promptData?.content; + return content && content.trim().length > 0 + ? content + : convertTextPartsToString(message); + } return convertTextPartsToString(message); };If the empty-string fallback is not desired, keep current behavior and only adopt the helper rename + removal of the redundant nullish coalescing.
utils/ai/parse-ai-sdk-transport-error.ts (2)
33-46
: Deduplicate and harden JSON parsing (prefer safeParse).
The Error and string branches duplicate the same logic and rely on thrown exceptions for control flow. Use zod.safeParse and a shared parser to reduce complexity and avoid exceptions.
export function parseAiSdkTransportError( error: unknown, fallbackMessage = "An unexpected error occurred." ): ParsedAiSdkTransportError { const defaultResult: ParsedAiSdkTransportError = { message: fallbackMessage }; + const normalize = (raw: string): ParsedAiSdkTransportError => { + try { + const result = MyErrorResponseSchema.safeParse(JSON.parse(raw)); + if (!result.success) return { message: raw || fallbackMessage }; + const parsed = result.data; + return { + code: isApiErrorCode(parsed.code) ? parsed.code : undefined, + message: parsed.message ?? fallbackMessage, + status: parsed.status, + data: parsed.data, + }; + } catch { + return { message: raw || fallbackMessage }; + } + }; if (error instanceof Error) { - const raw = error.message; - try { - const parsed = MyErrorResponseSchema.parse(JSON.parse(raw)); - return { - code: isApiErrorCode(parsed.code) ? parsed.code : undefined, - message: parsed.message ?? fallbackMessage, - status: parsed.status, - data: parsed.data, - }; - } catch { - return { message: raw || fallbackMessage }; - } + return normalize(error.message); } if (typeof error === "string") { - try { - const parsed = MyErrorResponseSchema.parse(JSON.parse(error)); - return { - code: isApiErrorCode(parsed.code) ? parsed.code : undefined, - message: parsed.message ?? fallbackMessage, - status: parsed.status, - data: parsed.data, - }; - } catch { - return { message: error }; - } + return normalize(error); } return defaultResult; }Also applies to: 48-60
16-25
: Avoid drifting hardcoded error codes.
Maintaining KNOWN_CODES manually risks desynchronization with ApiErrorCode in types/errors.ts. Consider either (a sketch of the first option follows the list):
- Exporting a const array source of truth from types/errors.ts; or
- Returning unknown codes as-is (cast) or mapping to "UNKNOWN_ERROR" while preserving the original code in data for diagnostics.
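A minimal sketch of the first option, reusing the code values quoted earlier in this review (file placement and names are assumptions):
// types/errors.ts — hypothetical single source of truth for error codes.
export const API_ERROR_CODES = [
  "SUBSCRIPTION_REQUIRED",
  "VALIDATION_ERROR",
  "UNAUTHORIZED",
  "UNKNOWN_ERROR",
] as const;
export type ApiErrorCode = (typeof API_ERROR_CODES)[number];
// The transport-error parser can then derive its guard instead of hardcoding KNOWN_CODES.
export function isApiErrorCode(code: unknown): code is ApiErrorCode {
  return typeof code === "string" && (API_ERROR_CODES as readonly string[]).includes(code);
}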
utils/ai/message-converter.ts (3)
14-17
: Preserve separation between text parts.
Joining with an empty string can mangle content boundaries. Use a newline to maintain readability and model cues.
- const msgTextContent = message.parts - .map((part) => (part.type === "text" ? part.text : "")) - .join(""); + const msgTextContent = message.parts + .filter((part) => part.type === "text") + .map((part) => part.text) + .join("\n");
28-39
: Cross-runtime base64 decoding for SVGs.
Direct atob usage can fail in non-DOM runtimes (Node/Edge) or TS without DOM lib. Prefer a guarded global or Buffer fallback; otherwise the catch will silently downgrade to image parts even when decodable.
- if (image.url.includes("base64")) { - svgMarkup = atob(dataUrlPart); - } else { + if (image.url.includes("base64")) { + const a = (globalThis as any).atob as ((s: string) => string) | undefined; + svgMarkup = a ? a(dataUrlPart) : (typeof Buffer !== "undefined" + ? Buffer.from(dataUrlPart, "base64").toString("utf-8") + : dataUrlPart); + } else { svgMarkup = decodeURIComponent(dataUrlPart); }If Buffer is not available in your target runtimes, we can add a small, dependency-free base64 decoder.
77-83
: Avoid emitting empty assistant text parts.
Skip pushing a text part when msgTextContent is empty to reduce noise.
- assistantContentParts.push({ - type: "text", - text: msgTextContent, - }); + if (msgTextContent.trim().length > 0) { + assistantContentParts.push({ + type: "text", + text: msgTextContent, + }); + }components/prompt-kit/chat-container.tsx (1)
27-33
: Optional: Announce updates to assistive tech.
As this is a live chat log container, consider adding aria-live="polite" to improve screen reader experience.
<StickToBottom className={cn("flex overflow-y-auto", className)} resize="smooth" initial="instant" - role="log" + role="log" + aria-live="polite" {...props} >hooks/use-ai-generate-chat.tsx (2)
71-73
: Remove or guard console logs to avoid noisy production logs
The console.log statements are helpful during development but noisy in production.
- console.log("----- ✅ Updating Stored Messages -----"); + if (process.env.NODE_ENV !== "production") { + console.debug("Updating stored AI chat messages"); + } @@ - console.log("----- ☑️ Populating Chat with Stored Messages -----"); + if (process.env.NODE_ENV !== "production") { + console.debug("Hydrated AI chat with stored messages"); + }Also applies to: 80-81
27-35
: Optional: delay useChat initialization until store hydration to avoid id/memory re-syncs
You instantiate useChat before the store hydrates, potentially creating a transient session id then swapping to the stored messages later. Consider gating initialization (or keying by chatId) after hasStoreHydrated to avoid double init and ensure the persisted chatId is used from the start.
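One way to gate it, sketched under the assumption that the store exposes a hydration flag and that the provider accepts a chatId (component and prop names are illustrative, not the PR's exact API):
import type { ReactElement, ReactNode } from "react";
// Assumed to exist in this PR; declared here only so the sketch type-checks.
declare function ChatProvider(props: { chatId: string; children: ReactNode }): ReactElement;
// Hypothetical wrapper: mount the chat (and its useChat call) only after the
// persisted store hydrates, so the stored chatId is used from the first init.
export function HydratedChatProvider({
  hasStoreHydrated,
  chatId,
  children,
}: {
  hasStoreHydrated: boolean;
  chatId: string;
  children: ReactNode;
}) {
  if (!hasStoreHydrated) return null; // or a lightweight skeleton
  // Keying by chatId re-initializes the chat if the persisted id ever changes.
  return <ChatProvider key={chatId} chatId={chatId}>{children}</ChatProvider>;
}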
app/ai/components/ai-chat-form.tsx (1)
115-120
: Unreachable loader and redundant disabled condition on the send button
The send button is rendered only when isGeneratingTheme is false, so:
- disabled prop doesn’t need isGeneratingTheme.
- The inner icon conditional is unreachable (it will always render ArrowUp).
onClick={handleGenerate} - disabled={ - isEmptyPrompt || isSomeImageUploading || isGeneratingTheme || isInitializing - } + disabled={isEmptyPrompt || isSomeImageUploading || isInitializing} > - {isGeneratingTheme ? <Loader className="animate-spin" /> : <ArrowUp />} + <ArrowUp />Also remove the unused Loader import:
-import { ArrowUp, Loader, StopCircle } from "lucide-react"; +import { ArrowUp, StopCircle } from "lucide-react";Also applies to: 119-119
components/editor/ai/chat-input.tsx (2)
68-69
: Await the theme generation callback for proper error propagation
onThemeGeneration returns a Promise; awaiting it here helps surface errors and sequences the optional "clear local draft" logic that depends on completion.
- onThemeGeneration(promptData); + await onThemeGeneration(promptData);
164-167
: Unreachable loader and redundant disabled condition on the send button
This button only renders when isGeneratingTheme is false, so:
- disabled doesn’t need isGeneratingTheme.
- The inner icon conditional is unreachable.
- disabled={isEmptyPrompt || isSomeImageUploading || isGeneratingTheme} + disabled={isEmptyPrompt || isSomeImageUploading} > - {isGeneratingTheme ? <LoaderIcon className="animate-spin" /> : <ArrowUp />} + <ArrowUp />Also applies to: 166-166
app/ai/components/ai-chat-hero.tsx (1)
32-33
: Handle the async call explicitly to avoid unhandled rejections
generateThemeCore is async. Since you're not awaiting it intentionally, prefix with void to make intent clear and avoid unhandled rejection warnings in some environments.
- generateThemeCore(promptData); + void generateThemeCore(promptData);components/editor/ai/message.tsx (1)
90-96
: Fix invalid Tailwind class syntax
border-border/50! isn't valid; the important modifier goes before the class: !border-border/50. As-is, the class is ignored.
- <div - className={cn( - "border-border/50! bg-foreground relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none" - )} - > + <div + className={cn( + "!border-border/50 bg-foreground relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none" + )} + >components/editor/ai/chat-interface.tsx (2)
71-81
: Avoid unhandled promise rejection on submit
The call isn't awaited; make the intent explicit.
Apply this diff:
- generateThemeCore(promptData); + void generateThemeCore(promptData);
53-57
: Fix grammar in destructive toast message
"not longer" → "no longer".
Apply this diff:
- title: "Cannot retry this message", - description: "Seems like this message is not longer available.", + title: "Cannot retry this message", + description: "It looks like this message is no longer available.", variant: "destructive",components/editor/ai/chat-theme-preview.tsx (4)
33-39
: Use passed errorText when available
The error variant accepts errorText but it's not used. Destructure it from props and render it.
Apply this diff to destructure:
export function ChatThemePreview({ status, expanded = false, themeStyles, + errorText, className, children, ...props }: ChatThemePreviewProps) {
And this diff to render:
- <span className="text-foreground/90 text-sm">Generation cancelled or failed.</span> + <span className="text-foreground/90 text-sm"> + {errorText ?? "Generation cancelled or failed."} + </span>Also applies to: 64-75
85-91
: Accessibility: make the header toggler a true button or add button semantics
Currently the clickable header is a div. Add button semantics for keyboard/screen readers.
Apply this diff:
- <div + <div className={cn( "group/control hover:bg-background/50 flex h-10 w-full shrink-0 cursor-pointer items-center gap-2 p-1.5 pr-2 transition-colors duration-300 ease-in-out", isExpanded ? "border-b" : "border-transparent" )} onClick={() => setIsExpanded(!isExpanded)} + role="button" + tabIndex={0} + aria-expanded={isExpanded} + onKeyDown={(e) => { + if (e.key === "Enter" || e.key === " ") { + e.preventDefault(); + setIsExpanded((v) => !v); + } + }} >
50-61
: Add live region/aria-busy for loading state
Advertise loading to assistive tech.
Apply this diff:
- <div className="flex size-full h-10 items-center gap-2 p-1.5"> + <div + className="flex size-full h-10 items-center gap-2 p-1.5" + role="status" + aria-live="polite" + aria-busy="true" + >
129-131
: Tailwind class invalid: move the important modifier before the utility
size-3.5! won't work. Use !size-3.5.
Apply this diff:
- <Zap className="size-3.5!" /> + <Zap className="!size-3.5" />
<PillActionButton onClick={handleImageButtonClick} disabled={isGeneratingTheme}>
<input
type="file"
accept="image/*"
multiple={false}
ref={fileInputRef}
onChange={handleImageUpload}
disabled={aiIsGenerating}
disabled={isGeneratingTheme}
style={{ display: "none" }}
🛠️ Refactor suggestion
Disable image controls while uploading or when max reached.
Also gate the hidden input to prevent concurrent uploads.
- <PillActionButton onClick={handleImageButtonClick} disabled={isGeneratingTheme}>
+ <PillActionButton
+ onClick={handleImageButtonClick}
+ disabled={isGeneratingTheme || isSomeImageUploading || !canUploadMore}
+ >
@@
ref={fileInputRef}
onChange={handleImageUpload}
- disabled={isGeneratingTheme}
+ disabled={isGeneratingTheme || isSomeImageUploading || !canUploadMore}
style={{ display: "none" }}
/>
🤖 Prompt for AI Agents
In app/ai/components/suggested-pill-actions.tsx around lines 63 to 71, the
hidden file input and image upload button aren't fully gated: update the logic
so the PillActionButton and the hidden <input> are disabled when
isGeneratingTheme is true OR when the image upload count has reached the
configured max (e.g., disableWhenMaxReached flag), and prevent the input's
onChange handler from starting a new upload if an upload is already in progress
(return early if isGeneratingTheme or max reached). Ensure the ref click only
triggers when allowed and the input's disabled prop is set accordingly to
prevent concurrent uploads.
const { messages }: { messages: ChatMessage[] } = await req.json();
const modelMessages = await convertMessagesToModelMessages(messages);
🛠️ Refactor suggestion
Validate request body before proceeding
Defensive check avoids cryptic errors and improved 400 feedback for malformed clients.
Apply this diff:
- const { messages }: { messages: ChatMessage[] } = await req.json();
- const modelMessages = await convertMessagesToModelMessages(messages);
+ const { messages }: { messages: ChatMessage[] } = await req.json();
+ if (!Array.isArray(messages) || messages.length === 0) {
+ return new Response("Missing or invalid messages array.", { status: 400 });
+ }
+ const modelMessages = await convertMessagesToModelMessages(messages);
🤖 Prompt for AI Agents
In app/api/generate-theme/route.ts around lines 57 to 59, add defensive
validation of the parsed request body before calling
convertMessagesToModelMessages: after parsing req.json(), ensure the body has a
"messages" property that is an array (and optionally that each item matches the
expected ChatMessage shape), and if validation fails immediately return a 400
response with a clear error message; only call convertMessagesToModelMessages
when "messages" is present and valid.
onError: (error) => {
if (error instanceof Error) console.error(error);
},
} satisfies GoogleGenerativeAIProviderOptions,
},
});
if (usage) {
try {
await recordAIUsage({
promptTokens: usage.promptTokens,
completionTokens: usage.completionTokens,
onFinish: async (result) => {
🛠️ Refactor suggestion
Propagate streaming errors to the client stream
Currently errors only log to console; mirror them to the UI stream so the client can surface failures.
Apply this diff:
- onError: (error) => {
- if (error instanceof Error) console.error(error);
- },
+ onError: (error) => {
+ if (error instanceof Error) {
+ console.error(error);
+ try {
+ writer.error?.(error);
+ } catch (_) {
+ // no-op
+ }
+ }
+ },
🤖 Prompt for AI Agents
In app/api/generate-theme/route.ts around lines 70 to 73, the onError handler
only logs errors to the console but needs to propagate them into the outgoing UI
stream; update onError so that when an Error occurs you log it and also write a
serialized error event into the client stream (e.g. an object with type "error"
and the error message/details) using the same stream writer/encoder used for
successful chunks, flush/close the stream as appropriate, and ensure any async
writes are awaited so the client receives the failure notification.
onFinish: async (result) => {
const { totalUsage } = result;
try {
await recordAIUsage({
promptTokens: totalUsage.inputTokens,
completionTokens: totalUsage.outputTokens,
});
} catch (error) {
logError(error as Error, { action: "recordAIUsage", totalUsage });
}
},
🛠️ Refactor suggestion
Guard usage-tracking fields to avoid runtime errors when totals are missing
Some providers/edge cases may not populate totalUsage; default to zero.
Apply this diff:
- onFinish: async (result) => {
- const { totalUsage } = result;
+ onFinish: async (result) => {
+ const { totalUsage } = result;
try {
await recordAIUsage({
- promptTokens: totalUsage.inputTokens,
- completionTokens: totalUsage.outputTokens,
+ promptTokens: totalUsage?.inputTokens ?? 0,
+ completionTokens: totalUsage?.outputTokens ?? 0,
});
} catch (error) {
logError(error as Error, { action: "recordAIUsage", totalUsage });
}
},
🤖 Prompt for AI Agents
In app/api/generate-theme/route.ts around lines 73 to 83, the code assumes
result.totalUsage and its fields exist which can throw if missing; change to
safely default totalUsage and its token fields to zero before calling
recordAIUsage (e.g. const totalUsage = result.totalUsage ?? { inputTokens: 0,
outputTokens: 0 }; const promptTokens = totalUsage.inputTokens ?? 0; const
completionTokens = totalUsage.outputTokens ?? 0) and pass those guarded values
into recordAIUsage, keeping the existing try/catch and error logging.
import { TooltipWrapper } from "@/components/tooltip-wrapper";
import { Button } from "@/components/ui/button";
import { cn } from "@/lib/utils";
import { Bug } from "lucide-react";
Add "use client" to avoid importing Client Components into a Server Component
This component renders Button and TooltipTrigger-backed wrapper (both are Client Components). Without the "use client" directive, Next.js will error when this is imported by a Server Component.
Apply this diff:
+ "use client";
+
import { TooltipWrapper } from "@/components/tooltip-wrapper";
import { Button } from "@/components/ui/button";
import { cn } from "@/lib/utils";
import { Bug } from "lucide-react";
🤖 Prompt for AI Agents
In components/debug-button.tsx around lines 1 to 4, this file imports
client-only components (Button and TooltipWrapper) but lacks the "use client"
directive; add a top-line "use client" directive as the very first line of the
file so Next.js treats this file as a Client Component and avoids server-side
import errors when used by Server Components.
if (!document.startViewTransition) {
setThemeState({
...themeState,
styles: mergedStyles,
});
} else {
document.startViewTransition(() => {
setThemeState({
...themeState,
styles: mergedStyles,
});
});
}
}
💡 Verification agent
🛠️ Refactor suggestion
❓ Verification inconclusive
SSR safety and typing for startViewTransition
- Accessing document is fine client-side, but guard against SSR by checking typeof document !== "undefined".
- TypeScript doesn’t know about startViewTransition on Document in standard lib.dom; add a type guard or a global augmentation.
- Minor: prefer a feature check using "startViewTransition" in document.
Suggested change:
- if (!document.startViewTransition) {
- setThemeState({
- ...themeState,
- styles: mergedStyles,
- });
- } else {
- document.startViewTransition(() => {
- setThemeState({
- ...themeState,
- styles: mergedStyles,
- });
- });
- }
+ if (typeof document !== "undefined" && "startViewTransition" in document) {
+ // @ts-expect-error: startViewTransition is experimental and not in lib.dom yet
+ (document as any).startViewTransition(() => {
+ setThemeState({
+ ...themeState,
+ styles: mergedStyles,
+ });
+ });
+ } else {
+ setThemeState({
+ ...themeState,
+ styles: mergedStyles,
+ });
+ }
Optional: add a global DOM typing augmentation in a .d.ts file:
// types/dom.d.ts
declare global {
interface Document {
startViewTransition?: (cb: () => void) => unknown;
}
}
export {};
Guard document access and add typing for startViewTransition
Prevent an SSR ReferenceError and satisfy TypeScript for the experimental API.
- File: utils/ai/apply-theme.ts — lines ~12–25
Suggested change:
- if (!document.startViewTransition) {
- setThemeState({
- ...themeState,
- styles: mergedStyles,
- });
- } else {
- document.startViewTransition(() => {
- setThemeState({
- ...themeState,
- styles: mergedStyles,
- });
- });
- }
+ if (typeof document !== "undefined" && "startViewTransition" in document) {
+ // startViewTransition is experimental and may not be in lib.dom
+ // @ts-expect-error
+ (document as any).startViewTransition(() => {
+ setThemeState({
+ ...themeState,
+ styles: mergedStyles,
+ });
+ });
+ } else {
+ setThemeState({
+ ...themeState,
+ styles: mergedStyles,
+ });
+ }
Optional: add a global DOM typing to avoid @ts-expect-error:
// types/dom.d.ts
declare global {
interface Document {
startViewTransition?: (cb: () => void) => unknown;
}
}
export {};
🤖 Prompt for AI Agents
In utils/ai/apply-theme.ts around lines 12 to 25, the code accesses
document.startViewTransition directly which can throw in SSR and lacks
TypeScript typing; wrap access with a runtime guard (e.g., check typeof document
!== "undefined" && typeof document.startViewTransition === "function") and call
startViewTransition only when present, otherwise run setThemeState directly;
additionally add the suggested global DOM typing file (e.g., types/dom.d.ts
declaring Document.startViewTransition?: (cb: () => void) => unknown and export
{}), remove any @ts-expect-error for this API, and ensure the function uses the
guarded condition to satisfy both SSR and TypeScript.
utils/ai/message-converter.ts
Outdated
// Add the prompt text content as a text part
const textContent = promptData.content;
if (textContent.trim().length > 0) {
const textPart: TextPart = {
type: "text",
text: textContent,
};
content.push(textPart);
userContentParts.push(textPart);
}
Null-safety: promptData.content may be undefined; trim() will throw.
Guard against undefined content before calling trim.
- const textContent = promptData.content;
- if (textContent.trim().length > 0) {
+ const textContent = promptData.content ?? "";
+ if (textContent.trim().length > 0) {
const textPart: TextPart = {
type: "text",
text: textContent,
};
userContentParts.push(textPart);
}
🤖 Prompt for AI Agents
In utils/ai/message-converter.ts around lines 53 to 61, promptData.content may
be undefined and calling trim() will throw; update the guard to first ensure
content is a string (e.g. check typeof promptData.content === "string" or
coalesce with an empty string) before calling trim(), and only construct and
push the TextPart when the content is present and non-empty after trimming.
const google = createGoogleGenerativeAI({
apiKey: process.env.GOOGLE_API_KEY,
});
🛠️ Refactor suggestion
Fail fast when GOOGLE_API_KEY is missing
Creating the provider with an undefined apiKey will defer failure to first request and produce hard-to-debug 401s. Validate eagerly.
Apply:
-const google = createGoogleGenerativeAI({
- apiKey: process.env.GOOGLE_API_KEY,
-});
+const apiKey = process.env.GOOGLE_API_KEY;
+if (!apiKey) {
+ throw new Error("Missing GOOGLE_API_KEY environment variable for Google Generative AI provider.");
+}
+const google = createGoogleGenerativeAI({ apiKey });
🤖 Prompt for AI Agents
In utils/ai/model.ts around lines 6 to 9, the code constructs the Google
provider using process.env.GOOGLE_API_KEY without validating it, which defers
failures to runtime; validate the environment variable immediately and fail fast
by checking if process.env.GOOGLE_API_KEY is present and non-empty, throwing a
descriptive Error (or logging and exiting) if missing, then pass the validated
key into createGoogleGenerativeAI so the app fails at startup instead of
producing delayed 401s.
Review continued from previous batch...
type ChatThemePreviewProps = ComponentProps<"div"> & ChatThemePreviewPropsBase;
🛠️ Refactor suggestion
Prop surface mismatch: extends div props but root element is Card; forward props/className to Card (all states)
Right now className
/...props
do not apply to the root element for loading/error states, and in complete state they’re applied to an inner div. Forward these to the Card to align with the public API.
Apply these diffs:
Loading branch:
- <Card className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none")}>
+ <Card
+ className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none", className)}
+ {...props}
+ >
Error branch:
- <Card className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none")}>
+ <Card
+ className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none", className)}
+ {...props}
+ >
Complete branch root:
- <Card className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none")}>
+ <Card
+ className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none", className)}
+ {...props}
+ >
And remove forwarding to the inner container to avoid duplication:
- <div className={cn("space-y-2 p-2.5", className)} {...props}>
+ <div className={cn("space-y-2 p-2.5")}>
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In components/editor/ai/chat-theme-preview.tsx around lines 12-13, the component
type extends div props but currently forwards className and ...props to an inner
container (only in the complete state) and not to the root Card for
loading/error states; update the JSX so the Card element in the Loading, Error
and Complete branches receives {...props} and className (merge with any local
classes) and remove any spreading/forwarding of props/className to inner
containers to avoid duplication, ensuring the public API applies props
consistently to the root Card.
store/ai-chat-store.ts
Outdated
chatId: crypto.randomUUID(),
setChatId: (chatId: string) => {
set({ chatId });
},
🛠️ Refactor suggestion
Avoid SSR/runtime crashes: guard crypto.randomUUID
crypto.randomUUID()
may not be available in all environments (e.g., older Node, some SSR contexts). Provide a safe fallback.
Apply this diff:
- chatId: crypto.randomUUID(),
+ chatId:
+ typeof crypto !== "undefined" && "randomUUID" in crypto
+ ? crypto.randomUUID()
+ : Math.random().toString(36).slice(2),
🤖 Prompt for AI Agents
In store/ai-chat-store.ts around lines 20-23, calling crypto.randomUUID()
directly can crash in SSR/old runtimes; replace the direct call with a guarded
fallback: compute the id using (typeof crypto !== "undefined" && typeof
crypto.randomUUID === "function") ? crypto.randomUUID() : a safe fallback (for
example generate a simple UUID-like string using Date.now() and Math.random() or
use a small UUID v4 helper), and ensure the fallback is used both where the
initial chatId is set and anywhere else you currently call crypto.randomUUID so
the module doesn't throw during SSR or in environments without
crypto.randomUUID.
Actionable comments posted: 34
🔭 Outside diff range comments (10)
components/editor/editor.tsx (1)
110-110
: Invalid Tailwind class z-1; use arbitrary value or a scale token
Tailwind does not provide z-1. Use z-[1] or a standard token (e.g., z-10).
- className="z-1 min-w-[max(20%,22rem)]" + className="z-[1] min-w-[max(20%,22rem)]"hooks/use-dialog-actions.tsx (1)
189-216
: Critical: Context provider is incorrect (missing .Provider)
<DialogActionsContext value={store}>
must be<DialogActionsContext.Provider value={store}>
. As-is, this will crash or be a no-op at runtime.- return ( - <DialogActionsContext value={store}> + return ( + <DialogActionsContext.Provider value={store}> {children} @@ - </DialogActionsContext> + </DialogActionsContext.Provider> );lib/error-response.ts (1)
9-20
: Tighten jsonError parameter types and avoid leaking optionality from the schemaUsing
MyErrorResponseType["..."]
makes parameters optional;status
could becomeundefined
and silently default to 200. Type the function signature strictly and keep the response body typed.-function jsonError( - code: MyErrorResponseType["code"], - message: MyErrorResponseType["message"], - data: MyErrorResponseType["data"], - status: MyErrorResponseType["status"] -): Response { - const response: MyErrorResponseType = { code, message, data, status }; +function jsonError(code: ApiErrorCode, message: string, data: unknown, status: number): Response { + const response: MyErrorResponseType = { code, message, data, status }; return new Response(JSON.stringify(response), { status, headers: { "Content-Type": "application/json" }, }); }Optional follow-up: Return a JSON error for unknown errors too to keep a consistent response shape.
components/editor/ai/alert-banner.tsx (1)
23-23
: Fix setTimeout typing and cleanup guard (Node types in client code).Use ReturnType instead of NodeJS.Timeout and guard the cleanup to avoid “used before assigned” issues with strict TS.
- let timer: NodeJS.Timeout; + let timer: ReturnType<typeof setTimeout> | undefined; ... - return () => clearTimeout(timer); + return () => { + if (timer !== undefined) clearTimeout(timer); + };Also applies to: 36-36
components/editor/ai/no-messages-placeholder.tsx (1)
7-7
: Type-only import for AIPromptDataAIPromptData is used purely as a type in this component. Use a type-only import to avoid emitting a runtime import.
-import { AIPromptData } from "@/types/ai"; +import type { AIPromptData } from "@/types/ai";components/editor/ai/closeable-suggested-pill-actions.tsx (1)
6-6
: Type-only import for AIPromptDataThis is a client component and the symbol is used as a type. Use a type-only import to avoid unnecessary runtime code.
-import { AIPromptData } from "@/types/ai"; +import type { AIPromptData } from "@/types/ai";app/ai/components/suggested-pill-actions.tsx (1)
29-40
: Guard against concurrent auto-submits and restore proper depsIf an image finishes uploading while generation is in-flight, the effect will still fire and may enqueue a second request. Also, the ESLint ignore hides missing dependencies.
Apply this diff:
useEffect(() => { - if (uploadedImages.length > 0 && !isSomeImageUploading) { + if (isGeneratingTheme) return; + if (uploadedImages.length > 0 && !isSomeImageUploading) { onThemeGeneration({ content: "", // No text prompt mentions: [], // No mentions images: [uploadedImages[0]], }); } - - // eslint-disable-next-line react-hooks/exhaustive-deps - }, [uploadedImages, isSomeImageUploading]); + }, [uploadedImages, isSomeImageUploading, isGeneratingTheme, onThemeGeneration]);app/ai/components/ai-chat-form.tsx (1)
50-55
: Fix potential runtime crash when promptData is undefined
Spreading promptData directly will throw if it's undefined
. Guard it with a default empty object.- onThemeGeneration({ - ...promptData, - content: promptData?.content ?? "", - mentions: promptData?.mentions ?? [], - images, - }); + onThemeGeneration({ + ...(promptData ?? {}), + content: promptData?.content ?? "", + mentions: promptData?.mentions ?? [], + images, + });utils/ai/message-converter.ts (1)
25-33
: Fix server-side SVG decoding:atob
not available in Node.js
atob
is a browser API and will throw in Node runtimes (typical for API routes). UseBuffer
when available and fall back toatob
only when present (e.g., Edge/runtime with Web APIs).- const dataUrlPart = image.url.split(",")[1]; - let svgMarkup: string; - - if (image.url.includes("base64")) { - svgMarkup = atob(dataUrlPart); - } else { - svgMarkup = decodeURIComponent(dataUrlPart); - } + const svgMarkup = decodeSvgDataUrl(image.url);Add this helper in the same file (e.g., below imports):
function decodeSvgDataUrl(dataUrl: string): string { const [header, data = ""] = dataUrl.split(","); const isBase64 = /;base64/i.test(header); if (isBase64) { // Prefer Node's Buffer when available if (typeof Buffer !== "undefined") { return Buffer.from(data, "base64").toString("utf-8"); } // Fallback for runtimes exposing Web APIs const atobFn = (globalThis as any).atob as undefined | ((s: string) => string); if (typeof atobFn === "function") { return atobFn(data); } throw new Error("No available base64 decoder for data URL"); } return decodeURIComponent(data); }components/editor/ai/chat-input.tsx (1)
60-69
: Await and handle errors fromonThemeGeneration
onThemeGeneration
is typed to return a Promise, but the call isn’t awaited. If it rejects, it can trigger an unhandled promise rejection. Await it and consider catching errors for UX.- onThemeGeneration(promptData); + try { + await onThemeGeneration(promptData); + } catch (err) { + // TODO: surface a user-friendly error toast/state + console.error("Failed to generate theme", err); + }
🧹 Nitpick comments (49)
components/editor/editor.tsx (5)
31-31
: Guard against hydration mismatch from conditional mobile/desktop rendering. The early return on isMobile swaps the entire tree, which can cause hydration mismatch if isMobile changes post-mount (common when computed from window size). Ensure useIsMobile is SSR-safe (stable between server and first client render), or gate rendering until it's known (see the sketch after this list).
Confirm useIsMobile returns a consistent initial value across SSR/CSR, or defers rendering until mounted.
Check for hydration warnings in the console on initial load.
Consider rendering a single layout and toggling inner content via CSS/conditional subtrees, or return null until isMobile is determined.
Alternatively, add a mounted guard in useIsMobile (e.g., return null until after useEffect) and render a placeholder to avoid swapping entire trees on hydration.
Also applies to: 64-99
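A minimal sketch of that mounted-guard approach — the hook location, breakpoint, and matchMedia usage are assumptions, not the repo's actual implementation:

```tsx
"use client";

import { useEffect, useState } from "react";

// Sketch: stays undefined on the server and on the first client render, so both
// agree; callers can render a placeholder until the value resolves after mount.
export function useIsMobile(breakpoint = 768): boolean | undefined {
  const [isMobile, setIsMobile] = useState<boolean | undefined>(undefined);

  useEffect(() => {
    const mql = window.matchMedia(`(max-width: ${breakpoint - 1}px)`);
    const update = () => setIsMobile(mql.matches);
    update();
    mql.addEventListener("change", update);
    return () => mql.removeEventListener("change", update);
  }, [breakpoint]);

  return isMobile;
}
```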
35-41
: Avoid reading global store state via getState() in handlers if an updater form is available. Accessing useEditorStore.getState() inside callbacks can be okay, but if setThemeState supports a functional updater, prefer it to avoid subtle stale reads/races during quick successive updates.
- const handleStyleChange = React.useCallback( - (newStyles: ThemeStyles) => { - const prev = useEditorStore.getState().themeState; - setThemeState({ ...prev, styles: newStyles }); - }, - [setThemeState] - ); + const handleStyleChange = React.useCallback( + (newStyles: ThemeStyles) => { + // If setThemeState supports functional updates: + // setThemeState((prev) => ({ ...prev, styles: newStyles })); + // Otherwise, keep current approach. + const prev = useEditorStore.getState().themeState; + setThemeState({ ...prev, styles: newStyles }); + }, + [setThemeState] + );If setThemeState doesn’t accept a function, consider adding such support in the store to improve ergonomics and safety.
81-86
: Consider passing resolved initialTheme instead of the raw themePromise to childrenIf ThemeControlPanel (or other children) also unwrap themePromise (e.g., via use()), it can cause duplicate Suspense/ErrorBoundary interactions or redundant work. Since Editor already resolves/validates initialTheme, prefer passing the resolved value or derived fields.
- Replace themePromise={themePromise} with initialTheme={initialTheme} (or remove it if unused).
- Align child components to accept the resolved data instead of a promise.
Also applies to: 113-118
8-8
: Minor consistency: either import useCallback or use React.useCallback consistentlyYou’re mixing named imports (useEffect) with React.useCallback usage. For clarity, either import useCallback or consistently use the React namespace.
Example:
-import React, { use, useEffect } from "react"; +import React, { useEffect, useCallback } from "react"; ... -const handleStyleChange = React.useCallback( +const handleStyleChange = useCallback(Also applies to: 35-41
54-60
: Invalid theme data path: returning static text may not be ideal UXIf the fetched theme data is invalid, you immediately return a static error div. Consider routing this to a more user-friendly empty state, toasts, or a retry action, consistent with your app’s error patterns.
- Show a non-blocking banner/toast with retry.
- Log diagnostic info (safely) to help triage schema mismatches.
lib/constants.ts (1)
5-5
: Clarify naming/units to avoid confusion with other “500” limits.With AI_PROMPT_CHARACTER_LIMIT also 500, consider clarifying the unit (per-day, per-session, etc.). For example: AI_FREE_TIER_REQUESTS_PER_DAY.
app/prompt-kit.css (1)
3-128
: Namespace keyframe names to avoid collisions.Names like wave, blink, shimmer are common. Prefix with pk- (e.g., pk-wave) to minimize clashes with other libraries/plugins.
components/debug-button.tsx (1)
17-17
: Remove unnecessary cn("") wrapper.Passing an empty string is redundant.
Apply this diff:
- <Button variant="ghost" size="icon" className={cn("", className)} {...props}> + <Button variant="ghost" size="icon" className={cn(className)} {...props}>Alternatively, pass className directly if no merging is needed.
components/editor/ai/alert-banner.tsx (2)
76-84
: Add an accessible label to the close button.Improves a11y for screen readers.
- <Button + <Button variant="ghost" size="icon" className="size-4 [&>svg]:size-3" - onClick={() => setShowBanner(false)} + onClick={() => setShowBanner(false)} + aria-label="Close banner" > <X /> </Button>
64-88
: Optional: Decide whether dismissing should persist or reappear.Currently, clicking X sets local state to false and the effect won’t reshow it (showBanner isn’t a dependency). If that’s intended (dismiss until reload), fine. If you want persistence across navigations/sessions, consider storing dismissal in localStorage tied to the user.
components/prompt-kit/scroll-button.tsx (1)
24-36
: Add type="button" and an accessible label.Prevents accidental form submission when placed inside forms and improves a11y.
<Button + type="button" variant={variant} size={size} className={cn( "h-10 w-10 rounded-full transition-all duration-150 ease-out", !isAtBottom ? "translate-y-0 scale-100 opacity-100" : "pointer-events-none translate-y-4 scale-95 opacity-0", className )} onClick={() => scrollToBottom()} + aria-label="Scroll to bottom" {...props} >components/prompt-kit/chat-container.tsx (1)
16-19
: Consider making the ref prop work with forwardRef pattern.The
ChatContainerScrollAnchorProps
type includes aref
prop, but the component doesn't useReact.forwardRef
. This could lead to confusion as the ref won't be properly forwarded to the underlying div element.Consider using
React.forwardRef
for proper ref forwarding:-export type ChatContainerScrollAnchorProps = { - className?: string - ref?: React.RefObject<HTMLDivElement> -} & React.HTMLAttributes<HTMLDivElement> +export type ChatContainerScrollAnchorProps = { + className?: string +} & React.HTMLAttributes<HTMLDivElement> -function ChatContainerScrollAnchor({ - className, - ...props -}: ChatContainerScrollAnchorProps) { +const ChatContainerScrollAnchor = React.forwardRef< + HTMLDivElement, + ChatContainerScrollAnchorProps +>(({ className, ...props }, ref) => { return ( <div + ref={ref} className={cn("h-px w-full shrink-0 scroll-mt-4", className)} aria-hidden="true" {...props} /> ) -} +}) + +ChatContainerScrollAnchor.displayName = "ChatContainerScrollAnchor"components/editor/custom-textarea.tsx (1)
55-55
: Consider accessibility improvements for the textarea.The reduced height constraints (min-h-[50px] max-h-[120px]) might impact accessibility for users who need larger text areas.
Consider adding ARIA attributes to improve accessibility:
editorProps: { attributes: { class: cn( "min-w-0 min-h-[50px] max-h-[120px] wrap-anywhere text-foreground/90 scrollbar-thin overflow-y-auto w-full bg-background p-1 text-sm focus-visible:outline-none disabled:opacity-50 max-sm:text-[16px]!", className ), + "aria-label": "Theme description editor", + "aria-disabled": disabled, },components/prompt-kit/loader.tsx (1)
76-91
: Complex inline styles might benefit from CSS modules or styled-components.The
ClassicLoader
uses complex inline styles for positioning and animation delays. While functional, this approach makes the code harder to maintain.Consider extracting the complex calculations into a utility function:
+const getSpinnerBarStyle = (index: number, size: "sm" | "md" | "lg") => { + const marginLeftMap = { sm: "-0.75px", md: "-1px", lg: "-1.25px" }; + const transformOriginMap = { + sm: "0.75px 10px", + md: "1px 12px", + lg: "1.25px 14px", + }; + + return { + top: "0", + left: "50%", + marginLeft: marginLeftMap[size], + transformOrigin: transformOriginMap[size], + transform: `rotate(${index * 30}deg)`, + opacity: 0, + animationDelay: `${index * 0.1}s`, + height: barSizes[size].height, + width: barSizes[size].width, + }; +}; {[...Array(12)].map((_, i) => ( <div key={i} className="bg-primary absolute animate-[spinner-fade_1.2s_linear_infinite] rounded-full" - style={{ - top: "0", - left: "50%", - marginLeft: - size === "sm" ? "-0.75px" : size === "lg" ? "-1.25px" : "-1px", - transformOrigin: `${size === "sm" ? "0.75px" : size === "lg" ? "1.25px" : "1px"} ${size === "sm" ? "10px" : size === "lg" ? "14px" : "12px"}`, - transform: `rotate(${i * 30}deg)`, - opacity: 0, - animationDelay: `${i * 0.1}s`, - height: barSizes[size].height, - width: barSizes[size].width, - }} + style={getSpinnerBarStyle(i, size)} /> ))}components/editor/theme-control-panel.tsx (1)
83-86
: Correctly gating top-level theme actions while generating; consider adding accessibility signal.Disabling ThemePresetSelect and ThemeEditActions when isGeneratingTheme is true is correct. Add aria-busy for better a11y and UX signaling.
- <ThemePresetSelect className="h-14 rounded-none" disabled={isGeneratingTheme} /> + <ThemePresetSelect + className="h-14 rounded-none" + disabled={isGeneratingTheme} + aria-busy={isGeneratingTheme} + />- <ThemeEditActions theme={theme} disabled={isGeneratingTheme} /> + <ThemeEditActions theme={theme} disabled={isGeneratingTheme} aria-busy={isGeneratingTheme} />utils/ai/ai-prompt.tsx (1)
11-14
: Helper extraction is good; tighten the output format to avoid accidental line breaks.The template introduces a newline after the equals sign, which can subtly change downstream parsing. If the consumer expects a single-line "@Label = {json}", prefer an inline format.
-export const buildMentionStringForAPI = (mention: MentionReference) => { - return `@${mention.label} = - ${JSON.stringify(mention.themeData)}`; -}; +export const buildMentionStringForAPI = (mention: MentionReference) => { + return `@${mention.label} = ${JSON.stringify(mention.themeData)}`; +};If the newline is intentional and the server-side tooling accounts for it, feel free to ignore this suggestion.
hooks/use-ai-chat-form.ts (1)
34-39
: Minor ordering nit: mark initialized within the transition.Setting hasInitialized before scheduling the transition is fine; moving it inside the transition keeps the side effect closer to the state update.
- startTransition(() => { - dispatch({ - type: "INITIALIZE", - payload: imagesDraft.map(({ url }) => ({ url })), - }); - }); + startTransition(() => { + dispatch({ + type: "INITIALIZE", + payload: imagesDraft.map(({ url }) => ({ url })), + }); + hasInitialized.current = true; + });app/ai/components/chat-heading.tsx (1)
6-7
: Redundant ternary; simplify style assignment."--gradient-accent" resolves to the same value regardless of isGeneratingTheme.
- "--gradient-accent": isGeneratingTheme ? "var(--foreground)" : "var(--foreground)", + "--gradient-accent": "var(--foreground)",components/editor/ai/ai-chat-form-body.tsx (1)
75-85
: Add aria-busy to the chat input container for better accessibilityWhen disabled due to ongoing generation, indicating busy state improves SR feedback.
- <div className="relative isolate min-h-[50px] overflow-hidden" aria-disabled={disabled}> + <div + className="relative isolate min-h-[50px] overflow-hidden" + aria-disabled={disabled} + aria-busy={disabled} + >components/editor/ai/closeable-suggested-pill-actions.tsx (1)
21-24
: Remove unnecessary async
handleSetPrompt
performs no awaits. Dropasync
to avoid implicitly returning a Promise.- const handleSetPrompt = async (prompt: string) => { + const handleSetPrompt = (prompt: string) => { const promptData = createCurrentThemePrompt({ prompt }); onGenerateTheme(promptData); };components/editor/ai/message-actions.tsx (1)
45-51
: Optional: animate visibility with opacity instead of visibilityUsing
invisible/visible
won’t animate opacity. If you want a fade effect, prefer opacity transitions.- className={cn( - "invisible flex gap-1 transition-opacity duration-300 ease-out group-hover/message:visible", + className={cn( + "opacity-0 pointer-events-none flex gap-1 transition-opacity duration-300 ease-out group-hover/message:opacity-100 group-hover/message:pointer-events-auto", isUser && "justify-end", isAssistant && "justify-start pl-7.5" )}hooks/use-ai-theme-generation-core.ts (1)
5-7
: Broaden generating state check to be future-proofCurrent check only covers "submitted" and "streaming". If the SDK adds/uses additional in-flight states, the UI might show an idle state while a request is actually pending.
Consider this change and verify against your SDK's status enum:
- const isGeneratingTheme = status === "submitted" || status === "streaming"; + const isGeneratingTheme = status !== "ready" && status !== "error";If the SDK exposes a union type of statuses, prefer an explicit in-flight set: new Set(["submitted","streaming","reloading", ...]).has(status).
hooks/use-ai-generate-chat.tsx (4)
66-74
: Remove or gate console logsConsole noise in production can leak implementation details and clutter logs.
Apply this diff:
- console.log("----- ✅ Updating Stored Messages -----"); setStoredMessages(chat.messages);
If you prefer to keep it during development:
- console.log("----- ✅ Updating Stored Messages -----"); + if (process.env.NODE_ENV !== "production") { + console.log("----- ✅ Updating Stored Messages -----"); + }
76-85
: Remove or gate console logs (hydration)Same rationale as above.
Apply this diff:
- console.log("----- ☑️ Populating Chat with Stored Messages -----"); chat.setMessages(storedMessages);
Or, gate behind NODE_ENV as shown in the previous comment.
61-64
: Clamp reset index to avoid surprising slicesCalling resetMessagesUpToIndex with negative or out-of-range indices can yield unexpected results. Clamp to [0, messages.length].
Apply this diff:
- const resetMessagesUpToIndex = (index: number) => { - const newMessages = chat.messages.slice(0, index); - chat.setMessages(newMessages); - }; + const resetMessagesUpToIndex = (index: number) => { + const end = Math.min(Math.max(index, 0), chat.messages.length); + chat.setMessages(chat.messages.slice(0, end)); + };
46-55
: Future hook-in point: applying themeStyles. You already extract themeStyles from message.metadata with a TODO. If you want, I can help wire this to your editor store/apply function and add unit tests; a rough sketch follows after the list below.
Do you want me to push a follow-up commit that:
- Introduces an applyGeneratedTheme(themeStyles) utility,
- Wires it here,
- And adds tests around metadata-to-theme conversion?
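A rough sketch of that wiring, hedged because the metadata shape and the applyGeneratedTheme signature are assumptions based on this PR rather than confirmed APIs:

```tsx
import { useEffect } from "react";
import type { ChatMessage } from "@/types/ai";

// Sketch: apply the latest assistant themeStyles whenever the chat updates.
export function useApplyThemeFromMessages(
  messages: ChatMessage[],
  applyGeneratedTheme: (styles: unknown) => void
) {
  useEffect(() => {
    const lastAssistant = [...messages].reverse().find((m) => m.role === "assistant");
    const themeStyles = (lastAssistant?.metadata as { themeStyles?: unknown } | undefined)
      ?.themeStyles;
    if (themeStyles) applyGeneratedTheme(themeStyles);
  }, [messages, applyGeneratedTheme]);
}
```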
app/api/generate-theme/tools.ts (1)
9-10
: Harden the tool input schema (reject unexpected keys).Since this tool takes no input, prefer a strict empty object to fail fast on unexpected payloads.
- inputSchema: z.object({}), + inputSchema: z.object({}).strict(),utils/ai/parse-ai-sdk-transport-error.ts (2)
37-45
: Use fallback when the parsed message is empty/whitespace.If
parsed.message
is an empty string, the current??
keeps it, yielding a blank banner. Trim and fall back for better UX.- message: parsed.message ?? fallbackMessage, + message: parsed.message && parsed.message.trim().length > 0 + ? parsed.message + : fallbackMessage,
51-59
: Mirror the empty-string fallback for string inputs.Same reasoning as above to avoid rendering an empty error message.
- message: parsed.message ?? fallbackMessage, + message: parsed.message && parsed.message.trim().length > 0 + ? parsed.message + : fallbackMessage,app/api/generate-theme/route.ts (3)
31-35
: Avoid unnecessary await for headers() and normalize IP extraction.
headers()
is synchronous. Also,x-forwarded-for
can contain a comma-separated list; use the first IP for rate limiting.- const headersList = await headers(); + const headersList = headers(); @@ - const ip = headersList.get("x-forwarded-for") ?? "anonymous"; + const fwd = headersList.get("x-forwarded-for"); + const ip = fwd ? fwd.split(",")[0].trim() : "anonymous";
70-72
: Use centralized logging instead of console.error.You already import
logError
. Prefer consistent logging to ensure structured output and observability.- onError: (error) => { - if (error instanceof Error) console.error(error); - }, + onError: (error) => { + if (error instanceof Error) logError(error, { route: "/api/generate-theme", phase: "streamText" }); + },
90-91
: Avoid sending reasoning traces unless explicitly required.Even with
includeThoughts: false
, surfacing reasoning events can still risk exposing unintended traces. Consider disabling by default or gating behind an env flag.- writer.merge(result.toUIMessageStream({ sendReasoning: true })); + writer.merge(result.toUIMessageStream({ sendReasoning: false }));components/editor/ai/messages.tsx (2)
86-91
: Base loading-state logic on the same message set you render.For consistency with the rendered list, consider inspecting
visibleMessages
instead of the fullmessages
array.- const lastAssistantMsgHasText = getLastAssistantMessage(messages)?.parts.some( + const lastAssistantMsgHasText = getLastAssistantMessage(visibleMessages)?.parts.some( (part) => part.type === "text" && Boolean(part.text) ); @@ - }, [status, messages]); + }, [status, visibleMessages]);
153-160
: Likely Tailwind class typo: important modifier placement.
border-border/50!
is not a valid Tailwind class. If you intended to force precedence, the bang modifier must prefix the utility.- "border-border/50! bg-destructive relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none" + "!border-border/50 bg-destructive relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none"components/examples/ai-chat-demo.tsx (2)
37-39
: Non-standard scale utility; prefer arbitrary value syntax.
scale-80
is not a default Tailwind scale. Usescale-[0.8]
or define a custom value in your Tailwind config.- "bg-background/50 flex h-full w-full min-w-[350px] origin-top-left flex-col overflow-hidden backdrop-blur-lg max-lg:scale-80" + "bg-background/50 flex h-full w-full min-w-[350px] origin-top-left flex-col overflow-hidden backdrop-blur-lg max-lg:scale-[0.8]"
24-33
: Broaden element typing when toggling disabled state.You’re casting to
HTMLButtonElement
but the query selects multiple element types that also supportdisabled
. Narrow by guard or use a broader union to satisfy TS and future-proof the demo.- const focusables = ref.current.querySelectorAll( + const focusables = ref.current.querySelectorAll<HTMLElement>( 'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])' ); focusables.forEach((el) => { (el as HTMLElement).setAttribute("tabindex", "-1"); // Only set disabled if supported and disabled is true - if (disabled && "disabled" in el) (el as HTMLButtonElement).disabled = true; - if (!disabled && "disabled" in el) (el as HTMLButtonElement).disabled = false; + if ("disabled" in el) { + // @ts-expect-error: multiple element types expose a 'disabled' property + (el as HTMLButtonElement | HTMLInputElement | HTMLSelectElement | HTMLTextAreaElement).disabled = !!disabled; + } });app/ai/components/ai-chat-form.tsx (2)
41-58
: Remove unnecessary async fromhandleGenerate
handleGenerate
does notawait
anything. Droppingasync
avoids confusion about unhandled promises.- const handleGenerate = async () => { + const handleGenerate = () => { if (!checkValidSession() || !checkValidSubscription()) return; // Act as an early return // Only send images that are not loading, and strip loading property const images = uploadedImages.filter((img) => !img.loading).map(({ url }) => ({ url })); // Proceed only if there is text, or at least one image if (isEmptyPrompt && images.length === 0) return; onThemeGeneration({ ...(promptData ?? {}), content: promptData?.content ?? "", mentions: promptData?.mentions ?? [], images, }); clearLocalDraft(); };
119-120
: Remove unreachable loader toggleThis branch only renders when
isGeneratingTheme
is false, so the ternary always shows the arrow. Simplify and drop the unused Loader icon.- {isGeneratingTheme ? <Loader className="animate-spin" /> : <ArrowUp />} + <ArrowUp />Additionally, remove the now-unused import on Line 13:
- import { ArrowUp, Loader, StopCircle } from "lucide-react"; + import { ArrowUp, StopCircle } from "lucide-react";utils/ai/message-converter.ts (3)
18-20
: User messages withoutpromptData
are droppedCurrently, user messages are only pushed when
promptData
exists. If upstream ever sends user messages without metadata, those will be silently omitted from the model’s context.Consider adding a fallback that builds user content from
message.parts
(text/images) whenpromptData
is absent.Would you like me to provide a safe fallback implementation that preserves such messages?
5-8
: Consider making the function synchronousThere’s no
await
inside; returning aPromise
is unnecessary overhead and complicates typing for callers.-export async function convertMessagesToModelMessages( - messages: ChatMessage[] -): Promise<ModelMessage[]> { - const modelMessages: ModelMessage[] = []; +export function convertMessagesToModelMessages(messages: ChatMessage[]): ModelMessage[] { + const modelMessages: ModelMessage[] = [];Before applying, confirm no callers rely on the
Promise
return type (e.g.,await convertMessagesToModelMessages(...)
).
87-91
: Disambiguate assistant metadata by code-fencing JSONEmbedding raw JSON as plain text can confuse the model. Wrapping it in a JSON code block makes intent explicit.
- assistantContentParts.push({ - type: "text", - text: JSON.stringify(themeStyles), - }); + assistantContentParts.push({ + type: "text", + text: `\`\`\`json\n${JSON.stringify(themeStyles)}\n\`\`\``, + });components/editor/ai/chat-input.tsx (2)
25-35
: Consider aligning theonThemeGeneration
signature across componentsHere it’s
() => Promise<void>
, whereasAIChatForm
expects a synchronous() => void
. Unifying the type reduces friction for shared handlers.Would you like me to propose a shared
type OnThemeGeneration = (promptData: AIPromptData) => Promise<void>
and apply it across both components?
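For example, a shared alias kept next to AIPromptData (name and placement are proposals, not existing code):

```ts
import type { AIPromptData } from "@/types/ai";

// Proposed shared handler type so AIChatForm and ChatInput agree on one signature.
export type OnThemeGeneration = (promptData: AIPromptData) => Promise<void>;
```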
164-167
: Remove unreachable loader toggle in the send buttonThis branch is only rendered when
isGeneratingTheme
is false, so the loader icon never renders. Simplify and drop the loader import if it becomes unused.- {isGeneratingTheme ? <LoaderIcon className="animate-spin" /> : <ArrowUp />} + <ArrowUp />components/editor/ai/message.tsx (2)
85-97
: Fix Tailwind class typo
border-border/50!
isn’t a valid class and won’t be picked up by Tailwind. Remove the trailing!
.- "border-border/50! bg-foreground relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none" + "border-border/50 bg-foreground relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none"
175-201
: Optional: fallback to part-based images when metadata is missingImages are sourced only from
promptData.images
. If a message lacks metadata but includes image parts, they won’t render. Add a fallback to scanmessage.parts
for{ type: "image" }
.I can provide a small utility to merge images from
promptData?.images
withmessage.parts
image entries if you want broader compatibility.store/ai-chat-store.ts (1)
12-15
: Expose_setHasHydrated
as internal-only to avoid API leakageThe leading underscore signals internal use, but it’s part of the public interface. Consider keeping it off the exported interface (define an internal store type for wiring
persist
and export a narrowed selector-based hook).components/editor/ai/chat-theme-preview.tsx (2)
32-39
:errorText
prop is accepted but not renderedSurface the optional
errorText
when provided to give contextual failures.Destructure
errorText
:export function ChatThemePreview({ status, expanded = false, themeStyles, + errorText, className, children, ...props }: ChatThemePreviewProps) {
Use it in the error card:
- <span className="text-foreground/90 text-sm">Generation cancelled or failed.</span> + <span className="text-foreground/90 text-sm"> + {errorText ?? "Generation cancelled or failed."} + </span>Also applies to: 64-74
33-41
:expanded
behaves as “defaultExpanded”; sync when prop changes or rename
expanded
is only used for initial state; updates to the prop won’t reflect. Either rename todefaultExpanded
or sync via effect.Minimal sync:
-import { ComponentProps, useState } from "react"; +import { ComponentProps, useEffect, useState } from "react"; ... const [isExpanded, setIsExpanded] = useState(expanded); + useEffect(() => { + setIsExpanded(expanded); + }, [expanded]);components/editor/ai/chat-interface.tsx (1)
54-56
: Fix grammar in toast copyChange “not longer available” to “no longer available”.
- description: "Seems like this message is not longer available.", + description: "Seems like this message is no longer available.",
```ts
const stream = createUIMessageStream<ChatMessage>({
  execute: ({ writer }) => {
    const result = streamText({
      abortSignal: req.signal,
      model: baseModel,
      providerOptions: baseProviderOptions,
      system: SYSTEM_PROMPT,
      messages: modelMessages,
      tools: THEME_GENERATION_TOOLS,
      stopWhen: stepCountIs(5),
```
💡 Verification agent
🧩 Analysis chain
Reconsider hard stop condition stepCountIs(5) to prevent premature termination.
- stopWhen: stepCountIs(5),
+ // Consider removing hard caps unless specifically required by UX.
+ // stopWhen: stepCountIs(5),
If intentional, please confirm the rationale (e.g., observed runaway loops) and that it won’t truncate theme generation.
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "1) Search for 'stepCountIs(' occurrences:"
rg --hidden --no-ignore -n "stepCountIs\(" || true
echo
echo "2) Search for 'stepCountIs' definitions (function / const / export):"
rg --hidden --no-ignore -n "stepCountIs" || true
echo
echo "3) Search for 'stopWhen' usage sites:"
rg --hidden --no-ignore -n "stopWhen" || true
echo
echo "4) Search for stream helpers (streamText / createUIMessageStream):"
rg --hidden --no-ignore -n "streamText" || true
rg --hidden --no-ignore -n "createUIMessageStream" || true
echo
echo "5) Search for THEME_GENERATION_TOOLS:"
rg --hidden --no-ignore -n "THEME_GENERATION_TOOLS" || true
echo
echo "6) Show the app/api/generate-theme/route.ts file (if present) to inspect surrounding context:"
if [ -f "app/api/generate-theme/route.ts" ]; then
sed -n '1,240p' app/api/generate-theme/route.ts
else
echo "app/api/generate-theme/route.ts not found"
fi
echo
echo "7) If stepCountIs is found, show its file content (first match):"
FIRST=$(rg --hidden --no-ignore -n "stepCountIs" -S --no-ignore-vcs --hidden --line-number --no-heading --max-count 1 | awk -F: '{print $1":"$2}')
if [ -n "$FIRST" ]; then
FILE=$(echo "$FIRST" | cut -d: -f1)
echo "Found in: $FILE"
sed -n '1,240p' "$FILE"
else
echo "No stepCountIs definition/file found in repo."
fi
Length of output: 9916
Reconsider hard stop condition stepCountIs(5)
— avoid premature truncation
Found no other usages or configuration for this cap; stepCountIs
is imported from "ai" and only used in app/api/generate-theme/route.ts (import around lines 13–17, usage at ~69). A fixed stopWhen of 5 can prematurely cut tool calls or assistant output — remove it or make it configurable.
Files to change:
- app/api/generate-theme/route.ts — remove or gate the
stopWhen: stepCountIs(5)
line.
Suggested diff:
- stopWhen: stepCountIs(5),
+ // Consider removing hard caps unless required by UX.
+ // stopWhen: stepCountIs(5),
If this limit was intentional, please confirm the rationale (e.g., observed runaway loops) and either document it or expose it via config/env/request option so it won't unexpectedly truncate theme generation.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const stream = createUIMessageStream<ChatMessage>({ | |
execute: ({ writer }) => { | |
const result = streamText({ | |
abortSignal: req.signal, | |
model: baseModel, | |
providerOptions: baseProviderOptions, | |
system: SYSTEM_PROMPT, | |
messages: modelMessages, | |
tools: THEME_GENERATION_TOOLS, | |
stopWhen: stepCountIs(5), | |
const stream = createUIMessageStream<ChatMessage>({ | |
execute: ({ writer }) => { | |
const result = streamText({ | |
abortSignal: req.signal, | |
model: baseModel, | |
providerOptions: baseProviderOptions, | |
system: SYSTEM_PROMPT, | |
messages: modelMessages, | |
tools: THEME_GENERATION_TOOLS, | |
// Consider removing hard caps unless required by UX. | |
// stopWhen: stepCountIs(5), |
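If the cap is kept, a hedged sketch of gating it behind configuration rather than a hard-coded constant (the env variable name is an assumption):

```ts
import { stepCountIs } from "ai";

// Sketch: opt into a hard step cap via an env var; when unset, no stopWhen is
// passed and the SDK's default behavior applies.
const rawCap = process.env.THEME_GENERATION_MAX_STEPS;
const parsedCap = rawCap ? Number.parseInt(rawCap, 10) : NaN;

export const themeGenerationStopWhen = Number.isFinite(parsedCap)
  ? stepCountIs(parsedCap)
  : undefined;

// In the route: streamText({ ...options, stopWhen: themeGenerationStopWhen })
```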
🤖 Prompt for AI Agents
In app/api/generate-theme/route.ts around lines 60 to 69, remove or
conditionally gate the hard stopWhen: stepCountIs(5) entry which currently
forces the stream to halt after 5 steps and can prematurely truncate tool calls
or assistant output; either delete that line or replace it with a configurable
value (read from env, request param, or feature flag) and pass the configured
limit into stepCountIs(...) (or omit stopWhen entirely) and add a brief comment
documenting the rationale if the 5-step cap is intentionally retained.
```ts
onFinish: async (result) => {
  const { totalUsage } = result;
  try {
    await recordAIUsage({
      promptTokens: totalUsage.inputTokens,
      completionTokens: totalUsage.outputTokens,
    });
  } catch (error) {
    logError(error as Error, { action: "recordAIUsage", totalUsage });
  }
},
```
🛠️ Refactor suggestion
Guard token usage before recording; avoid variable shadowing.
totalUsage
fields can be undefined depending on provider/path.- The
result
parameter shadows the outerresult
variable, which harms readability.
- onFinish: async (result) => {
- const { totalUsage } = result;
+ onFinish: async (summary) => {
+ const { totalUsage } = summary;
try {
- await recordAIUsage({
- promptTokens: totalUsage.inputTokens,
- completionTokens: totalUsage.outputTokens,
- });
+ const promptTokens = totalUsage?.inputTokens;
+ const completionTokens = totalUsage?.outputTokens;
+ if (typeof promptTokens === "number" && typeof completionTokens === "number") {
+ await recordAIUsage({ promptTokens, completionTokens });
+ }
} catch (error) {
- logError(error as Error, { action: "recordAIUsage", totalUsage });
+ logError(error as Error, { action: "recordAIUsage", totalUsage });
}
},
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
onFinish: async (result) => { | |
const { totalUsage } = result; | |
try { | |
await recordAIUsage({ | |
promptTokens: totalUsage.inputTokens, | |
completionTokens: totalUsage.outputTokens, | |
}); | |
} catch (error) { | |
logError(error as Error, { action: "recordAIUsage", totalUsage }); | |
} | |
}, | |
onFinish: async (summary) => { | |
const { totalUsage } = summary; | |
try { | |
const promptTokens = totalUsage?.inputTokens; | |
const completionTokens = totalUsage?.outputTokens; | |
if (typeof promptTokens === "number" && typeof completionTokens === "number") { | |
await recordAIUsage({ promptTokens, completionTokens }); | |
} | |
} catch (error) { | |
logError(error as Error, { action: "recordAIUsage", totalUsage }); | |
} | |
}, |
🤖 Prompt for AI Agents
In app/api/generate-theme/route.ts around lines 73 to 83, the onFinish handler
shadows the outer result variable and assumes
totalUsage.inputTokens/outputTokens exist; change the parameter name (e.g.,
finishResult or res) to avoid shadowing, guard access to totalUsage and its
fields (use optional chaining and fallback values or skip calling recordAIUsage
if tokens are undefined), and pass only numeric token counts to recordAIUsage
(e.g., promptTokens: finishResult.totalUsage?.inputTokens ?? 0,
completionTokens: finishResult.totalUsage?.outputTokens ?? 0) — also keep the
same try/catch logging behavior but include the guarded values in the log
context.
```css
@layer base {
  * {
    @apply border-border outline-ring/50;
  }
  body {
    @apply bg-background text-foreground;
  }
}
```
🛠️ Refactor suggestion
Avoid setting outline styles on all elements globally.
Applying outline-ring/50 globally can interfere with native focus indicators and component-level focus styles. Prefer component-scoped focus-visible styles.
Apply this diff to stop forcing outlines globally:
@layer base {
* {
- @apply border-border outline-ring/50;
+ @apply border-border;
}
body {
@apply bg-background text-foreground;
}
}
If desired, add an explicit, accessible focus style where needed (e.g., focus-visible:ring-2 ring-ring) on interactive components.
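As a sketch of the component-scoped alternative (assuming the project's shadcn Button and Tailwind setup; classes are illustrative):

```tsx
import { Button } from "@/components/ui/button";

// Sketch: keep focus indication on the interactive element instead of a global outline.
export function SendButton() {
  return (
    <Button
      type="button"
      className="focus-visible:ring-ring focus-visible:ring-2 focus-visible:outline-none"
    >
      Send
    </Button>
  );
}
```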
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@layer base { | |
* { | |
@apply border-border outline-ring/50; | |
} | |
body { | |
@apply bg-background text-foreground; | |
} | |
} | |
@layer base { | |
* { | |
@apply border-border; | |
} | |
body { | |
@apply bg-background text-foreground; | |
} | |
} |
🤖 Prompt for AI Agents
In app/globals.css around lines 68 to 75, remove the global outline application
on the universal selector and instead restrict focus styles to interactive
components; change the rule so it no longer applies outline-ring/50 to "*" and
add guidance to apply accessible focus-visible styles (e.g.,
focus-visible:ring-2 ring-ring) to buttons, links, inputs, and other interactive
components at component or utility level. Ensure native focus indicators are
preserved unless explicitly overridden per component and document where to add
component-scoped focus-visible utilities.
app/layout.tsx
```diff
  <GetProDialogWrapper />
  <Toaster />
- {children}
+ <AIGenerateChatProvider>{children}</AIGenerateChatProvider>
```
💡 Verification agent
🧩 Analysis chain
Verify error boundary coverage for the AI chat provider.
The AIGenerateChatProvider
handles errors internally via the onError
callback, but consider if an error boundary would provide additional protection for unhandled errors.
🏁 Script executed:
#!/bin/bash
# Description: Check error handling in AIGenerateChatProvider and look for error boundaries
# Check the AIGenerateChatProvider implementation for error handling
echo "=== AIGenerateChatProvider error handling ==="
ast-grep --pattern 'function AIGenerateChatProvider($_) {
$$$
}'
# Check if there are any error boundaries in the codebase
echo -e "\n=== Checking for Error Boundaries ==="
rg "ErrorBoundary|componentDidCatch|getDerivedStateFromError" --type tsx --type jsx
Length of output: 5178
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Searching for AIGenerateChatProvider usages ==="
rg -n "AIGenerateChatProvider" || true
echo -e "\n=== Showing app/layout.tsx (first 240 lines) ==="
if [ -f app/layout.tsx ]; then
sed -n '1,240p' app/layout.tsx || true
else
echo "app/layout.tsx not found"
fi
echo -e "\n=== Showing hooks/use-ai-generate-chat.tsx (first 240 lines) ==="
if [ -f hooks/use-ai-generate-chat.tsx ]; then
sed -n '1,240p' hooks/use-ai-generate-chat.tsx || true
else
echo "hooks/use-ai-generate-chat.tsx not found"
fi
echo -e "\n=== Searching for Next.js app error boundary files (app/**/error.*) ==="
find app -type f -regextype posix-extended -regex '.*\/error\.(tsx|ts|js|jsx)$' -print || true
echo -e "\n=== Searching for ErrorBoundary/componentDidCatch/getDerivedStateFromError references ==="
rg -n "ErrorBoundary|componentDidCatch|getDerivedStateFromError" || true
Length of output: 10354
Add a root error boundary or wrap AIGenerateChatProvider with one
Verified: AIGenerateChatProvider registers an onError handler for transport errors (hooks/use-ai-generate-chat.tsx) but there is no global/root error boundary; only a nested error file exists under app/themes/[themeId]/error.tsx.
- hooks/use-ai-generate-chat.tsx — onError in useChat shows a toast for transport errors.
- app/layout.tsx — provider is mounted here: {children}
- app/themes/[themeId]/error.tsx exists but only covers that route segment (no app/error.tsx at the root).
Recommendation: add a root app/error.tsx or wrap the provider in a client-side ErrorBoundary (class component or react-error-boundary) so unexpected/unhandled errors inside the provider don’t crash the whole app and can show a fallback UI.
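A minimal sketch of a root app/error.tsx following the App Router convention (copy and styling are placeholders):

```tsx
"use client";

// app/error.tsx — root error boundary sketch for the App Router.
export default function RootError({
  error,
  reset,
}: {
  error: Error & { digest?: string };
  reset: () => void;
}) {
  return (
    <div className="flex min-h-screen flex-col items-center justify-center gap-4">
      <p className="text-muted-foreground text-sm">Something went wrong: {error.message}</p>
      <button type="button" onClick={reset} className="underline">
        Try again
      </button>
    </div>
  );
}
```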
```css
@keyframes typing {
  0%,
  100% {
    transform: translateY(0);
    opacity: 0.5;
  }
  50% {
    transform: translateY(-2px);
    opacity: 1;
  }
}

@keyframes loading-dots {
  0%,
  100% {
    opacity: 0;
  }
  50% {
    opacity: 1;
  }
}

@keyframes wave {
  0%,
  100% {
    transform: scaleY(1);
  }
  50% {
    transform: scaleY(0.6);
  }
}

@keyframes blink {
  0%,
  100% {
    opacity: 1;
  }
  50% {
    opacity: 0;
  }
}

@keyframes text-blink {
  0%,
  100% {
    color: var(--primary);
  }
  50% {
    color: var(--muted-foreground);
  }
}

@keyframes bounce-dots {
  0%,
  100% {
    transform: scale(0.8);
    opacity: 0.5;
  }
  50% {
    transform: scale(1.2);
    opacity: 1;
  }
}

@keyframes thin-pulse {
  0%,
  100% {
    transform: scale(0.95);
    opacity: 0.8;
  }
  50% {
    transform: scale(1.05);
    opacity: 0.4;
  }
}

@keyframes pulse-dot {
  0%,
  100% {
    transform: scale(1);
    opacity: 0.8;
  }
  50% {
    transform: scale(1.5);
    opacity: 1;
  }
}

@keyframes shimmer-text {
  0% {
    background-position: 150% center;
  }
  100% {
    background-position: -150% center;
  }
}

@keyframes wave-bars {
  0%,
  100% {
    transform: scaleY(1);
    opacity: 0.5;
  }
  50% {
    transform: scaleY(0.6);
    opacity: 1;
  }
}

@keyframes shimmer {
  0% {
    background-position: 200% 50%;
  }
  100% {
    background-position: -200% 50%;
  }
}

@keyframes spinner-fade {
  0% {
    opacity: 0;
  }
  100% {
    opacity: 1;
  }
}
```
🛠️ Refactor suggestion
A11y: provide a reduced-motion fallback for all loaders.
Users with motion sensitivity should be able to opt out of animations. Add a utility or a media query that disables or dampens animations.
Example you can add (in app/globals.css or here):
@media (prefers-reduced-motion: reduce) {
/* Disable animations on elements that opt-in to motion-safe */
.motion-safe\:animate-none {
animation: none !important;
}
}
Then gate animations via motion-safe: e.g., className="motion-safe:animate-none md:animate-[typing_var(--duration)_infinite]".
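Applied to one of the loaders, that gating could look like the following sketch (component name, classes, and the cn import path are illustrative; wave-bars is one of the keyframes defined above):

```tsx
import { cn } from "@/lib/utils";

// Sketch: the bar only animates when the user has not requested reduced motion.
export function WaveBar({ className }: { className?: string }) {
  return (
    <div
      className={cn(
        "bg-primary h-4 w-1 rounded-full",
        "motion-safe:animate-[wave-bars_1s_ease-in-out_infinite]",
        className
      )}
    />
  );
}
```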
🤖 Prompt for AI Agents
In app/prompt-kit.css around lines 3–128 the file defines many keyframe
animations but lacks a reduced-motion fallback; add a prefers-reduced-motion
media query that disables or dampens these animations for users who opt out and
create a small utility class (e.g., a motion-safe animate-none utility) that
sets animation: none !important so components can opt into motion-respecting
behavior; then update animation-using elements/classes to be gated by that
utility or use a motion-safe prefix so animations only run when the user has not
requested reduced motion.
```ts
export const themeStylePropsSchemaWithoutSpacing = themeStylePropsSchema.omit({
  spacing: true,
});

export const themeStylesSchemaWithoutSpacing = z.object({
  light: themeStylePropsSchemaWithoutSpacing,
  dark: themeStylePropsSchemaWithoutSpacing,
});

export type ThemeStylePropsWithoutSpacing = z.infer<typeof themeStylesSchemaWithoutSpacing>;
```
🛠️ Refactor suggestion
Incorrect type alias: ThemeStylePropsWithoutSpacing inferred from the styles schema (light/dark) instead of props schema.
This will produce the wrong shape. Also missing a ThemeStylesWithoutSpacing type, which is useful alongside the schema.
export const themeStylesSchemaWithoutSpacing = z.object({
light: themeStylePropsSchemaWithoutSpacing,
dark: themeStylePropsSchemaWithoutSpacing,
});
-export type ThemeStylePropsWithoutSpacing = z.infer<typeof themeStylesSchemaWithoutSpacing>;
+export type ThemeStylePropsWithoutSpacing = z.infer<typeof themeStylePropsSchemaWithoutSpacing>;
+export type ThemeStylesWithoutSpacing = z.infer<typeof themeStylesSchemaWithoutSpacing>;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export const themeStylePropsSchemaWithoutSpacing = themeStylePropsSchema.omit({ | |
spacing: true, | |
}); | |
export const themeStylesSchemaWithoutSpacing = z.object({ | |
light: themeStylePropsSchemaWithoutSpacing, | |
dark: themeStylePropsSchemaWithoutSpacing, | |
}); | |
export type ThemeStylePropsWithoutSpacing = z.infer<typeof themeStylesSchemaWithoutSpacing>; | |
export const themeStylePropsSchemaWithoutSpacing = themeStylePropsSchema.omit({ | |
spacing: true, | |
}); | |
export const themeStylesSchemaWithoutSpacing = z.object({ | |
light: themeStylePropsSchemaWithoutSpacing, | |
dark: themeStylePropsSchemaWithoutSpacing, | |
}); | |
export type ThemeStylePropsWithoutSpacing = z.infer<typeof themeStylePropsSchemaWithoutSpacing>; | |
export type ThemeStylesWithoutSpacing = z.infer<typeof themeStylesSchemaWithoutSpacing>; |
🤖 Prompt for AI Agents
In types/theme.ts around lines 78 to 88, the ThemeStylePropsWithoutSpacing type
is incorrectly inferred from the styles schema (themeStylesSchemaWithoutSpacing)
instead of the props/schema for a single style; update
ThemeStylePropsWithoutSpacing to use z.infer<typeof
themeStylePropsSchemaWithoutSpacing> and add a new export type
ThemeStylesWithoutSpacing = z.infer<typeof themeStylesSchemaWithoutSpacing> so
you have both the single-style props type and the light/dark styles shape
available.
```ts
export const buildPromptForAPI = (promptData: AIPromptData) => {
  const mentionReferences = promptData.mentions.map((mention) => buildMentionStringForAPI(mention));
  return `${promptData.content}\n\n${mentionReferences.join("\n")}`;
```
💡 Verification agent
🧩 Analysis chain
Make prompt assembly deterministic across renders.
Sorting mentions before rendering yields stable prompts, aiding reproducibility and easier testing. Also verify that the extra newline in each mention (if kept) matches the API contract.
-export const buildPromptForAPI = (promptData: AIPromptData) => {
- const mentionReferences = promptData.mentions.map((mention) => buildMentionStringForAPI(mention));
+export const buildPromptForAPI = (promptData: AIPromptData) => {
+ const mentionReferences = promptData.mentions
+ .slice()
+ .sort((a, b) => a.label.localeCompare(b.label))
+ .map((mention) => buildMentionStringForAPI(mention));
return `${promptData.content}\n\n${mentionReferences.join("\n")}`;
};
To double-check remaining references to any legacy prompt format, you can run:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n --glob '!dist' --glob '!build' --glob '!node_modules' $'@\\w+\\s*=\\s*\\n' || true
Length of output: 296
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Searching for symbols: buildPromptForAPI, buildMentionStringForAPI, AIPromptData"
rg -n --hidden -S --glob '!dist' --glob '!build' --glob '!node_modules' "buildPromptForAPI|buildMentionStringForAPI|AIPromptData" || true
echo
echo "Listing files that contain any of those symbols:"
rg -l --hidden -S --glob '!dist' --glob '!build' --glob '!node_modules' "buildPromptForAPI|buildMentionStringForAPI|AIPromptData" || true
echo
echo "Printing up to first 240 lines of each matching file:"
for f in $(rg -l --hidden -S --glob '!dist' --glob '!build' --glob '!node_modules' "buildPromptForAPI|buildMentionStringForAPI|AIPromptData" || true); do
echo
echo "=== $f ==="
sed -n '1,240p' "$f"
done
Length of output: 67136
Sort mentions before assembling prompts — make prompt output deterministic.
Sort mentions by label when building prompts and when converting to model messages; also verify whether the newline/indentation in buildMentionStringForAPI matches the API contract.
Files to change:
- utils/ai/ai-prompt.tsx — buildPromptForAPI (currently maps promptData.mentions directly).
- utils/ai/message-converter.ts — convertMessagesToModelMessages (currently pushes mentions via promptData.mentions.forEach()).
- utils/ai/ai-prompt.tsx — buildMentionStringForAPI: verify/remove the extra newline/indentation if the API expects a single-line mention.
Proposed diffs:
utils/ai/ai-prompt.tsx
-export const buildPromptForAPI = (promptData: AIPromptData) => {
- const mentionReferences = promptData.mentions.map((mention) => buildMentionStringForAPI(mention));
- return `${promptData.content}\n\n${mentionReferences.join("\n")}`;
-};
+export const buildPromptForAPI = (promptData: AIPromptData) => {
+ const mentionReferences = promptData.mentions
+ .slice()
+ .sort((a, b) => a.label.localeCompare(b.label))
+ .map((mention) => buildMentionStringForAPI(mention));
+ return `${promptData.content}\n\n${mentionReferences.join("\n")}`;
+};
utils/ai/message-converter.ts
- // Add each mention as a text part
- promptData.mentions.forEach((mention) => {
- userContentParts.push({
- type: "text",
- text: buildMentionStringForAPI(mention),
- });
- });
+ // Add each mention as a text part (sorted for determinism)
+ promptData.mentions
+ .slice()
+ .sort((a, b) => a.label.localeCompare(b.label))
+ .forEach((mention) => {
+ userContentParts.push({
+ type: "text",
+ text: buildMentionStringForAPI(mention),
+ });
+ });
Note: I couldn't find an in-repo API contract for the mention formatting — please confirm whether the multiline output from buildMentionStringForAPI (the "\n " before JSON) is required. If not, consider changing it to a single-line format for consistency.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export const buildPromptForAPI = (promptData: AIPromptData) => { | |
const mentionReferences = promptData.mentions.map((mention) => buildMentionStringForAPI(mention)); | |
return `${promptData.content}\n\n${mentionReferences.join("\n")}`; | |
export const buildPromptForAPI = (promptData: AIPromptData) => { | |
const mentionReferences = promptData.mentions | |
.slice() | |
.sort((a, b) => a.label.localeCompare(b.label)) | |
.map((mention) => buildMentionStringForAPI(mention)); | |
return `${promptData.content}\n\n${mentionReferences.join("\n")}`; | |
}; |
```ts
if (!document.startViewTransition) {
  setThemeState({
    ...themeState,
    styles: mergedStyles,
  });
} else {
  document.startViewTransition(() => {
    setThemeState({
      ...themeState,
      styles: mergedStyles,
    });
  });
}
```
🛠️ Refactor suggestion
Harden view-transition usage with a typed guard and safe fallback
Accessing document.startViewTransition
can produce TS errors in environments without the DOM type augmentation and may throw at runtime. Guard its existence in a typed way and fall back cleanly.
- if (!document.startViewTransition) {
- setThemeState({
- ...themeState,
- styles: mergedStyles,
- });
- } else {
- document.startViewTransition(() => {
- setThemeState({
- ...themeState,
- styles: mergedStyles,
- });
- });
- }
+ const startViewTransition = (document as any)?.startViewTransition as
+ | ((cb: () => void) => void)
+ | undefined;
+
+ const apply = () =>
+ setThemeState({
+ ...themeState,
+ styles: mergedStyles,
+ });
+
+ try {
+ if (typeof startViewTransition === "function") {
+ startViewTransition(apply);
+ } else {
+ apply();
+ }
+ } catch {
+ // Fallback in case the API is present but errors
+ apply();
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (!document.startViewTransition) { | |
setThemeState({ | |
...themeState, | |
styles: mergedStyles, | |
}); | |
} else { | |
document.startViewTransition(() => { | |
setThemeState({ | |
...themeState, | |
styles: mergedStyles, | |
}); | |
}); | |
} | |
const startViewTransition = (document as any)?.startViewTransition as | |
| ((cb: () => void) => void) | |
| undefined; | |
const apply = () => | |
setThemeState({ | |
...themeState, | |
styles: mergedStyles, | |
}); | |
try { | |
if (typeof startViewTransition === "function") { | |
startViewTransition(apply); | |
} else { | |
apply(); | |
} | |
} catch { | |
// Fallback in case the API is present but errors | |
apply(); | |
} |
🤖 Prompt for AI Agents
In utils/ai/apply-theme.ts around lines 12 to 24, guard usage of
document.startViewTransition with a typed runtime check and safe fallback: add a
small type-guard (or use typeof (document as any).startViewTransition ===
"function") so TypeScript knows the property is callable, then call
document.startViewTransition inside a try/catch and fall back to directly
calling setThemeState on error or when the guard fails; ensure the fallback
keeps the same setThemeState({...themeState, styles: mergedStyles}) behavior.
```ts
const hasTextPart = message.parts.some((part) => part.type === "text" && Boolean(part.text));
const images = message.metadata?.promptData?.images;
const hasAttachments = images && images.length > 0;
return hasTextPart || hasAttachments;
});
```
🛠️ Refactor suggestion
Ensure boolean logic and trim text when filtering displayable messages
- Boolean(part.text) treats whitespace as content; trim it.
- images && images.length > 0 can return the array or undefined; normalize to a boolean.
Apply this diff:
- const hasTextPart = message.parts.some((part) => part.type === "text" && Boolean(part.text));
- const images = message.metadata?.promptData?.images;
- const hasAttachments = images && images.length > 0;
- return hasTextPart || hasAttachments;
+ const hasTextPart = message.parts.some(
+ (part) => part.type === "text" && Boolean(part.text?.trim())
+ );
+ const images = message.metadata?.promptData?.images;
+ const hasAttachments = (images?.length ?? 0) > 0;
+ return hasTextPart || hasAttachments;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const hasTextPart = message.parts.some((part) => part.type === "text" && Boolean(part.text)); | |
const images = message.metadata?.promptData?.images; | |
const hasAttachments = images && images.length > 0; | |
return hasTextPart || hasAttachments; | |
}); | |
const hasTextPart = message.parts.some( | |
(part) => part.type === "text" && Boolean(part.text?.trim()) | |
); | |
const images = message.metadata?.promptData?.images; | |
const hasAttachments = (images?.length ?? 0) > 0; | |
return hasTextPart || hasAttachments; | |
}); |
🤖 Prompt for AI Agents
In utils/ai/messages.ts around lines 5 to 9, the filter logic incorrectly treats
whitespace-only text as present and may return a non-boolean for attachments;
update the predicate to trim text before checking non-emptiness and normalize
the images check to a boolean (e.g., use message.parts.some(p => p.type ===
"text" && Boolean(p.text?.trim())) and const hasAttachments =
Boolean(message.metadata?.promptData?.images?.length)); return the combined
boolean.
```ts
const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY,
});
```
Fail fast when GOOGLE_API_KEY is missing
Avoid deferred runtime failures inside the provider by validating the API key up front with a clear error.
Apply this diff:
-const google = createGoogleGenerativeAI({
- apiKey: process.env.GOOGLE_API_KEY,
-});
+const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;
+if (!GOOGLE_API_KEY) {
+ throw new Error("GOOGLE_API_KEY is not set. Please configure it in your environment.");
+}
+const google = createGoogleGenerativeAI({
+ apiKey: GOOGLE_API_KEY,
+});
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const google = createGoogleGenerativeAI({ | |
apiKey: process.env.GOOGLE_API_KEY, | |
}); | |
const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY; | |
if (!GOOGLE_API_KEY) { | |
throw new Error("GOOGLE_API_KEY is not set. Please configure it in your environment."); | |
} | |
const google = createGoogleGenerativeAI({ | |
apiKey: GOOGLE_API_KEY, | |
}); |
🤖 Prompt for AI Agents
In utils/ai/model.ts around lines 6 to 8, the GOOGLE_API_KEY is used directly
when creating the Google provider which defers failures to runtime; validate
process.env.GOOGLE_API_KEY immediately and fail fast by checking if it's present
and throwing a clear Error (or exiting) before calling createGoogleGenerativeAI
so the module initialization surfaces a helpful message when the key is missing.
Actionable comments posted: 17
♻️ Duplicate comments (21)
package.json (2)
14-16: Verify compatibility: mixing ai@^5 with @ai-sdk/*@^2

You’re on ai v5 while @ai-sdk/react, @ai-sdk/openai, and @ai-sdk/google are at ^2.x. Depending on how you’re using the APIs, this mix may be fine, but it’s a common source of subtle breakages during upgrades. Please verify.
Run this script to surface potential mismatches and usage patterns:
#!/usr/bin/env bash
set -euo pipefail

echo "Declared versions in package.json:"
jq -r '.dependencies | to_entries[] | select(.key|test("^(@ai-sdk|ai$)")) | "\(.key): \(.value)"' package.json

echo
echo "Imports referencing 'ai' or '@ai-sdk/react' (context):"
rg -nP -C3 'from\s+[\"\'](?:ai|@ai-sdk/react|@ai-sdk/openai|@ai-sdk/google)[\"\']' --type=ts --type=tsx || true

echo
echo "Legacy imports that should be eliminated:"
rg -nP -C2 'from\s+[\"\']ai/react[\"\']' --type=ts --type=tsx || true

Also applies to: 68-68
126-126: Align ESLint config with Next minor

Next is 15.4.1; eslint-config-next is 15.3.0. Aligning them avoids plugin rule drift.
Apply:
- "eslint-config-next": "15.3.0", + "eslint-config-next": "15.4.1",types/ai.ts (1)
1-2: Avoid runtime imports: make these type-only and drop the value import of THEME_GENERATION_TOOLS

This file is types-only. Importing the tools value pulls server code into client bundles. Use type-only imports and typeof import to avoid a runtime edge.
Apply:
-import { THEME_GENERATION_TOOLS } from "@/app/api/generate-theme/tools";
-import { InferUITools, UIMessage } from "ai";
+import type { InferUITools, UIMessage } from "ai";
hooks/use-ai-theme-generation-core.ts (1)
8-14
: Make promptData required, validate content/images, and return the sendMessage promise
- The param is optional and only checks for existence, not useful content.
- Not returning/awaiting sendMessage hides transport errors from callers.

Apply this diff:
- const generateThemeCore = async (promptData?: AIPromptData) => {
-   if (!promptData) throw new Error("Failed to generate theme. Please try again.");
-
-   sendMessage({
-     text: promptData.content,
-     metadata: { promptData },
-   });
- };
+ const generateThemeCore = (promptData: AIPromptData) => {
+   const hasContent = Boolean(promptData.content?.trim());
+   const hasImages = (promptData.images?.length ?? 0) > 0;
+   if (!hasContent && !hasImages) {
+     throw new Error("Missing prompt content.");
+   }
+   return sendMessage({
+     text: promptData.content,
+     metadata: { promptData },
+   });
+ };
store/ai-chat-store.ts (2)
28-33
: Don’t persist transient flags; whitelist persisted keys
hasHydrated
is ephemeral and should not be stored. Persist onlymessages
.{ name: "ai-chat-storage-v2", storage: createJSONStorage(() => idbStorage), + partialize: (state) => ({ + messages: state.messages, + }), onRehydrateStorage: () => (state) => { state?._setHasHydrated?.(); }, }
1-1
: Mark the store as client-only
This module persists to IndexedDB and uses React hooks; it must be a client component.+ "use client"; import { ChatMessage } from "@/types/ai";
components/editor/ai/chat-theme-preview.tsx (9)
50-61
: Forward className and ...props to the root Card in loading state
Public API extends div props; forward them consistently to the root container.- <Card className={cn("w-full max-w-[550px] overflow-hidden rounded-lg shadow-none")}> + <Card + className={cn("w-full max-w-[550px] overflow-hidden rounded-lg shadow-none", className)} + {...props} + >
64-74
: Forward className and ...props to the root Card in error state
Align with the public prop surface.- <Card className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none")}> + <Card + className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none", className)} + {...props} + >
84-85
: Forward className and ...props to the root Card in complete state
Currently forwarded to an inner div; forward to Card instead.- <Card className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none")}> + <Card + className={cn("max-w-[550px] overflow-hidden rounded-lg shadow-none", className)} + {...props} + >
150-152
: Stop forwarding className and ...props to inner content
These belong to the root Card; avoid duplicate application.- <div className={cn("space-y-2 p-2.5", className)} {...props}> + <div className={cn("space-y-2 p-2.5")}> {children} </div>
41-41
: Resolve 'system' theme to a concrete palette before indexing
IndexingthemeStyles[mode]
will break whenmode === "system"
. Resolve it safely using matchMedia on the client.const { theme: mode } = useTheme(); + const resolvedMode = + mode === "system" + ? (typeof window !== "undefined" && + window.matchMedia && + window.matchMedia("(prefers-color-scheme: dark)").matches + ? "dark" + : "light") + : mode;
96-120
: Use the resolved mode for color swatches
Follow-up to resolved mode to prevent undefined indexing.- style={{ backgroundColor: themeStyles[mode].primary }} + style={{ backgroundColor: themeStyles[resolvedMode].primary }} ... - style={{ backgroundColor: themeStyles[mode].secondary }} + style={{ backgroundColor: themeStyles[resolvedMode].secondary }} ... - style={{ backgroundColor: themeStyles[mode].background }} + style={{ backgroundColor: themeStyles[resolvedMode].background }} ... - style={{ backgroundColor: themeStyles[mode].muted }} + style={{ backgroundColor: themeStyles[resolvedMode].muted }} ... - style={{ backgroundColor: themeStyles[mode].accent }} + style={{ backgroundColor: themeStyles[resolvedMode].accent }} ... - style={{ backgroundColor: themeStyles[mode].border }} + style={{ backgroundColor: themeStyles[resolvedMode].border }}
90-91
: Make the clickable header accessible
Add role, focusability, and keyboard activation; expose the expanded state.- onClick={() => setIsExpanded(!isExpanded)} + onClick={() => setIsExpanded((prev) => !prev)} + role="button" + tabIndex={0} + aria-expanded={isExpanded} + onKeyDown={(e) => { + if (e.key === "Enter" || e.key === " ") { + e.preventDefault(); + setIsExpanded((prev) => !prev); + } + }}
129-131
: Invalid Tailwind class: remove the trailing!
Tailwind uses the important prefix, not suffix.- <Zap className="size-3.5!" /> + <Zap className="size-3.5" />
133-139
: Expose expanded state on the toggle control and wire it to the region
Addaria-expanded
andaria-controls
to the button, and anid
to the expandable region. UseuseId()
for uniqueness.- <button + <button type="button" className="text-foreground/75 group-hover/control:text-foreground ml-auto transition-colors" aria-label={isExpanded ? "Collapse section" : "Expand section"} + aria-expanded={isExpanded} + aria-controls={contentId} > <ChevronsUpDown className="size-4" /> </button>And at the top import and state:
-import { ComponentProps, useState } from "react"; +import { ComponentProps, useId, useState } from "react";Define a unique id:
- const [isExpanded, setIsExpanded] = useState(expanded); + const [isExpanded, setIsExpanded] = useState(expanded); + const contentId = useId();Attach it to the content region:
- <div + <div + id={contentId} className={cn( "grid transition-all duration-300 ease-in-out", isExpanded ? "grid-rows-[1fr]" : "grid-rows-[0fr]" )} >app/layout.tsx (1)
90-91
: Add a root error boundary (duplicate of earlier recommendation).There’s no app/error.tsx at the root. Consider adding a root error boundary or wrapping ChatProvider in one to handle unexpected errors from the chat pipeline without taking down the whole app.
I can scaffold a minimal app/error.tsx with a user-friendly fallback if you want.
components/editor/ai/messages.tsx (1)
102-120
: Fix index mismatch: handlers/editing/streaming use filtered index, causing wrong actionsvisibleMessages.map uses the filtered index both for last-message detection and to call onRetry/onEdit/onEditSubmit, while editingMessageIndex refers to the full messages array. This desynchronizes actions and streaming state when messages are filtered out. Use the original index by id and detect streaming against the last visible message.
Apply this diff:
- {visibleMessages.map((message, index) => { - const isLastMessage = index === messages.length - 1; + {visibleMessages.map((message, index) => { + const messageIndex = messageIndexById.get(message.id) ?? index; + const isLastVisibleMessage = index === visibleMessages.length - 1; const isStreaming = status === "submitted" || status === "streaming"; - const isLastMessageStreaming = - message.role === "assistant" && isStreaming && isLastMessage; + const isLastMessageStreaming = + message.role === "assistant" && isStreaming && isLastVisibleMessage; return ( <Message key={message.id} message={message} - onRetry={() => onRetry(index)} - isEditing={editingMessageIndex === index} - onEdit={() => onEdit(index)} - onEditSubmit={(newPromptData) => onEditSubmit(index, newPromptData)} + onRetry={() => onRetry(messageIndex)} + isEditing={editingMessageIndex === messageIndex} + onEdit={() => onEdit(messageIndex)} + onEditSubmit={(newPromptData) => onEditSubmit(messageIndex, newPromptData)} onEditCancel={onEditCancel} isLastMessageStreaming={isLastMessageStreaming} isGeneratingTheme={isGeneratingTheme} /> ); })}Add a memoized id→index map:
const visibleMessages = useMemo(() => filterMessagesToDisplay(messages), [messages]); + + const messageIndexById = useMemo( + () => new Map(messages.map((m, i) => [m.id, i] as const)), + [messages] + );app/api/generate-theme/route.ts (3)
57-59
: Validate request body before converting messagesGuard against missing/invalid inputs to avoid cryptic failures.
- const { messages }: { messages: ChatMessage[] } = await req.json(); - const modelMessages = await convertMessagesToModelMessages(messages); + const { messages }: { messages: ChatMessage[] } = await req.json(); + if (!Array.isArray(messages) || messages.length === 0) { + return new Response("Missing or invalid messages array.", { status: 400 }); + } + const modelMessages = await convertMessagesToModelMessages(messages);
72-75
: Propagate streaming errors to client streamOnly logging to console leaves the UI unaware of failures. Surface the error via the writer so the client can present it.
- onError: (error) => { - if (error instanceof Error) console.error(error); - }, + onError: (error) => { + if (error instanceof Error) { + console.error(error); + try { + writer.error?.(error); + } catch (_) { + // no-op + } + } + },
75-85
: Guard token usage fields and avoid parameter shadowingtotalUsage may be absent; also rename the parameter to avoid shadowing outer symbols and improve clarity.
- onFinish: async (result) => { - const { totalUsage } = result; + onFinish: async (summary) => { + const { totalUsage } = summary; try { - await recordAIUsage({ - promptTokens: totalUsage.inputTokens, - completionTokens: totalUsage.outputTokens, - }); + const promptTokens = totalUsage?.inputTokens; + const completionTokens = totalUsage?.outputTokens; + if (typeof promptTokens === "number" && typeof completionTokens === "number") { + await recordAIUsage({ promptTokens, completionTokens }); + } } catch (error) { logError(error as Error, { action: "recordAIUsage", totalUsage }); } },components/editor/ai/message.tsx (1)
171-181
: Fix content visibility bug and loose inequality; include text parts fallbackAs noted previously, promptData?.content?.trim() != "" returns true for undefined (due to loose inequality), and the visibility check ignores text parts when promptData is absent, hiding valid user text.
- const promptData = message.metadata?.promptData; - const shouldDisplayMsgContent = promptData?.content?.trim() != ""; + const promptData = message.metadata?.promptData; + const textFromParts = message.parts + .map((part) => (part.type === "text" ? part.text : "")) + .join("\n"); + const hasTextContent = Boolean((promptData?.content ?? textFromParts).trim()); const getDisplayContent = () => { - if (promptData) { - return buildAIPromptRender(promptData); - } - - return message.parts.map((part) => (part.type === "text" ? part.text : "")).join(""); + return promptData ? buildAIPromptRender(promptData) : textFromParts; }; @@ - {shouldDisplayMsgContent && ( + {hasTextContent && ( <div className={cn( "bg-card/75 text-card-foreground/90 w-fit self-end rounded-lg border p-3 text-sm" )} > {msgContent} </div> )}Also applies to: 230-238
🧹 Nitpick comments (25)
components/ai-elements/code-block.tsx (3)
111-113
: Optional: clear timeout on unmount to avoid updating state after unmountPrevent a potential setState after unmount by tracking and clearing the timeout.
Apply these diffs:
const [isCopied, setIsCopied] = useState(false); const { code } = useContext(CodeBlockContext); + const timeoutRef = useRef<number | null>(null);
- setTimeout(() => setIsCopied(false), timeout); + if (timeoutRef.current) { + window.clearTimeout(timeoutRef.current); + } + timeoutRef.current = window.setTimeout(() => setIsCopied(false), timeout);And add this import adjustment and effect:
- import { createContext, useContext, useState } from "react"; + import { createContext, useContext, useEffect, useRef, useState } from "react";useEffect(() => { return () => { if (timeoutRef.current) { window.clearTimeout(timeoutRef.current); } }; }, []);Also applies to: 120-125
66-88
: Optional: render a single SyntaxHighlighter to avoid double render costRendering both light and dark blocks doubles highlight work on every render. If you already use next-themes (common in Next apps), conditionally render a single instance keyed to theme.
Example approach (requires next-themes):
import { useTheme } from "next-themes"; const ThemedHighlighter = ({ code, language, showLineNumbers }: { code: string; language: string; showLineNumbers?: boolean }) => { const { theme, systemTheme } = useTheme(); const isDark = (theme === "system" ? systemTheme : theme) === "dark"; return ( <SyntaxHighlighter className="overflow-x-auto overflow-auto" codeTagProps={{ className: "font-mono text-sm" }} customStyle={{ margin: 0, padding: "1rem", fontSize: "0.875rem" }} language={language} lineNumberStyle={{ color: "hsl(var(--muted-foreground))", paddingRight: "1rem", minWidth: "2.5rem" }} showLineNumbers={showLineNumbers} style={isDark ? oneDark : oneLight} > {code} </SyntaxHighlighter> ); };Then replace the two SyntaxHighlighter instances with <ThemedHighlighter ... />.
36-39
: Nit: background/color set twice (Tailwind + inline style)You set bg/fg via container classes and again via customStyle. It’s redundant; consider relying on one to reduce CSS override churn.
Also applies to: 48-55, 71-78
app/loaders.css (1)
1-129
: Consider motion-reduction and namespacing for keyframes
- Add motion-reduction handling so users with prefers-reduced-motion aren’t forced to see continuous animations.
- Namespacing generic keyframe names (e.g., wave, blink, shimmer) can prevent collisions with 3rd-party styles.
As a follow-up:
- In components that consume these animations, gate them with motion-safe utilities (e.g., motion-safe:animate-…).
- Optionally prefix keyframes (e.g., loader-wave, loader-blink) in a future pass to avoid global name collisions.
package.json (1)
118-118
: Remove @types/next — Next includes its own types@types/next is deprecated and can conflict with Next 15’s bundled types.
Apply:
- "@types/next": "^9.0.0",
components/horizontal-scroll-area.tsx (1)
16-24
: Optional: limit gradient overlays to edges for clarity and perfBoth overlays span the full width (left-0 right-0). Consider constraining them to small edge widths to avoid unnecessary overdraw and ensure the effect is localized.
Example:
- "from-background/75 pointer-events-none absolute right-0 left-0 z-10 h-full bg-gradient-to-r to-transparent to-10% opacity-0 transition-opacity", + "from-background/75 pointer-events-none absolute left-0 w-12 z-10 h-full bg-gradient-to-r to-transparent opacity-0 transition-opacity", @@ - "from-background/75 pointer-events-none absolute right-0 left-0 z-10 h-full bg-gradient-to-l to-transparent to-10% opacity-0 transition-opacity", + "from-background/75 pointer-events-none absolute right-0 w-12 z-10 h-full bg-gradient-to-l to-transparent opacity-0 transition-opacity",hooks/use-ai-theme-generation-core.ts (1)
5-6
: Derive isGeneratingTheme from a typed status union to avoid typosIf
status
is a free-form string, using string literals risks typos. Consider exporting a union or enum from the chat context and narrowing here.components/ai-elements/conversation.tsx (2)
24-26
: Avoid passing empty string into cn
cn("", className)
is equivalent tocn(className)
.- <StickToBottom.Content className={cn("", className)} {...props} /> + <StickToBottom.Content className={cn(className)} {...props} />
40-54
: Add an accessible label to the scroll button
The button renders only an icon; add an aria-label and mark the icon as decorative.- <Button + <Button className={cn("absolute bottom-4 left-[50%] translate-x-[-50%] rounded-full", className)} onClick={handleScrollToBottom} size="icon" type="button" variant="outline" + aria-label="Scroll to latest messages" {...props} > - <ArrowDownIcon className="size-4" /> + <ArrowDownIcon className="size-4" aria-hidden="true" /> </Button>components/loader.tsx (1)
416-445
: Wrap variants with a live region to improve screen-reader feedback
Expose a polite live region so assistive tech announces ongoing activity without extra work at call sites.-function Loader({ variant = "circular", size = "md", text, className }: LoaderProps) { - switch (variant) { - case "circular": - return <CircularLoader size={size} className={className} />; - case "classic": - return <ClassicLoader size={size} className={className} />; - case "pulse": - return <PulseLoader size={size} className={className} />; - case "pulse-dot": - return <PulseDotLoader size={size} className={className} />; - case "dots": - return <DotsLoader size={size} className={className} />; - case "typing": - return <TypingLoader size={size} className={className} />; - case "wave": - return <WaveLoader size={size} className={className} />; - case "bars": - return <BarsLoader size={size} className={className} />; - case "terminal": - return <TerminalLoader size={size} className={className} />; - case "text-blink": - return <TextBlinkLoader text={text} size={size} className={className} />; - case "text-shimmer": - return <TextShimmerLoader text={text} size={size} className={className} />; - case "loading-dots": - return <TextDotsLoader text={text} size={size} className={className} />; - default: - return <CircularLoader size={size} className={className} />; - } -} +function Loader({ variant = "circular", size = "md", text, className }: LoaderProps) { + const render = () => { + switch (variant) { + case "circular": + return <CircularLoader size={size} className={className} />; + case "classic": + return <ClassicLoader size={size} className={className} />; + case "pulse": + return <PulseLoader size={size} className={className} />; + case "pulse-dot": + return <PulseDotLoader size={size} className={className} />; + case "dots": + return <DotsLoader size={size} className={className} />; + case "typing": + return <TypingLoader size={size} className={className} />; + case "wave": + return <WaveLoader size={size} className={className} />; + case "bars": + return <BarsLoader size={size} className={className} />; + case "terminal": + return <TerminalLoader size={size} className={className} />; + case "text-blink": + return <TextBlinkLoader text={text} size={size} className={className} />; + case "text-shimmer": + return <TextShimmerLoader text={text} size={size} className={className} />; + case "loading-dots": + return <TextDotsLoader text={text} size={size} className={className} />; + default: + return <CircularLoader size={size} className={className} />; + } + }; + return <div role="status" aria-live="polite">{render()}</div>; +}components/editor/ai/chat-theme-preview.tsx (1)
64-74
: Surface provided errorText if available
Prefer the specific error message when given; fall back otherwise.- <span className="text-foreground/90 text-sm">Generation cancelled or failed.</span> + <span className="text-foreground/90 text-sm"> + {"errorText" in ({} as any) ? null : null /* keep TS happy in diff viewer */} + {(props as any)?.errorText ?? "Generation cancelled or failed."} + </span>Outside diff, a cleaner change within this component scope:
- Destructure
errorText
from props for thestatus === "error"
branch and rendererrorText ?? "Generation cancelled or failed."
.hooks/use-stream-text.ts (1)
17-27
: Optional: allow configurable speed.Consider parameterizing the typewriter speed so the caller can tune the animation without editing the hook.
components/editor/ai/stream-text.tsx (1)
43-44
: Prefer stream || text as a safer fallback when animation hasn’t started.Using nullish coalescing keeps an empty stream (initial state) and renders nothing; logical OR will render text until streaming kicks in.
Apply this diff:
- return wrap(stream ?? text ?? ""); + return wrap(stream || text || "");hooks/use-chat-context.tsx (3)
15-21
: Avoid name collision between the ChatContext type and value; clarify naming.Using the same identifier for the interface and the React Context value harms readability. Rename the interface to ChatContextValue and update the createContext type parameter.
Apply this diff:
-interface ChatContext extends ReturnType<typeof useChat<ChatMessage>> { +interface ChatContextValue extends ReturnType<typeof useChat<ChatMessage>> { startNewChat: () => void; resetMessagesUpToIndex: (index: number) => void; } -const ChatContext = createContext<ChatContext | null>(null); +const ChatContext = createContext<ChatContextValue | null>(null);
65-73
: Persisting messages only when idle/error is a good call; consider gating console logs in production.The console.log calls will ship to production. Wrap them behind a development check to avoid noisy logs in prod.
For example:
if (process.env.NODE_ENV !== "production") { console.log("----- ✅ Updating Stored Messages -----"); }
93-99
: Minor grammar fix in error message.Use “a ChatProvider” instead of “an ChatProvider”.
Apply this diff:
- throw new Error("useChatContext must be used within an ChatProvider"); + throw new Error("useChatContext must be used within a ChatProvider");components/editor/ai/messages.tsx (2)
61-65
: Remove stray console.log in production UILeftover console logging in a render-affecting effect is noisy.
previousUserMsgLength.current = currentUserMsgCount; requestAnimationFrame(() => { - console.log("scrolling to end"); scrollEndElement.scrollIntoView({ behavior: "smooth", block: "start" }); });
47-49
: Consistency: derive initial user message count via helperUse getUserMessages for consistency with later logic.
- const previousUserMsgLength = useRef<number>( - messages.filter((message) => message.role === "user").length - ); + const previousUserMsgLength = useRef<number>(getUserMessages(messages).length);app/api/generate-theme/route.ts (2)
71-72
: Avoid hard-capping steps with stepCountIs(5) unless requiredA fixed cap can truncate tool calls/output across providers. Consider removing or making it configurable.
- stopWhen: stepCountIs(5), + // Consider removing hard caps unless specifically required by UX/risk controls. + // stopWhen: stepCountIs(5),If you intend to keep a cap, expose it via an env var and document the rationale.
31-31
: Nit: headers() is synchronous; drop unnecessary awaitMinor cleanup; no functional change.
- const headersList = await headers(); + const headersList = headers();components/editor/ai/chat-input.tsx (2)
68-69
: Await onThemeGeneration to handle rejections and sequencinggenerateTheme is async; awaiting the callback ensures errors are caught by the surrounding flow and avoids fire-and-forget.
- onThemeGeneration(promptData); + await onThemeGeneration(promptData);
166-167
: Remove redundant conditional iconThis branch only renders when isGeneratingTheme is false, so the ternary always selects ArrowUp.
- {isGeneratingTheme ? <LoaderIcon className="animate-spin" /> : <ArrowUp />} + <ArrowUp />components/editor/ai/message.tsx (3)
39-39
: Also hide actions while editingCurrently actions are hidden only while streaming. Consider hiding them when the user is editing to avoid confusing/disabled controls.
- const showMessageActions = !isLastMessageStreaming; + const showMessageActions = !isLastMessageStreaming && !isEditing;
95-101
: Tailwind important modifier appears mis-placed ('border-border/50!')Tailwind’s important modifier is normally prefixed, e.g. !border-border/50. The trailing exclamation likely won’t be parsed and may silently drop the color. Suggest correcting it.
- "border-border/50! bg-foreground relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none" + "!border-border/50 bg-foreground relative flex size-6 shrink-0 items-center justify-center rounded-full border select-none"
104-148
: Add explicit fallback for unknown part typesThe map callback returns undefined for unhandled part types. React tolerates it, but being explicit reads better and avoids accidental sparse arrays.
{message.parts.map((part, idx) => { const { type } = part; const key = `message-${message.id}-part-${idx}`; if (type === "text") { return ( <StreamText key={key} text={part.text} className="w-fit text-sm" animate={isLastMessageStreaming} markdown /> ); } if (type === "tool-generateTheme") { const { state } = part; if (state === "output-available") { const themeStyles = part.output; return ( <ChatThemePreview key={key} status="complete" themeStyles={themeStyles} className="p-0" > <ScrollArea className="h-48"> <div className="p-2"> <ColorPreview styles={themeStyles} currentMode={themeState.currentMode} /> </div> </ScrollArea> </ChatThemePreview> ); } if (state === "output-error") { return <ChatThemePreview key={key} status="error" className="p-0" />; } return <ChatThemePreview key={key} status="loading" className="p-0" />; } + return null; })}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml
is excluded by!**/pnpm-lock.yaml
📒 Files selected for processing (25)
app/ai/components/ai-chat-hero.tsx
(3 hunks)app/api/generate-theme/route.ts
(3 hunks)app/api/generate-theme/tools.ts
(1 hunks)app/globals.css
(2 hunks)app/layout.tsx
(2 hunks)app/loaders.css
(1 hunks)components/ai-elements/code-block.tsx
(1 hunks)components/ai-elements/conversation.tsx
(1 hunks)components/ai-elements/response.tsx
(1 hunks)components/editor/ai/chat-input.tsx
(7 hunks)components/editor/ai/chat-interface.tsx
(5 hunks)components/editor/ai/chat-theme-preview.tsx
(1 hunks)components/editor/ai/closeable-suggested-pill-actions.tsx
(2 hunks)components/editor/ai/message.tsx
(3 hunks)components/editor/ai/messages.tsx
(1 hunks)components/editor/ai/stream-text.tsx
(1 hunks)components/horizontal-scroll-area.tsx
(1 hunks)components/loader.tsx
(1 hunks)hooks/use-ai-theme-generation-core.ts
(1 hunks)hooks/use-chat-context.tsx
(1 hunks)hooks/use-scroll-start-end.ts
(1 hunks)hooks/use-stream-text.ts
(1 hunks)package.json
(5 hunks)store/ai-chat-store.ts
(1 hunks)types/ai.ts
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- app/globals.css
- components/editor/ai/closeable-suggested-pill-actions.tsx
- components/editor/ai/chat-interface.tsx
🧰 Additional context used
🧬 Code Graph Analysis (17)
components/ai-elements/response.tsx (1)
lib/utils.ts (1)
cn
(6-8)
components/editor/ai/stream-text.tsx (2)
hooks/use-stream-text.ts (1)
useStreamText
(3-66)components/ai-elements/response.tsx (1)
Response
(9-17)
app/layout.tsx (1)
hooks/use-chat-context.tsx (1)
ChatProvider
(22-91)
components/editor/ai/chat-theme-preview.tsx (5)
types/theme.ts (1)
ThemeStyles
(76-76)components/theme-provider.tsx (1)
useTheme
(84-92)hooks/use-feedback-text.ts (1)
useFeedbackText
(13-42)lib/utils.ts (1)
cn
(6-8)utils/ai/apply-theme.ts (1)
applyGeneratedTheme
(5-25)
components/horizontal-scroll-area.tsx (2)
hooks/use-scroll-start-end.ts (1)
useScrollStartEnd
(41-78)lib/utils.ts (1)
cn
(6-8)
components/editor/ai/messages.tsx (10)
types/ai.ts (2)
ChatMessage
(38-38)AIPromptData
(18-22)hooks/use-chat-context.tsx (1)
useChatContext
(93-99)hooks/use-scroll-start-end.ts (1)
useScrollStartEnd
(41-78)utils/ai/parse-ai-sdk-transport-error.ts (1)
parseAiSdkTransportError
(27-63)lib/utils.ts (1)
cn
(6-8)components/ai-elements/conversation.tsx (3)
Conversation
(12-20)ConversationContent
(24-26)ConversationScrollButton
(30-54)components/editor/ai/message.tsx (1)
Message
(26-78)components/editor/ai/loading-logo.tsx (1)
LoadingLogo
(4-20)components/tooltip-wrapper.tsx (1)
TooltipWrapper
(7-35)components/editor/ai/chat-messages.tsx (1)
ChatMessages
(29-144)
components/ai-elements/conversation.tsx (1)
lib/utils.ts (1)
cn
(6-8)
components/loader.tsx (1)
lib/utils.ts (1)
cn
(6-8)
types/ai.ts (2)
types/theme.ts (1)
ThemeStyles
(76-76)app/api/generate-theme/tools.ts (1)
THEME_GENERATION_TOOLS
(7-33)
hooks/use-chat-context.tsx (5)
types/ai.ts (1)
ChatMessage
(38-38)store/ai-chat-store.ts (1)
useAIChatStore
(15-35)utils/ai/parse-ai-sdk-transport-error.ts (1)
parseAiSdkTransportError
(27-63)utils/ai/apply-theme.ts (1)
applyGeneratedTheme
(5-25)hooks/use-subscription.ts (1)
SUBSCRIPTION_STATUS_QUERY_KEY
(10-10)
app/api/generate-theme/route.ts (2)
types/ai.ts (1)
ChatMessage
(38-38)app/api/generate-theme/tools.ts (1)
THEME_GENERATION_TOOLS
(7-33)
components/ai-elements/code-block.tsx (1)
lib/utils.ts (1)
cn
(6-8)
hooks/use-ai-theme-generation-core.ts (2)
hooks/use-chat-context.tsx (1)
useChatContext
(93-99)types/ai.ts (1)
AIPromptData
(18-22)
store/ai-chat-store.ts (2)
types/ai.ts (1)
ChatMessage
(38-38)store/idb-storage.ts (1)
idbStorage
(4-14)
components/editor/ai/chat-input.tsx (5)
types/ai.ts (1)
AIPromptData
(18-22)hooks/use-chat-context.tsx (1)
useChatContext
(93-99)components/editor/ai/alert-banner.tsx (1)
BannerWrapper
(90-112)components/editor/ai/ai-chat-form-body.tsx (1)
AIChatFormBody
(33-89)lib/constants.ts (1)
MAX_IMAGE_FILES
(7-7)
app/ai/components/ai-chat-hero.tsx (4)
hooks/use-chat-context.tsx (1)
useChatContext
(93-99)hooks/use-ai-theme-generation-core.ts (1)
useAIThemeGenerationCore
(4-22)hooks/use-guards.ts (1)
useGuards
(7-15)types/ai.ts (1)
AIPromptData
(18-22)
components/editor/ai/message.tsx (8)
types/ai.ts (2)
ChatMessage
(38-38)AIPromptData
(18-22)components/editor/ai/message-actions.tsx (1)
MessageActions
(17-91)store/editor-store.ts (1)
useEditorStore
(34-231)components/editor/ai/loading-logo.tsx (1)
LoadingLogo
(4-20)components/editor/ai/stream-text.tsx (1)
StreamText
(12-44)components/editor/ai/chat-theme-preview.tsx (1)
ChatThemePreview
(32-159)utils/ai/ai-prompt.tsx (1)
buildAIPromptRender
(21-45)components/editor/ai/message-edit-form.tsx (1)
MessageEditForm
(30-150)
🔇 Additional comments (9)
components/ai-elements/code-block.tsx (1)
26-35
: Good use of context to avoid prop drilling for the copy button overlayScoping the code string with a lightweight context makes the copy button composable and keeps the API clean.
types/ai.ts (1)
38-38
: LGTM: new ChatMessage UI contract looks consistent with the streaming/tooling flowThe UIMessage<MyMetadata, MyUIDataParts, MyUITools> shape aligns with your tool-driven streaming pipeline.
components/ai-elements/conversation.tsx (1)
12-20
: Good wrapper defaults for StickToBottom
The component sensibly forwards props and sets instant initial/resize with a log role.app/layout.tsx (1)
85-92
: Provider ordering looks good; ChatProvider is correctly nested under QueryProvider.Placing ChatProvider inside QueryProvider ensures useQueryClient is available. Keeping it within ThemeProvider/TooltipProvider is fine and non-problematic.
hooks/use-chat-context.tsx (2)
30-53
: Transport and callbacks look aligned with the new AI SDK v5 flow.Using DefaultChatTransport with api: "/api/generate-theme" and reacting to "data-generated-theme-styles" matches the streaming design.
31-33
: DefaultChatTransport endpoint option isapi
Confirmed that in AI SDK v5,DefaultChatTransport
expects the endpoint property to be namedapi
(notpath
orurl
). No changes needed.• File: hooks/use-chat-context.tsx
• Lines: 31–33app/ai/components/ai-chat-hero.tsx (1)
24-33
: Wiring to new chat and generation core looks correctstartNewChat precedes navigation and generateThemeCore; props are propagated to children appropriately.
components/editor/ai/message.tsx (2)
5-5
: Type migration to ChatMessage and streaming-aware props look goodUsing ChatMessage and adding isLastMessageStreaming/isGeneratingTheme aligns with the new SDK v5 streaming model. No issues spotted in the public surface.
Also applies to: 16-24
111-118
: Confirm streaming semantics for multiple text partsanimate={isLastMessageStreaming} applies streaming behavior to every text part. If the provider behind useStreamText is shared per message, multiple StreamText instances could interleave deltas unexpectedly. If only the currently streaming text should animate, limit animation to the last text part.
Example tweak (only if intended):
- animate={isLastMessageStreaming} + animate={isLastMessageStreaming && idx === message.parts.length - 1}
import { baseModel, baseProviderOptions } from "@/utils/ai/model";
import { generateObject, tool } from "ai";
import z from "zod";
import { Context } from "./route";
Fix circular dependency at runtime by using a type-only import
Importing Context as a value from "./route" creates a route↔tools circular import. Since Context is only used for typing (experimental_context as Context), switch to a type-only import to avoid runtime cycles.
-import { Context } from "./route";
+import type { Context } from "./route";
🤖 Prompt for AI Agents
In app/api/generate-theme/tools.ts around line 5, the current import imports
Context as a value from "./route" causing a runtime circular dependency; change
that import to a type-only import (i.e., import type { Context } from "./route")
so Context is erased at runtime, and confirm that Context is only used in type
positions (like experimental_context as Context) so no value-level uses remain.
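A tiny sketch of the type-only form under the same assumption that Context is never used as a value; the describeContext helper is hypothetical and exists only to show Context staying in a type position:

import type { Context } from "./route"; // erased at compile time, so no runtime require of ./route

// Hypothetical: Context appears only as an annotation, never as a runtime value.
export function describeContext(ctx: Context): string {
  return JSON.stringify(ctx);
}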
className="overflow-hidden dark:hidden" | ||
codeTagProps={{ |
🛠️ Refactor suggestion
Fix code overflow: use scrollable overflow instead of clipping
Long lines will be clipped with no way to scroll. Switch to overflow auto to preserve readability.
Apply these diffs:
- className="overflow-hidden dark:hidden"
+ className="overflow-x-auto overflow-auto dark:hidden"
- className="hidden overflow-hidden dark:block"
+ className="hidden overflow-x-auto overflow-auto dark:block"
Also applies to: 67-68
🤖 Prompt for AI Agents
In components/ai-elements/code-block.tsx around lines 44-45 and also at 67-68,
the code block is using overflow-hidden which clips long lines and prevents
scrolling; update the className to use a scrollable overflow (e.g., replace
overflow-hidden with overflow-auto or overflow-x-auto as appropriate for
horizontal scrolling) so long lines can be scrolled rather than clipped, and
keep the existing dark:hidden class unchanged.
export const CodeBlockCopyButton = ({
  onCopy,
  onError,
  timeout = 2000,
  children,
  className,
  ...props
}: CodeBlockCopyButtonProps) => {
  const [isCopied, setIsCopied] = useState(false);
  const { code } = useContext(CodeBlockContext);

  const copyToClipboard = async () => {
    if (typeof window === "undefined" || !navigator.clipboard.writeText) {
      onError?.(new Error("Clipboard API not available"));
      return;
    }

    try {
      await navigator.clipboard.writeText(code);
      setIsCopied(true);
      onCopy?.();
      setTimeout(() => setIsCopied(false), timeout);
    } catch (error) {
      onError?.(error as Error);
    }
  };

  const Icon = isCopied ? CheckIcon : CopyIcon;

  return (
    <Button
      className={cn("shrink-0", className)}
      onClick={copyToClipboard}
      size="icon"
      variant="ghost"
      {...props}
    >
      {children ?? <Icon size={14} />}
    </Button>
onClick from props overrides copy handler; also add basic a11y labels
Spreading props after onClick lets a consumer override the copy behavior unintentionally. Merge handlers so copy is always triggered, then call user onClick. Also add aria-label/title for icon-only button.
Apply this diff:
export const CodeBlockCopyButton = ({
onCopy,
onError,
timeout = 2000,
children,
className,
- ...props
+ onClick: userOnClick,
+ ...rest
}: CodeBlockCopyButtonProps) => {
const [isCopied, setIsCopied] = useState(false);
const { code } = useContext(CodeBlockContext);
const copyToClipboard = async () => {
@@
};
+ const handleClick: ComponentProps<"button">["onClick"] = (e) => {
+ void copyToClipboard();
+ userOnClick?.(e as any);
+ };
+
const Icon = isCopied ? CheckIcon : CopyIcon;
return (
<Button
className={cn("shrink-0", className)}
- onClick={copyToClipboard}
+ onClick={handleClick}
size="icon"
variant="ghost"
- {...props}
+ aria-label={isCopied ? "Copied" : "Copy code"}
+ title={isCopied ? "Copied" : "Copy code"}
+ {...rest}
>
- {children ?? <Icon size={14} />}
+ {children ?? <Icon size={14} aria-hidden />}
</Button>
);
};
const copyToClipboard = async () => {
  if (typeof window === "undefined" || !navigator.clipboard.writeText) {
    onError?.(new Error("Clipboard API not available"));
    return;
  }
Guard against undefined navigator.clipboard to avoid runtime TypeError
Accessing navigator.clipboard.writeText without verifying clipboard exists can throw on some browsers/contexts.
Apply this diff:
- if (typeof window === "undefined" || !navigator.clipboard.writeText) {
+ if (
+ typeof navigator === "undefined" ||
+ !navigator.clipboard ||
+ typeof navigator.clipboard.writeText !== "function"
+ ) {
onError?.(new Error("Clipboard API not available"));
return;
}
🤖 Prompt for AI Agents
In components/ai-elements/code-block.tsx around lines 114-118, the guard
currently checks navigator.clipboard.writeText but can throw if
navigator.clipboard is undefined; update the condition to also verify
navigator.clipboard exists (e.g. check typeof window === "undefined" ||
!navigator?.clipboard?.writeText or typeof navigator === "undefined" ||
!navigator.clipboard || !navigator.clipboard.writeText) before calling
writeText, and keep the onError return path; optionally wrap the writeText call
in try/catch to handle runtime failures gracefully.
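A hedged sketch along those lines; copyTextSafely is a hypothetical helper name, and the execCommand fallback is an optional extra rather than something this PR requires:

// Returns true when the text was copied, false when no clipboard mechanism is available.
export async function copyTextSafely(text: string): Promise<boolean> {
  if (typeof navigator !== "undefined" && navigator.clipboard?.writeText) {
    try {
      await navigator.clipboard.writeText(text);
      return true;
    } catch {
      // Fall through to the legacy path below.
    }
  }
  if (typeof document !== "undefined") {
    // Legacy fallback for contexts without the async Clipboard API.
    const textarea = document.createElement("textarea");
    textarea.value = text;
    document.body.appendChild(textarea);
    textarea.select();
    const copied = document.execCommand("copy");
    textarea.remove();
    return copied;
  }
  return false;
}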
export const Response = memo(
  ({ className, ...props }: ResponseProps) => (
    <Streamdown
      className={cn("size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0", className)}
      {...props}
    />
  ),
  (prevProps, nextProps) => prevProps.children === nextProps.children
);
🛠️ Refactor suggestion
Custom memo comparator can cause stale props (e.g., className) — remove it or compare all relevant props.
The comparator only checks children. If className or other props change while children stay the same, the component won’t re-render, leading to stale styles/behavior.
Apply this diff to rely on React’s default shallow comparison:
-export const Response = memo(
- ({ className, ...props }: ResponseProps) => (
- <Streamdown
- className={cn("size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0", className)}
- {...props}
- />
- ),
- (prevProps, nextProps) => prevProps.children === nextProps.children
-);
+export const Response = memo(({ className, ...props }: ResponseProps) => (
+ <Streamdown
+ className={cn("size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0", className)}
+ {...props}
+ />
+));
If you truly need a comparator, include at least className and other critical props in the equality check; otherwise, the maintenance and correctness costs outweigh the benefit.
🤖 Prompt for AI Agents
In components/ai-elements/response.tsx around lines 9 to 17, the custom memo
comparator currently only compares children causing stale renders when className
or other props change; remove the second argument to memo so React uses its
default shallow prop comparison, or replace the comparator with one that at
minimum checks children, className and any other critical props (e.g., key,
style, data-* props) to ensure re-renders when those change.
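If a comparator is kept at all, it has to cover every prop that affects output; a minimal sketch under that assumption (ResponseImpl stands in for the real Streamdown-backed body and is hypothetical):

import { memo, type ReactNode } from "react";

type Props = { children?: ReactNode; className?: string };

// Hypothetical stand-in for the real component body.
const ResponseImpl = ({ className, children }: Props) => <div className={className}>{children}</div>;

// The comparator must include every prop that affects output, not just children.
const areEqual = (prev: Props, next: Props) =>
  prev.children === next.children && prev.className === next.className;

export const Response = memo(ResponseImpl, areEqual);

In practice, dropping the comparator and letting memo's default shallow comparison run is the lower-maintenance choice the comment above recommends.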
const intersectionObserverOptions = useMemo<IntersectionObserverInit>(() => {
  return {
    ...defaultObserverOptions,
    ...observerOptions,
    root: containerRef?.current ?? observerOptions.root ?? null,
  };
}, [observerOptions.root, observerOptions.threshold, observerOptions.rootMargin]);
Root resolution can be stale (computed before ref is set)
intersectionObserverOptions is memoized using containerRef?.current at render time. On first mount, current is often null, so the observer ends up using the viewport instead of the intended scroll container and never updates.
Apply this refactor to resolve the root inside the effect (remove the useMemo block):
- const intersectionObserverOptions = useMemo<IntersectionObserverInit>(() => {
- return {
- ...defaultObserverOptions,
- ...observerOptions,
- root: containerRef?.current ?? observerOptions.root ?? null,
- };
- }, [observerOptions.root, observerOptions.threshold, observerOptions.rootMargin]);
+ // memoization not necessary; resolve root at effect time to avoid stale nulls
And update the effect below to build options with the current root (see next diff).
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In hooks/use-scroll-start-end.ts around lines 51 to 57, the code memoizes
intersectionObserverOptions using containerRef?.current at render time which is
often null on first mount, causing the observer to use the viewport and never
update; remove the useMemo block and instead build the observer options inside
the effect where you create the IntersectionObserver so you can read the
up-to-date root via containerRef.current (use root = containerRef.current ??
observerOptions.root ?? null), merge with defaultObserverOptions and
observerOptions there, and ensure the effect cleans up and re-creates the
observer when relevant observerOptions change (use observerOptions primitives in
the dependency array, avoid depending on containerRef.current).
useEffect(() => {
  const startMarker = scrollStartRef.current;
  const endMarker = scrollEndRef.current;
  if (!startMarker || !endMarker) return;

  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.target === startMarker) setIsScrollStart(entry.isIntersecting);
      if (entry.target === endMarker) setIsScrollEnd(entry.isIntersecting);
    }
  }, intersectionObserverOptions);

  observer.observe(startMarker);
  observer.observe(endMarker);

  return () => observer.disconnect();
}, [intersectionObserverOptions]);
🛠️ Refactor suggestion
Build IntersectionObserver with current container root inside effect
Construct the options with the latest containerRef.current inside the effect so the observer uses the scroll container as root.
Apply:
useEffect(() => {
const startMarker = scrollStartRef.current;
const endMarker = scrollEndRef.current;
if (!startMarker || !endMarker) return;
- const observer = new IntersectionObserver((entries) => {
+ const options: IntersectionObserverInit = {
+ ...defaultObserverOptions,
+ ...observerOptions,
+ root: containerRef?.current ?? (observerOptions as IntersectionObserverInit).root ?? null,
+ };
+
+ const observer = new IntersectionObserver((entries) => {
for (const entry of entries) {
if (entry.target === startMarker) setIsScrollStart(entry.isIntersecting);
if (entry.target === endMarker) setIsScrollEnd(entry.isIntersecting);
}
- }, intersectionObserverOptions);
+ }, options);
observer.observe(startMarker);
observer.observe(endMarker);
return () => observer.disconnect();
- }, [intersectionObserverOptions]);
+ }, [containerRef, observerOptions.root, observerOptions.threshold, observerOptions.rootMargin]);
🤖 Prompt for AI Agents
In hooks/use-scroll-start-end.ts around lines 59 to 76, the IntersectionObserver
is created with intersectionObserverOptions defined outside the effect so it may
not include the latest containerRef.current as the root. Inside the useEffect,
read containerRef.current into a local variable, build a new options object by
spreading intersectionObserverOptions and setting root to that local container,
then pass that options object to the IntersectionObserver constructor; ensure
the effect dependency array includes intersectionObserverOptions and the local
containerRef.current value so the observer is recreated when the root changes,
and keep the rest of the observer lifecycle (observe start/end markers and
disconnect on cleanup) unchanged.
const frame = useRef<number | null>(null);
const lastTimeRef = useRef<number>(0);
const streamIndexRef = useRef<number>(0);
const isAnimatingRef = useRef(false);
Track latest full text via a ref to avoid dropping newly appended parts mid-animation.
Without storing the latest concatenated text in a ref, the animation closure uses a stale fullText snapshot. New parts arriving while animating are ignored until another change triggers the effect.
Apply this diff to add a ref for the current full text:
const frame = useRef<number | null>(null);
const lastTimeRef = useRef<number>(0);
const streamIndexRef = useRef<number>(0);
const isAnimatingRef = useRef(false);
+ const fullTextRef = useRef("");
🤖 Prompt for AI Agents
In hooks/use-stream-text.ts around lines 6 to 9, add and use a ref to hold the
latest concatenated full text so the animation closure never reads a stale
snapshot: create something like fullTextRef = useRef<string>('') at the top,
update fullTextRef.current whenever you build/append the latest text (or inside
the effect that sets fullText), and reference fullTextRef.current inside the
animation loop/closure instead of the captured fullText variable so newly
appended parts are honored mid-animation.
useEffect(() => {
  if (isAnimatingRef.current) return;

  const typewriterSpeed = 5;
  const fullText = parts.join("");

  if (streamIndexRef.current >= fullText.length) {
    setStream(fullText);
    return;
  }

  isAnimatingRef.current = true;

  const animate = (time: number) => {
    if (streamIndexRef.current < fullText.length) {
      if (time - lastTimeRef.current > typewriterSpeed) {
        streamIndexRef.current++;
        setStream(fullText.slice(0, streamIndexRef.current));
        lastTimeRef.current = time;
      }
      frame.current = requestAnimationFrame(animate);
    } else {
      isAnimatingRef.current = false;
    }
  };

  frame.current = requestAnimationFrame(animate);

  return () => {
    if (frame.current) {
      cancelAnimationFrame(frame.current);
    }
    isAnimatingRef.current = false;
  };
}, [parts]);
🛠️ Refactor suggestion
Fix effect to animate against the latest text and not bail out during active animation.
The early return when isAnimatingRef.current is true prevents the effect from updating the text target. As a result, parts appended during animation won’t be rendered until another change occurs. Use fullTextRef inside animate and start the loop only when not already animating.
Apply this diff:
- useEffect(() => {
- if (isAnimatingRef.current) return;
-
- const typewriterSpeed = 5;
- const fullText = parts.join("");
-
- if (streamIndexRef.current >= fullText.length) {
- setStream(fullText);
- return;
- }
-
- isAnimatingRef.current = true;
-
- const animate = (time: number) => {
- if (streamIndexRef.current < fullText.length) {
- if (time - lastTimeRef.current > typewriterSpeed) {
- streamIndexRef.current++;
- setStream(fullText.slice(0, streamIndexRef.current));
- lastTimeRef.current = time;
- }
- frame.current = requestAnimationFrame(animate);
- } else {
- isAnimatingRef.current = false;
- }
- };
-
- frame.current = requestAnimationFrame(animate);
-
- return () => {
- if (frame.current) {
- cancelAnimationFrame(frame.current);
- }
- isAnimatingRef.current = false;
- };
- }, [parts]);
+ useEffect(() => {
+ const typewriterSpeed = 5;
+ fullTextRef.current = parts.join("");
+
+ if (streamIndexRef.current >= fullTextRef.current.length) {
+ setStream(fullTextRef.current);
+ isAnimatingRef.current = false;
+ return;
+ }
+
+ const animate = (time: number) => {
+ const fullText = fullTextRef.current;
+ if (streamIndexRef.current < fullText.length) {
+ if (time - lastTimeRef.current > typewriterSpeed) {
+ streamIndexRef.current++;
+ setStream(fullText.slice(0, streamIndexRef.current));
+ lastTimeRef.current = time;
+ }
+ frame.current = requestAnimationFrame(animate);
+ } else {
+ isAnimatingRef.current = false;
+ frame.current = null;
+ }
+ };
+
+ if (!isAnimatingRef.current) {
+ isAnimatingRef.current = true;
+ frame.current = requestAnimationFrame(animate);
+ }
+
+ return () => {
+ if (frame.current) {
+ cancelAnimationFrame(frame.current);
+ frame.current = null;
+ }
+ isAnimatingRef.current = false;
+ };
+ }, [parts]);
🤖 Prompt for AI Agents
In hooks/use-stream-text.ts around lines 29 to 63, the effect bails out when
isAnimatingRef.current is true so newly appended parts won't be applied to the
ongoing animation; update the effect to always set a shared fullTextRef.current
= parts.join("") at the top so the running animation reads the latest text, but
only start a new requestAnimationFrame loop if not already animating (i.e., if
isAnimatingRef.current is false). Inside the animate callback use
fullTextRef.current for length and slicing, ensure when the animation finishes
you set stream to fullTextRef.current and mark isAnimatingRef.current = false,
and keep the existing cleanup to cancel the frame and reset
isAnimatingRef.current.
type ThemeGenerationUITools = InferUITools<typeof THEME_GENERATION_TOOLS>;
export type MyUITools = ThemeGenerationUITools;
🛠️ Refactor suggestion
Compute ThemeGenerationUITools without importing the value
Use a typeof import to infer from THEME_GENERATION_TOOLS without creating a runtime dependency.
Apply:
-type ThemeGenerationUITools = InferUITools<typeof THEME_GENERATION_TOOLS>;
-export type MyUITools = ThemeGenerationUITools;
+type ThemeGenerationUITools = InferUITools<
+ typeof import("@/app/api/generate-theme/tools").THEME_GENERATION_TOOLS
+>;
+export type MyUITools = ThemeGenerationUITools;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
type ThemeGenerationUITools = InferUITools<typeof THEME_GENERATION_TOOLS>;
export type MyUITools = ThemeGenerationUITools;

type ThemeGenerationUITools = InferUITools<
  typeof import("@/app/api/generate-theme/tools").THEME_GENERATION_TOOLS
>;
export type MyUITools = ThemeGenerationUITools;
🤖 Prompt for AI Agents
In types/ai.ts around lines 35-36, the code currently uses typeof
THEME_GENERATION_TOOLS which creates a runtime dependency; change it to a
type-only typeof import so the compiler infers the shape without importing the
value at runtime. Replace InferUITools<typeof THEME_GENERATION_TOOLS> with
InferUITools<typeof import("path/to/module").THEME_GENERATION_TOOLS> (using the
correct relative module path) and ensure the import is only used in types (no
top-level value import) so there is no runtime import.
Actionable comments posted: 4
♻️ Duplicate comments (3)

app/api/generate-theme/tools.ts (1)

5-5: Fix circular dependency at runtime by using a type-only import

Importing Context as a value from "./route" creates a route↔tools circular import. Since Context is only used for typing (experimental_context as Context), switch to a type-only import to avoid runtime cycles.

-import { Context } from "./route";
+import type { Context } from "./route";

types/ai.ts (2)

1-3: Use type-only imports to prevent bundling server-only code into client builds

Both THEME_GENERATION_TOOLS and the AI SDK imports should be type-only to avoid pulling server code into client bundles.

-import { THEME_GENERATION_TOOLS } from "@/app/api/generate-theme/tools";
-import { DeepPartial, InferUITools, UIMessage } from "ai";
-import { ThemeStylesWithoutSpacing, type ThemeStyleProps, type ThemeStyles } from "./theme";
+import type { THEME_GENERATION_TOOLS } from "@/app/api/generate-theme/tools";
+import type { DeepPartial, InferUITools, UIMessage } from "ai";
+import { type ThemeStylesWithoutSpacing, type ThemeStyleProps, type ThemeStyles } from "./theme";

41-42: Compute ThemeGenerationUITools without importing the value

Use a typeof import to infer from THEME_GENERATION_TOOLS without creating a runtime dependency.

-type ThemeGenerationUITools = InferUITools<typeof THEME_GENERATION_TOOLS>;
-export type MyUITools = ThemeGenerationUITools;
+type ThemeGenerationUITools = InferUITools<
+  typeof import("@/app/api/generate-theme/tools").THEME_GENERATION_TOOLS
+>;
+export type MyUITools = ThemeGenerationUITools;
🧹 Nitpick comments (1)

utils/ai/providers.ts (1)

19-27: Consider making provider options more flexible.

The provider options are currently hardcoded. Consider:
- The thinkingBudget of 128 might be insufficient for complex theme generation tasks
- OpenAI's reasoningEffort: "low" might limit quality for theme generation

Consider making these configurable via environment variables or based on the model being used:

+const getThinkingBudget = () => {
+  const budget = process.env.AI_THINKING_BUDGET;
+  return budget ? parseInt(budget, 10) : 256;
+};
+
 export const baseProviderOptions = {
   google: {
     thinkingConfig: {
       includeThoughts: false,
-      thinkingBudget: 128,
+      thinkingBudget: getThinkingBudget(),
     },
   } satisfies GoogleGenerativeAIProviderOptions,
-  openai: { reasoningEffort: "low" } satisfies OpenAIResponsesProviderOptions,
+  openai: {
+    reasoningEffort: process.env.NODE_ENV === "production" ? "medium" : "low"
+  } satisfies OpenAIResponsesProviderOptions,
 };
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (9)

- app/api/generate-theme/route.ts (3 hunks)
- app/api/generate-theme/tools.ts (1 hunks)
- hooks/use-chat-context.tsx (1 hunks)
- hooks/use-stream-text.ts (1 hunks)
- types/ai.ts (2 hunks)
- types/theme.ts (3 hunks)
- utils/ai/generate-theme.ts (1 hunks)
- utils/ai/message-converter.ts (1 hunks)
- utils/ai/providers.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- hooks/use-stream-text.ts
- types/theme.ts
- hooks/use-chat-context.tsx
- app/api/generate-theme/route.ts
- utils/ai/generate-theme.ts
- utils/ai/message-converter.ts
🧰 Additional context used
🧬 Code graph analysis (2)
app/api/generate-theme/tools.ts (3)
- utils/ai/generate-theme.ts (1): themeStylesOutputSchema (68-68)
- app/api/generate-theme/route.ts (1): Context (115-115)
- utils/ai/providers.ts (2): MODELS (14-17), baseProviderOptions (19-27)

types/ai.ts (2)
- types/theme.ts (2): ThemeStyles (76-76), ThemeStylesWithoutSpacing (87-87)
- app/api/generate-theme/tools.ts (1): THEME_GENERATION_TOOLS (7-49)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Vade Review
🔇 Additional comments (1)
types/ai.ts (1)

24-27: MyMetadata fields correctly reflect actual usage

After auditing all references to promptData and themeStyles across the codebase—including message construction in utils/ai/message-converter.ts, UI rendering in components/editor/ai/*, and API round-trips through /api/generate-theme—both properties are only ever accessed behind optional checks (?.) and only populated on the appropriate message roles:

- promptData? is only supplied on user-origin messages (e.g. via sendMessage({ metadata: { promptData } })) and guarded before use in building user-content parts.
- themeStyles? is only attached to or consumed by assistant messages (via tool/result streams or example fixtures) and likewise always checked before use.

No code path assumes either field is non-null, and the optional definitions in types/ai.ts (lines 24-27) align perfectly with these guarded usages. No type changes are necessary here.
    const { partialObjectStream, object } = streamObject({
      abortSignal,
      model: MODELS.themeGeneration,
      providerOptions: baseProviderOptions,
      schema: themeStylesOutputSchema,
      messages,
    });
Add error handling for the streamObject call.
The streamObject call could fail due to API issues, rate limiting, or network problems. Currently, there is no error handling, which could leave the UI in an inconsistent state.
Add try-catch error handling:
- const { partialObjectStream, object } = streamObject({
- abortSignal,
- model: MODELS.themeGeneration,
- providerOptions: baseProviderOptions,
- schema: themeStylesOutputSchema,
- messages,
- });
+ let partialObjectStream, object;
+ try {
+ ({ partialObjectStream, object } = streamObject({
+ abortSignal,
+ model: MODELS.themeGeneration,
+ providerOptions: baseProviderOptions,
+ schema: themeStylesOutputSchema,
+ messages,
+ }));
+ } catch (error) {
+ writer.write({
+ id: toolCallId,
+ type: "data-generated-theme-styles",
+ data: { status: "error", error: error instanceof Error ? error.message : "Failed to generate theme" },
+ transient: false,
+ });
+ throw error;
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const { partialObjectStream, object } = streamObject({
  abortSignal,
  model: MODELS.themeGeneration,
  providerOptions: baseProviderOptions,
  schema: themeStylesOutputSchema,
  messages,
});

let partialObjectStream, object;
try {
  ({ partialObjectStream, object } = streamObject({
    abortSignal,
    model: MODELS.themeGeneration,
    providerOptions: baseProviderOptions,
    schema: themeStylesOutputSchema,
    messages,
  }));
} catch (error) {
  writer.write({
    id: toolCallId,
    type: "data-generated-theme-styles",
    data: {
      status: "error",
      error: error instanceof Error
        ? error.message
        : "Failed to generate theme"
    },
    transient: false,
  });
  throw error;
}
🤖 Prompt for AI Agents
In app/api/generate-theme/tools.ts around lines 20 to 26, the call to
streamObject is unprotected and can throw on network/API errors; wrap the
streamObject invocation in a try-catch, log or capture the caught error with
contextual text (including model and providerOptions), and either rethrow a new
Error with that context or return a standardized error result so the caller/UI
can handle it; also ensure any allocated resources (e.g., abortSignal or
partialObjectStream) are cleaned up in the catch/finally and validate
partialObjectStream/object before using them.
    for await (const chunk of partialObjectStream) {
      writer.write({
        id: toolCallId,
        type: "data-generated-theme-styles",
        data: { status: "streaming", themeStyles: chunk },
        transient: true,
      });
    }
🛠️ Refactor suggestion
Consider handling stream interruptions gracefully.
If the stream is interrupted or encounters an error during iteration, the UI might be left showing a "streaming" status indefinitely.
Wrap the streaming loop in a try-catch:
+ try {
for await (const chunk of partialObjectStream) {
writer.write({
id: toolCallId,
type: "data-generated-theme-styles",
data: { status: "streaming", themeStyles: chunk },
transient: true,
});
}
+ } catch (error) {
+ // Only throw if not aborted
+ if (!abortSignal.aborted) {
+ writer.write({
+ id: toolCallId,
+ type: "data-generated-theme-styles",
+ data: { status: "error", error: "Stream interrupted" },
+ transient: false,
+ });
+ throw error;
+ }
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
for await (const chunk of partialObjectStream) {
  writer.write({
    id: toolCallId,
    type: "data-generated-theme-styles",
    data: { status: "streaming", themeStyles: chunk },
    transient: true,
  });
}

try {
  for await (const chunk of partialObjectStream) {
    writer.write({
      id: toolCallId,
      type: "data-generated-theme-styles",
      data: { status: "streaming", themeStyles: chunk },
      transient: true,
    });
  }
} catch (error) {
  // Only throw if not aborted
  if (!abortSignal.aborted) {
    writer.write({
      id: toolCallId,
      type: "data-generated-theme-styles",
      data: { status: "error", error: "Stream interrupted" },
      transient: false,
    });
    throw error;
  }
}
🤖 Prompt for AI Agents
In app/api/generate-theme/tools.ts around lines 28-35, the for-await streaming
loop isn't handling interruptions and may leave the UI in "streaming" forever;
wrap the entire loop in a try-catch, in the try perform the existing for-await
iteration and after the loop emit a final writer.write message with type
"data-generated-theme-styles" and data.status set to "completed" (and
themeStyles as appropriate), and in the catch emit a writer.write with
data.status "error" including a concise error message/details, then ensure the
writer is properly closed or flushed (e.g., call writer.close()/end) in a
finally block so the UI can react to completion or failure.
export type MyUIDataParts = {
  "generated-theme-styles":
    | {
        status: "streaming";
        themeStyles: DeepPartial<ThemeStylesWithoutSpacing>;
      }
    | {
        status: "ready";
        themeStyles: ThemeStylesWithoutSpacing;
      };
};
🛠️ Refactor suggestion
Consider adding an error status to MyUIDataParts.
The MyUIDataParts type only handles the "streaming" and "ready" states, but based on the error-handling suggestions above, an "error" state would be beneficial for proper error reporting.
export type MyUIDataParts = {
"generated-theme-styles":
| {
status: "streaming";
themeStyles: DeepPartial<ThemeStylesWithoutSpacing>;
}
| {
status: "ready";
themeStyles: ThemeStylesWithoutSpacing;
- };
+ }
+ | {
+ status: "error";
+ error: string;
+ };
};
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export type MyUIDataParts = {
  "generated-theme-styles":
    | {
        status: "streaming";
        themeStyles: DeepPartial<ThemeStylesWithoutSpacing>;
      }
    | {
        status: "ready";
        themeStyles: ThemeStylesWithoutSpacing;
      };
};

export type MyUIDataParts = {
  "generated-theme-styles":
    | {
        status: "streaming";
        themeStyles: DeepPartial<ThemeStylesWithoutSpacing>;
      }
    | {
        status: "ready";
        themeStyles: ThemeStylesWithoutSpacing;
      }
    | {
        status: "error";
        error: string;
      };
};
🤖 Prompt for AI Agents
In types/ai.ts around lines 29 to 39, the MyUIDataParts union only models
"streaming" and "ready" states and should include an "error" variant so callers
can represent and surface failures; add a third union member { status: "error";
error: string | Error | SerializedError } (or a small ErrorInfo type if you
prefer structured fields like message/code) so the shape is exhaustive and
consumers can handle errors explicitly.
  apiKey: process.env.GOOGLE_API_KEY,
});
💡 Verification agent
🧩 Analysis chain
Ensure GOOGLE_API_KEY is properly configured.
The Google provider requires the GOOGLE_API_KEY environment variable to be set. Without it, the application will fail at runtime.
🏁 Script executed:
#!/bin/bash
# Check if there's environment variable documentation or validation
# Look for environment variable documentation
fd -e md -e env | xargs rg -l "GOOGLE_API_KEY" 2>/dev/null | head -5
# Check for environment validation or error handling
rg -A3 -B3 "GOOGLE_API_KEY" --type=ts --type=js -g "!*.test.*" -g "!*.spec.*"
Length of output: 472
Add validation and documentation for the GOOGLE_API_KEY environment variable

The Google generative AI provider will throw at runtime if process.env.GOOGLE_API_KEY is unset. To prevent silent failures and improve DX, please:

• Enforce and fail fast when GOOGLE_API_KEY is missing
  – e.g., in your application bootstrap (e.g. src/index.ts) or at the top of utils/ai/providers.ts:

    if (!process.env.GOOGLE_API_KEY) {
      throw new Error("Environment variable GOOGLE_API_KEY must be set");
    }

• Document the requirement in your project's environment-variable reference
  – Add a section to README.md (or docs/env.md) describing GOOGLE_API_KEY and its purpose.
  – Update .env.example (or similar) to include GOOGLE_API_KEY= with a placeholder value.

These changes ensure the variable is both clearly communicated and validated at startup.
🤖 Prompt for AI Agents
In utils/ai/providers.ts around lines 7 to 8, the GOOGLE_API_KEY is read
directly from process.env which will cause runtime throws if unset; add a
fail-fast validation at module load (or in app bootstrap) that checks
process.env.GOOGLE_API_KEY and throws a clear Error("Environment variable
GOOGLE_API_KEY must be set") if missing, then use the validated value for the
provider; additionally add documentation for this env var in your README or
docs/env.md describing its purpose and usage, and add GOOGLE_API_KEY= in
.env.example with a placeholder so developers know to provide it.
// TODO: Add session and subscription check, this should be a Pro only feature
// TODO: Record AI usage, providing the model id to `recordAIUsage` function
The enhance-prompt API route is missing critical security checks and usage tracking. It currently allows unrestricted access to the AI prompt enhancement feature without session validation or subscription verification.
View Details
Analysis
The TODO comments on lines 7-8 indicate that this API endpoint is missing essential security mechanisms:
- Session and subscription check: The endpoint should verify that the user is authenticated and has a valid subscription before allowing access to this Pro-only feature. Without these checks, any user (including unauthenticated ones) can make requests to this endpoint.

- AI usage tracking: The endpoint should record AI usage via the recordAIUsage function to track token consumption and ensure proper billing/limits are enforced.
This creates a security vulnerability where:
- Unauthenticated users can consume AI resources
- Usage isn't tracked, potentially leading to unexpected costs
- Pro-only feature restrictions are bypassed
The implementation should add the same validation pattern used in the generate-theme
route, including user authentication, subscription validation, and usage recording.
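For illustration, a rough sketch of the guard being asked for, in the spirit of the generate-theme route. The helper names getSession and hasProSubscription are placeholders (only recordAIUsage and the Gemini model ids appear in this PR), so treat this as a shape rather than the actual implementation:

// Hypothetical guard for app/api/enhance-prompt/route.ts; helper names are stand-ins.
declare function getSession(req: Request): Promise<{ userId: string } | null>;
declare function hasProSubscription(userId: string): Promise<boolean>;
declare function recordAIUsage(args: { userId: string; modelId: string }): Promise<void>;

export async function guardEnhancePrompt(req: Request): Promise<Response | { userId: string }> {
  const session = await getSession(req);
  if (!session) {
    // Reject unauthenticated callers before any model call happens.
    return new Response(JSON.stringify({ error: "Unauthorized" }), { status: 401 });
  }
  if (!(await hasProSubscription(session.userId))) {
    // Enforce the Pro-only restriction mentioned in the TODO.
    return new Response(JSON.stringify({ error: "Pro subscription required" }), { status: 403 });
  }
  return { userId: session.userId };
}

// On stream finish, usage would be recorded with the model id as the second TODO suggests, e.g.
//   await recordAIUsage({ userId, modelId: "gemini-2.5-flash" });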
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
utils/ai/ai-prompt.tsx (2)

21-29: Guard against empty mentions to avoid RegExp('()') and explosive splits.

If there are no mentions, new RegExp('()','g') will match empty strings and break rendering performance.

 export const buildAIPromptRender = (promptData: AIPromptData): React.ReactNode => {
   // Create a regex that matches all possible mention patterns from the actual mentions
   const mentionPatterns = promptData.mentions.map(
     (m) => `@${m.label.replace(/[.*+?^${}()|[\\]\\]/g, "\\$&")}`
   );
+  if (mentionPatterns.length === 0) {
+    const lines = promptData.content.split("\n");
+    return lines.flatMap((line, i) => (i === 0 ? line : [<br key={`br-plain-${i}`} />, line]));
+  }
   const mentionRegex = new RegExp(`(${mentionPatterns.join("|")})`, "g");

339-343: Fix empty-check logic to correctly handle undefined images.

Currently returns false when images are undefined even if content is empty.

-  const isEmptyPromptDataImages = !!uploadedImages && uploadedImages.length === 0;
-
-  return isEmptyPromptDataImages && isEmptyPromptDataContent;
+  const hasImages = Array.isArray(uploadedImages) && uploadedImages.length > 0;
+  return !hasImages && isEmptyPromptDataContent;
♻️ Duplicate comments (6)
utils/ai/providers.ts (1)
6-8
: Fail fast if GOOGLE_API_KEY is missing.
This avoids opaque runtime errors when the provider is constructed without a key. Similar to a prior comment; re-raising here since the risk remains.-const google = createGoogleGenerativeAI({ - apiKey: process.env.GOOGLE_API_KEY, -}); +const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY; +if (!GOOGLE_API_KEY) { + throw new Error("Environment variable GOOGLE_API_KEY must be set"); +} +const google = createGoogleGenerativeAI({ + apiKey: GOOGLE_API_KEY, +});utils/ai/ai-prompt.tsx (1)
16-18
: Sort and dedupe mentions for deterministic prompt assembly.
Keeps outputs stable across renders and avoids repeats. Mirrors an earlier suggestion.-export const buildPromptForAPI = (promptData: AIPromptData) => { - const mentionReferences = promptData.mentions.map((mention) => buildMentionStringForAPI(mention)); +export const buildPromptForAPI = (promptData: AIPromptData) => { + const mentionReferences = dedupeMentionReferences(promptData.mentions) + .slice() + .sort((a, b) => a.label.localeCompare(b.label)) + .map((mention) => buildMentionStringForAPI(mention)); return `${promptData.content}\n\n${mentionReferences.join("\n")}`; };utils/ai/message-converter.ts (2)
41-48
: Guard against undefined/empty prompt content (runtime safety).Even if types say string, metadata can be malformed. Coalesce before trim.
Apply:
- const textContent = promptData.content; - if (textContent.trim().length > 0) { + const textContent = (promptData.content ?? "").trim(); + if (textContent.length > 0) { const textPart: TextPart = { type: "text", - text: textContent, + text: textContent, }; userContentParts.push(textPart); }
50-57
: Null-safe mentions handling.Defensive default avoids runtime errors if mentions is missing.
Apply:
- const uniqueMentions = dedupeMentionReferences(promptData.mentions); + const uniqueMentions = dedupeMentionReferences(promptData.mentions ?? []);components/editor/custom-textarea.tsx (2)
56-56
: Remove conflicting autofocus to avoid flicker.Apply:
- autofocus: !disabled, + // Handle focus in onCreate based on disabled
71-87
: Use configured suggestion.pluginKey instead of Mention.options.suggestion.Prevents undefined access and aligns with the configured suggestion.
Apply:
- if (event.key === "Enter" && !event.shiftKey && !disabled && canSubmit) { - const mentionPluginKey = Mention.options.suggestion.pluginKey; + if (event.key === "Enter" && !event.shiftKey && !disabled && canSubmit) { + const mentionPluginKey = suggestion?.pluginKey; if (!mentionPluginKey) { console.error("Mention plugin key not found."); return false; } const { state } = view; const mentionState = mentionPluginKey.getState(state); if (mentionState?.active) { return false; } else { event.preventDefault(); onSubmit(); return true; } }
🧹 Nitpick comments (15)
utils/ai/providers.ts (1)
3-4
: Use type-only imports to avoid bundling runtime code from SDKs.
This keeps server bundle lean and prevents accidental runtime side effects.-import { createGoogleGenerativeAI, GoogleGenerativeAIProviderOptions } from "@ai-sdk/google"; -import { OpenAIResponsesProviderOptions } from "@ai-sdk/openai"; +import { createGoogleGenerativeAI } from "@ai-sdk/google"; +import type { GoogleGenerativeAIProviderOptions } from "@ai-sdk/google"; +import type { OpenAIResponsesProviderOptions } from "@ai-sdk/openai";utils/ai/ai-prompt.tsx (1)
11-14
: Emit mentions as a single line to reduce noisy tokenization.
The extra newline + indentation can create unintended formatting for models and UI splits.-export const buildMentionStringForAPI = (mention: MentionReference) => { - return `@${mention.label} = - ${JSON.stringify(mention.themeData)}`; -}; +export const buildMentionStringForAPI = (mention: MentionReference) => { + return `@${mention.label} = ${JSON.stringify(mention.themeData)}`; +};app/api/enhance-prompt/route.ts (1)
11-11
: Remove unused_prompt
.
Keeps the handler minimal and lint-clean.- const { prompt: _prompt, promptData }: { prompt: string; promptData: AIPromptData } = body; + const { promptData }: { prompt: string; promptData: AIPromptData } = body;components/editor/ai/enhance-prompt-button.tsx (1)
20-32
: Add accessible state and prevent unintended form submissions.
Providearia-pressed
for toggle semantics,aria-label
for clarity, andtype="button"
to avoid implicit submit inside forms.-export function EnhancePromptButton({ +export function EnhancePromptButton({ className, disabled, isEnhancing, onStart, onStop, ...props }: EnhancePromptButtonProps) { + const label = isEnhancing ? "Stop" : "Enhance prompt"; return ( - <TooltipWrapper label={isEnhancing ? "Stop" : "Enhance prompt"} asChild> + <TooltipWrapper label={label} asChild> <Button size="icon" variant={isEnhancing ? "destructive" : "outline"} className={cn("relative size-8 shadow-none", className)} onClick={isEnhancing ? onStop : onStart} disabled={disabled} + aria-pressed={isEnhancing} + aria-label={label} + type="button" {...props} > {isEnhancing ? <CircleStop /> : <WandSparkles />} </Button> </TooltipWrapper> ); }hooks/use-ai-enhance-prompt.ts (2)
6-7
: Import dedupe to stabilize active mentions (optional).
Prevents duplicates when reconstructing mentions later.-import { convertPromptDataToJSONContent } from "@/utils/ai/ai-prompt"; +import { convertPromptDataToJSONContent, dedupeMentionReferences } from "@/utils/ai/ai-prompt";
54-65
: Avoid awaitingcomplete()
to keep UI snappy and dedupe active mentions.
Let the streaming run without blocking and store unique mentions only.- const startEnhance = useCallback( - async (promptData: AIPromptData) => { + const startEnhance = useCallback( + (promptData: AIPromptData) => { const prompt = promptData?.content ?? ""; if (!prompt?.trim()) return; if (isLoading) stop(); setCompletion(""); - activeMentionsRef.current = - promptData?.mentions?.map((m) => ({ id: m.id, label: m.label })) ?? []; - await complete(prompt, { body: { promptData } }); + activeMentionsRef.current = dedupeMentionReferences(promptData?.mentions ?? []).map((m) => ({ + id: m.id, + label: m.label, + })); + void complete(prompt, { body: { promptData } }); }, [complete, isLoading, stop, setCompletion] );components/editor/ai/ai-chat-form-body.tsx (2)
54-55
: Also disable DnD when submission isn’t allowed.When generation/initialization blocks submit, DnD should be disabled too to avoid queuing uploads mid-flow.
Apply:
- disabled={disabled || uploadedImages.some((img) => img.loading)} + disabled={disabled || !canSubmit || uploadedImages.some((img) => img.loading)}
60-64
: Gate image strip interactions by canSubmit.Previews should be non-interactive whenever submission is blocked (e.g., generating, initializing).
Apply:
- disabled && "pointer-events-none opacity-75" + (!canSubmit) && "pointer-events-none opacity-75"utils/ai/message-converter.ts (2)
71-74
: Preserve spacing between concatenated text parts.Joining with no delimiter can merge words across parts.
Apply:
- const msgTextContent = message.parts - .map((part) => (part.type === "text" ? part.text : "")) - .join(""); + const msgTextContent = message.parts + .map((part) => (part.type === "text" ? part.text : "")) + .join("\n");
84-104
: Skip empty assistant messages.Avoid emitting assistant entries with no text and no themeStyles.
Apply:
- if (message.role === "assistant") { + if (message.role === "assistant") { const assistantContentParts: AssistantContent = []; assistantContentParts.push({ type: "text", text: msgTextContent, }); @@ - modelMessages.push({ - role: "assistant", - content: assistantContentParts, - }); + if (msgTextContent.trim().length > 0 || !!themeStyles) { + modelMessages.push({ + role: "assistant", + content: assistantContentParts, + }); + } }app/ai/components/ai-chat-form.tsx (2)
111-119
: Pro-only gating TODO noted.When you’re ready, I can wire the subscription gate and hide the control for non‑Pro users.
134-159
: Unreachable conditional in send button icon.This branch renders only when isGeneratingTheme is false, so the ternary always shows ArrowUp.
Apply:
- {isGeneratingTheme ? <Loader className="animate-spin" /> : <ArrowUp />} + <ArrowUp />components/editor/custom-textarea.tsx (1)
143-149
: Reduce focus/blur churn on disabled changes.Blurring on disable is enough; avoid forcing focus on enable to prevent jumpiness.
Apply:
useEffect(() => { if (!editor) return; editor.setEditable(!disabled); - if (disabled) editor.commands.blur(); - else editor.commands.focus("end"); + if (disabled) editor.commands.blur(); }, [disabled, editor]);components/editor/ai/chat-input.tsx (2)
83-84
: Consider awaiting onThemeGeneration.Prop is typed to return Promise; awaiting can help if you later need to chain actions or error-handle locally (parent-driven state still controls spinners).
Apply:
- onThemeGeneration(promptData); + await onThemeGeneration(promptData);
209-210
: Unreachable conditional in send button icon.Same as in AIChatForm; the ternary can be simplified.
Apply:
- {isGeneratingTheme ? <LoaderIcon className="animate-spin" /> : <ArrowUp />} + <ArrowUp />
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (11)

- app/ai/components/ai-chat-form.tsx (6 hunks)
- app/api/enhance-prompt/route.ts (1 hunks)
- components/editor/ai/ai-chat-form-body.tsx (3 hunks)
- components/editor/ai/chat-input.tsx (6 hunks)
- components/editor/ai/enhance-prompt-button.tsx (1 hunks)
- components/editor/ai/messages.tsx (1 hunks)
- components/editor/custom-textarea.tsx (6 hunks)
- hooks/use-ai-enhance-prompt.ts (1 hunks)
- utils/ai/ai-prompt.tsx (6 hunks)
- utils/ai/message-converter.ts (1 hunks)
- utils/ai/providers.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- components/editor/ai/messages.tsx
🧰 Additional context used
🧬 Code graph analysis (9)
app/api/enhance-prompt/route.ts (3)
types/ai.ts (1)
AIPromptData
(18-22)utils/ai/message-converter.ts (1)
buildUserContentPartsFromPromptData
(5-60)utils/ai/providers.ts (2)
MODELS
(14-18)baseProviderOptions
(20-28)
components/editor/ai/enhance-prompt-button.tsx (2)
components/tooltip-wrapper.tsx (1)
TooltipWrapper
(7-35)lib/utils.ts (1)
cn
(6-8)
utils/ai/ai-prompt.tsx (1)
types/ai.ts (2)
MentionReference
(5-12)AIPromptData
(18-22)
hooks/use-ai-enhance-prompt.ts (4)
utils/ai/parse-ai-sdk-transport-error.ts (1)
parseAiSdkTransportError
(27-63)types/ai.ts (1)
AIPromptData
(18-22)utils/ai/ai-prompt.tsx (1)
convertPromptDataToJSONContent
(195-333)store/ai-local-draft-store.ts (1)
useAILocalDraftStore
(14-28)
components/editor/ai/ai-chat-form-body.tsx (2)
components/editor/custom-textarea.tsx (1)
CustomTextarea
(25-193)lib/constants.ts (1)
AI_PROMPT_CHARACTER_LIMIT
(1-1)
app/ai/components/ai-chat-form.tsx (7)
types/ai.ts (1)
AIPromptData
(18-22)hooks/use-ai-chat-form.ts (1)
useAIChatForm
(12-85)hooks/use-guards.ts (1)
useGuards
(7-15)hooks/use-ai-enhance-prompt.ts (1)
useAIEnhancePrompt
(12-81)components/editor/ai/enhance-prompt-button.tsx (1)
EnhancePromptButton
(12-34)components/editor/ai/image-uploader.tsx (1)
ImageUploader
(14-60)lib/constants.ts (1)
MAX_IMAGE_FILES
(7-7)
utils/ai/message-converter.ts (2)
types/ai.ts (2)
AIPromptData
(18-22)ChatMessage
(44-44)utils/ai/ai-prompt.tsx (2)
dedupeMentionReferences
(345-353)buildMentionStringForAPI
(11-14)
components/editor/ai/chat-input.tsx (7)
types/ai.ts (1)
AIPromptData
(18-22)hooks/use-chat-context.tsx (1)
useChatContext
(93-99)hooks/use-ai-chat-form.ts (1)
useAIChatForm
(12-85)hooks/use-ai-enhance-prompt.ts (1)
useAIEnhancePrompt
(12-81)components/editor/ai/alert-banner.tsx (2)
BannerWrapper
(90-112)AlertBanner
(12-88)components/loader.tsx (1)
Loader
(447-447)components/editor/ai/ai-chat-form-body.tsx (1)
AIChatFormBody
(35-95)
components/editor/custom-textarea.tsx (1)
lib/utils.ts (1)
cn
(6-8)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Vade Review
🔇 Additional comments (14)
utils/ai/providers.ts (1)
20-28
: Confirm v5 providerOptions shape: v5’sGoogleGenerativeAIProviderOptions
supports boththinkingConfig.includeThoughts
andthinkingBudget
, andproviderOptions
is defined as aRecord<string, Record<string, JSONValue>>
keyed by provider id (e.g.openai
).utils/ai/ai-prompt.tsx (2)
239-317
: Robust mention parsing looks good.
Good handling of overlaps, ordering (prefer longer at same index), and emission boundaries.
345-353
: Nice utility for deduping mentions by id.
Straightforward and efficient.app/api/enhance-prompt/route.ts (1)
14-31
: Streaming config and system prompt usage look solid.
Good use ofsmoothStream
and provider configuration.components/editor/ai/enhance-prompt-button.tsx (1)
12-19
: LGTM on the public API surface.
Clear toggle semantics viaonStart
/onStop
andisEnhancing
.hooks/use-ai-enhance-prompt.ts (2)
13-37
: Error handling and draft persistence on finish look good.
Toast normalization and JSON conversion are clean.
67-73
: Persist-on-stop behavior is appropriate.
Ensures intermediate results aren't lost on user cancel.components/editor/ai/ai-chat-form-body.tsx (2)
13-18
: Nice loader UX improvement.The skeleton for the dynamic import is a good, lightweight enhancement.
79-91
: Good: aria-disabled propagated to the input container.This improves a11y while keeping editing enabled when appropriate.
app/ai/components/ai-chat-form.tsx (1)
80-90
: Confirm intent: editing allowed while generating.AIChatFormBody receives disabled={isEnhancingPrompt}, so typing is allowed during generation. If you also want to pause DnD/removal while generating, the AIChatFormBody tweaks I suggested will cover that without disabling typing.
components/editor/custom-textarea.tsx (2)
133-136
: onCreate focus is good; guard remains correct.No issues here.
176-186
: Good a11y: aria-disabled on EditorContent and visible character counter.Nice touch.
components/editor/ai/chat-input.tsx (2)
115-121
: Nice banner UX for generation state.Lightweight and readable.
126-133
: Good canSubmit gating.This keeps typing allowed while preventing accidental submits.
  const body = await req.json();
  const { prompt: _prompt, promptData }: { prompt: string; promptData: AIPromptData } = body;
  const userContentParts = buildUserContentPartsFromPromptData(promptData);
|
🛠️ Refactor suggestion
Validate request body and fail fast with 400 on invalid input.
Prevents obscure errors in buildUserContentPartsFromPromptData.
- const body = await req.json();
- const { prompt: _prompt, promptData }: { prompt: string; promptData: AIPromptData } = body;
+ const body = await req.json();
+ const { promptData }: { prompt: string; promptData: AIPromptData } = body ?? {};
+ if (!promptData || typeof promptData.content !== "string") {
+ return new Response(JSON.stringify({ error: "Invalid body: promptData.content is required" }), {
+ status: 400,
+ headers: { "content-type": "application/json" },
+ });
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const body = await req.json();
const { prompt: _prompt, promptData }: { prompt: string; promptData: AIPromptData } = body;
const userContentParts = buildUserContentPartsFromPromptData(promptData);

const body = await req.json();
const { promptData }: { prompt: string; promptData: AIPromptData } = body ?? {};
if (!promptData || typeof promptData.content !== "string") {
  return new Response(JSON.stringify({ error: "Invalid body: promptData.content is required" }), {
    status: 400,
    headers: { "content-type": "application/json" },
  });
}
const userContentParts = buildUserContentPartsFromPromptData(promptData);
🤖 Prompt for AI Agents
In app/api/enhance-prompt/route.ts around lines 10 to 13, the handler currently
reads req.json() and immediately destructures prompt and promptData, which can
cause obscure errors inside buildUserContentPartsFromPromptData when the request
body is missing or malformed; add explicit validation after parsing body to
verify that body is an object, that prompt is a non-empty string and promptData
exists and matches expected shape (or at minimum is an object), and if
validation fails return an HTTP 400 response with a clear error message; perform
this check before calling buildUserContentPartsFromPromptData so the route fails
fast on invalid input.
      if (image.url.includes("base64")) {
        svgMarkup = atob(dataUrlPart);
      } else {
        content.push({
          type: "image",
          image: image.url,
        });
        svgMarkup = decodeURIComponent(dataUrlPart);
      }
Server-side atob will throw; add cross-runtime base64 decode.
API routes may run in Node (no atob). Use atob when available, else Buffer.
Apply:
- svgMarkup = atob(dataUrlPart);
+ svgMarkup =
+ typeof atob === "function"
+ ? atob(dataUrlPart)
+ : Buffer.from(dataUrlPart, "base64").toString("utf-8");
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (image.url.includes("base64")) {
  svgMarkup = atob(dataUrlPart);
} else {
  content.push({
    type: "image",
    image: image.url,
  });
  svgMarkup = decodeURIComponent(dataUrlPart);
}

if (image.url.includes("base64")) {
  svgMarkup =
    typeof atob === "function"
      ? atob(dataUrlPart)
      : Buffer.from(dataUrlPart, "base64").toString("utf-8");
} else {
  svgMarkup = decodeURIComponent(dataUrlPart);
}
🤖 Prompt for AI Agents
In utils/ai/message-converter.ts around lines 15 to 19, replace the direct
atob(dataUrlPart) call (which throws in Node) with a cross-runtime decode: use
globalThis.atob when available, otherwise use Buffer.from(dataUrlPart,
'base64').toString('utf8'); keep the existing decodeURIComponent branch for
non-base64 data URLs and ensure dataUrlPart still refers to the base64 payload.
Actionable comments posted: 0
♻️ Duplicate comments (1)
utils/ai/providers.ts (1)

5-7: Fail fast on missing API key (and support the SDK's default env var).

Add a clear startup error and fall back to GOOGLE_GENERATIVE_AI_API_KEY. This avoids runtime throws and aligns with the provider's documented default.

-const google = createGoogleGenerativeAI({
-  apiKey: process.env.GOOGLE_API_KEY,
-});
+const GOOGLE_API_KEY =
+  process.env.GOOGLE_API_KEY ?? process.env.GOOGLE_GENERATIVE_AI_API_KEY;
+if (!GOOGLE_API_KEY) {
+  throw new Error(
+    "Missing API key: set GOOGLE_API_KEY or GOOGLE_GENERATIVE_AI_API_KEY"
+  );
+}
+const google = createGoogleGenerativeAI({ apiKey: GOOGLE_API_KEY });
🧹 Nitpick comments (2)
utils/ai/providers.ts (2)
9-13
: Model choices are appropriate; consider tightening types.IDs match AI SDK v5 docs for Gemini 2.5 Flash/Pro. Optionally add
as const
to preserve literal keys/types.-export const MODELS = { +export const MODELS = { basic: google("gemini-2.5-flash"), themeGeneration: google("gemini-2.5-pro"), promptEnhancement: google("gemini-2.5-flash"), -}; +} as const;References: AI SDK v5 Google provider docs show
gemini-2.5-flash
andgemini-2.5-pro
usage. (v5.ai-sdk.dev)
15-22
: Default thinking budget: prefer dynamic (-1) to let Pro/Flash self-tune.For Gemini 2.5, Google recommends dynamic thinking by default; Pro can’t disable thinking, and -1 lets the model adapt. Consider
-1
here and override per-call when needed.export const baseProviderOptions = { google: { thinkingConfig: { includeThoughts: false, - thinkingBudget: 128, + thinkingBudget: -1, }, } satisfies GoogleGenerativeAIProviderOptions, };Docs: AI SDK v5 exposes
thinkingConfig
withincludeThoughts
/thinkingBudget
; Gemini 2.5 thinking guidance and budgets (including dynamic-1
) are documented. (v5.ai-sdk.dev, ai.google.dev, cloud.google.com)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml

📒 Files selected for processing (2)

- package.json (5 hunks)
- utils/ai/providers.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Vade Review
🔇 Additional comments (2)
utils/ai/providers.ts (2)

1-2: Good use of server-only guard. Prevents accidental client bundling and leaking keys.

3-4: Imports look correct for AI SDK v5. Type import is properly used with satisfies.
      console.log("----- ✅ Updating Stored Messages -----");
      setStoredMessages(chat.messages);
    }
  }, [chat.status, chat.messages]);
The useEffect hook is missing setStoredMessages
in its dependency array, which violates React's exhaustive-deps rule and could cause stale closures.
View Details
📝 Patch Details
diff --git a/hooks/use-chat-context.tsx b/hooks/use-chat-context.tsx
index e1ffaede..5eb9f754 100644
--- a/hooks/use-chat-context.tsx
+++ b/hooks/use-chat-context.tsx
@@ -70,7 +70,7 @@ export function ChatProvider({ children }: { children: React.ReactNode }) {
console.log("----- ✅ Updating Stored Messages -----");
setStoredMessages(chat.messages);
}
- }, [chat.status, chat.messages]);
+ }, [chat.status, chat.messages, setStoredMessages]);
useEffect(() => {
if (!hasStoreHydrated || hasInitializedRef.current) return;
@@ -81,7 +81,7 @@ export function ChatProvider({ children }: { children: React.ReactNode }) {
}
hasInitializedRef.current = true;
- }, [hasStoreHydrated, storedMessages]);
+ }, [hasStoreHydrated, storedMessages, chat]);
return (
<ChatContext.Provider value={{ ...chat, startNewChat, resetMessagesUpToIndex }}>
Analysis
The useEffect on lines 65-73 calls setStoredMessages(chat.messages) but doesn't include setStoredMessages in the dependency array [chat.status, chat.messages]. While setStoredMessages from Zustand stores is typically stable, React's ESLint rules require all dependencies to be declared to prevent potential stale closure bugs and ensure the effect behaves correctly if the function reference changes.
The dependency array should be: [chat.status, chat.messages, setStoredMessages]
This follows React best practices and ensures the effect will work correctly even if the store implementation changes in the future.
Oh, this was to set the content Editor with the JSONContent that contained the enhanced prompt, only resetting the internal state when this changed. The other way to do it was to re-mount the text area component on every new chunk and keep track of the last version (say, the prompt.length) to avoid hydration errors and problems syncing with the local storage draft.
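For context, a minimal sketch of that first approach, assuming a TipTap editor instance and an externalEditorContent JSONContent prop (names here are illustrative, not the exact component):

// Illustrative only: stream enhanced-prompt JSONContent into an existing TipTap editor
// without remounting it, resetting internal state only when the external content changes.
import { useEffect } from "react";
import type { Editor, JSONContent } from "@tiptap/react";

export function useExternalEditorContent(editor: Editor | null, externalEditorContent?: JSONContent) {
  useEffect(() => {
    if (!editor || !externalEditorContent) return;
    // Comparing serialized content avoids resetting the cursor or local draft on identical chunks.
    const hasChanged = JSON.stringify(editor.getJSON()) !== JSON.stringify(externalEditorContent);
    if (!hasChanged) return;
    // Emit an update so listeners (e.g. the local-storage draft) stay in sync.
    editor.commands.setContent(externalEditorContent, true);
  }, [editor, externalEditorContent]);
}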
…On Mon 15 Sep 2025 at 10:19 a.m. Sahaj Jain ***@***.***> wrote:
***@***.**** commented on this pull request.
------------------------------
In components/editor/custom-textarea.tsx
<#210 (comment)>:
> + // Stream external content into the editor
+ useEffect(() => {
+ if (!editor) return;
+ if (!externalEditorContent) return;
+
+ try {
+ const current = editor.getJSON();
+ const next = externalEditorContent;
+ const hasChanged = JSON.stringify(current) !== JSON.stringify(next);
+ if (!hasChanged) return;
+
+ // Emit update so listeners propagate changes
+ editor.commands.setContent(next, true);
+ } catch (_e) {
+ // If setContent fails for any reason, silently ignore; user can keep typing
}
- }, [aiGenerateLoading, editor]);
+ }, [externalEditorContent, editor]);
I couldn't understand this. Can you explain why externalEditorContent is
needed?
  }),
  {
    name: "ai-chat-storage",
    name: "ai-chat-storage-v2",
Zustand's persist middleware has a version option for the store, so we should use that here instead of renaming the storage key.
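A minimal sketch of what that could look like, assuming a messages-only persisted shape (the real store persists more):

// Sketch: bump the persist version instead of renaming the storage key.
import { create } from "zustand";
import { persist } from "zustand/middleware";

type AIChatState = { messages: unknown[] };

export const useAIChatStore = create<AIChatState>()(
  persist(
    () => ({ messages: [] }),
    {
      name: "ai-chat-storage", // keep the original key
      version: 2, // bump when the persisted shape changes
      migrate: (persistedState, version) => {
        // Pre-v2 messages used the old one-shot shape; dropping them is the simplest
        // migration, though a mapper to the new parts/metadata model would also work.
        if (version < 2) return { messages: [] };
        return persistedState as AIChatState;
      },
    }
  )
);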
* update package locks to avoid conflict
* Refactor AI Generation with AI SDK v5 (#210)
* refactor(wip): Whole Chat implementation
* fix: Deps
* chore: Update types and create tool to handle theme generation
* chore: Add hook and component to handle feedback text
* chore: Check for themeStyles in asssiatant metadata
* feat: Improve messages styles and feedback
* chore: Update react-hook-form deps
* chore: Improve Messages display
* chore: Show a banner when theme is generating
* chore: Optimize the Messages syncing
* feat: Add debug button to MessageActions in dev mode
* chore: messages
* chore: Improve System prompt + tools + utils
* fix: correctly display User messages with only images
* feat: Handle errors in UI and adapt to AI sdk
* feat: Use tool output instead of custom data parts
* chore: System prompt and schema context
* chore: Gemini 2.5 pro as the default model
* refactor: Change name to Chat Context and apply generated theme automatically
* chore: Update ai packages to latest version
* feat: Stop ongoing request before starting a new chat
* feat: Refactor logic for Scroll start/end sentinels
* feat: Improve UI streaming with AI Elements and Stream Text utils
* chore: Generate theme types and logic
* feat: Allow customize the speed of streaming text
* refactor: Theme generation utils
* fix: Avoid duplicated Mention references
* styles: Improve Chat error banner
* feat: Enhance Prompt core implementation
* feat: Implement enhance prompt in components
* chore: Add TODOS comments
* chore: Remove Openai provider
* chore: Update AI adk packages
* fix: Free request constant
---------
Co-authored-by: Sahaj Jain <[email protected]>
* update lockfile
* fix import
* fix types
* chore: Trigger theme generation before transitioning to editor page
* refactor: Move AI sdk specific code into Lib and update imports
* chore: Provide model ID to record usage util
* refactor: Multi model provider convention
* fix types
* refactor: Use Zustand persist middleware versioning
* feat: Improve prompts and context
* feat: Optimize streaming JSONContent into text area
* chore: Improve Theme Generation prompt
* feat: Improve prompt
* feat: Add subscription checks for enhancing prompt and Posthog tracking
---------
Co-authored-by: Luis Llanes <[email protected]>
Co-authored-by: Luis Llanes <[email protected]>
Summary by CodeRabbit
New Features
Improvements