January 20, 2024
Full-Stack AI Team
14 min read
Next.js & Vercel · Featured

Building AI Apps with Next.js, Vercel AI SDK & Zapserp

Complete guide to building streaming AI applications with Next.js, Vercel AI SDK, and Zapserp. Learn to create real-time RAG systems, AI chatbots, and deploy to Vercel Edge.

nextjs · vercel-ai-sdk · streaming · rag · chatbot · vercel-edge · react


The combination of Next.js, Vercel AI SDK, and Zapserp creates a powerful stack for building modern AI applications with real-time web data. This comprehensive guide shows you how to build streaming RAG systems, intelligent chatbots, and production-ready AI apps that deploy seamlessly to Vercel Edge.

We'll cover everything from basic integration to advanced patterns like streaming responses, real-time search, and optimized deployment strategies.

Why This Stack?

The Perfect AI Development Trio

  • Next.js: Full-stack React framework with excellent API routes and edge runtime support
  • Vercel AI SDK: Streaming responses, UI components, and seamless LLM integration
  • Zapserp: Real-time web search and content extraction for fresh, accurate data

Key Benefits

  • Streaming Responses: Real-time UI updates as AI generates responses
  • Edge Runtime: Fast, globally distributed AI inference
  • Real-Time Data: Current web information via Zapserp
  • Type Safety: Full TypeScript support across the stack
  • Zero Config Deployment: Deploy to Vercel with minimal setup

Setting Up the Foundation

Project Setup

# Create Next.js project with TypeScript
npx create-next-app@latest my-ai-app --typescript --tailwind --eslint --app

cd my-ai-app

# Install required dependencies (the API routes below import from 'openai-edge')
npm install ai openai-edge zapserp
npm install @types/node --save-dev

# UI primitives — the components used below (Button, Textarea, Card, Badge, Separator, Tabs)
# follow the shadcn/ui pattern
npx shadcn-ui@latest init
npx shadcn-ui@latest add button textarea card badge separator tabs
npm install lucide-react class-variance-authority clsx tailwind-merge

Environment Configuration

Create your .env.local file:

# API Keys
OPENAI_API_KEY=your_openai_api_key
ZAPSERP_API_KEY=your_zapserp_api_key

# Optional: For enhanced functionality
VERCEL_URL=your_vercel_domain

Core Zapserp Integration

Create a Zapserp utility (lib/zapserp.ts):

import { Zapserp, SearchEngine, SearchResponse, Page } from 'zapserp'

if (!process.env.ZAPSERP_API_KEY) {
  throw new Error('ZAPSERP_API_KEY is required')
}

export const zapserp = new Zapserp({
  apiKey: process.env.ZAPSERP_API_KEY!
})

export interface SearchResult {
  title: string
  url: string
  snippet: string
  content?: string
  metadata?: {
    author?: string
    publishedTime?: string
    description?: string
  }
}

export interface ZapserpSearchOptions {
  engines?: SearchEngine[]
  limit?: number
  language?: string
  country?: string
  includeContent?: boolean
  contentLimit?: number
}

export async function searchWithZapserp(
  query: string,
  options: ZapserpSearchOptions = {}
): Promise<SearchResult[]> {
  const {
    engines = [SearchEngine.GOOGLE, SearchEngine.BING],
    limit = 5,
    language = 'en',
    country = 'us',
    includeContent = false,
    contentLimit = 3
  } = options

  try {
    // Perform search
    const searchResponse: SearchResponse = await zapserp.search({
      query,
      engines,
      limit,
      language,
      country
    })

    if (!searchResponse.results || searchResponse.results.length === 0) {
      return []
    }

    let results: SearchResult[] = searchResponse.results.map(result => ({
      title: result.title,
      url: result.url,
      snippet: result.snippet || 'No snippet available'
    }))

    // Extract content if requested
    if (includeContent && results.length > 0) {
      const urlsToExtract = results
        .slice(0, contentLimit)
        .map(r => r.url)

      try {
        const contentResponse = await zapserp.readerBatch({
          urls: urlsToExtract
        })

        // Merge content with search results
        results = results.map((result, index) => {
          const content = contentResponse.results[index]
          if (content && content.content) {
            return {
              ...result,
              content: content.content,
              metadata: {
                author: content.metadata?.author,
                publishedTime: content.metadata?.publishedTime,
                description: content.metadata?.description
              }
            }
          }
          return result
        })
      } catch (error) {
        console.error('Content extraction failed:', error)
      }
    }

    return results

  } catch (error) {
    console.error('Zapserp search failed:', error)
    throw new Error('Search failed')
  }
}

// Utility for filtering quality sources
export function filterQualitySources(results: SearchResult[]): SearchResult[] {
  const qualityDomains = [
    'wikipedia.org', 'stackoverflow.com', 'github.com', 'medium.com',
    'techcrunch.com', 'wired.com', 'arstechnica.com', 'reuters.com',
    'bbc.com', 'cnn.com', 'nytimes.com', 'wsj.com', 'bloomberg.com',
    'nature.com', 'science.org', 'arxiv.org'
  ]

  const excludeDomains = [
    'pinterest.com', 'youtube.com', 'facebook.com', 'twitter.com',
    'instagram.com', 'tiktok.com'
  ]

  return results.filter(result => {
    const url = result.url.toLowerCase()
    
    // Include quality domains
    if (qualityDomains.some(domain => url.includes(domain))) {
      return true
    }
    
    // Exclude low-quality domains
    if (excludeDomains.some(domain => url.includes(domain))) {
      return false
    }
    
    return true
  })
}
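Because filterQualitySources is a pure function, its behavior can be sanity-checked with fabricated data before wiring it into a route. The snippet below inlines a trimmed copy of the same logic so it runs standalone (the domain lists are abbreviated and the URLs are placeholders):

```typescript
// Trimmed, standalone copy of the filterQualitySources logic, for illustration
interface Result { title: string; url: string; snippet: string }

const qualityDomains = ['wikipedia.org', 'github.com', 'arxiv.org']
const excludeDomains = ['pinterest.com', 'youtube.com', 'tiktok.com']

function filterQualitySources(results: Result[]): Result[] {
  return results.filter(result => {
    const url = result.url.toLowerCase()
    if (qualityDomains.some(d => url.includes(d))) return true   // always keep
    if (excludeDomains.some(d => url.includes(d))) return false  // always drop
    return true                                                  // keep unknown domains
  })
}

const sample: Result[] = [
  { title: 'Wiki', url: 'https://en.wikipedia.org/wiki/RAG', snippet: '' },
  { title: 'Pin', url: 'https://pinterest.com/pin/1', snippet: '' },
  { title: 'Blog', url: 'https://example.com/post', snippet: '' }
]

console.log(filterQualitySources(sample).map(r => r.title)) // → ['Wiki', 'Blog']
```

Note that the function only ever drops explicitly excluded domains; unknown domains pass through, so it is really a denylist with an allowlist fast path.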

Streaming RAG with Vercel AI SDK

Basic Streaming API Route

Create your streaming API route (app/api/chat/route.ts):

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'
import { searchWithZapserp, filterQualitySources } from '@/lib/zapserp'

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})

const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST(req: Request) {
  try {
    const { messages, searchEnabled = true } = await req.json()
    
    const lastMessage = messages[messages.length - 1]
    const userQuery = lastMessage.content

    let searchContext = ''
    let sources: any[] = []

    // Perform web search if enabled
    if (searchEnabled && shouldPerformSearch(userQuery)) {
      try {
        console.log('Performing search for:', userQuery)
        
        const searchResults = await searchWithZapserp(userQuery, {
          limit: 5,
          includeContent: true,
          contentLimit: 3
        })

        const qualityResults = filterQualitySources(searchResults)
        
        if (qualityResults.length > 0) {
          searchContext = formatSearchContext(qualityResults)
          sources = qualityResults.map(r => ({
            title: r.title,
            url: r.url,
            snippet: r.snippet
          }))
        }
      } catch (error) {
        console.error('Search failed:', error)
        // Continue without search context
      }
    }

    // Build system prompt with search context
    const systemPrompt = buildSystemPrompt(searchContext)

    // Prepare messages for OpenAI
    const openAIMessages = [
      { role: 'system', content: systemPrompt },
      ...messages
    ]

    // Create streaming response
    const response = await openai.createChatCompletion({
      model: 'gpt-4-turbo-preview',
      messages: openAIMessages,
      temperature: 0.1,
      stream: true,
      max_tokens: 2000
    })

    // Convert to Vercel AI SDK stream
    const stream = OpenAIStream(response, {
      onCompletion: async (completion) => {
        // Log completion or save to database
        console.log('Completion:', completion.slice(0, 100))
      },
      onFinal: async (completion) => {
        // Add sources to the final response
        if (sources.length > 0) {
          // You can emit sources as a special message or store them
          console.log('Sources used:', sources.length)
        }
      }
    })

    return new StreamingTextResponse(stream)

  } catch (error) {
    console.error('API error:', error)
    return new Response('Internal Server Error', { status: 500 })
  }
}

function shouldPerformSearch(query: string): boolean {
  // Keywords that indicate need for fresh information
  const searchKeywords = [
    'latest', 'recent', 'current', 'today', 'now', 'breaking',
    'new', 'update', 'what happened', 'current status', 'trends',
    'news', 'developments', 'this year', '2024'
  ]

  const queryLower = query.toLowerCase()
  return searchKeywords.some(keyword => queryLower.includes(keyword))
}

function formatSearchContext(results: any[]): string {
  return results
    .map((result, index) => {
      let context = `[Source ${index + 1}: ${result.title}]
URL: ${result.url}
Content: ${result.content || result.snippet}`

      if (result.metadata?.publishedTime) {
        context += `\nPublished: ${result.metadata.publishedTime}`
      }

      return context
    })
    .join('\n\n---\n\n')
}

function buildSystemPrompt(searchContext: string): string {
  const basePrompt = `You are a helpful AI assistant that provides accurate, well-sourced answers.`

  if (searchContext) {
    return `${basePrompt}

You have access to current web information provided below. Use this information to provide accurate, up-to-date responses.

IMPORTANT INSTRUCTIONS:
1. Use ONLY the information provided in the web sources when making factual claims
2. Cite sources by mentioning the source title or URL when referencing information
3. If the provided information doesn't fully answer the question, say so clearly
4. Be concise but comprehensive
5. If information conflicts between sources, acknowledge this

WEB SOURCES:
${searchContext}

Answer based on the above sources and your general knowledge, clearly distinguishing between the two.`
  }

  return `${basePrompt}

Provide helpful, accurate responses based on your training data. If you need current information that might have changed recently, let the user know that you may not have the latest updates.`
}
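The shouldPerformSearch heuristic above is plain substring matching, so it is easy to unit-test in isolation. A standalone copy of the function behaves like this:

```typescript
// Standalone copy of the shouldPerformSearch heuristic, for illustration
function shouldPerformSearch(query: string): boolean {
  const searchKeywords = [
    'latest', 'recent', 'current', 'today', 'now', 'breaking',
    'new', 'update', 'what happened', 'current status', 'trends',
    'news', 'developments', 'this year', '2024'
  ]
  const queryLower = query.toLowerCase()
  return searchKeywords.some(keyword => queryLower.includes(keyword))
}

console.log(shouldPerformSearch('What are the latest AI developments?')) // true
console.log(shouldPerformSearch('Explain how quicksort works'))          // false
```

Be aware that substring matching can over-trigger: "renewable" contains "new", for example. Word-boundary regexes would be stricter if false positives become a problem.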

Advanced Streaming with Sources

Create an enhanced API route that streams sources alongside the response (app/api/chat-with-sources/route.ts):

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'
import { searchWithZapserp, filterQualitySources, SearchResult } from '@/lib/zapserp'

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})

const openai = new OpenAIApi(config)

export const runtime = 'edge'

interface StreamData {
  sources?: SearchResult[]
  searchQuery?: string
  timestamp?: string
}

export async function POST(req: Request) {
  try {
    const { messages, options = {} } = await req.json()
    
    const {
      searchEnabled = true,
      searchStrategy = 'auto',
      maxSources = 5
    } = options

    const lastMessage = messages[messages.length - 1]
    const userQuery = lastMessage.content

    let searchResults: SearchResult[] = []
    let searchContext = ''

    // Perform contextual search
    if (searchEnabled) {
      const searchQuery = generateSearchQuery(userQuery, searchStrategy)
      
      if (searchQuery) {
        try {
          searchResults = await searchWithZapserp(searchQuery, {
            limit: maxSources,
            includeContent: true,
            contentLimit: Math.min(maxSources, 3)
          })

          searchResults = filterQualitySources(searchResults)
          
          if (searchResults.length > 0) {
            searchContext = formatEnhancedSearchContext(searchResults)
          }
        } catch (error) {
          console.error('Enhanced search failed:', error)
        }
      }
    }

    // Build context-aware system prompt
    const systemPrompt = buildEnhancedSystemPrompt(searchContext, userQuery)

    const openAIMessages = [
      { role: 'system', content: systemPrompt },
      ...messages
    ]

    // Create streaming response with custom callbacks
    const response = await openai.createChatCompletion({
      model: 'gpt-4-turbo-preview',
      messages: openAIMessages,
      temperature: 0.1,
      stream: true,
      max_tokens: 2000
    })

    // Enhanced stream with source information
    const stream = OpenAIStream(response, {
      async onStart() {
        // Stream sources at the beginning
        if (searchResults.length > 0) {
          console.log(`Streaming with ${searchResults.length} sources`)
        }
      },
      async onToken(token: string) {
        // Inspect each token as it streams, if needed (the return value is ignored)
      },
      async onCompletion(completion: string) {
        // Log successful completion
        console.log('Stream completed successfully')
      }
    })

    // Create response with custom headers for sources
    // Header values must stay ASCII-safe, so encode the raw user query
    const headers = new Headers()
    headers.set('X-Sources-Count', searchResults.length.toString())
    headers.set('X-Search-Query', encodeURIComponent(userQuery))

    if (searchResults.length > 0) {
      // Caveat: titles/snippets containing non-ASCII characters may also need encoding
      headers.set('X-Sources', JSON.stringify(
        searchResults.map(r => ({
          title: r.title,
          url: r.url,
          snippet: r.snippet.substring(0, 100)
        }))
      ))
    }

    return new StreamingTextResponse(stream, { headers })

  } catch (error) {
    console.error('Enhanced API error:', error)
    return new Response('Internal Server Error', { status: 500 })
  }
}

function generateSearchQuery(userQuery: string, strategy: string): string | null {
  const query = userQuery.trim()
  
  switch (strategy) {
    case 'news':
      return `${query} latest news today`
    case 'academic':
      return `${query} research study academic`
    case 'auto':
      // Auto-detect if search is needed
      if (shouldPerformSearch(userQuery)) {
        return enhanceQuery(query)
      }
      return null
    default:
      return query
  }
}

function enhanceQuery(query: string): string {
  // Enhance query based on detected intent
  const currentYear = new Date().getFullYear()
  
  if (query.toLowerCase().includes('latest') || query.toLowerCase().includes('recent')) {
    return `${query} ${currentYear}`
  }
  
  if (query.toLowerCase().includes('news') || query.toLowerCase().includes('breaking')) {
    return `${query} today news`
  }
  
  return query
}

function formatEnhancedSearchContext(results: SearchResult[]): string {
  return results
    .map((result, index) => {
      const metadata = result.metadata
      const metaInfo = [
        metadata?.author && `Author: ${metadata.author}`,
        metadata?.publishedTime && `Published: ${metadata.publishedTime}`,
        `Relevance: High`
      ].filter(Boolean).join(' | ')

      return `[Source ${index + 1}: ${result.title}]
${metaInfo}
URL: ${result.url}
Content: ${result.content || result.snippet}
---`
    })
    .join('\n\n')
}

function buildEnhancedSystemPrompt(searchContext: string, userQuery: string): string {
  const basePrompt = `You are an expert AI assistant providing accurate, well-researched responses.`

  if (searchContext) {
    return `${basePrompt}

You have access to current web information related to: "${userQuery}"

RESPONSE GUIDELINES:
1. Provide a direct, comprehensive answer to the user's question
2. Use information from the web sources to support your response
3. Cite sources naturally within your response (e.g., "According to [Source Title]...")
4. If sources provide conflicting information, acknowledge this and explain
5. Maintain a conversational yet informative tone
6. End with a brief summary of key points if the topic is complex

CURRENT WEB SOURCES:
${searchContext}

Provide a well-structured response that combines the web sources with your knowledge.`
  }

  return `${basePrompt}

Provide helpful, accurate responses based on your training data. For questions requiring very recent information, suggest that the user might want to search for the latest updates.`
}

function shouldPerformSearch(query: string): boolean {
  const searchIndicators = [
    'latest', 'recent', 'current', 'today', 'now', 'breaking',
    'new', 'update', 'what happened', 'status', 'trends',
    'news', 'developments', '2024', 'this year', 'currently'
  ]

  const queryLower = query.toLowerCase()
  return searchIndicators.some(indicator => queryLower.includes(indicator))
}
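enhanceQuery is likewise a pure function; this standalone copy shows how recency cues get the current year appended:

```typescript
// Standalone copy of enhanceQuery from the route above, for illustration
function enhanceQuery(query: string): string {
  const currentYear = new Date().getFullYear()
  const q = query.toLowerCase()

  if (q.includes('latest') || q.includes('recent')) {
    return `${query} ${currentYear}`  // anchor recency queries to the current year
  }
  if (q.includes('news') || q.includes('breaking')) {
    return `${query} today news`      // bias news queries toward fresh coverage
  }
  return query                        // everything else passes through unchanged
}

console.log(enhanceQuery('latest TypeScript release'))  // e.g. "latest TypeScript release 2024"
console.log(enhanceQuery('how compilers work'))         // "how compilers work"
```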

Building the Frontend Interface

Chat Component with Sources

Create a comprehensive chat interface (components/ai-chat.tsx):

'use client'

import React, { useState, useRef, useEffect } from 'react'
import { useChat } from 'ai/react'
import { Send, Search, ExternalLink, Copy, RefreshCw } from 'lucide-react'
import { Button } from '@/components/ui/button'
import { Textarea } from '@/components/ui/textarea'
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'
import { Badge } from '@/components/ui/badge'
import { Separator } from '@/components/ui/separator'

interface Source {
  title: string
  url: string
  snippet: string
}

interface ChatMessage {
  id: string
  role: 'user' | 'assistant'
  content: string
  sources?: Source[]
  timestamp?: string
}

interface AIChatProps {
  apiEndpoint?: string
  placeholder?: string
  searchEnabled?: boolean
  className?: string
}

export function AIChat({
  apiEndpoint = '/api/chat-with-sources',
  placeholder = 'Ask me anything...',
  searchEnabled = true,
  className = ''
}: AIChatProps) {
  const [searchMode, setSearchMode] = useState(searchEnabled)
  const [sources, setSources] = useState<Source[]>([])
  const messagesEndRef = useRef<HTMLDivElement>(null)

  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    isLoading,
    error,
    reload,
    stop
  } = useChat({
    api: apiEndpoint,
    body: {
      searchEnabled: searchMode,
      options: {
        searchStrategy: 'auto',
        maxSources: 5
      }
    },
    onResponse: async (response) => {
      // Extract sources from response headers
      const sourcesHeader = response.headers.get('X-Sources')
      if (sourcesHeader) {
        try {
          const parsedSources = JSON.parse(sourcesHeader)
          setSources(parsedSources)
        } catch (error) {
          console.error('Failed to parse sources:', error)
        }
      }
    },
    onError: (error) => {
      console.error('Chat error:', error)
    }
  })

  const scrollToBottom = () => {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' })
  }

  useEffect(() => {
    scrollToBottom()
  }, [messages])

  const handleFormSubmit = (e: React.FormEvent) => {
    e.preventDefault()
    if (!input.trim() || isLoading) return
    
    handleSubmit(e)
    setSources([]) // Clear previous sources
  }

  const copyToClipboard = (text: string) => {
    navigator.clipboard.writeText(text)
  }

  return (
    <div className={`flex flex-col h-full max-w-4xl mx-auto ${className}`}>
      {/* Header */}
      <div className="flex items-center justify-between p-4 border-b">
        <div>
          <h1 className="text-2xl font-bold">AI Assistant</h1>
          <p className="text-sm text-muted-foreground">
            Powered by Zapserp & OpenAI
          </p>
        </div>
        
        <div className="flex items-center gap-2">
          <Button
            variant={searchMode ? "default" : "outline"}
            size="sm"
            onClick={() => setSearchMode(!searchMode)}
          >
            <Search className="w-4 h-4 mr-2" />
            Web Search {searchMode ? 'On' : 'Off'}
          </Button>
          
          {error && (
            <Button
              variant="outline"
              size="sm"
              onClick={() => reload()}
            >
              <RefreshCw className="w-4 h-4 mr-2" />
              Retry
            </Button>
          )}
        </div>
      </div>

      {/* Messages */}
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.length === 0 && (
          <div className="text-center text-muted-foreground py-8">
            <Search className="w-12 h-12 mx-auto mb-4 opacity-50" />
            <h3 className="text-lg font-medium mb-2">Ready to help!</h3>
            <p>Ask me anything and I'll search the web for current information.</p>
          </div>
        )}

        {messages.map((message, index) => (
          <div key={message.id} className="space-y-2">
            <div className={`flex ${message.role === 'user' ? 'justify-end' : 'justify-start'}`}>
              <div className={`max-w-[80%] rounded-lg p-4 ${
                message.role === 'user'
                  ? 'bg-primary text-primary-foreground'
                  : 'bg-muted'
              }`}>
                <div className="flex items-start justify-between">
                  <div className="flex-1 whitespace-pre-wrap">
                    {message.content}
                  </div>
                  <Button
                    variant="ghost"
                    size="sm"
                    className="ml-2 opacity-50 hover:opacity-100"
                    onClick={() => copyToClipboard(message.content)}
                  >
                    <Copy className="w-3 h-3" />
                  </Button>
                </div>
              </div>
            </div>

            {/* Show sources for the last assistant message */}
            {message.role === 'assistant' && 
             index === messages.length - 1 && 
             sources.length > 0 && (
              <div className="ml-4">
                <SourcesDisplay sources={sources} />
              </div>
            )}
          </div>
        ))}

        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-muted rounded-lg p-4 max-w-[80%]">
              <div className="flex items-center space-x-2">
                <div className="animate-spin rounded-full h-4 w-4 border-b-2 border-primary"></div>
                <span className="text-sm text-muted-foreground">
                  {searchMode ? 'Searching web and thinking...' : 'Thinking...'}
                </span>
              </div>
            </div>
          </div>
        )}

        {error && (
          <div className="flex justify-start">
            <Card className="max-w-[80%] border-destructive">
              <CardContent className="p-4">
                <p className="text-sm text-destructive">
                  Something went wrong. Please try again.
                </p>
              </CardContent>
            </Card>
          </div>
        )}

        <div ref={messagesEndRef} />
      </div>

      {/* Input */}
      <div className="border-t p-4">
        <form onSubmit={handleFormSubmit} className="flex gap-2">
          <Textarea
            value={input}
            onChange={handleInputChange}
            placeholder={placeholder}
            className="flex-1 min-h-[60px] resize-none"
            onKeyDown={(e) => {
              if (e.key === 'Enter' && !e.shiftKey) {
                e.preventDefault()
                handleFormSubmit(e)
              }
            }}
          />
          <div className="flex flex-col gap-2">
            <Button 
              type="submit" 
              disabled={!input.trim() || isLoading}
              className="px-6"
            >
              {isLoading ? (
                <RefreshCw className="w-4 h-4 animate-spin" />
              ) : (
                <Send className="w-4 h-4" />
              )}
            </Button>
            {isLoading && (
              <Button
                type="button"
                variant="outline"
                onClick={stop}
                className="px-6"
              >
                Stop
              </Button>
            )}
          </div>
        </form>
      </div>
    </div>
  )
}

// Sources Display Component
function SourcesDisplay({ sources }: { sources: Source[] }) {
  if (sources.length === 0) return null

  return (
    <Card>
      <CardHeader>
        <CardTitle className="text-sm">Sources</CardTitle>
      </CardHeader>
      <CardContent>
        <div className="space-y-3">
          {sources.map((source, index) => (
            <div key={index} className="space-y-2">
              <div className="flex items-start justify-between">
                <div className="flex-1">
                  <a
                    href={source.url}
                    target="_blank"
                    rel="noopener noreferrer"
                    className="text-sm font-medium text-primary hover:underline flex items-center gap-1"
                  >
                    {source.title}
                    <ExternalLink className="w-3 h-3" />
                  </a>
                  <p className="text-xs text-muted-foreground mt-1">
                    {source.snippet}
                  </p>
                </div>
                <Badge variant="secondary" className="ml-2">
                  {index + 1}
                </Badge>
              </div>
              {index < sources.length - 1 && <Separator />}
            </div>
          ))}
        </div>
      </CardContent>
    </Card>
  )
}

Main Chat Page

Create the main chat page (app/page.tsx):

import { AIChat } from '@/components/ai-chat'

export default function HomePage() {
  return (
    <div className="container mx-auto h-screen flex flex-col">
      <AIChat
        searchEnabled={true}
        placeholder="Ask me anything about current events, technology, or any topic..."
        className="h-full"
      />
    </div>
  )
}

Advanced Patterns

Real-Time Search Suggestions

Create a component that provides search suggestions (components/search-suggestions.tsx):

'use client'

import React, { useState, useEffect, useMemo } from 'react'
import { Button } from '@/components/ui/button'
import { Badge } from '@/components/ui/badge'
import { TrendingUp, Clock, Search } from 'lucide-react'

interface SearchSuggestionsProps {
  onSuggestionClick: (suggestion: string) => void
  className?: string
}

export function SearchSuggestions({ onSuggestionClick, className = '' }: SearchSuggestionsProps) {
  const [trendingTopics, setTrendingTopics] = useState<string[]>([])
  const [loading, setLoading] = useState(false)

  // Predefined starter topics (a production app could fetch real trending queries)
  const baseTrendingTopics = useMemo(() => [
    'latest AI developments 2024',
    'climate change news today',
    'technology trends this year',
    'stock market updates',
    'space exploration recent',
    'cybersecurity threats 2024',
    'renewable energy progress',
    'quantum computing breakthroughs'
  ], [])

  const quickPrompts = [
    "What's happening in tech today?",
    "Latest AI safety research developments",
    "Current economic trends and analysis",
    "Breaking news in science and technology",
    "Recent startup funding and acquisitions",
    "Climate change latest developments"
  ]

  useEffect(() => {
    setTrendingTopics(baseTrendingTopics)
  }, [baseTrendingTopics])

  const handleRefreshTrending = async () => {
    setLoading(true)
    try {
      // You could implement real trending topic detection here
      // For now, we'll rotate through predefined topics
      const shuffled = [...baseTrendingTopics].sort(() => Math.random() - 0.5)
      setTrendingTopics(shuffled.slice(0, 6))
    } catch (error) {
      console.error('Failed to refresh trending topics:', error)
    } finally {
      setLoading(false)
    }
  }

  return (
    <div className={`space-y-6 ${className}`}>
      {/* Quick Prompts */}
      <div>
        <div className="flex items-center gap-2 mb-3">
          <Search className="w-4 h-4" />
          <h3 className="font-medium">Quick Questions</h3>
        </div>
        <div className="grid grid-cols-1 md:grid-cols-2 gap-2">
          {quickPrompts.map((prompt, index) => (
            <Button
              key={index}
              variant="outline"
              className="justify-start text-left h-auto p-3"
              onClick={() => onSuggestionClick(prompt)}
            >
              <span className="truncate">{prompt}</span>
            </Button>
          ))}
        </div>
      </div>

      {/* Trending Topics */}
      <div>
        <div className="flex items-center justify-between mb-3">
          <div className="flex items-center gap-2">
            <TrendingUp className="w-4 h-4" />
            <h3 className="font-medium">Trending Topics</h3>
          </div>
          <Button
            variant="ghost"
            size="sm"
            onClick={handleRefreshTrending}
            disabled={loading}
          >
            <Clock className="w-3 h-3 mr-1" />
            Refresh
          </Button>
        </div>
        <div className="flex flex-wrap gap-2">
          {trendingTopics.map((topic, index) => (
            <Badge
              key={index}
              variant="secondary"
              className="cursor-pointer hover:bg-primary hover:text-primary-foreground transition-colors"
              onClick={() => onSuggestionClick(topic)}
            >
              {topic}
            </Badge>
          ))}
        </div>
      </div>
    </div>
  )
}

Enhanced Chat with Suggestions

Create an enhanced chat page that includes suggestions (app/enhanced-chat/page.tsx):

'use client'

import React, { useState } from 'react'
import { AIChat } from '@/components/ai-chat'
import { SearchSuggestions } from '@/components/search-suggestions'
import { Card, CardContent } from '@/components/ui/card'
import { Tabs, TabsContent, TabsList, TabsTrigger } from '@/components/ui/tabs'

export default function EnhancedChatPage() {
  const [inputValue, setInputValue] = useState('')
  const [showSuggestions, setShowSuggestions] = useState(true)

  const handleSuggestionClick = (suggestion: string) => {
    setInputValue(suggestion)
    setShowSuggestions(false)
    // Trigger the chat input
    // This would need to be connected to the AIChat component
  }

  return (
    <div className="container mx-auto h-screen flex">
      {/* Main Chat Area */}
      <div className="flex-1 flex flex-col">
        <AIChat
          searchEnabled={true}
          placeholder="Ask me anything about current events, technology, or any topic..."
          className="h-full"
        />
      </div>

      {/* Sidebar with Suggestions */}
      {showSuggestions && (
        <div className="w-80 border-l p-4 overflow-y-auto">
          <Card>
            <CardContent className="p-4">
              <Tabs defaultValue="suggestions" className="w-full">
                <TabsList className="grid w-full grid-cols-2">
                  <TabsTrigger value="suggestions">Suggestions</TabsTrigger>
                  <TabsTrigger value="history">History</TabsTrigger>
                </TabsList>
                
                <TabsContent value="suggestions" className="mt-4">
                  <SearchSuggestions
                    onSuggestionClick={handleSuggestionClick}
                  />
                </TabsContent>
                
                <TabsContent value="history" className="mt-4">
                  <div className="text-center text-muted-foreground py-8">
                    <p>Chat history will appear here</p>
                  </div>
                </TabsContent>
              </Tabs>
            </CardContent>
          </Card>
        </div>
      )}
    </div>
  )
}

Production Deployment on Vercel

Optimized Vercel Configuration

Create vercel.json for deployment settings. The edge runtime is already declared per route via export const runtime = 'edge', so a per-function runtime/maxDuration block is unnecessary here:

{
  "framework": "nextjs",
  "buildCommand": "npm run build",
  "devCommand": "npm run dev",
  "installCommand": "npm install",
  "headers": [
    {
      "source": "/api/(.*)",
      "headers": [
        {
          "key": "Access-Control-Allow-Origin",
          "value": "*"
        },
        {
          "key": "Access-Control-Allow-Methods",
          "value": "GET, POST, PUT, DELETE, OPTIONS"
        },
        {
          "key": "Access-Control-Allow-Headers",
          "value": "Content-Type, Authorization"
        }
      ]
    }
  ]
}
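Each route handler opts into the edge runtime with a one-line export at the top of the file. A minimal excerpt (the path matches the chat route used throughout this guide):

```typescript
// app/api/chat/route.ts (excerpt)
// Opt this route into the Edge runtime; Vercel deploys it as an edge function.
export const runtime = 'edge'
```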

Environment Variables Setup

Set your environment variables in the Vercel dashboard under Project Settings > Environment Variables:

# Required
OPENAI_API_KEY=your_openai_api_key
ZAPSERP_API_KEY=your_zapserp_api_key

# Optional
VERCEL_URL=your_vercel_domain
NODE_ENV=production
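A small startup check catches missing keys before the first request fails with a cryptic error. This is a sketch; assertEnv and the path lib/env.ts are our own names, not part of any SDK:

```typescript
// lib/env.ts
// Fail fast at startup if a required key is missing (hypothetical helper).
const REQUIRED_KEYS = ['OPENAI_API_KEY', 'ZAPSERP_API_KEY'] as const

export function assertEnv(
  env: Record<string, string | undefined> = process.env
): void {
  const missing = REQUIRED_KEYS.filter((key) => !env[key])
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(', ')}`
    )
  }
}
```

Call assertEnv() once, for example at the top of each API route, so a misconfigured deployment fails loudly instead of returning empty responses.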

Middleware for Rate Limiting

Create middleware for production (middleware.ts):

import { NextRequest, NextResponse } from 'next/server'

// Simple in-memory rate limiting (use Redis in production)
const rateLimitMap = new Map<string, { count: number; timestamp: number }>()

export function middleware(request: NextRequest) {
  // Rate limiting for API routes
  if (request.nextUrl.pathname.startsWith('/api/')) {
    // x-forwarded-for may hold a comma-separated chain; take the first (client) hop
    const ip = request.ip ?? request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ?? 'anonymous'
    const now = Date.now()
    const windowMs = 60 * 1000 // 1 minute
    const limit = 20 // 20 requests per minute

    const clientData = rateLimitMap.get(ip)
    
    if (clientData) {
      if (now - clientData.timestamp < windowMs) {
        if (clientData.count >= limit) {
          return new NextResponse('Too Many Requests', { status: 429 })
        }
        clientData.count++
      } else {
        // Reset window
        clientData.count = 1
        clientData.timestamp = now
      }
    } else {
      rateLimitMap.set(ip, { count: 1, timestamp: now })
    }

    // Cleanup old entries periodically
    if (rateLimitMap.size > 1000) {
      for (const [key, data] of rateLimitMap.entries()) {
        if (now - data.timestamp > windowMs * 2) {
          rateLimitMap.delete(key)
        }
      }
    }
  }

  return NextResponse.next()
}

export const config = {
  matcher: [
    '/api/:path*'
  ]
}
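Because the counting logic lives inside the middleware, it is awkward to unit-test. One option is to extract the fixed-window check into a pure function; this is a sketch, and checkRateLimit and WindowState are our own names:

```typescript
// lib/rate-limit.ts (hypothetical helper, extracted from the middleware above)
export interface WindowState {
  count: number
  timestamp: number
}

// Pure fixed-window check: returns whether the request is allowed
// and the next state to store for this client.
export function checkRateLimit(
  state: WindowState | undefined,
  now: number,
  windowMs: number,
  limit: number
): { allowed: boolean; state: WindowState } {
  // No record yet, or the window has expired: start a fresh window.
  if (!state || now - state.timestamp >= windowMs) {
    return { allowed: true, state: { count: 1, timestamp: now } }
  }
  // Inside the window and over the limit: reject without changing state.
  if (state.count >= limit) {
    return { allowed: false, state }
  }
  // Inside the window and under the limit: count this request.
  return { allowed: true, state: { ...state, count: state.count + 1 } }
}
```

The middleware then becomes a thin wrapper that reads from the Map, calls checkRateLimit, and writes the returned state back; the same function can later be backed by Redis without changing the logic.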

Performance Optimization

Caching Strategy

Implement caching for better performance (lib/cache.ts):

interface CacheEntry<T> {
  data: T
  timestamp: number
  ttl: number
}

class MemoryCache<T> {
  private cache = new Map<string, CacheEntry<T>>()
  
  set(key: string, data: T, ttlSeconds: number = 300): void {
    this.cache.set(key, {
      data,
      timestamp: Date.now(),
      ttl: ttlSeconds * 1000
    })
  }
  
  get(key: string): T | null {
    const entry = this.cache.get(key)
    
    if (!entry) return null
    
    if (Date.now() - entry.timestamp > entry.ttl) {
      this.cache.delete(key)
      return null
    }
    
    return entry.data
  }
  
  clear(): void {
    this.cache.clear()
  }
}

// Export cache instances
export const searchCache = new MemoryCache<any>()
export const contentCache = new MemoryCache<any>()

// Cached wrapper around searchWithZapserp (the search helper defined earlier in this guide)
export async function cachedSearchWithZapserp(
  query: string,
  options: any = {}
): Promise<any> {
  const cacheKey = `search:${query}:${JSON.stringify(options)}`
  
  // Try cache first
  const cached = searchCache.get(cacheKey)
  if (cached) {
    console.log('Cache hit for search:', query)
    return cached
  }
  
  // Perform search
  const result = await searchWithZapserp(query, options)
  
  // Cache result
  searchCache.set(cacheKey, result, 300) // 5 minutes
  
  return result
}
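One subtlety with the cache key above: JSON.stringify is sensitive to property order, so { limit: 5, engines: [...] } and { engines: [...], limit: 5 } produce different keys for the same search. A small normalizing helper avoids the duplicate cache entries; buildCacheKey is our own name, shown as a sketch:

```typescript
// lib/cache-key.ts (hypothetical helper)
// Build an order-insensitive cache key by sorting option keys
// and normalizing the query string.
export function buildCacheKey(
  prefix: string,
  query: string,
  options: Record<string, unknown> = {}
): string {
  const sortedOptions = Object.keys(options)
    .sort()
    .map((key) => `${key}=${JSON.stringify(options[key])}`)
    .join('&')
  return `${prefix}:${query.trim().toLowerCase()}:${sortedOptions}`
}
```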

Key Benefits & Best Practices

Benefits of This Stack

  1. Streaming Responses: Real-time UI updates as AI generates responses
  2. Edge Runtime: Fast, globally distributed AI inference
  3. Real-Time Data: Current web information via Zapserp
  4. Type Safety: Full TypeScript support across the stack
  5. Zero Config Deployment: Deploy to Vercel with minimal setup
  6. Scalable Architecture: Built for production workloads

Best Practices

  1. Edge Runtime: Use edge runtime for faster response times
  2. Streaming: Implement streaming for better user experience
  3. Caching: Cache search results and responses appropriately
  4. Error Handling: Implement robust error handling and fallbacks
  5. Rate Limiting: Protect your APIs with rate limiting
  6. Source Quality: Filter sources for authority and relevance

Next Steps

Ready to build your Next.js AI app with Zapserp? Here's your roadmap:

  1. Start with Basic Chat: Implement the basic streaming chat interface
  2. Add Search Integration: Connect Zapserp for real-time data
  3. Enhance UI: Add sources display and search suggestions
  4. Optimize Performance: Implement caching and rate limiting
  5. Deploy to Vercel: Deploy with optimized edge configuration

Need help with your implementation? Contact our team for Next.js and Vercel deployment guidance.
