January 25, 2024
Developer Relations
4 min read
Best Practices

5 Common Zapserp Integration Mistakes and How to Avoid Them

Learn from common pitfalls when integrating Zapserp into your applications. Discover best practices, error handling patterns, and optimization techniques that will save you time and credits.

troubleshooting, best-practices, optimization, error-handling, integration

Integrating Zapserp into your application can dramatically enhance its capabilities, but a few pitfalls trip up developers again and again. Here are the five most common mistakes and how to avoid them.

1. Not Implementing Proper Error Handling

The Mistake: Many developers assume API calls will always succeed and don't implement comprehensive error handling.

// ❌ Bad: No error handling
const searchResults = await zapserp.search({ query: userInput })
return searchResults.results

The Solution: Always wrap Zapserp calls in try-catch blocks and handle different error scenarios:

// ✅ Good: Comprehensive error handling
async function safeSearch(query: string) {
  try {
    const searchResults = await zapserp.search({ 
      query,
      limit: 10,
      engines: ['google', 'bing']
    })
    
    if (!searchResults.results || searchResults.results.length === 0) {
      return { success: false, error: 'No results found', results: [] }
    }
    
    return { success: true, results: searchResults.results }
    
  } catch (error) {
    console.error('Search failed:', error)
    
    // In TypeScript, the catch variable is `unknown` — narrow it before
    // reading `.message`
    const message = error instanceof Error ? error.message : String(error)
    
    // Handle specific error types
    if (message.includes('rate limit')) {
      return { success: false, error: 'Rate limit exceeded. Please try again later.' }
    }
    
    if (message.includes('network')) {
      return { success: false, error: 'Network error. Please check your connection.' }
    }
    
    return { success: false, error: 'Search temporarily unavailable. Please try again.' }
  }
}

Why it matters: Proper error handling prevents your application from crashing and provides users with meaningful feedback when issues occur.
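Transient failures (network blips, momentary rate limits) often succeed on a second attempt, so it can also pay to retry before surfacing an error to the user. Here is a minimal sketch of a retry wrapper with exponential backoff — `withRetry` is a hypothetical helper, not part of the Zapserp SDK, and the error-message checks are assumptions about what your errors look like:

```typescript
// Sketch: wrap any async call with retries and exponential backoff.
// Assumes errors are retryable unless the message suggests otherwise.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      const message = error instanceof Error ? error.message : String(error)
      // Don't retry errors that will never succeed (assumed message shapes)
      if (message.includes('invalid api key') || message.includes('quota')) break
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
    }
  }
  throw lastError
}
```

You would then call `withRetry(() => safeSearch(query))` anywhere a transient failure is likely.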

2. Making Too Many Individual Requests

The Mistake: Processing multiple URLs with individual reader calls instead of using batch operations.

// ❌ Bad: Individual requests
async function extractMultipleUrls(urls: string[]) {
  const results = []
  for (const url of urls) {
    const content = await zapserp.reader({ url })
    results.push(content)
  }
  return results
}

The Solution: Use batch operations for better performance and credit efficiency:

// ✅ Good: Batch processing
async function extractMultipleUrls(urls: string[]) {
  // Process in batches of 10 for optimal performance
  const batchSize = 10
  const allResults = []
  
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize)
    
    try {
      const batchResponse = await zapserp.readerBatch({ urls: batch })
      allResults.push(...batchResponse.results)
      
      // Add small delay between batches to be respectful
      if (i + batchSize < urls.length) {
        await new Promise(resolve => setTimeout(resolve, 100))
      }
    } catch (error) {
      console.error(`Batch ${i / batchSize + 1} failed:`, error)
      // Continue with next batch instead of failing entirely
    }
  }
  
  return allResults.filter(result => result && result.content)
}

Why it matters: Batch processing is more efficient, uses fewer credits, and reduces the chance of hitting rate limits.
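One refinement worth considering: instead of silently dropping URLs whose extraction failed, track them so callers can retry or report them. A sketch, where `extractBatch` is a hypothetical stand-in for the batch reader call:

```typescript
// Sketch: record which URLs failed so they can be retried or reported,
// rather than disappearing silently when a batch partially fails.
async function extractWithFailureTracking(
  urls: string[],
  extractBatch: (batch: string[]) => Promise<({ url: string; content: string } | null)[]>,
  batchSize = 10
) {
  const succeeded: { url: string; content: string }[] = []
  const failed: string[] = []

  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize)
    try {
      const results = await extractBatch(batch)
      results.forEach((result, j) => {
        // A null or empty result means this particular URL failed
        if (result && result.content) succeeded.push(result)
        else failed.push(batch[j])
      })
    } catch {
      failed.push(...batch) // the whole batch failed; mark every URL
    }
  }
  return { succeeded, failed }
}
```

The `failed` list then feeds naturally into a retry pass or an error report.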

3. Not Caching Results

The Mistake: Making the same API calls repeatedly without caching results.

// ❌ Bad: No caching
async function getNewsAbout(topic: string) {
  // This runs every time, even for the same topic
  const results = await zapserp.search({
    query: `${topic} news latest`,
    limit: 5
  })
  return results
}

The Solution: Implement smart caching to avoid redundant API calls:

// ✅ Good: Smart caching
class SearchCache {
  private cache = new Map<string, { data: any, timestamp: number }>()
  private ttl = 5 * 60 * 1000 // 5 minutes
  
  async getCachedSearch(query: string, searchFn: () => Promise<any>) {
    const cacheKey = this.createCacheKey(query)
    const cached = this.cache.get(cacheKey)
    
    // Return cached result if still valid
    if (cached && Date.now() - cached.timestamp < this.ttl) {
      console.log('Cache hit for:', query)
      return cached.data
    }
    
    // Fetch fresh data
    const freshData = await searchFn()
    
    // Cache the result
    this.cache.set(cacheKey, {
      data: freshData,
      timestamp: Date.now()
    })
    
    return freshData
  }
  
  private createCacheKey(query: string): string {
    return `search_${query.toLowerCase().replace(/\s+/g, '_')}`
  }
}

const searchCache = new SearchCache()

async function getNewsAbout(topic: string) {
  return searchCache.getCachedSearch(
    `${topic} news latest`,
    () => zapserp.search({
      query: `${topic} news latest`,
      limit: 5
    })
  )
}

Why it matters: Caching reduces API costs, improves response times, and provides a better user experience.
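One caveat: the in-memory Map above grows without bound in a long-running process. A size cap with oldest-first eviction is a cheap fix — this sketch relies on the fact that JavaScript Maps iterate in insertion order, and `BoundedCache` is a hypothetical helper, not part of the Zapserp SDK:

```typescript
// Sketch: a size-bounded TTL cache. When full, it evicts the oldest
// entry (Maps iterate in insertion order, so the first key is oldest).
class BoundedCache<T> {
  private cache = new Map<string, { data: T; timestamp: number }>()

  constructor(
    private maxEntries = 500,
    private ttlMs = 5 * 60 * 1000
  ) {}

  get(key: string): T | undefined {
    const entry = this.cache.get(key)
    if (!entry) return undefined
    if (Date.now() - entry.timestamp > this.ttlMs) {
      this.cache.delete(key) // expired — drop it eagerly
      return undefined
    }
    return entry.data
  }

  set(key: string, data: T) {
    if (this.cache.size >= this.maxEntries) {
      const oldestKey = this.cache.keys().next().value
      if (oldestKey !== undefined) this.cache.delete(oldestKey)
    }
    this.cache.set(key, { data, timestamp: Date.now() })
  }
}
```

For multi-instance deployments you would swap this for a shared store such as Redis, but the TTL-plus-cap idea carries over.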

4. Ignoring Content Quality Filtering

The Mistake: Using all search results without filtering for quality and relevance.

// ❌ Bad: Using all results indiscriminately
const searchResults = await zapserp.search({ query: 'AI trends' })
const urls = searchResults.results.map(r => r.url) // Could include low-quality sources

The Solution: Filter results for quality domains and relevant content:

// ✅ Good: Quality filtering
function filterQualityResults(results: any[]) {
  const qualityDomains = [
    'wikipedia.org', 'stackoverflow.com', 'github.com', 'medium.com',
    'techcrunch.com', 'wired.com', 'arstechnica.com', 'reuters.com',
    'bbc.com', 'nytimes.com', 'wsj.com', 'bloomberg.com'
  ]
  
  const excludeDomains = [
    'pinterest.com', 'youtube.com', 'facebook.com', 'twitter.com'
  ]
  
  return results.filter(result => {
    const url = result.url.toLowerCase()
    
    // Include known quality domains
    if (qualityDomains.some(domain => url.includes(domain))) {
      return true
    }
    
    // Exclude known low-quality domains
    if (excludeDomains.some(domain => url.includes(domain))) {
      return false
    }
    
    // Additional quality checks
    return result.title && 
           result.title.length > 10 && 
           result.snippet && 
           result.snippet.length > 30
  })
}

async function getQualityResults(query: string) {
  const searchResults = await zapserp.search({ query, limit: 20 })
  const qualityResults = filterQualityResults(searchResults.results)
  
  return qualityResults.slice(0, 10) // Top 10 quality results
}

Why it matters: Quality filtering ensures you get reliable, relevant content while avoiding spam and low-value sources.
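A related filtering step is diversity: even quality results can cluster on one site, so capping results per domain spreads your sources. A small sketch — `dedupeByDomain` is a hypothetical helper built on the standard `URL` parser:

```typescript
// Sketch: keep at most `perDomain` results per hostname so one site
// can't dominate the result set. Unparseable URLs are dropped.
function dedupeByDomain(results: { url: string }[], perDomain = 2) {
  const counts = new Map<string, number>()

  return results.filter(result => {
    let host: string
    try {
      host = new URL(result.url).hostname
    } catch {
      return false // not a valid absolute URL
    }
    const seen = counts.get(host) ?? 0
    if (seen >= perDomain) return false
    counts.set(host, seen + 1)
    return true
  })
}
```

Running this after `filterQualityResults` gives you a result set that is both reliable and varied.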

5. Not Optimizing Query Construction

The Mistake: Using raw user input directly as search queries without optimization.

// ❌ Bad: Raw user input
const userQuery = "tell me about AI"
const results = await zapserp.search({ query: userQuery })

The Solution: Enhance and optimize queries for better results:

// ✅ Good: Query optimization
class QueryOptimizer {
  static enhanceQuery(userInput: string, context?: string): string {
    let optimizedQuery = userInput.trim()
    
    // Add the current year for queries that ask for recent information
    if (this.needsTimeContext(optimizedQuery)) {
      optimizedQuery += ` ${new Date().getFullYear()}`
    }
    
    // Add context if available
    if (context) {
      optimizedQuery = `${context} ${optimizedQuery}`
    }
    
    // Enhance with synonyms for better coverage
    optimizedQuery = this.addSynonyms(optimizedQuery)
    
    return optimizedQuery
  }
  
  private static needsTimeContext(query: string): boolean {
    const timeKeywords = ['latest', 'recent', 'current', 'new', 'trends']
    return timeKeywords.some(keyword => 
      query.toLowerCase().includes(keyword)
    )
  }
  
  private static addSynonyms(query: string): string {
    const synonymMap: Record<string, string> = {
      'AI': 'artificial intelligence',
      'ML': 'machine learning',
      'crypto': 'cryptocurrency bitcoin'
    }
    
    let enhanced = query
    Object.entries(synonymMap).forEach(([term, synonyms]) => {
      // Match whole words only, so 'AI' doesn't match inside 'rain'
      const wordPattern = new RegExp(`\\b${term}\\b`, 'i')
      if (wordPattern.test(enhanced)) {
        enhanced += ` ${synonyms}`
      }
    })
    
    return enhanced
  }
}

// Usage
async function smartSearch(userInput: string, context?: string) {
  const optimizedQuery = QueryOptimizer.enhanceQuery(userInput, context)
  
  const results = await zapserp.search({
    query: optimizedQuery,
    engines: ['google', 'bing'],
    limit: 10
  })
  
  return results
}

Why it matters: Optimized queries return more relevant, comprehensive results and improve the overall search experience.
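Before any enhancement, it also pays to normalize the raw input itself: collapse whitespace and cap the length so pasted or malformed text doesn't produce odd queries. A minimal sketch (the 200-character cap is an assumption, not a Zapserp limit):

```typescript
// Sketch: normalize raw user input before query enhancement.
// Collapses runs of whitespace, trims the edges, and caps the length.
function sanitizeQuery(input: string, maxLength = 200): string {
  return input
    .replace(/\s+/g, ' ') // tabs, newlines, double spaces -> single space
    .trim()
    .slice(0, maxLength)
}
```

Calling `QueryOptimizer.enhanceQuery(sanitizeQuery(userInput))` keeps the optimizer working on clean input.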

Quick Checklist for Success

Before deploying your Zapserp integration, ensure you:

  • ✅ Implement comprehensive error handling
  • ✅ Use batch operations for multiple requests
  • ✅ Add intelligent caching with appropriate TTL
  • ✅ Filter results for quality and relevance
  • ✅ Optimize queries for better search results
  • ✅ Monitor API usage and performance
  • ✅ Set up rate limiting and retry logic
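On the last checklist item, a client-side rate limiter keeps you from hitting server-side limits in the first place. Here is a minimal token-bucket sketch — `RateLimiter` is a hypothetical helper, and the bucket sizes are placeholders you would tune to your plan's actual limits:

```typescript
// Sketch: a token-bucket rate limiter. `acquire()` resolves immediately
// when a token is available, otherwise polls until the bucket refills.
class RateLimiter {
  private tokens: number
  private lastRefill = Date.now()

  constructor(
    private maxTokens = 10,     // burst capacity
    private refillPerSec = 2    // sustained requests per second
  ) {
    this.tokens = maxTokens
  }

  async acquire(): Promise<void> {
    for (;;) {
      const now = Date.now()
      // Refill proportionally to elapsed time, capped at the bucket size
      this.tokens = Math.min(
        this.maxTokens,
        this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
      )
      this.lastRefill = now
      if (this.tokens >= 1) {
        this.tokens -= 1
        return
      }
      await new Promise(resolve => setTimeout(resolve, 50))
    }
  }
}
```

Call `await limiter.acquire()` before each Zapserp request to smooth bursts into a steady rate.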

Conclusion

Avoiding these common mistakes will make your Zapserp integration more reliable, efficient, and cost-effective. Remember that good integration practices not only improve performance but also provide a better experience for your users.

Need help troubleshooting your integration? Contact our support team for personalized assistance.
