January 12, 2024
Technical Team
8 min read
Performance · Featured

Optimizing API Performance: Best Practices for Zapserp

Learn how to maximize your Zapserp API performance with proven optimization techniques, efficient request patterns, and smart caching strategies.

optimization · performance · best-practices · caching · efficiency

When building applications that rely on web search and content extraction, performance is crucial. Whether you're processing hundreds or thousands of requests daily, optimizing your API usage can significantly impact your application's speed, cost, and user experience.

In this comprehensive guide, we'll explore proven techniques to maximize your Zapserp API performance.

Understanding API Performance Metrics

Before diving into optimization techniques, it's important to understand the key metrics that affect your API performance:

Request Latency

The time between sending a request and receiving a complete response. This includes:

  • Network round-trip time
  • Processing time on our servers
  • Content extraction complexity

Throughput

The number of requests your application can process per second while maintaining acceptable response times.

Credit Efficiency

How effectively you use your allocated credits to achieve your goals.
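All three metrics are easy to track in application code. As a rough, SDK-agnostic sketch (the `timed` and `measureThroughput` helpers below are illustrative, not part of the Zapserp SDK), you can wrap any async call to record latency and derive throughput:

```typescript
// Illustrative latency/throughput measurement helpers (not part of the Zapserp SDK).
// Wrap any async API call to record how long it takes.
const timed = async <T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> => {
  const start = Date.now()
  const result = await fn()
  return { result, ms: Date.now() - start }
}

// Run a set of calls concurrently and derive average latency and requests/sec.
const measureThroughput = async (calls: Array<() => Promise<unknown>>) => {
  const start = Date.now()
  const timings = await Promise.all(calls.map(call => timed(call)))
  const elapsedSec = Math.max((Date.now() - start) / 1000, 0.001)
  return {
    avgLatencyMs: timings.reduce((sum, t) => sum + t.ms, 0) / timings.length,
    requestsPerSec: calls.length / elapsedSec
  }
}
```

For example, `measureThroughput(urls.map(url => () => zapserp.reader({ url })))` gives a quick read on both metrics for a batch of reader calls.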

1. Implement Smart Request Batching

One of the most effective ways to improve performance is through intelligent batching of your requests.

Batch Reader Requests

Instead of making individual reader requests, group multiple URLs together:

import { Zapserp, ReaderBatchResponse } from 'zapserp'

const zapserp = new Zapserp({
  apiKey: 'YOUR_API_KEY'
})

// ❌ Inefficient: Multiple individual requests
const inefficientApproach = async (urls: string[]) => {
  const results = []
  
  for (const url of urls) {
    const result = await zapserp.reader({ url })
    results.push(result)
  }
  
  return results
}

// ✅ Efficient: Single batch request
const efficientApproach = async (urls: string[]) => {
  const batchResponse: ReaderBatchResponse = await zapserp.readerBatch({
    urls: urls
  })
  
  return batchResponse.results
}

// Example usage
const urlsToProcess = [
  'https://example.com/article1',
  'https://example.com/article2',
  'https://example.com/article3',
  'https://example.com/article4',
  'https://example.com/article5'
]

const results = await efficientApproach(urlsToProcess)
console.log(`Processed ${results.length} pages in a single request`)

Optimal Batch Sizes

Finding the right batch size is crucial for performance:

const optimizedBatchProcessor = async (urls: string[], batchSize: number = 10) => {
  const batches = []
  
  // Split URLs into optimal batch sizes
  for (let i = 0; i < urls.length; i += batchSize) {
    batches.push(urls.slice(i, i + batchSize))
  }
  
  // Process batches concurrently (but limit concurrency)
  const concurrentLimit = 3
  const results = []
  
  for (let i = 0; i < batches.length; i += concurrentLimit) {
    const currentBatches = batches.slice(i, i + concurrentLimit)
    
    const batchPromises = currentBatches.map(batch => 
      zapserp.readerBatch({ urls: batch })
    )
    
    const batchResults = await Promise.all(batchPromises)
    
    // Flatten results
    batchResults.forEach(batchResponse => {
      results.push(...batchResponse.results)
    })
    
    // Add small delay between batch groups to be respectful
    if (i + concurrentLimit < batches.length) {
      await new Promise(resolve => setTimeout(resolve, 100))
    }
  }
  
  return results
}

// Process 100 URLs efficiently
const largeUrlSet = Array.from({length: 100}, (_, i) => 
  `https://example.com/page${i + 1}`
)

const results = await optimizedBatchProcessor(largeUrlSet)
console.log(`Successfully processed ${results.length} pages`)

2. Implement Intelligent Caching

Caching is essential for reducing API calls and improving response times.

Memory Caching with TTL

import { SearchResponse, Page } from 'zapserp'

interface CacheEntry<T> {
  data: T
  timestamp: number
  ttl: number
}

class APICache<T> {
  private cache = new Map<string, CacheEntry<T>>()
  
  set(key: string, data: T, ttlSeconds: number = 3600): void {
    this.cache.set(key, {
      data,
      timestamp: Date.now(),
      ttl: ttlSeconds * 1000
    })
  }
  
  get(key: string): T | null {
    const entry = this.cache.get(key)
    
    if (!entry) return null
    
    // Check if expired
    if (Date.now() - entry.timestamp > entry.ttl) {
      this.cache.delete(key)
      return null
    }
    
    return entry.data
  }
  
  clear(): void {
    this.cache.clear()
  }
  
  size(): number {
    return this.cache.size
  }
}

// Create cache instances for different data types
const searchCache = new APICache<SearchResponse>()
const contentCache = new APICache<Page>()

// Cached search function
const cachedSearch = async (query: string, options: any = {}) => {
  const cacheKey = `search:${query}:${JSON.stringify(options)}`
  
  // Try cache first
  let cached = searchCache.get(cacheKey)
  if (cached) {
    console.log('Cache hit for search query:', query)
    return cached
  }
  
  // Make API call
  const result = await zapserp.search({ query, ...options })
  
  // Cache result (cache search results for 1 hour)
  searchCache.set(cacheKey, result, 3600)
  
  return result
}

// Cached content extraction
const cachedReader = async (url: string) => {
  const cacheKey = `content:${url}`
  
  let cached = contentCache.get(cacheKey)
  if (cached) {
    console.log('Cache hit for URL:', url)
    return cached
  }
  
  const result = await zapserp.reader({ url })
  
  // Cache content for 24 hours (content changes less frequently)
  contentCache.set(cacheKey, result, 86400)
  
  return result
}

Database Caching for Persistent Storage

// Example with a simple database cache
interface DatabaseCache {
  get(key: string): Promise<any | null>
  set(key: string, data: any, ttlSeconds: number): Promise<void>
}

class PersistentAPICache {
  constructor(private db: DatabaseCache) {}
  
  async cachedSearch(query: string, options: any = {}) {
    const cacheKey = `zapserp:search:${Buffer.from(query + JSON.stringify(options)).toString('base64')}`
    
    // Try database cache
    const cached = await this.db.get(cacheKey)
    if (cached) {
      return JSON.parse(cached)
    }
    
    // Make API call
    const result = await zapserp.search({ query, ...options })
    
    // Store in database cache
    await this.db.set(cacheKey, JSON.stringify(result), 3600)
    
    return result
  }
}
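Any store that satisfies the `DatabaseCache` interface will work here. As a minimal sketch for local development and testing (the interface is repeated so the block is self-contained; in production you would typically back this with Redis or a SQL table):

```typescript
// Illustrative in-memory stand-in for the DatabaseCache interface.
// A production version would be backed by Redis, DynamoDB, or a SQL table.
interface DatabaseCache {
  get(key: string): Promise<any | null>
  set(key: string, data: any, ttlSeconds: number): Promise<void>
}

class InMemoryDatabaseCache implements DatabaseCache {
  private store = new Map<string, { data: any; expiresAt: number }>()

  async get(key: string): Promise<any | null> {
    const entry = this.store.get(key)
    if (!entry) return null
    // Expire entries lazily on read
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key)
      return null
    }
    return entry.data
  }

  async set(key: string, data: any, ttlSeconds: number): Promise<void> {
    this.store.set(key, { data, expiresAt: Date.now() + ttlSeconds * 1000 })
  }
}
```

With this in place, `new PersistentAPICache(new InMemoryDatabaseCache())` behaves exactly like the class above, which also makes the caching logic easy to unit test without a live database.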

3. Optimize Request Parameters

Fine-tuning your request parameters can significantly impact performance.

Smart Search Limits

// ❌ Requesting more results than needed
const inefficientSearch = async () => {
  const result = await zapserp.search({
    query: 'machine learning trends',
    limit: 50  // But you only need top 10
  })
  
  return result.results.slice(0, 10)
}

// ✅ Request only what you need
const efficientSearch = async () => {
  const result = await zapserp.search({
    query: 'machine learning trends',
    limit: 10  // Exactly what you need
  })
  
  return result.results
}

Engine Selection Strategy

import { SearchEngine } from 'zapserp'

// Different strategies for different use cases
const searchStrategies = {
  // Fast results for real-time applications
  realTime: {
    engines: [SearchEngine.GOOGLE],  // Single engine for speed
    limit: 5
  },
  
  // Comprehensive results for research
  comprehensive: {
    engines: [SearchEngine.GOOGLE, SearchEngine.BING],
    limit: 20
  },
  
  // Diverse results for market analysis
  diverse: {
    engines: [SearchEngine.GOOGLE, SearchEngine.BING, SearchEngine.DUCKDUCKGO],
    limit: 15
  }
}

const adaptiveSearch = async (query: string, useCase: 'realTime' | 'comprehensive' | 'diverse') => {
  const strategy = searchStrategies[useCase]
  
  const result = await zapserp.search({
    query,
    ...strategy
  })
  
  console.log(`Used ${useCase} strategy: ${strategy.engines.length} engines, ${strategy.limit} results`)
  return result
}

4. Error Handling and Retry Logic

Robust error handling improves both reliability and performance.

class ResilientAPIClient {
  private zapserp: Zapserp
  private maxRetries: number = 3
  private baseDelay: number = 1000
  
  constructor(apiKey: string) {
    this.zapserp = new Zapserp({ apiKey })
  }
  
  async searchWithRetry(params: any, retryCount: number = 0): Promise<any> {
    try {
      return await this.zapserp.search(params)
    } catch (error: any) {
      // Check if error is retryable
      if (this.isRetryableError(error) && retryCount < this.maxRetries) {
        const delay = this.calculateDelay(retryCount)
        
        console.log(`Request failed, retrying in ${delay}ms... (attempt ${retryCount + 1}/${this.maxRetries})`)
        
        await this.sleep(delay)
        return this.searchWithRetry(params, retryCount + 1)
      }
      
      throw error
    }
  }
  
  private isRetryableError(error: any): boolean {
    // Retry on network errors, timeouts, and certain HTTP status codes
    const retryableStatusCodes = [429, 500, 502, 503, 504]
    
    if (error.code === 'NETWORK_ERROR' || error.code === 'TIMEOUT') {
      return true
    }
    
    if (error.status && retryableStatusCodes.includes(error.status)) {
      return true
    }
    
    return false
  }
  
  private calculateDelay(retryCount: number): number {
    // Exponential backoff with jitter
    const delay = this.baseDelay * Math.pow(2, retryCount)
    const jitter = Math.random() * 1000
    return delay + jitter
  }
  
  private sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms))
  }
}

// Usage
const resilientClient = new ResilientAPIClient('YOUR_API_KEY')

try {
  const results = await resilientClient.searchWithRetry({
    query: 'artificial intelligence news',
    limit: 10
  })
  
  console.log('Search completed successfully:', results)
} catch (error) {
  console.error('Search failed after all retries:', error)
}

5. Monitor and Measure Performance

Implementing performance monitoring helps you identify bottlenecks and optimize over time.

class PerformanceMonitor {
  private metrics: Array<{
    operation: string
    duration: number
    timestamp: number
    success: boolean
    creditsUsed?: number
  }> = []
  
  async measureAPICall<T>(
    operation: string,
    apiCall: () => Promise<T>
  ): Promise<T> {
    const startTime = Date.now()
    let success = false
    let result: T | undefined
    
    try {
      result = await apiCall()
      success = true
      return result
    } finally {
      const duration = Date.now() - startTime
      
      this.metrics.push({
        operation,
        duration,
        timestamp: startTime,
        success,
        creditsUsed: this.extractCreditsUsed(result)
      })
      
      // Log slow operations
      if (duration > 5000) {
        console.warn(`Slow API operation detected: ${operation} took ${duration}ms`)
      }
    }
  }
  
  private extractCreditsUsed(result: any): number | undefined {
    // Extract credits from response if available
    if (result && typeof result === 'object') {
      return result.creditUsed || result.creditsUsed
    }
    return undefined
  }
  
  getPerformanceReport(timeframe: number = 3600000): any {
    const cutoff = Date.now() - timeframe
    const recentMetrics = this.metrics.filter(m => m.timestamp > cutoff)
    
    if (recentMetrics.length === 0) {
      return { message: 'No metrics available for the specified timeframe' }
    }
    
    const successfulCalls = recentMetrics.filter(m => m.success)
    const failedCalls = recentMetrics.filter(m => !m.success)
    
    const avgDuration = successfulCalls.length > 0
      ? successfulCalls.reduce((sum, m) => sum + m.duration, 0) / successfulCalls.length
      : 0
    const totalCredits = recentMetrics.reduce((sum, m) => sum + (m.creditsUsed || 0), 0)
    
    return {
      totalCalls: recentMetrics.length,
      successfulCalls: successfulCalls.length,
      failedCalls: failedCalls.length,
      successRate: (successfulCalls.length / recentMetrics.length) * 100,
      averageDuration: Math.round(avgDuration),
      totalCreditsUsed: totalCredits,
      slowestCall: Math.max(...recentMetrics.map(m => m.duration)),
      fastestCall: successfulCalls.length > 0
        ? Math.min(...successfulCalls.map(m => m.duration))
        : 0
    }
  }
}

// Usage
const monitor = new PerformanceMonitor()

// Monitor search operations
const monitoredSearch = async (query: string) => {
  return monitor.measureAPICall(
    'search',
    () => zapserp.search({ query, limit: 10 })
  )
}

// Monitor content extraction
const monitoredReader = async (url: string) => {
  return monitor.measureAPICall(
    'reader',
    () => zapserp.reader({ url })
  )
}

// Get performance insights
setInterval(() => {
  const report = monitor.getPerformanceReport()
  console.log('Performance Report:', report)
}, 300000) // Every 5 minutes

Key Takeaways

  1. Batch your requests whenever possible to reduce API calls and improve throughput
  2. Implement intelligent caching with appropriate TTL values for different data types
  3. Optimize request parameters by requesting only what you need
  4. Use robust error handling with exponential backoff for retries
  5. Monitor performance to identify bottlenecks and optimization opportunities
  6. Choose the right engine strategy based on your use case requirements

By implementing these optimization techniques, you can significantly improve your application's performance while making the most efficient use of your Zapserp credits.

Next Steps

  • Set up performance monitoring in your application
  • Implement caching for your most common queries
  • Experiment with different batch sizes to find your optimal configuration
  • Review your current API usage patterns and identify optimization opportunities

Have questions about optimizing your specific use case? Contact our technical team for personalized performance recommendations.
