January 15, 2024
Zapserp Team
8 min read

Getting Started with Zapserp API: Your First Search in Minutes

Learn how to integrate Zapserp's powerful search and content extraction APIs into your application with this comprehensive guide.


Building modern applications often requires accessing and processing web content at scale. Whether you're creating a research tool, content aggregator, or business intelligence platform, Zapserp's APIs provide the foundation you need to search across multiple engines and extract clean content from any webpage.

Why Zapserp?

Traditional web scraping and search solutions come with significant challenges:

  • Fragmented tools that require managing multiple services
  • Unreliable scrapers that break when websites change
  • Complex setup with expensive enterprise solutions
  • Poor performance and inconsistent results

Zapserp solves these problems by providing a unified API that combines multi-engine search, content extraction, and data analysis in one powerful platform.

What You'll Build

In this tutorial, we'll create a simple content research application that:

  1. Searches across multiple search engines
  2. Extracts content from the top results
  3. Analyzes the content for insights
  4. Presents the data in a clean interface

Prerequisites

Before we start, make sure you have:

  • Node.js 16+ installed
  • A Zapserp API key (sign up for free at zapserp.com)
  • Basic knowledge of JavaScript/TypeScript

Installation

First, install the Zapserp SDK:

```bash
npm install zapserp
# or
yarn add zapserp
```
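Hardcoded API keys are easy to leak into version control, so in practice you'll likely read the key from an environment variable instead. The variable name `ZAPSERP_API_KEY` below is just a convention we're assuming here, not something the SDK requires:

```typescript
// Read the API key from the environment rather than hardcoding it.
// ZAPSERP_API_KEY is an illustrative name -- use whatever fits your setup.
function getApiKey(): string {
  const key = process.env.ZAPSERP_API_KEY
  if (!key) {
    throw new Error('Missing ZAPSERP_API_KEY environment variable')
  }
  return key
}
```

You can then pass `getApiKey()` to the client constructor in place of the literal string used in the examples below.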

Your First Search

Let's start with a simple search across multiple engines:

```typescript
import {
  Zapserp,
  SearchEngine,
  SafeSearchLevel,
  TimeRange,
  SearchResponse,
  ReaderBatchResponse,
  Page,
  PageMetadata
} from 'zapserp'

const zapserp = new Zapserp({
  apiKey: 'your-api-key-here'
})

async function searchWeb() {
  try {
    const response: SearchResponse = await zapserp.search({
      query: 'artificial intelligence trends 2024',
      engines: [SearchEngine.GOOGLE, SearchEngine.BING, SearchEngine.DUCKDUCKGO],
      limit: 10,
      language: 'en',
      country: 'us'
    })

    console.log(`Found ${response.results.length} results`)
    response.results.forEach((result, index) => {
      console.log(`${index + 1}. ${result.title}`)
      console.log(`   ${result.url}`)
      console.log(`   Engine: ${result.engine}`)
      console.log()
    })
  } catch (error) {
    console.error('Search failed:', error)
  }
}

searchWeb()
```

Adding Content Extraction

Now let's enhance our application by extracting content from the search results:

```typescript
async function searchAndExtract() {
  try {
    // Step 1: Search for content
    const searchResponse: SearchResponse = await zapserp.search({
      query: 'machine learning best practices',
      engines: [SearchEngine.GOOGLE, SearchEngine.BING],
      limit: 5,
      language: 'en'
    })

    // Step 2: Extract content from top results
    const urls = searchResponse.results.slice(0, 3).map(result => result.url)
    const contentResponse: ReaderBatchResponse = await zapserp.readerBatch({ urls })

    contentResponse.results.forEach((page, index) => {
      const searchResult = searchResponse.results[index]
      console.log(`\n--- Result ${index + 1} ---`)
      console.log(`Title: ${page.title}`)
      console.log(`URL: ${page.url}`)
      console.log(`Search Rank: ${searchResult.rank}`)
      console.log(`Content Length: ${page.contentLength} chars`)
      console.log(`Preview: ${page.content.substring(0, 200)}...`)

      // Access metadata if available
      if (page.metadata) {
        console.log(`Author: ${page.metadata.author || 'Unknown'}`)
        console.log(`Published: ${page.metadata.publishedTime || 'Unknown'}`)
        console.log(`Description: ${page.metadata.description || 'No description'}`)
        console.log(`Keywords: ${page.metadata.keywords || 'No keywords'}`)
      }
    })
  } catch (error) {
    console.error('Search and extract failed:', error)
  }
}
```
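Step 3 of the app we sketched at the start is analyzing the content for insights. As a minimal first pass, here is a pure helper (no Zapserp calls; the function name is our own) that counts word frequencies across the extracted page bodies to surface recurring terms:

```typescript
// Count word frequencies across extracted page bodies and return the
// most common terms. Words shorter than 4 letters are skipped as noise.
function topKeywords(texts: string[], limit = 10): Array<[string, number]> {
  const counts = new Map<string, number>()
  for (const text of texts) {
    for (const word of text.toLowerCase().match(/[a-z]{4,}/g) ?? []) {
      counts.set(word, (counts.get(word) ?? 0) + 1)
    }
  }
  // Sort descending by count and keep the top entries
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
}

// e.g. topKeywords(contentResponse.results.map(page => page.content), 5)
```

A real analysis step would go further (stop-word removal, stemming, entity extraction), but the shape stays the same: map the extracted `content` fields through whatever analysis you need.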

Advanced Filtering

Zapserp supports advanced filtering options for precise results:

```typescript
async function advancedSearch() {
  const response: SearchResponse = await zapserp.search({
    query: 'sustainable energy solutions',
    engines: [SearchEngine.GOOGLE, SearchEngine.BING, SearchEngine.DUCKDUCKGO],
    limit: 20,
    language: 'en',
    country: 'us',
    safeSearch: SafeSearchLevel.MODERATE,
    timeRange: TimeRange.YEAR // Results from the last year
  })

  // Filter results by domain
  const filteredResults = response.results.filter(result =>
    !result.url.includes('wikipedia.org') &&
    !result.url.includes('reddit.com')
  )

  return filteredResults
}
```
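When you query several engines at once, the same page often comes back more than once. A small, Zapserp-agnostic helper can de-duplicate by normalized URL before any further processing; it assumes only that each result carries a `url` string:

```typescript
// Drop results whose URL (case- and trailing-slash-insensitive) has
// already been seen, keeping the first occurrence.
function dedupeByUrl<T extends { url: string }>(results: T[]): T[] {
  const seen = new Set<string>()
  return results.filter(result => {
    const key = result.url.toLowerCase().replace(/\/+$/, '')
    if (seen.has(key)) return false
    seen.add(key)
    return true
  })
}
```

Because search engines rank independently, keeping the first occurrence preserves the best rank each page achieved across engines.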

Error Handling and Best Practices

Always implement proper error handling and rate limiting:

```typescript
import { Zapserp, SearchEngine, SearchResponse } from 'zapserp'

class ContentResearcher {
  private zapserp: Zapserp
  private requestCount = 0
  private maxRequestsPerMinute = 60

  constructor(apiKey: string) {
    this.zapserp = new Zapserp({ apiKey })
    this.resetRateLimit()
  }

  async search(query: string, options = {}) {
    // Simple rate limiting
    if (this.requestCount >= this.maxRequestsPerMinute) {
      throw new Error('Rate limit exceeded')
    }

    try {
      this.requestCount++
      const response: SearchResponse = await this.zapserp.search({
        query,
        engines: [SearchEngine.GOOGLE, SearchEngine.BING],
        limit: 10,
        ...options
      })

      return {
        success: true,
        data: response,
        timestamp: new Date().toISOString()
      }
    } catch (error) {
      console.error('Search error:', error)
      return {
        success: false,
        error: error instanceof Error ? error.message : String(error),
        timestamp: new Date().toISOString()
      }
    }
  }

  // Reset the rate-limit counter every minute (started from the constructor)
  private resetRateLimit() {
    setInterval(() => {
      this.requestCount = 0
    }, 60000)
  }
}
```
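Rate limiting guards against overuse on your side; transient network errors are the other common failure mode. A generic retry wrapper with exponential backoff (nothing here is Zapserp-specific, and the delay values are arbitrary defaults) pairs well with the class above:

```typescript
// Retry an async operation with exponential backoff.
// Delays grow as baseDelayMs * 2^attempt (500ms, 1s, 2s by default).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      // Wait before the next attempt; the delay doubles each time
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
    }
  }
  throw lastError
}

// e.g. const result = await withRetry(() => researcher.search('edge computing'))
```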

Next Steps

Congratulations! You've successfully integrated Zapserp into your application. Here are some ideas for extending your implementation:

  1. Build a Content Dashboard: Create a web interface to visualize search results and extracted content
  2. Add Data Storage: Store results in a database for historical analysis
  3. Implement Caching: Cache frequently searched queries to improve performance
  4. Create Automated Workflows: Set up scheduled searches for monitoring trends
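The caching idea above can start as small as an in-memory map with a time-to-live. Everything in this sketch is illustrative (the class and method names are our own); for production you would likely swap in Redis or similar:

```typescript
// Minimal in-memory cache with a time-to-live, keyed on the query string.
class SearchCache<T> {
  private store = new Map<string, { value: T; expires: number }>()

  constructor(private ttlMs = 5 * 60 * 1000) {}

  get(query: string): T | undefined {
    const entry = this.store.get(query)
    if (!entry || entry.expires < Date.now()) {
      this.store.delete(query) // evict stale entries lazily
      return undefined
    }
    return entry.value
  }

  set(query: string, value: T): void {
    this.store.set(query, { value, expires: Date.now() + this.ttlMs })
  }
}

// Usage: check cache.get(query) before calling zapserp.search,
// then cache.set(query, response) on a miss.
```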

Conclusion

Zapserp's APIs make it incredibly easy to add powerful search and content extraction capabilities to your applications. With just a few lines of code, you can access multiple search engines, extract clean content, and analyze web data at scale.

Ready to build something amazing? Get your free API key and start exploring the possibilities!
