Easily Spider Websites Using Node.js And Powerful REST APIs

What is scrapestack, and What Makes it Powerful for Web Scraping?

The scrapestack API was built to offer a simple REST API interface for scraping web pages at scale without having to programmatically deal with geolocations, IP blocks, or CAPTCHAs. The API supports a series of features essential to web scraping, such as JavaScript rendering, custom HTTP headers, various geo-targets, POST/PUT requests, and an option to use premium residential proxies instead of datacenter proxies. A great API to pair with it is an IP geolocation API.

Here are the reasons why scrapestack has 2,000+ satisfied customers:

  • Millions of Proxies & IPs: scrapestack provides an extensive pool of 35+ million datacenter and residential IP addresses across dozens of global ISPs, supporting real devices, smart retries, and IP rotation.
  • 100+ Global Locations: Choose from 100+ supported global locations to send your web scraping API requests from, or simply use random geo-targets covering a series of major cities worldwide.
  • Rock-Solid Infrastructure: Scrape the web at scale at an unparalleled speed and enjoy advanced features like concurrent API requests, CAPTCHA solving, browser support, and JS rendering.
  • Free & Premium Options: If you are here to test the API without any commitments, scrapestack provides the Free Plan. If you ever need more advanced access, premium pricing plans start at $19.99 per month.

The scrapestack API is a product built and maintained by apilayer, an Austrian technology company aiming to build a variety of reliable programming interfaces (APIs) and make them affordable for developers and startups. Browse all available products here.

scrapestack is powered by one of the most powerful web scraping engines on the market, offering the #1 solution for all your scraping requirements in one place. This article outlines in detail the available API endpoints and options, with tutorials for Node.js and other platforms (Postman and RAD Studio REST Debugger).

What Web Scraping Endpoints are Available via the API?

In total, the scrapestack API offers six endpoints, each with different powerful functionality. Here are their short summaries:

  1. Basic Request: Scrape any website using a simple GET request.
  2. JavaScript Rendering: The scrapestack API is capable of accessing the target web page using a headless browser (Google Chrome) and allows JavaScript page elements to render before delivering the final scraping result.
  3. HTTP Headers: The scrapestack API will accept HTTP headers and pass them through to the target web page and the final API response if the keep_headers HTTP GET parameter is set to 1. By default, this parameter is set to 0 (off).
  4. Proxy Locations: Across both standard and premium proxies, the scrapestack API supports more than 100 global geolocations your scraping request can be sent from. This feature is supported by a pool of 35+ million IP addresses worldwide.
  5. Premium Proxies: For customers subscribed to the Professional Plan or higher, the scrapestack API allows access to premium (residential) proxies, which are associated with real residential addresses and therefore much less likely to get blocked while scraping data on the web.
  6. HTTP POST/PUT Requests: The scrapestack API also offers a way of scraping forms or API endpoints directly by supporting API requests via HTTP POST/PUT (see the sketch after this list).
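
To make JavaScript rendering (item 2) and POST pass-through (item 6) more concrete, here is a minimal sketch using Node.js 18+ and its built-in fetch API. It assumes the body of the POST request is forwarded to the target URL; https://example.com/search and its query field are purely hypothetical, and the access_key, url, and render_js parameters are described later in this article.

// Hedged sketch: forward an HTTP POST through the scrapestack API (Node.js 18+).
// Assumptions: the POST body is passed through to the target URL, and
// https://example.com/search is a hypothetical form endpoint.
const params = new URLSearchParams({
  access_key: 'YOUR_ACCESS_KEY',       // your scrapestack API access key
  url: 'https://example.com/search',   // hypothetical target form or API endpoint
  render_js: '1'                       // also render JavaScript before returning the result
});

fetch('http://api.scrapestack.com/scrape?' + params.toString(), {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({ query: 'web scraping' }).toString()
})
  .then((response) => response.text())   // the scraped result comes back as text/HTML
  .then((html) => console.log(html))
  .catch((error) => console.error('Request failed:', error));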

How can I Access the scrapestack API?

First, get your API credentials here and set up your subscription plan.

And you can monitor your usage via this dashboard.

API Access Key & Authentication

Next, to check if everything is working properly, simply run this URL in your favorite web browser:
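
(The request below follows the standard scrapestack format; substitute your own API access key for YOUR_ACCESS_KEY.)

http://api.scrapestack.com/scrape?access_key=YOUR_ACCESS_KEY&url=https://apilayer.com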

You will then see the API response, the raw HTML of the scraped page, directly inside your browser.

Here is another example, for “blog.apilayer.com”:
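
http://api.scrapestack.com/scrape?access_key=YOUR_ACCESS_KEY&url=https://blog.apilayer.com

(The same request format as above, with only the url parameter changed.)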

Again, the raw HTML of the target page will be returned directly inside your browser.

Here are all available request parameters:

Object | Description
access_key | [Required] Specify your unique API access key to authenticate with the API. Your API access key can be found in your account dashboard.
url | [Required] Specify the URL of the web page you would like to scrape.
render_js | [Optional] Set to 0 (off, default) or 1 (on) depending on whether or not to render JavaScript on the target web page. JavaScript rendering is done using a Google Chrome headless browser.
keep_headers | [Optional] Set to 0 (off, default) or 1 (on) depending on whether or not to send currently active HTTP headers to the target URL with your API request and have the API return these headers along with your API response.
proxy_location | [Optional] Specify the 2-letter code of the country you would like to use as a proxy geolocation for your scraping API request. Supported countries differ by proxy type; please refer to the Proxy Locations section for details.
premium_proxy | [Optional] Set to 0 (off, default) or 1 (on) depending on whether or not to enable premium residential proxies for your scraping request. Please note that a single premium proxy API request is counted as 25 API requests.
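
Putting several of these parameters together, a request that renders JavaScript, keeps your custom HTTP headers, and routes through a United States proxy would look roughly like this (illustrative values; substitute your own access key and target URL):

http://api.scrapestack.com/scrape?access_key=YOUR_ACCESS_KEY&url=https://apilayer.com&render_js=1&keep_headers=1&proxy_location=us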

And the following are the common API errors:

Code | Type | Info
404 | 404_not_found | User requested a resource that does not exist.
101 | missing_access_key | User did not supply an access key.
101 | invalid_access_key | User supplied an invalid access key.
102 | inactive_user | User account is inactive or blocked.
103 | invalid_api_function | User requested a non-existent API function.
104 | usage_limit_reached | User has reached their subscription’s monthly request allowance.
105 | function_access_restricted | The user’s current subscription plan does not support this API function.
105 | https_access_restricted | The user’s current subscription plan does not support HTTPS.
210 | missing_url | User has not specified a valid URL to scrape.
211 | invalid_url | User has specified an invalid value in the url parameter.
212 | invalid_proxy_location | User has specified an invalid or unsupported proxy location.
213 | scrape_request_failed | The current scraping request failed due to a technical issue. If this error occurs, please report it to technical customer support.
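
When a request fails, the API returns a JSON error object instead of the scraped page. Assuming scrapestack follows the common apilayer error format (an assumption worth verifying against your own responses), a failed request looks roughly like this:

{
  "success": false,
  "error": {
    "code": 101,
    "type": "missing_access_key",
    "info": "User did not supply an access key."
  }
}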

How can I Scrape any Website with Node.js and scrapestack?

Use the following code to make an API request using Node.js; you can change the “https://apilayer.com” value to the URL of any website you want to scrape:
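
The snippet below is a minimal sketch using Node.js 18 or later and its built-in fetch API; any other HTTP client (axios, node-fetch, and so on) will work the same way. Replace YOUR_ACCESS_KEY with the key from your scrapestack dashboard:

// Minimal sketch: scrape a page through the scrapestack API (Node.js 18+).
// Replace YOUR_ACCESS_KEY with the key from your scrapestack dashboard.
const params = new URLSearchParams({
  access_key: 'YOUR_ACCESS_KEY',
  url: 'https://apilayer.com'   // change this to any URL you want to scrape
});

fetch('http://api.scrapestack.com/scrape?' + params.toString())
  .then((response) => response.text())
  .then((html) => {
    // The API returns the raw HTML of the scraped page.
    console.log(html);
  })
  .catch((error) => console.error('Scraping request failed:', error));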

Running the script prints the API response, the raw HTML of the scraped page, to the console of your Node.js IDE (here, I’m using Atom for this example).

If I change the URL to “https://blog.apilayer.com”, the response contains that page’s HTML instead.

Bonus: API Request using other Platforms

How can I Make an API Request using RAD Studio REST Debugger?

First, download Delphi REST Debugger here.

Choose the GET method, and send the request to the following URL:
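
(This is the same scrape request used earlier; substitute your own access key for YOUR_ACCESS_KEY.)

http://api.scrapestack.com/scrape?access_key=YOUR_ACCESS_KEY&url=https://apilayer.com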

For another example, simply change the url parameter to “blog.apilayer.com” and send the request again.

Once you have the API responses you need and want to build desktop apps on top of them using Delphi, please refer to this article to get started:

https://blogs.embarcadero.com/using-apilayer-rest-apis-from-delphi/

How can I Make an API Request using Postman?

If this is your first time using Postman, you can get the installer here, and follow the installation and introductory guide here.

Choose the GET method, and send the request to the following URL:
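
(Paste the same scrape request into Postman’s URL field, substituting your own access key for YOUR_ACCESS_KEY.)

http://api.scrapestack.com/scrape?access_key=YOUR_ACCESS_KEY&url=https://apilayer.com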

For another example, change the url parameter to “blog.apilayer.com” and send the request again.

Are you Ready to Perform Large-Scale Web Scraping for Your Projects or Company?

As you can see, there are many powerful web scraping endpoints provided by the scrapestack REST API, and they can be used from any platform and any programming language you work with. In this article, we showed you a basic demo of how you can use the scrapestack REST API to scrape any website you want.

Take advantage of scrapestack’s free tier, and we strongly recommend upgrading your subscription plan if you need more powerful features (JavaScript rendering, concurrent requests, 100+ geolocations, and many more). We can’t wait to see what you build with our REST API!

Head over and sign up for free to start building automated web scrapers today!
