Set a 30-second client timeout to match PriceFetch's server timeout. Use async requests for batch operations. Retry PAGE_LOAD_FAILED with exponential backoff.
PriceFetch does live scraping — every request launches a headless browser, loads the product page, and extracts the price. This takes 2-8 seconds for a typical product page. Occasionally it takes longer:
- **Retailer is slow** — the product page itself takes a long time to load. Heavy pages with lots of JavaScript can take 5-10 seconds.
- **High traffic** — during peak hours, requests may queue briefly on PriceFetch's side. This is rare but can add 1-2 seconds.
- **Page requires extra rendering** — some retailers load prices via client-side JavaScript after the initial page load. PriceFetch waits for these elements, which adds time.
PriceFetch's server-side timeout is 30 seconds. If the page hasn't loaded by then, you get a PAGE_LOAD_FAILED error. Set your client-side timeout to at least 30 seconds to avoid cutting off a request that would have succeeded.
Configure your HTTP client with a 30-second timeout and handle timeouts gracefully. For batch operations, use async requests so one slow response doesn't block everything.
```python
# Python with requests — retry PAGE_LOAD_FAILED with exponential backoff
import time

import requests
from requests.exceptions import Timeout

def fetch_price_with_retry(url: str, api_key: str) -> dict | None:
    for attempt in range(3):
        try:
            resp = requests.get(
                "https://api.pricefetch.dev/v1/price",
                params={"url": url},
                headers={"X-API-Key": api_key},
                timeout=30,  # Match PriceFetch server timeout
            )
            data = resp.json()
            if data["success"]:
                return data["data"]
            if data["error"]["code"] == "PAGE_LOAD_FAILED":
                time.sleep(2 ** attempt)  # Back off: 1s, then 2s
                continue  # Retry — page might load on a later attempt
            return None  # Non-retryable error
        except Timeout:
            if attempt < 2:
                time.sleep(2 ** attempt)  # Back off before retrying
    return None  # All attempts failed
```

When checking many URLs, use async HTTP to run requests concurrently so one slow request doesn't block the rest. With a semaphore matching your rate limit, you process URLs as fast as possible while respecting limits.
The key principle: with capped concurrency, total batch time is roughly the number of requests divided by your concurrency level, times the average request time, not the sum of all requests. Checking 100 URLs sequentially at 5 seconds each takes over 8 minutes. Checking them 5 at a time takes about 2 minutes.
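A back-of-the-envelope check of those numbers (a rough model that treats every request as taking the average time; `batch_time` is an illustrative helper, not part of the PriceFetch API):

```python
import math

def batch_time(n_requests: int, avg_seconds: float, concurrency: int) -> float:
    """Rough batch duration: requests run in waves of `concurrency`."""
    return math.ceil(n_requests / concurrency) * avg_seconds

print(batch_time(100, 5, 1))  # 500 s, about 8 minutes, sequential
print(batch_time(100, 5, 5))  # 100 s, about 2 minutes, 5 at a time
```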
For Node.js, use `Promise.all` with a concurrency limiter like `p-limit`. For Python, use `asyncio.gather` with a `Semaphore`. Both achieve the same result — concurrent requests with controlled parallelism.
Still stuck?
Our support team can help debug your integration.