Overview

Diagnose and fix common job issues, including failures, timeouts, and performance problems.

Job Fails Immediately

Symptoms: Job transitions from Scheduled to Failed within seconds
Common causes:
  • Connection credentials invalid or expired
  • Source system unreachable
  • Network firewall blocking access
Resolution:
  1. Check connection authentication settings
  2. Update credentials if expired
  3. Test connectivity to the source system (see the connectivity sketch after this list)
  4. Review firewall rules
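
For step 3, a quick TCP probe can separate network problems from credential problems: if the host is reachable but the job still fails immediately, suspect authentication. Below is a minimal sketch in Python; the host, port, and timeout are placeholder values you would replace with your own source system's details.

```python
import socket

# Placeholder values -- substitute your source system's host and port.
SOURCE_HOST = "source.example.com"
SOURCE_PORT = 5432

def check_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"Cannot reach {host}:{port}: {exc}")
        return False

if check_reachable(SOURCE_HOST, SOURCE_PORT):
    print("Network path is open; if the job still fails, suspect credentials.")
else:
    print("Host unreachable; check DNS, firewall rules, and network routes.")
```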

Job Times Out

Symptoms: Job runs for an extended period, then fails with a timeout error
Common causes:
  • Data volume larger than expected
  • Source system query performance issues
  • Network latency or intermittent connectivity
Resolution:
  1. Check source system performance and query execution time
  2. Consider using filters to reduce data volume
  3. Switch to incremental load if using full load (the sketch after this list contrasts the two)
  4. Contact support if timeouts persist despite optimization
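
Steps 2 and 3 attack the same root cause: too much data per run. A full load re-reads the entire resource every time, while an incremental load fetches only rows changed since the last successful run. The sketch below contrasts the two in Python, using sqlite3 as a self-contained stand-in for a source system; the table and column names are illustrative.

```python
import sqlite3

# Stand-in source database; in practice this is your source system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2024-01-01"), (2, "2024-02-01"), (3, "2024-03-01")],
)

# Full load: every run scans the whole table.
full = conn.execute("SELECT * FROM orders").fetchall()

# Incremental load: only rows changed since the saved cursor value,
# which would be persisted from the previous successful run.
last_cursor = "2024-01-15"
incremental = conn.execute(
    "SELECT * FROM orders WHERE updated_at > ?", (last_cursor,)
).fetchall()

print(f"full load: {len(full)} rows, incremental: {len(incremental)} rows")
```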

Job Collects No Records

Symptoms: Job completes successfully but records = 0
Common causes:
  • Filters too restrictive, excluding all data
  • Incremental cursor field already up-to-date (no new records)
  • Resource is empty in the source system
  • Permissions limiting visible data
Resolution:
  1. Review and test filters
  2. For incremental loads, check the last collected timestamp (see the cursor check after this list)
  3. Verify resource has data in the source system
  4. Check connection user permissions
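
For step 2, comparing the job's saved cursor against the newest timestamp in the source tells you whether zero records is expected behavior. A minimal sketch, assuming you can read both values (the figures below are made up):

```python
from datetime import datetime

# Hypothetical values -- read these from your job's state and source system.
last_collected = datetime(2024, 6, 1, 12, 0)   # job's saved incremental cursor
source_max = datetime(2024, 6, 1, 9, 30)       # MAX(updated_at) in the source

if source_max <= last_collected:
    print("No rows newer than the cursor: zero records is expected.")
else:
    print("Newer rows exist but were not collected: check filters and permissions.")
```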

Job Collects Fewer Records Than Expected

Symptoms: Job completes but record count is lower than anticipated
Common causes:
  • Incremental load working correctly (only new records)
  • Filters excluding some records
  • Source data deleted or archived
  • Permissions limiting access to some records
Resolution:
  1. For incremental loads, verify the incremental field is working correctly
  2. Review filters and their impact (a reconciliation sketch follows this list)
  3. Check source system for data changes
  4. Verify connection user can see all expected records
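
One way to work through steps 2 through 4 together is to reconcile three counts: the raw source count, the count after applying the job's filter, and the count the job reported. The sketch below is illustrative; all three numbers are assumptions you would gather from your own source system and job summary.

```python
def explain_gap(source_total: int, source_after_filter: int, collected: int) -> str:
    """Attribute a record-count gap to filters vs. other causes.

    source_total: raw row count in the source resource
    source_after_filter: same count with the job's filter applied
    collected: record count reported by the job
    """
    filtered_out = source_total - source_after_filter
    unexplained = source_after_filter - collected
    lines = [f"{filtered_out} rows excluded by filters (expected)."]
    if unexplained > 0:
        lines.append(
            f"{unexplained} rows unaccounted for: check permissions, "
            "deletions/archiving, and the incremental cursor."
        )
    else:
        lines.append("No remaining gap: the job collected all eligible rows.")
    return "\n".join(lines)

# Made-up counts for illustration only.
print(explain_gap(source_total=10_000, source_after_filter=7_200, collected=7_050))
```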

Job Collects More Records Than Expected

Symptoms: Job completes but record count is higher than anticipated, possibly with duplicates
Common causes:
  • Unique keys not properly configured
  • Incremental cursor reset or not working
  • Source data has duplicates
  • Schema change affected uniqueness logic
Resolution:
  1. Verify unique keys are correct and cover all uniqueness dimensions (see the duplicate check after this list)
  2. Check incremental load field is updating correctly in source
  3. Query source system directly to confirm duplicates exist there
  4. Review recent schema changes
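
Step 1 deserves emphasis: if the configured unique key omits a dimension that legitimately varies, distinct rows look like duplicates, and true duplicates hide among them. The sketch below, with hypothetical field names and sample rows, shows how widening the key changes what counts as a duplicate.

```python
from collections import Counter

# Hypothetical sample of collected rows; in practice, export a sample
# from the destination or query the source directly.
rows = [
    {"order_id": 1, "region": "us", "amount": 10},
    {"order_id": 1, "region": "eu", "amount": 12},  # same order_id, different region
    {"order_id": 1, "region": "eu", "amount": 12},  # true duplicate
]

def find_duplicates(rows, key_fields):
    """Count rows per unique-key combination; any count > 1 is a duplicate."""
    counts = Counter(tuple(r[f] for f in key_fields) for r in rows)
    return {key: n for key, n in counts.items() if n > 1}

# If order_id alone is the configured unique key, legitimate rows collide:
print(find_duplicates(rows, ["order_id"]))            # {(1,): 3}
# Widening the key to all uniqueness dimensions isolates the true duplicate:
print(find_duplicates(rows, ["order_id", "region"]))  # {(1, 'eu'): 2}
```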

Inconsistent Job Performance

Symptoms: Job duration varies significantly between runs
Common causes:
  • Source system load varies (peak vs. off-peak hours)
  • Data volume fluctuates
  • Network conditions change
  • Entegrata system capacity varies
Resolution:
  1. Schedule collections during source system off-peak hours
  2. Monitor data volume trends to identify growth (the sketch after this list shows a simple way to track run statistics)
  3. Consider adjusting collection frequency for very large or very small datasets
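
To support step 2, even basic statistics over recent run durations make variance visible and help you correlate slow runs with source-system peak hours or volume spikes. A minimal sketch; the durations are made-up sample data you would pull from your job history.

```python
from statistics import mean, stdev

# Hypothetical durations (minutes) of recent runs, oldest first.
durations = [12, 14, 11, 35, 13, 12, 38, 14]

avg, spread = mean(durations), stdev(durations)
print(f"mean {avg:.1f} min, stdev {spread:.1f} min")

# Flag runs more than one standard deviation from the mean -- correlate
# these with source-system load and data-volume spikes at run time.
outliers = [d for d in durations if abs(d - avg) > spread]
print("outlier runs:", outliers)
```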