Overview
Diagnose and fix common job issues including failures, timeouts, and performance problems.

Job Fails Immediately
Symptoms: Job transitions from Scheduled to Failed within seconds.
Common causes:
- Connection credentials invalid or expired
- Source system unreachable
- Network firewall blocking access
Resolution steps:
- Check connection authentication settings
- Update credentials if expired
- Test connectivity to the source system (see the connectivity sketch after this list)
- Review firewall rules
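When a job fails within seconds, the fastest checks are network reachability and credential validity. The Python sketch below illustrates both checks; it is a minimal sketch only, and the host, port, endpoint path, and credentials are hypothetical placeholders rather than values specific to this product.

```python
# Minimal source-system connectivity and credential check.
# SOURCE_HOST, SOURCE_PORT, the /api/health path, and the credentials are
# hypothetical placeholders; substitute the real values for your source.
import socket

import requests  # third-party: pip install requests

SOURCE_HOST = "source.example.com"
SOURCE_PORT = 443

# 1. Network reachability: a failure here usually points to DNS, routing,
#    or firewall rules rather than credentials.
try:
    with socket.create_connection((SOURCE_HOST, SOURCE_PORT), timeout=5):
        print("TCP connection succeeded")
except OSError as exc:
    print(f"TCP connection failed: {exc}")

# 2. Credential validity: a 401 or 403 response here suggests expired or
#    invalid credentials rather than a network problem.
try:
    resp = requests.get(
        f"https://{SOURCE_HOST}/api/health",            # hypothetical endpoint
        auth=("service_account", "example-password"),   # replace with real credentials
        timeout=10,
    )
    print(f"HTTP status: {resp.status_code}")
except requests.RequestException as exc:
    print(f"HTTP request failed: {exc}")
```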
Job Times Out
Symptoms: Job runs for an extended period, then fails with a timeout error.
Common causes:
- Data volume larger than expected
- Source system query performance issues
- Network latency or intermittent connectivity
Resolution steps:
- Check source system performance and query execution time (see the timing sketch after this list)
- Consider using filters to reduce data volume
- Switch to incremental load if using full load
- Contact support if timeouts persist despite optimization
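If timeouts persist, it helps to measure how long a representative source query actually takes and how much a filter or incremental window would shrink the data volume. The sketch below assumes a SQL source; sqlite3 is used only as a stand-in driver, and the table, column, and date value are illustrative.

```python
# Rough timing and volume check for a representative source query.
# sqlite3 stands in for the real source driver; "matters" and "updated_at"
# are hypothetical names.
import sqlite3
import time

conn = sqlite3.connect("source_copy.db")  # placeholder for the real source connection

def timed_count(query: str) -> None:
    """Run a COUNT query and report the row count and elapsed time."""
    start = time.perf_counter()
    rows = conn.execute(query).fetchone()[0]
    elapsed = time.perf_counter() - start
    print(f"{rows} rows in {elapsed:.2f}s  <-  {query}")

# Full-load volume: if this is very large or slow, the job is a candidate
# for filtering or switching to incremental load.
timed_count("SELECT COUNT(*) FROM matters")

# Filtered / incremental volume: only rows changed since a cutoff date.
timed_count("SELECT COUNT(*) FROM matters WHERE updated_at >= '2024-01-01'")
```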
Job Collects No Records
Symptoms: Job completes successfully but records = 0.
Common causes:
- Filters too restrictive, excluding all data
- Incremental cursor field already up-to-date (no new records)
- Resource is empty in the source system
- Permissions limiting visible data
Resolution steps:
- Review and test filters
- For incremental loads, check the last collected timestamp (see the cursor check after this list)
- Verify resource has data in the source system
- Check connection user permissions
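A quick way to tell whether zero records is expected is to compare the total row count in the source against the rows newer than the job's last collected timestamp. The sketch below assumes a SQL source; sqlite3 stands in for the real driver, and the table, cursor column, and cursor value are hypothetical.

```python
# Check whether the source has rows newer than the incremental cursor.
# "invoices", "updated_at", and the cursor value are hypothetical examples.
import sqlite3

conn = sqlite3.connect("source_copy.db")
last_cursor = "2024-06-01T00:00:00"  # the job's last collected timestamp

total = conn.execute("SELECT COUNT(*) FROM invoices").fetchone()[0]
newer = conn.execute(
    "SELECT COUNT(*) FROM invoices WHERE updated_at > ?", (last_cursor,)
).fetchone()[0]

print(f"total rows: {total}, rows newer than cursor: {newer}")
# total == 0 -> the resource really is empty in the source
# newer == 0 -> the cursor is already up to date; zero records is expected
# newer > 0  -> rows exist but are being excluded, so review filters and
#               the connection user's permissions
```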
Job Collects Fewer Records Than Expected
Symptoms: Job completes but the record count is lower than anticipated.
Common causes:
- Incremental load working correctly (only new records)
- Filters excluding some records
- Source data deleted or archived
- Permissions limiting access to some records
Resolution steps:
- For incremental loads, verify the incremental field is working correctly
- Review filters and their impact (see the filter check after this list)
- Check source system for data changes
- Verify connection user can see all expected records
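To see where the missing records went, count how many source rows each filter condition removes and compare the results against the job's record count. The sketch below assumes a SQL source; sqlite3 stands in for the real driver, and the table, columns, and filter values are hypothetical.

```python
# Measure how many source rows each filter condition excludes.
# "timecards", "status", "work_date", and the filter values are hypothetical.
import sqlite3

conn = sqlite3.connect("source_copy.db")

checks = {
    "all rows":      "SELECT COUNT(*) FROM timecards",
    "status filter": "SELECT COUNT(*) FROM timecards WHERE status = 'posted'",
    "date filter":   "SELECT COUNT(*) FROM timecards WHERE work_date >= '2024-01-01'",
    "both filters":  ("SELECT COUNT(*) FROM timecards "
                      "WHERE status = 'posted' AND work_date >= '2024-01-01'"),
}

for label, query in checks.items():
    count = conn.execute(query).fetchone()[0]
    print(f"{label:>13}: {count}")

# Comparing these counts with the job's record count shows whether a filter,
# the incremental window, or source-side deletions explain the gap.
```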
Job Collects More Records Than Expected
Symptoms: Job completes but the record count is higher than anticipated, possibly including duplicates.
Common causes:
- Unique keys not properly configured
- Incremental cursor reset or not working
- Source data has duplicates
- Schema change affected uniqueness logic
Resolution steps:
- Verify unique keys are correct and cover all uniqueness dimensions
- Check that the incremental load field is updating correctly in the source
- Query the source system directly to confirm duplicates exist there (see the duplicate check after this list)
- Review recent schema changes
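To confirm whether the duplicates already exist in the source, group by the configured unique key columns and look for keys that appear more than once. The sketch below assumes a SQL source; sqlite3 stands in for the real driver, and the table and key columns are hypothetical.

```python
# Check whether the source contains duplicates on the configured unique keys.
# "matters", "client_id", and "matter_number" are hypothetical examples.
import sqlite3

conn = sqlite3.connect("source_copy.db")
key_columns = ["client_id", "matter_number"]  # the job's configured unique keys
keys = ", ".join(key_columns)

query = f"""
    SELECT {keys}, COUNT(*) AS dup_count
    FROM matters
    GROUP BY {keys}
    HAVING COUNT(*) > 1
    ORDER BY dup_count DESC
    LIMIT 20
"""
for row in conn.execute(query):
    print(row)

# Rows returned here mean the duplicates exist in the source itself, or the
# chosen key columns do not uniquely identify a record. An empty result
# points instead at the incremental cursor or a recent schema change.
```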
Inconsistent Job Performance
Symptoms: Job duration varies significantly between runs.
Common causes:
- Source system load varies (peak vs. off-peak hours)
- Data volume fluctuates
- Network conditions change
- Entegrata system capacity varies
Resolution steps:
- Schedule collections during source system off-peak hours
- Monitor data volume trends to identify growth (see the trend sketch after this list)
- Consider adjusting collection frequency for very large or very small datasets
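Trends in run duration and record volume usually reveal whether variation comes from source load, network conditions, or data growth. The sketch below assumes a job-history export as a CSV with started_at, finished_at, and records columns; the file name and column names are hypothetical.

```python
# Summarize duration and volume variation from a job-history export.
# "job_history.csv" and its column names are hypothetical.
import csv
from datetime import datetime
from statistics import mean, pstdev

durations, volumes = [], []
with open("job_history.csv", newline="") as fh:
    for run in csv.DictReader(fh):
        start = datetime.fromisoformat(run["started_at"])
        end = datetime.fromisoformat(run["finished_at"])
        durations.append((end - start).total_seconds())
        volumes.append(int(run["records"]))

print(f"duration: mean {mean(durations):.0f}s, stdev {pstdev(durations):.0f}s")
print(f"records:  mean {mean(volumes):.0f}, stdev {pstdev(volumes):.0f}")

# A high duration spread with a stable record count suggests source load or
# network variation; steadily rising record counts point at data growth and
# may justify filters, incremental load, or a different collection frequency.
```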
