Keyword Research Automation with SERP API: Complete Workflow 2025

Learn how to automate keyword research using SERP API. Complete workflow with Python code examples for finding, analyzing, and prioritizing keywords at scale.

Jennifer Martinez, Former Ahrefs Product Manager

Automating Keyword Research with SERP API: A Complete Workflow

During my time at Ahrefs, I saw thousands of SEO professionals spend hours manually researching keywords. In 2025, there’s no excuse for manual keyword research when you can automate 90% of the process with SERP APIs. Here’s the exact workflow we use.

Why Automate Keyword Research?

Manual keyword research is slow, inconsistent, and doesn’t scale. Here’s what automation gives you:

  • Speed: Analyze 1,000+ keywords in minutes instead of days
  • Consistency: Same methodology every time
  • Depth: Multi-engine coverage (Google + Bing)
  • Scale: Research for hundreds of topics simultaneously
  • Cost: 10x cheaper than traditional SEO tools

The Complete Workflow

We’ll build a keyword research system that:

  1. Generates seed keywords
  2. Expands with related searches
  3. Analyzes competition and difficulty
  4. Prioritizes opportunities
  5. Exports actionable data

Step 1: Seed Keyword Generation

Start with a topic and generate hundreds of seed keywords:

import requests
from typing import List, Dict

class KeywordResearchTool:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://serppost.com/api"
        self.headers = {"Authorization": f"Bearer {api_key}"}
    
    def get_related_searches(self, keyword: str, engine: str = "google") -> List[str]:
        """Get related searches from SERP results"""
        params = {
            "s": keyword,
            "t": engine,
            "p": 1
        }
        
        response = requests.get(
            f"{self.base_url}/search",
            headers=self.headers,
            params=params
        )
        
        response.raise_for_status()
        data = response.json()
        
        # Extract related searches, skipping entries without a query
        related = [
            r.get('query')
            for r in data.get('related_searches', [])
            if r.get('query')
        ]
        
        return related
    
    def generate_seed_keywords(self, base_keyword: str, depth: int = 2) -> List[str]:
        """Generate seed keywords by exploring related searches"""
        all_keywords = set([base_keyword])
        current_level = [base_keyword]
        
        for level in range(depth):
            next_level = []
            
            for kw in current_level:
                # Get related searches from both Google and Bing
                google_related = self.get_related_searches(kw, "google")
                bing_related = self.get_related_searches(kw, "bing")
                
                # Combine, deduplicate, and drop keywords we've already
                # seen so deeper levels don't re-query the same terms
                new_keywords = set(google_related + bing_related) - all_keywords
                
                all_keywords.update(new_keywords)
                next_level.extend(new_keywords)
            
            current_level = next_level
            print(f"Level {level + 1}: Found {len(next_level)} new keywords")
        
        return list(all_keywords)

# Usage
tool = KeywordResearchTool("your_api_key")
keywords = tool.generate_seed_keywords("serp api", depth=2)
print(f"Total keywords found: {len(keywords)}")

Example output:

Level 1: Found 8 new keywords
Level 2: Found 47 new keywords
Total keywords found: 56
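
One practical note before expanding further: the crawl fires two requests per keyword per level, which can trip provider rate limits on deeper runs. The exact limits and error codes here are assumptions (HTTP 429 is the conventional signal, but check your provider's docs), and a small retry helper in this spirit keeps a long crawl from dying halfway:

import time
import requests

def get_with_retry(url: str, headers: dict, params: dict,
                   max_retries: int = 3, backoff: float = 2.0) -> dict:
    """GET with exponential backoff when the API signals rate limiting"""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, params=params)
        if response.status_code == 429:
            # Back off 2s, 4s, 8s, ... before retrying
            time.sleep(backoff * (2 ** attempt))
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError(f"Still rate limited after {max_retries} retries")

Swapping this in for the raw requests.get calls in KeywordResearchTool is a one-line change per call site.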

Step 2: Keyword Expansion with Modifier Queries

Combine question-style modifiers with the People Also Ask and related-search data that Google and Bing return for more variations:

class KeywordExpander:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://serppost.com/api"
        self.headers = {"Authorization": f"Bearer {api_key}"}
    
    def get_autocomplete_suggestions(self, keyword: str, engine: str = "google") -> List[str]:
        """Collect suggestion-style keywords by running modifier queries
        and harvesting People Also Ask and related searches"""
        # Common question words and modifiers
        modifiers = [
            "", "how to", "what is", "best", "vs",
            "for", "in 2025", "tutorial", "guide", "tips"
        ]
        
        suggestions = set()
        
        for modifier in modifiers:
            query = f"{modifier} {keyword}".strip()
            
            params = {
                "s": query,
                "t": engine,
                "p": 1
            }
            
            response = requests.get(
                f"{self.base_url}/search",
                headers=self.headers,
                params=params
            )
            
            response.raise_for_status()
            data = response.json()
            
            # Extract People Also Ask questions
            for paa in data.get('people_also_ask', []):
                question = paa.get('question')
                if question:
                    suggestions.add(question)
            
            # Extract related searches
            for rs in data.get('related_searches', []):
                query = rs.get('query')
                if query:
                    suggestions.add(query)
        
        return list(suggestions)
    
    def expand_keywords(self, seed_keywords: List[str]) -> List[str]:
        """Expand seed keywords into comprehensive list"""
        all_keywords = set(seed_keywords)
        
        for seed in seed_keywords:
            print(f"Expanding: {seed}")
            
            # Get suggestions from both engines
            google_suggestions = self.get_autocomplete_suggestions(seed, "google")
            bing_suggestions = self.get_autocomplete_suggestions(seed, "bing")
            
            all_keywords.update(google_suggestions)
            all_keywords.update(bing_suggestions)
        
        return list(all_keywords)

# Usage
expander = KeywordExpander("your_api_key")
expanded = expander.expand_keywords(["serp api", "search api"])
print(f"Expanded to {len(expanded)} keywords")

Step 3: Analyze Competition and Difficulty

Determine which keywords are worth targeting:

class CompetitionAnalyzer:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://serppost.com/api"
        self.headers = {"Authorization": f"Bearer {api_key}"}
    
    def analyze_keyword(self, keyword: str) -> Dict:
        """Analyze keyword difficulty and opportunity"""
        # Get SERP results
        params = {
            "s": keyword,
            "t": "google",
            "p": 1,
            "num": 10
        }
        
        response = requests.get(
            f"{self.base_url}/search",
            headers=self.headers,
            params=params
        )
        
        response.raise_for_status()
        data = response.json()
        organic_results = data.get('organic_results', [])
        
        # Calculate metrics
        analysis = {
            'keyword': keyword,
            'total_results': data.get('search_information', {}).get('total_results', 0),
            'serp_features': self._count_serp_features(data),
            'domain_authority_avg': self._calculate_da_score(organic_results),
            'content_length_avg': self._estimate_content_length(organic_results),
            'difficulty_score': 0  # Will calculate below
        }
        
        # Calculate difficulty (0-100)
        difficulty = 0
        
        # Factor 1: SERP features (more features = harder)
        difficulty += min(analysis['serp_features'] * 10, 30)
        
        # Factor 2: Domain authority of competitors
        difficulty += min(analysis['domain_authority_avg'], 40)
        
        # Factor 3: Total results (more results = more competition)
        total_results = analysis['total_results']
        if total_results > 100000000:
            difficulty += 30
        elif total_results > 10000000:
            difficulty += 20
        elif total_results > 1000000:
            difficulty += 10
        
        analysis['difficulty_score'] = min(difficulty, 100)
        
        return analysis
    
    def _count_serp_features(self, data: Dict) -> int:
        """Count SERP features present"""
        features = 0
        if data.get('featured_snippet'):
            features += 1
        if data.get('knowledge_graph'):
            features += 1
        if data.get('people_also_ask'):
            features += 1
        if data.get('local_pack'):
            features += 1
        if data.get('shopping_results'):
            features += 1
        return features
    
    def _calculate_da_score(self, results: List[Dict]) -> int:
        """Estimate average domain authority (simplified)"""
        # In production, integrate with Moz/Ahrefs API
        # For now, use simple heuristics
        
        known_authority_domains = {
            'wikipedia.org': 95,
            'youtube.com': 90,
            'amazon.com': 90,
            'reddit.com': 85,
            'github.com': 85,
            'medium.com': 80
        }
        
        scores = []
        for result in results[:5]:  # Top 5 results
            url = result.get('link', '')
            
            # Check if it's a known authority domain
            for domain, score in known_authority_domains.items():
                if domain in url:
                    scores.append(score)
                    break
            else:
                # Default score based on HTTPS and domain length
                if url.startswith('https://'):
                    scores.append(50)
                else:
                    scores.append(30)
        
        return sum(scores) // len(scores) if scores else 50
    
    def _estimate_content_length(self, results: List[Dict]) -> int:
        """Estimate average content length from snippets"""
        snippet_lengths = [
            len(r.get('snippet', '')) 
            for r in results[:5]
        ]
        
        # Rough estimate: snippet is ~10% of full content
        avg_snippet = sum(snippet_lengths) // len(snippet_lengths) if snippet_lengths else 0
        return avg_snippet * 10
    
    def batch_analyze(self, keywords: List[str]) -> List[Dict]:
        """Analyze multiple keywords"""
        results = []
        
        for i, keyword in enumerate(keywords, 1):
            print(f"Analyzing {i}/{len(keywords)}: {keyword}")
            analysis = self.analyze_keyword(keyword)
            results.append(analysis)
        
        return results

# Usage
analyzer = CompetitionAnalyzer("your_api_key")
analyses = analyzer.batch_analyze(expanded[:10])  # Analyze first 10
for analysis in analyses:
    print(f"{analysis['keyword']}: Difficulty {analysis['difficulty_score']}/100")

Step 4: Keyword Prioritization

Score and rank keywords by opportunity:

class KeywordPrioritizer:
    def calculate_opportunity_score(self, analysis: Dict, domain_authority: int = 40) -> float:
        """Calculate keyword opportunity score"""
        difficulty = analysis['difficulty_score']
        total_results = analysis['total_results']
        serp_features = analysis['serp_features']
        
        # Opportunity formula
        # Higher score = better opportunity
        
        # Base score: Inverse of difficulty
        score = 100 - difficulty
        
        # Boost: Fewer competitors
        if total_results < 1000000:
            score += 20
        elif total_results < 10000000:
            score += 10
        
        # Penalty: Many SERP features
        score -= serp_features * 5
        
        # Your domain authority matters
        if domain_authority > analysis['domain_authority_avg']:
            score += 15  # You can compete
        else:
            score -= 10  # Tough competition
        
        return max(0, min(100, score))
    
    def prioritize_keywords(self, analyses: List[Dict], domain_authority: int = 40) -> List[Dict]:
        """Prioritize keywords by opportunity"""
        for analysis in analyses:
            analysis['opportunity_score'] = self.calculate_opportunity_score(
                analysis,
                domain_authority
            )
            
            # Assign category
            if analysis['opportunity_score'] >= 70:
                analysis['category'] = 'High Opportunity'
            elif analysis['opportunity_score'] >= 50:
                analysis['category'] = 'Medium Opportunity'
            else:
                analysis['category'] = 'Low Opportunity'
        
        # Sort by opportunity score
        sorted_keywords = sorted(
            analyses,
            key=lambda x: x['opportunity_score'],
            reverse=True
        )
        
        return sorted_keywords

# Usage
prioritizer = KeywordPrioritizer()
prioritized = prioritizer.prioritize_keywords(analyses, domain_authority=45)

# Display top opportunities
print("\nTop 10 Keyword Opportunities:")
for kw in prioritized[:10]:
    print(f"{kw['keyword']:<40} | "
          f"Difficulty: {kw['difficulty_score']:>3} | "
          f"Opportunity: {kw['opportunity_score']:>3} | "
          f"{kw['category']}")

Step 5: Export and Visualization

Create actionable reports:

import pandas as pd
import matplotlib.pyplot as plt

class KeywordReporter:
    def create_dataframe(self, keywords: List[Dict]) -> pd.DataFrame:
        """Convert keyword data to pandas DataFrame"""
        df = pd.DataFrame(keywords)
        
        # Select relevant columns
        columns = [
            'keyword',
            'difficulty_score',
            'opportunity_score',
            'total_results',
            'serp_features',
            'category'
        ]
        
        df = df[columns]
        
        # Rename for clarity
        df.columns = [
            'Keyword',
            'Difficulty',
            'Opportunity',
            'Search Results',
            'SERP Features',
            'Category'
        ]
        
        return df
    
    def export_to_csv(self, df: pd.DataFrame, filename: str = "keyword_research.csv"):
        """Export to CSV"""
        df.to_csv(filename, index=False)
        print(f"Exported {len(df)} keywords to {filename}")
    
    def create_visualization(self, df: pd.DataFrame):
        """Create opportunity vs difficulty chart"""
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
        
        # Scatter plot: Opportunity vs Difficulty
        colors = {
            'High Opportunity': 'green',
            'Medium Opportunity': 'orange',
            'Low Opportunity': 'red'
        }
        
        for category in df['Category'].unique():
            subset = df[df['Category'] == category]
            ax1.scatter(
                subset['Difficulty'],
                subset['Opportunity'],
                label=category,
                color=colors.get(category, 'blue'),
                alpha=0.6,
                s=100
            )
        
        ax1.set_xlabel('Keyword Difficulty')
        ax1.set_ylabel('Opportunity Score')
        ax1.set_title('Keyword Opportunity Map')
        ax1.legend()
        ax1.grid(True, alpha=0.3)
        
        # Bar chart: Top 10 opportunities
        top_10 = df.nlargest(10, 'Opportunity')
        ax2.barh(range(len(top_10)), top_10['Opportunity'], color='green', alpha=0.7)
        ax2.set_yticks(range(len(top_10)))
        ax2.set_yticklabels(top_10['Keyword'])
        ax2.set_xlabel('Opportunity Score')
        ax2.set_title('Top 10 Keyword Opportunities')
        ax2.invert_yaxis()
        
        plt.tight_layout()
        plt.savefig('keyword_analysis.png', dpi=300, bbox_inches='tight')
        print("Saved visualization to keyword_analysis.png")

# Usage
reporter = KeywordReporter()
df = reporter.create_dataframe(prioritized)
reporter.export_to_csv(df)
reporter.create_visualization(df)

Complete Automated Workflow

Put it all together:

def automated_keyword_research(base_keyword: str, api_key: str, domain_authority: int = 40):
    """Complete automated keyword research workflow"""
    print(f"Starting keyword research for: {base_keyword}\n")
    
    # Step 1: Generate seed keywords
    print("Step 1: Generating seed keywords...")
    tool = KeywordResearchTool(api_key)
    seeds = tool.generate_seed_keywords(base_keyword, depth=2)
    print(f"�?Generated {len(seeds)} seed keywords\n")
    
    # Step 2: Expand keywords
    print("Step 2: Expanding keywords...")
    expander = KeywordExpander(api_key)
    expanded = expander.expand_keywords(seeds[:20])  # Limit to first 20 seeds
    print(f"�?Expanded to {len(expanded)} total keywords\n")
    
    # Step 3: Analyze competition
    print("Step 3: Analyzing competition...")
    analyzer = CompetitionAnalyzer(api_key)
    analyses = analyzer.batch_analyze(expanded)
    print(f"�?Analyzed {len(analyses)} keywords\n")
    
    # Step 4: Prioritize opportunities
    print("Step 4: Prioritizing opportunities...")
    prioritizer = KeywordPrioritizer()
    prioritized = prioritizer.prioritize_keywords(analyses, domain_authority)
    print(f"�?Prioritized keywords\n")
    
    # Step 5: Export results
    print("Step 5: Exporting results...")
    reporter = KeywordReporter()
    df = reporter.create_dataframe(prioritized)
    reporter.export_to_csv(df)
    reporter.create_visualization(df)
    print("�?Exported results\n")
    
    # Summary
    high_opp = len([k for k in prioritized if k['category'] == 'High Opportunity'])
    print(f"\n{'='*60}")
    print(f"RESEARCH COMPLETE")
    print(f"{'='*60}")
    print(f"Total keywords analyzed: {len(prioritized)}")
    print(f"High opportunity keywords: {high_opp}")
    print(f"Average difficulty: {sum(k['difficulty_score'] for k in prioritized) / len(prioritized):.1f}")
    print(f"\nFiles created:")
    print(f"  - keyword_research.csv")
    print(f"  - keyword_analysis.png")
    
    return prioritized

# Run the complete workflow
results = automated_keyword_research(
    base_keyword="serp api",
    api_key="your_api_key",
    domain_authority=45
)
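
A small hygiene note: the snippets above hardcode the API key for readability. In a real script, read it from the environment so it never lands in version control (SERPPOST_API_KEY is an assumed variable name, not an official one):

import os

# Fails loudly if the variable isn't set: export SERPPOST_API_KEY=... first
api_key = os.environ["SERPPOST_API_KEY"]

results = automated_keyword_research(
    base_keyword="serp api",
    api_key=api_key,
    domain_authority=45
)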

Advanced: Content Gap Analysis

Find keywords your competitors rank for but you don’t:

from urllib.parse import urlparse

class ContentGapAnalyzer:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.analyzer = CompetitionAnalyzer(api_key)
    
    def find_competitor_keywords(self, competitor_url: str) -> List[str]:
        """Find keywords a competitor ranks for"""
        # In production, you'd use a backlink/SEO tool API
        # For this example, we approximate with a site: search and page titles
        
        keywords = []
        # Extract the domain from the URL
        domain = urlparse(competitor_url).netloc
        
        # Search for pages from that domain
        params = {
            "s": f"site:{domain}",
            "t": "google",
            "p": 1,
            "num": 100
        }
        
        response = requests.get(
            "https://serppost.com/api/search",
            headers={"Authorization": f"Bearer {self.api_key}"},
            params=params
        )
        
        response.raise_for_status()
        data = response.json()
        
        # Extract titles as potential keywords
        for result in data.get('organic_results', []):
            title = result.get('title', '')
            keywords.append(title.lower())
        
        return keywords
    
    def analyze_gap(self, your_keywords: List[str], competitor_keywords: List[str]) -> List[str]:
        """Find keyword gaps"""
        your_set = set(k.lower() for k in your_keywords)
        competitor_set = set(k.lower() for k in competitor_keywords)
        
        # Keywords competitor has but you don't
        gaps = competitor_set - your_set
        
        return list(gaps)

# Usage
gap_analyzer = ContentGapAnalyzer("your_api_key")
competitor_keywords = gap_analyzer.find_competitor_keywords("https://competitor.com")
gaps = gap_analyzer.analyze_gap(your_keywords=seeds, competitor_keywords=competitor_keywords)
print(f"Found {len(gaps)} keyword gap opportunities")

Best Practices

  1. Start broad, filter narrow: Generate many keywords, then filter ruthlessly
  2. Use both engines: Google and Bing provide different insights
  3. Consider user intent: Prioritize keywords matching your goals
  4. Update regularly: Run this monthly to stay current (see the scheduling sketch after this list)
  5. Validate manually: Review top opportunities before creating content
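
For point 4, the lightest way to run the workflow monthly is the system scheduler rather than anything in-process. A sketch, where the cron line and file names are placeholders rather than a documented setup:

# monthly_research.py -- schedule via cron, e.g.:
#   0 6 1 * * /usr/bin/python3 /path/to/monthly_research.py
from datetime import date

if __name__ == "__main__":
    results = automated_keyword_research(
        base_keyword="serp api",
        api_key="your_api_key",
        domain_authority=45
    )
    # Keep dated copies so month-over-month movement is easy to diff
    df = KeywordReporter().create_dataframe(results)
    df.to_csv(f"keyword_research_{date.today():%Y_%m}.csv", index=False)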

💡 Pro Tip: Focus on “High Opportunity” keywords first. These give you the best ROI for content creation efforts.

Conclusion

Automated keyword research with SERP API delivers:

  • 10x faster than manual research
  • 5x more comprehensive coverage
  • Consistent, repeatable methodology
  • Data-driven prioritization
  • Scalable to hundreds of topics

Stop spending days on keyword research. Automate it in hours and focus on creating great content.

Ready to automate your keyword research? Start your free trial with SERPpost and get 1,000 free API calls to test this workflow.

Get Started

  1. Sign up for free API access
  2. Review the API documentation
  3. Choose your pricing plan

About the Author: Jennifer Martinez was a Product Manager at Ahrefs for 5 years, where she led development of the keyword research tools. She has helped thousands of SEO professionals automate their workflows and scale their content strategies. She now consults on SEO automation and tool development.

Stop manual keyword research today. Try SERPpost free and automate your SEO workflow.

Tags:

#Keyword Research #SEO Automation #Use Case #Python #Workflow
