Show HN: LlamaPReview – AI code reviewer trusted by 2000 repos, 40%+ effective

Hi HN! A month ago, I shared LlamaPReview in Show HN[1]. Since then, we've grown to 2000+ repos (60%+ public) with 16k+ combined stars. More importantly, we've made significant improvements in both efficiency and review quality.

Key improvements over the past month:

1. ReAct-based Review Pipeline. We implemented a ReAct (Reasoning + Acting) pattern that mimics how a senior developer reviews code. Here's a simplified version:

  ```python
  def react_based_review(pr_context) -> Review:
    # Step 1: Initial assessment - understand the changes
    initial_analysis = initial_assessment(pr_context)
    # Step 2: Deep technical analysis, building on the first pass
    detailed_analysis = deep_analysis(pr_context, initial_analysis)
    # Step 3: Final synthesis into a single review
    return synthesize_review(pr_context, initial_analysis, detailed_analysis)
  ```
2. Two-stage format alignment pipeline

  ```python
  def review_pipeline(pr_context) -> Review:
    # Stage 1: Deep analysis with a large LLM
    review = react_based_review(pr_context)
    # Stage 2: Format standardization with a small LLM
    return format_standardize(review)
  ```
This two-stage approach (large LLM for analysis + small LLM for format standardization) ensures both high-quality insights and consistent output format.
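Conceptually, the second stage just wraps the raw review in a strict formatting prompt and hands it to a cheaper model. A minimal sketch of what that could look like; the `complete` callable stands in for any small-LLM client and is an illustrative assumption, not our actual API:

```python
FORMAT_PROMPT = """Reformat the following code review into sections:
## Summary
## Issues
## Suggestions
Keep every finding; change only the structure.

Review:
{review}"""

def format_standardize(raw_review: str, complete) -> str:
    # `complete` is any text-completion callable backed by a small, cheap model
    return complete(FORMAT_PROMPT.format(review=raw_review))

# Example with a stub standing in for a real model call:
formatted = format_standardize("Found an off-by-one in loop.", lambda p: p.upper())
```

Because the analysis stage never has to worry about output structure, its prompt stays focused on finding real issues.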

3. Intelligent Skip Analysis. We now automatically identify PRs that don't need a deep review (docs, dependencies, formatting), reducing token consumption by 40%. Implementation:

  ```python
  from typing import Callable, Dict, Tuple

  def intelligent_skip_analysis(pr_changes) -> Tuple[bool, str]:
    skip_conditions: Dict[str, Callable] = {
      'docs_only': check_documentation_changes,
      'dependency_updates': check_dependency_files,
      'formatting': check_formatting_only,
      'configuration': check_config_files
    }

    # Skip the expensive deep review if any cheap check matches
    for condition_name, checker in skip_conditions.items():
      if checker(pr_changes):
        return True, f"Optimizing review: {condition_name}"

    return False, "Proceeding with full review"
  ```
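For illustration, a docs-only check can be as simple as a suffix and path test over the changed files. This is a sketch under the assumption that the checker receives a list of changed file paths; the real checkers are more involved:

```python
DOC_SUFFIXES = ('.md', '.rst', '.txt')
DOC_DIRS = ('docs/', 'doc/')

def check_documentation_changes(changed_paths) -> bool:
    # True only if there is at least one change and every changed
    # file is documentation (by extension or by directory)
    return bool(changed_paths) and all(
        p.endswith(DOC_SUFFIXES) or p.startswith(DOC_DIRS)
        for p in changed_paths
    )
```

The checks are ordered cheapest-first, so most skippable PRs never touch an LLM at all.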
Key metrics since launch:

  - 2000+ repos using LlamaPReview  
  - 60% public, 40% private repositories  
  - 40% reduction in token consumption  
  - 30% faster PR processing  
  - 25% higher user satisfaction
Privacy & Security:

  Many asked about code privacy in the last thread. Here's how we handle it:  
  - All PR review processing happens in-memory  
  - No permanent storage of repository code  
  - Immediate cleanup after PR review  
  - No training on user code
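The "in-memory only" guarantee boils down to a context-manager pattern: the diff exists only for the duration of the review call and is cleared on exit. An illustrative sketch, not our production code (`fetch_diff` is a hypothetical placeholder for the GitHub fetch):

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_pr_context(fetch_diff, pr_id):
    # Fetch the diff into memory; nothing is written to disk
    context = {'diff': fetch_diff(pr_id)}
    try:
        yield context
    finally:
        # Immediate cleanup once the review is done
        context.clear()

# Usage: the diff is gone as soon as the block exits
with ephemeral_pr_context(lambda pr: "diff --git a/x b/x", 42) as ctx:
    review = f"reviewed {len(ctx['diff'])} chars"
```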
What's next:

  We are actively working on GraphRAG-based repository understanding, for deeper, repository-wide code analysis and pattern detection.
Links:

  [1] Previous Show HN discussion: https://news.ycombinator.com/item?id=41996859  
  [2] Technical deep-dive: https://github.com/JetXu-LLM/LlamaPReview-site/discussions/3  
  [3] Install (free): https://github.com/marketplace/llamapreview
Happy to discuss our approach to privacy, technical implementation, or future plans!

