Frequently asked questions
General questions
What is this project?
A starter kit for automating release notes generation from GitHub commits using AI coding agents. It demonstrates practical automation patterns for technical writers working in docs-as-code environments.
Who is this for?
Technical writers and documentation teams who:
- Work in docs-as-code workflows
- Use GitHub, GitLab, or similar version control
- Want to automate repetitive documentation tasks
- Are exploring AI-assisted automation
- Have basic command line and Git knowledge
No programming experience is required.
Do I need to know how to code?
No. This project teaches you how to work with AI coding agents to build automation. You'll learn:
- How to document workflows (you already do this)
- How to write clear instructions for AI (like writing documentation)
- How to iterate based on results (like editing)
Your technical writing skills are exactly what you need.
What AI tools does this work with?
The example uses Anthropic Claude or OpenAI GPT, but the principles apply to:
- Cursor with Claude
- GitHub Copilot
- ChatGPT
- Any AI coding assistant
The key is the approach, not the specific tool.
What version control systems are supported?
The code uses GitHub, but patterns work with:
- GitHub (implemented)
- GitLab (similar API)
- Bitbucket (similar API)
- Azure DevOps (similar API)
The same automation pattern applies—just different API calls.
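To make the portability concrete, here is a minimal sketch of the one provider-specific piece: building the "list commits since a date" request. The endpoints are the real public API hosts for GitHub and GitLab; the helper function name is invented for illustration.

```python
# Sketch: the provider-specific part of the pattern is only the
# "list commits since DATE" request; categorization and rendering
# of release notes are identical afterwards.
def commits_url(provider: str, repo: str, since: str) -> str:
    """Build the REST endpoint for listing commits since an ISO date."""
    if provider == "github":
        return f"https://api.github.com/repos/{repo}/commits?since={since}"
    if provider == "gitlab":
        # GitLab identifies projects by URL-encoded path ("owner%2Frepo")
        project = repo.replace("/", "%2F")
        return (f"https://gitlab.com/api/v4/projects/{project}"
                f"/repository/commits?since={since}")
    raise ValueError(f"unsupported provider: {provider}")
```

Swapping providers means changing this one function; the prompt and review workflow stay the same.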
Setup questions
How long does setup take?
- Initial setup: 15-20 minutes
- Document your process: 30-45 minutes
- First successful run: 10 minutes
- Prompt iteration: 30-60 minutes
Total: 2-3 hours for the complete tutorial
This investment can save 4-8 hours per release cycle.
What if I don't have API keys?
You can still:
- Read through all documentation
- Run with sample data (no API keys needed)
- Learn the concepts and approach
- Get API keys when ready to use with real repos
A free tier is available for both Anthropic and OpenAI.
How much do API calls cost?
Per release notes generation:
- GitHub API: Free (rate limited)
- AI API: $0.01 - $0.05 per run
For typical biweekly releases:
- Approximately $0.50 - $1.00 per month
Can I use this with private repositories?
Yes. Just ensure:
- Your GitHub token has the `repo` scope (not just `public_repo`)
- You have access to the private repositories
- Your token permissions are appropriate
The automation works the same for public or private repos.
Usage questions
How accurate is the categorization?
Depends on prompt refinement:
- Initial run: 60-70%
- After iteration: 85-95%
Remember: 90% accuracy with 10 minutes of human review beats 100% manual work taking 90 minutes.
What if the AI miscategorizes commits?
This is expected and normal. That's why human review is part of the workflow:
- The AI generates a draft (approximately 90% accurate)
- A human reviews and corrects it (approximately 10 minutes)
- A human adds business context
- Publish
That's still an 80%+ time savings over a fully manual process.
Can I customize categories?
Yes. You can:
- Change category names
- Add new categories
- Remove categories
- Customize for different audiences
See Prompt engineering reference.
How do I handle different release note audiences?
Create different prompt files:
- `prompts/internal_categorization.txt` - For the internal team
- `prompts/external_categorization.txt` - For customers
- `prompts/technical_categorization.txt` - For developers
Use them with the `--prompt` flag:

```
python generate_release_notes.py \
  --repo owner/repo \
  --since 2024-01-01 \
  --prompt prompts/external_categorization.txt
```
What about merge commits?
By default, merge commits are excluded unless they contain meaningful changes. You can customize this in your prompt's exclusion rules.
Can I generate notes for specific file paths?
Not in the current version, but you could:
- Modify the script to filter commits by path
- Use your AI coding tool to add this feature
- Ask: "How can I filter commits by file path?"
This is a great example of extending the automation.
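If you do extend it, note that GitHub's commits endpoint already accepts a `path` query parameter, so the change is small. A hedged sketch (function names are invented; assumes the `requests` package, which the project already uses):

```python
import requests

def path_filter_params(since: str, path: str) -> dict:
    # GitHub's GET /repos/{owner}/{repo}/commits endpoint accepts a
    # `path` parameter that limits results to commits touching that
    # file or directory.
    return {"since": since, "path": path}

def commits_for_path(repo: str, since: str, path: str, token: str) -> list:
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/commits",
        params=path_filter_params(since, path),
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

For example, `commits_for_path("owner/repo", "2024-01-01", "docs/", token)` would return only commits that touched the `docs/` directory.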
Prompt engineering questions
How do I improve categorization accuracy?
Follow the iteration process in Tutorial step 5:
- Run with current prompt
- Review output systematically
- Identify patterns of errors
- Add examples to prompt
- Test improvements
- Repeat until 85%+ accuracy
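To make "repeat until 85%+ accuracy" measurable rather than a gut feeling, you can track a simple hit rate during review. A minimal sketch (the function and data shapes are invented for illustration):

```python
def categorization_accuracy(ai_labels: dict, reviewed_labels: dict) -> float:
    """Fraction of commits whose AI-assigned category matched the
    human-corrected one. Keys are commit SHAs, values are category names."""
    if not reviewed_labels:
        return 0.0
    hits = sum(
        1 for sha, category in reviewed_labels.items()
        if ai_labels.get(sha) == category
    )
    return hits / len(reviewed_labels)
```

Recording this score after each prompt revision tells you whether a change actually helped.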
What makes a good categorization prompt?
Key elements:
- Clear category definitions
- Concrete examples (positive and negative)
- Keyword indicators
- Comprehensive exclusion rules
- Decision rules for edge cases
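As an illustration only (the category names and examples are placeholders, not the tutorial's actual prompt), the five elements above might be laid out like this:

```python
# A skeleton categorization prompt with a slot for each element listed
# above: definitions, examples, keywords, exclusions, and edge cases.
PROMPT_SKELETON = """\
You are categorizing Git commits for release notes.

Category definitions:
- New Features: user-visible functionality that did not exist before.
- Bug Fixes: corrections to existing behavior.

Examples (positive and negative):
- "feat: add CSV export" -> New Features
- "fix: null check in parser" -> Bug Fixes
- "chore: bump dependencies" -> EXCLUDE (not user-visible)

Keyword indicators: feat/add -> New Features; fix/patch -> Bug Fixes.

Exclusion rules: skip merge commits, dependency bumps, and CI tweaks.

Edge cases: if a commit both adds and fixes, prefer New Features.
"""
```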
Should I use examples from my actual commits?
Yes. The more domain-specific your examples, the better. Generic examples work reasonably well, but examples from your actual repository work better.
How long should my prompt be?
Sweet spot: 500-1000 words
- Too short (less than 300 words): Vague, inconsistent results
- Just right (500-1000 words): Clear, consistent, maintainable
- Too long (more than 1500 words): Diminishing returns, hard to maintain
Focus on quality over quantity.
Can I use the same prompt for all repositories?
Start with one prompt, then customize per repository:
- Frontend repos might emphasize UI changes
- Backend repos might separate breaking changes
- Documentation repos have different categories
Clone and customize rather than using one prompt for all.
Workflow questions
When should I run this automation?
Recommended timing:
- Before release (1-2 days prior)
- After all commits are merged
- When you have time to review output
Workflow:
- Development team finalizes release
- Run automation to generate draft
- Review and refine draft (10-15 min)
- Add business context
- Publish
Can I integrate this into CI/CD?
Yes. You could:
- Run automatically on release branch
- Create pull request with draft notes
- Human reviews and approves
- Publish on merge
This requires additional setup not covered in the tutorial, but ask your AI coding tool: "How can I run this in GitHub Actions?"
What if my team uses poor commit messages?
The automation can only work with what it has. If commit messages are vague:
- Short term: Add context during review
- Long term: Improve commit message standards
Consider creating a commit message template or guide for your team.
How do I handle breaking changes?
Add a custom category or marker:

```
Categories:
- New Features
- Enhancements
- Breaking Changes   # New category
- Bug Fixes
- Documentation
```

Or flag them inside existing categories, for example with a `[BREAKING]` prefix on the affected entries.
Integration questions
Can I output in different formats?
The default is Markdown, but you can modify the script to output:
- HTML
- JSON
- Confluence wiki format
- Jira description format
Ask your AI coding tool: "How can I output release notes as HTML instead of Markdown?"
Can this create GitHub releases automatically?
Not by default, but you could extend it:
```
# After generating notes, create a GitHub release
gh release create v1.0.0 \
  --title "Release 1.0.0" \
  --notes-file release_notes.md
```
Can I post results to Slack or email?
Not built-in, but you could add:
```
# After generation, post to Slack
import requests

webhook_url = "your-slack-webhook"
requests.post(webhook_url, json={"text": notes})
```
These are great examples of extending the automation.
Does this work with Jira or other issue trackers?
The current version focuses on Git commits, but you could:
- Fetch Jira tickets closed in date range
- Categorize tickets instead of commits
- Link tickets to commits
This requires additional API integration.
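A rough sketch of the first step, fetching recently resolved tickets through Jira's REST search endpoint (the function names are invented; assumes the `requests` package and email/API-token authentication):

```python
import requests

def closed_since_jql(project: str, since: str) -> str:
    # JQL query: issues in `project` resolved on or after `since` (YYYY-MM-DD)
    return f'project = {project} AND resolutiondate >= "{since}"'

def fetch_resolved_issues(base_url: str, project: str, since: str, auth) -> list:
    resp = requests.get(
        f"{base_url}/rest/api/2/search",
        params={"jql": closed_since_jql(project, since),
                "fields": "summary,issuetype"},
        auth=auth,  # e.g. ("you@example.com", api_token)
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]
```

The returned issue summaries could then be categorized with the same prompt pattern used for commits.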
Security questions
Is it safe to use API keys?
Yes, if you follow best practices:
- Keep keys in `config.yaml` (which is in `.gitignore`)
- Use environment variables for production
- Rotate keys regularly (every 90 days)
- Use minimal scopes needed
- Never commit keys to version control
- Don't share keys in Slack or email
What data is sent to AI providers?
Only commit messages and metadata:
- Commit message text
- Commit dates
- Author names
- Commit SHA
Not sent:
- Actual code changes
- File contents
- Repository code
Can I run this without cloud AI?
The current version requires cloud AI (Anthropic or OpenAI), but you could:
- Use local LLMs (Ollama, llama.cpp)
- Self-host AI models
- Use Azure OpenAI (enterprise)
This requires code modifications.
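For example, Ollama serves models over a local HTTP API, so swapping the cloud call for a local one is a small change. A hedged sketch (the model name and function are illustrative; assumes Ollama is running locally and the `requests` package is installed):

```python
import requests

def categorize_locally(prompt: str, model: str = "llama3") -> str:
    # Ollama's /api/generate endpoint runs entirely on your machine,
    # so commit messages never leave your network.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```

Expect smaller local models to need more prompt iteration than the cloud models to reach comparable accuracy.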
How do I handle sensitive repositories?
For sensitive code:
- Use self-hosted AI if available
- Review what's sent (only commit messages)
- Ensure commit messages don't contain secrets
- Use fine-grained GitHub tokens with minimal access
- Consider internal-only AI solutions
Maintenance questions
Do I need to update prompts regularly?
Not usually. Once refined, prompts remain stable. Update when:
- Team's commit message style changes
- New categories needed
- Accuracy drops below 80%
- Repository patterns change
What if the automation stops working?
Check:
- API keys still valid (not expired)
- GitHub token has correct permissions
- Rate limits not exceeded
- Dependencies up to date: `pip install --upgrade -r requirements.txt`
How do I onboard new team members?
Share:
- This documentation site
- Your customized prompts
- Your documented manual process
- Example outputs from your repository
The Tutorial is designed for onboarding.
Advanced questions
Can I use this for other documentation tasks?
Yes. The same pattern applies to:
- Translation status tracking
- Broken link checking
- Documentation quality analysis
- Automated screenshots
- API reference generation
The approach: Document manual process → Convert to automation prompt → Iterate
Can I contribute improvements?
Yes. See Contributing guidelines.
Ways to contribute:
- Improve documentation
- Add examples
- Fix bugs
- Share your refined prompts
- Add new features
Where can I learn more?
Community:
- Write the Docs Slack
- Technical Writer HQ
- r/technicalwriting
Open an issue if you have a question not answered here.