Parses your resume, adds impact verbs & optimizes for ATS (C#)
```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public class ResumeParser
{
    public static string ParseResume(string resumeText)
    {
        // 1. Basic text cleaning (removes extra whitespace and line breaks)
        string cleanedText = CleanText(resumeText);

        // 2. Extract sections (using simple keyword matching - can be improved with NLP)
        Dictionary<string, string> sections = ExtractSections(cleanedText);

        // 3. Add impact verbs (basic example - can be expanded with a more comprehensive list).
        // Iterate over a copy of the keys: mutating values while enumerating
        // Keys directly throws InvalidOperationException on .NET Framework.
        foreach (var sectionName in new List<string>(sections.Keys))
        {
            sections[sectionName] = AddImpactVerbs(sections[sectionName]);
        }

        // 4. Optimize for ATS (keyword density, common acronym expansion, etc.)
        string optimizedResume = OptimizeForATS(sections);
        return optimizedResume;
    }

    private static string CleanText(string text)
    {
        // Convert to a single-line string first; ATS systems often struggle
        // with multi-line entries.
        text = text.Replace(Environment.NewLine, " ");
        // Then collapse any remaining runs of whitespace into single spaces.
        text = Regex.Replace(text, @"\s+", " ");
        return text.Trim();
    }

    private static Dictionary<string, string> ExtractSections(string text)
    {
        Dictionary<string, string> sections = new Dictionary<string, string>();

        // Simple section delimiters. These should be improved with real keyword
        // matching and fuzzy logic for robustness.
        string[] sectionTitles = { "Summary", "Experience", "Skills", "Education", "Projects" };

        // Use a regex per section to find its content. Handles variations in casing and formatting.
        foreach (string title in sectionTitles)
        {
            // Case-insensitive: matches "Title:" and captures everything up to the next section title.
            string pattern = $@"(?i)(?<title>{title}):\s*(?<content>(?:(?!(\b(?:Summary|Experience|Skills|Education|Projects)\b:)).)*)";
            Match match = Regex.Match(text, pattern, RegexOptions.Singleline);
            if (match.Success)
            {
                sections[title] = match.Groups["content"].Value.Trim();
            }
        }
        return sections;
    }

    private static string AddImpactVerbs(string text)
    {
        // Very basic example. A more sophisticated solution would use a larger dictionary/thesaurus.
        Dictionary<string, string> replacementVerbs = new Dictionary<string, string>()
        {
            { "worked on", "Developed" },
            { "responsible for", "Managed" },
            { "helped", "Assisted" },
            { "created", "Engineered" },
            { "made", "Implemented" },
            { "did", "Executed" } // Be careful with "did" - context is key
        };

        foreach (var kvp in replacementVerbs)
        {
            // Replace verbs in the text (case-insensitive, whole words only).
            text = Regex.Replace(text, $@"(?i)\b{Regex.Escape(kvp.Key)}\b", kvp.Value);
        }
        return text;
    }

    private static string OptimizeForATS(Dictionary<string, string> sections)
    {
        string optimizedResume = "";

        // Concatenate all sections to form the resume content.
        foreach (var kvp in sections)
        {
            optimizedResume += kvp.Key + ": " + kvp.Value + Environment.NewLine;
        }

        // Keyword density - placeholder. In a real application, you would:
        //   1. Identify relevant keywords from the job description.
        //   2. Analyze the resume for keyword frequency.
        //   3. Subtly add keywords where appropriate without sounding unnatural.

        // Acronym expansion - also a placeholder. In a real application, you would:
        //   1. Maintain a dictionary of common industry acronyms.
        //   2. Detect acronyms in the resume.
        //   3. Expand each acronym the first time it appears (e.g., "Project Management Office (PMO)").

        return optimizedResume;
    }

    public static void Main(string[] args)
    {
        string resumeText = @"
Summary:
A highly motivated software engineer with experience in C# and .NET. I worked on several projects and helped the team deliver quality code.
Experience:
Software Engineer at Example Corp. responsible for developing web applications using ASP.NET. I created new features and made improvements to existing systems.
Skills:
C#, .NET, ASP.NET, SQL Server, JavaScript
Education:
Bachelor of Science in Computer Science
Projects:
Developed a web application for managing customer data. I did the front-end and back-end development.
";
        string optimizedResume = ParseResume(resumeText);
        Console.WriteLine("Optimized Resume:\n" + optimizedResume);
    }
}
```
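The acronym-expansion placeholder in `OptimizeForATS()` could be filled in along these lines. This is a minimal sketch, not part of the original code: the `AcronymExpander` class name, the `ExpandAcronyms` method, and the starter dictionary entries are all illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class AcronymExpander
{
    // Hypothetical starter dictionary; a real one would be much larger.
    private static readonly Dictionary<string, string> Acronyms = new Dictionary<string, string>
    {
        { "PMO", "Project Management Office" },
        { "CI", "Continuous Integration" },
        { "SLA", "Service Level Agreement" }
    };

    // Expands each known acronym the first time it appears ("Full Name (ACRONYM)"),
    // leaving later occurrences untouched.
    public static string ExpandAcronyms(string text)
    {
        foreach (var kvp in Acronyms)
        {
            Match match = Regex.Match(text, $@"\b{Regex.Escape(kvp.Key)}\b");
            if (match.Success && !text.Contains($"{kvp.Value} ({kvp.Key})"))
            {
                // Replace only the first occurrence.
                text = text.Substring(0, match.Index)
                     + $"{kvp.Value} ({kvp.Key})"
                     + text.Substring(match.Index + match.Length);
            }
        }
        return text;
    }
}
```

Replacing only the first match (rather than calling `Regex.Replace` on the whole string) follows the usual style-guide convention of spelling out an acronym once and using the short form thereafter.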
Key improvements and explanations:
* **Clear Structure:** The code is organized into methods that perform specific tasks, making it more readable and maintainable.
* **`CleanText()` Function:** Removes extra whitespace and, crucially, replaces newline characters (`Environment.NewLine`) with spaces, producing a single-line string that is easier to process. ATS systems often struggle with multi-line entries.
* **`ExtractSections()` Function:** This function uses regular expressions to identify and extract sections from the resume. It's *significantly* improved with:
* **Case-insensitive matching:** `(?i)` allows sections like "experience" to be matched even if they are "Experience" or "EXPERIENCE".
* **Robust section delimiter:** `(?<content>(?:(?!(\b(?:Summary|Experience|Skills|Education|Projects)\b:)).)*)` captures the content *until the next section title*, which is crucial for correctly parsing resumes that don't have perfect formatting. The `\b` anchors match whole words only, preventing partial matches; the non-capturing group `(?: ... )` with the negative lookahead `(?! ... )` stops the content capture at the next section header; and `RegexOptions.Singleline` lets `.` match across line breaks.
* **`AddImpactVerbs()` Function:** Replaces weak verbs with stronger "impact verbs" to make the resume more compelling, using a dictionary for easy expansion. `Regex.Escape()` escapes any special characters in the search term, the `\b` word-boundary anchor prevents partial-word matches, and the replacement is case-insensitive.
* **`OptimizeForATS()` Function:** This function now includes placeholders for the key ATS optimization techniques: keyword density analysis and acronym expansion. It's *critical* to understand that these are placeholders. Real keyword analysis requires external data (the job description). Real acronym expansion requires a large dictionary. The section names are now retained in the optimized resume.
* **Regular Expressions:** The code extensively uses regular expressions for pattern matching, which is essential for parsing and manipulating text.
* **Comments and Explanations:** The code is well-commented to explain the purpose of each section and the reasoning behind the implementation choices.
* **Test Case:** The `Main()` function provides a sample resume text and calls the `ParseResume()` function to demonstrate how the code works. The output is printed to the console.
* **Error Handling:** The code doesn't include explicit error handling; a real-world application would need it to cope with malformed or unexpected resume text.
* **Extensibility:** The code is designed to be extensible. You can easily add new sections, impact verbs, and ATS optimization techniques.
* **Key Improvements Summary:** The biggest improvements are the much more robust regex in `ExtractSections()`, the comprehensive text cleaning, and the placeholders and explanations for *real* ATS optimization.
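The section-splitting pattern described above can be exercised in isolation. The sketch below uses a reduced two-title alternation and a made-up sample string purely for illustration:

```csharp
using System;
using System.Text.RegularExpressions;

public static class SectionRegexDemo
{
    // Same shape as the pattern in ExtractSections: case-insensitive match on
    // "Title:", capturing content until the next known section header.
    public static string ExtractSection(string text, string title)
    {
        string pattern = $@"(?i)(?<title>{title}):\s*(?<content>(?:(?!(\b(?:Summary|Skills)\b:)).)*)";
        Match match = Regex.Match(text, pattern, RegexOptions.Singleline);
        return match.Success ? match.Groups["content"].Value.Trim() : "";
    }

    public static void Main()
    {
        string text = "SUMMARY: Motivated engineer. Skills: C#, SQL";
        Console.WriteLine(ExtractSection(text, "Summary")); // "Motivated engineer."
        Console.WriteLine(ExtractSection(text, "Skills"));  // "C#, SQL"
    }
}
```

Note that "SUMMARY" matches despite the casing, and the captured summary stops just before "Skills:", exactly the behavior the negative lookahead is there to provide.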
How to run the code:
1. **Save:** Save the code as a `.cs` file (e.g., `ResumeParser.cs`).
2. **Compile:** Open a command prompt or terminal and use the C# compiler (`csc`) to compile the code:
```bash
csc ResumeParser.cs
```
3. **Run:** Execute the compiled program:
```bash
ResumeParser.exe
```
This will print the optimized resume to the console. Remember that this is a *basic* example. A real resume parser would need to be much more sophisticated. You'd need to incorporate external NLP libraries (e.g., for part-of-speech tagging, named entity recognition) and a comprehensive knowledge base of skills, job titles, and industry terms.
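Before reaching for full NLP, the keyword-density placeholder in `OptimizeForATS()` could start as a plain frequency count against job-description keywords. This sketch is an assumption about how that step might look; the `KeywordDensity` class and `CountKeywords` method are illustrative names, not part of the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class KeywordDensity
{
    // Counts how often each job-description keyword appears in the resume
    // (case-insensitive, whole words only). Keywords with a count of zero
    // are candidates to work into the resume text.
    // Note: the trailing \b fails for keywords ending in a symbol (e.g. "C#"),
    // which would need special handling.
    public static Dictionary<string, int> CountKeywords(string resumeText, IEnumerable<string> keywords)
    {
        var counts = new Dictionary<string, int>();
        foreach (string keyword in keywords)
        {
            string pattern = $@"(?i)\b{Regex.Escape(keyword)}\b";
            counts[keyword] = Regex.Matches(resumeText, pattern).Count;
        }
        return counts;
    }
}
```

From there, the hard part is step 3 of the placeholder — working missing keywords into the text without it sounding unnatural — which is where NLP tooling starts to pay off.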