# Job Skill Extractor - Fine-Tuned Llama Model

## Key Features
- Fast Training and Inference: Achieved 2x faster performance using Unsloth's techniques.
- Base Model: `unsloth/Llama-3.2-3B-Instruct`.
- Language Support: English.
- License: Apache 2.0.
## Intended Use
This model is designed to assist in:
- Extracting required job skills from job descriptions and titles.
- Automating job-skill matching for HR applications.
- Enabling intelligent job posting analysis in recruitment systems.
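For the job-skill matching use case above, a minimal sketch of how extracted skills could be scored against a candidate's skill set is shown below. The `match_score` helper and both skill lists are illustrative assumptions, not part of this repository; matching is reduced to case-insensitive set overlap.

```python
# Hypothetical sketch: once skills are extracted by the model, job-skill
# matching can be as simple as normalized set overlap. The names below are
# illustrative and not part of this repository.

def match_score(candidate_skills, job_skills):
    """Fraction of required job skills the candidate covers (0.0-1.0)."""
    required = {s.strip().lower() for s in job_skills}
    offered = {s.strip().lower() for s in candidate_skills}
    if not required:
        return 0.0
    return len(required & offered) / len(required)

extracted_skills = ["Python", "PostgreSQL", "AWS", "Docker"]  # model output, parsed
candidate = ["python", "docker", "Kubernetes"]
print(match_score(candidate, extracted_skills))  # 2 of 4 required skills -> 0.5
```

A real system would likely add synonym handling (e.g. "Postgres" vs. "PostgreSQL"), but plain set overlap is a reasonable baseline.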
## Usage Example
Below is an example usage with the Unsloth library:
```python
from unsloth import FastLanguageModel

# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "batuhanmtl/job-skill-extractor-llama3.2",
    max_seq_length = 8000,
    dtype = None,        # auto-detect (e.g. bfloat16 on supported GPUs)
    load_in_4bit = False,
)

# Enable faster inference
FastLanguageModel.for_inference(model)

# Example inputs (replace with your own)
job_title = "Senior Backend Engineer"
job_description = "We need an engineer experienced in Python, PostgreSQL, and AWS."

# Define job description prompt
prompt_template = f"""
##### JOB TITLE #####
{job_title}

##### JOB DESCRIPTION #####
{job_description}
"""

# Tokenize input using the model's chat template
messages = [
    {"role": "user", "content": prompt_template}
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")

# Generate output
outputs = model.generate(
    input_ids = inputs,
    max_new_tokens = 150,
    use_cache = True,
    temperature = 1.5,
    min_p = 0.1,
)

# Extract the assistant's reply between the Llama 3 header and end-of-turn tokens
start_token = "<|start_header_id|>assistant<|end_header_id|>"
end_token = "<|eot_id|>"
output = tokenizer.batch_decode(outputs)[0]
skill_list = output.split(start_token)[1].split(end_token)[0].strip()
print(skill_list)
```
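The decoded `skill_list` is a raw string. Its exact format is not documented here, so the sketch below assumes a comma- or newline-separated skill string and normalizes it into a Python list; the `parse_skills` helper is an illustrative assumption, not part of this model's API.

```python
# Hedged post-processing sketch: assumes the model emits skills separated by
# commas and/or newlines. `raw` stands in for `skill_list` from the example
# above; the helper name is illustrative.

def parse_skills(raw: str) -> list[str]:
    """Split on commas/newlines, trim whitespace, drop empties and duplicates."""
    parts = [p.strip() for line in raw.splitlines() for p in line.split(",")]
    seen, skills = set(), []
    for part in parts:
        key = part.lower()
        if part and key not in seen:
            seen.add(key)
            skills.append(part)
    return skills

raw = "Python, SQL\nCommunication, python"
print(parse_skills(raw))  # ['Python', 'SQL', 'Communication']
```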
## Model Overview

This fine-tuned Llama model, `batuhanmtl/job-skill-extractor-llama3.2`, is optimized for extracting relevant job skills from job titles and descriptions. It was trained 2x faster using Unsloth together with HF中国镜像站's TRL (Transformer Reinforcement Learning) library. The model builds on the `unsloth/Llama-3.2-3B-Instruct` base model to provide fast and accurate text generation.
## Model Tree

- Base model: `meta-llama/Llama-3.2-3B-Instruct`
- Fine-tuned via: `unsloth/Llama-3.2-3B-Instruct`