
Edit Prediction: Fine-Tuned from Qwen2.5-Coder-7B

This repository contains a version of Qwen2.5-Coder-7B fine-tuned to support edit prediction in Zed.

Training Details

The model has been fine-tuned using the zeta dataset. If you want to fine-tune the model yourself, you can refer to the fine-tuning scripts; an illustrative sketch is also included below.
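The actual training scripts are not reproduced here. As a rough illustration only, the sketch below shows one possible supervised fine-tuning setup using the Hugging Face trl library; the column names, the way examples are turned into training text, and the hyperparameters are assumptions for illustration, not the settings used to train zeta.

# Illustrative SFT sketch, not the official zeta training script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("zed-industries/zeta", split="train")

# Hypothetical column names ("input"/"output"); check the dataset card for the
# real schema and the prompt format zeta expects.
dataset = dataset.map(lambda ex: {"text": ex["input"] + ex["output"]})

config = SFTConfig(
    output_dir="zeta-sft",
    dataset_text_field="text",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-7B",  # the base model named in this card
    args=config,
    train_dataset=dataset,
)
trainer.train()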

Dataset

The dataset used for training is available at: zed-industries/zeta
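As a quick sanity check, the dataset can be loaded with the datasets library. The snippet below is a minimal sketch that assumes a "train" split exists; check the dataset viewer for the actual splits and columns.

from datasets import load_dataset

ds = load_dataset("zed-industries/zeta")
print(ds)              # available splits and their sizes
print(ds["train"][0])  # one raw example (assumes a "train" split)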

Running Zeta

vLLM - Simple

vllm serve zed-industries/zeta --served-model-name zeta
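Once started, the server exposes vLLM's OpenAI-compatible API, by default at http://localhost:8000/v1. The snippet below is a minimal sketch of a completion request against that endpoint; the prompt shown is a placeholder, not zeta's actual edit-prediction prompt format.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.completions.create(
    model="zeta",  # matches --served-model-name above
    prompt="<placeholder edit-prediction prompt>",
    max_tokens=256,
    temperature=0.0,
)
print(response.choices[0].text)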

vLLM - Advanced

  • Quantization: vLLM supports FP8 (8-bit floating point) weight and activation quantization, using hardware acceleration on GPUs such as the Nvidia H100 and AMD MI300x.

  • NGram Speculative Decoding: configures vLLM to use speculative decoding in which draft proposals are generated by matching n-grams in the prompt. This is a great fit for edit prediction, since many of the output tokens are already present in the prompt and the model only needs to generate the changes to the code file.

vllm serve zed-industries/zeta --served-model-name zeta --enable-prefix-caching --enable-chunked-prefill --quantization="fp8" --speculative-model "[ngram]" --ngram-prompt-lookup-max 4 --ngram-prompt-lookup-min 2 --num-speculative-tokens 8

Learn More

For more insights about the model and its integration in Zed, check out the official blog post: Zed Blog - Edit Prediction
