# DNAFlash
## About

DNAFlash is a DNA sequence model released as `isyslab/DNAFlash` on the Hugging Face Hub. The example below shows how to load it with `transformers` and compute embeddings for nucleotide sequences.
## Dependencies
- rotary_embedding_torch
- einops
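Both packages are assumed to be installable from PyPI alongside `torch` and `transformers`, which the example below also uses, e.g. with `pip install einops rotary-embedding-torch torch transformers` (PyPI package names assumed; they are not listed in the original card).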
## How to use
### Simple example: embedding
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and model using the pretrained model name
tokenizer = AutoTokenizer.from_pretrained("isyslab/DNAFlash")
model = AutoModel.from_pretrained("isyslab/DNAFlash", trust_remote_code=True)

# Define input sequences
sequences = [
    "GAATTCCATGAGGCTATAGAATAATCTAAGAGAAATATATATATATTGAAAAAAAAAAAAAAAAAAAAAAAGGGG"
]

# Tokenize the sequences
inputs = tokenizer(
    sequences,
    add_special_tokens=True,
    return_tensors="pt",
    padding=True,
    truncation=True,
)

# Forward pass through the model to obtain the outputs, including hidden states
with torch.inference_mode():
    outputs = model(**inputs)
```
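If you want a single embedding vector per sequence, one common approach is mean pooling over the token dimension. The snippet below is a minimal sketch and not part of the original example: it assumes the custom model returns a standard `last_hidden_state` tensor of shape `(batch, seq_len, hidden_size)` and that the tokenizer returns an `attention_mask`; adapt it if the DNAFlash remote code exposes its outputs differently.

```python
# Minimal sketch (assumes standard transformers-style outputs):
# pool token-level hidden states into one embedding per input sequence.
mask = inputs["attention_mask"].unsqueeze(-1).float()      # (batch, seq_len, 1)
hidden = outputs.last_hidden_state                          # (batch, seq_len, hidden_size)

# Mean-pool over non-padding tokens only
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
print(embeddings.shape)                                     # (num_sequences, hidden_size)
```

Mean pooling is only one option; depending on how DNAFlash was trained, using the representation of a dedicated special token may work better.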
## Citation