runtime error
Exit code: 1. Reason:
added_tokens.json: 100%|██████████| 80.0/80.0 [00:00<00:00, 795kB/s]
special_tokens_map.json: 100%|██████████| 367/367 [00:00<00:00, 2.92MB/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
config.json: 100%|██████████| 811/811 [00:00<00:00, 6.91MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 40, in <module>
    from multipurpose_chatbot.demos.base_demo import CustomTabbedInterface
  File "/home/user/app/multipurpose_chatbot/demos/__init__.py", line 4, in <module>
    from .chat_interface import ChatInterfaceDemo
  File "/home/user/app/multipurpose_chatbot/demos/chat_interface.py", line 69, in <module>
    from ..globals import MODEL_ENGINE
  File "/home/user/app/multipurpose_chatbot/globals.py", line 13, in <module>
    MODEL_ENGINE = load_multipurpose_chatbot_engine(BACKEND)
  File "/home/user/app/multipurpose_chatbot/engines/__init__.py", line 48, in load_multipurpose_chatbot_engine
    model_engine.load_model()
  File "/home/user/app/multipurpose_chatbot/engines/transformers_engine.py", line 531, in load_model
    self._model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=self.torch_dtype, device_map=self.device_map, trust_remote_code=True).eval()
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3234, in from_pretrained
    raise EnvironmentError(
OSError: SeaLLMs/SeaLLMs-v3-7B-Chat does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
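The OSError at the bottom of the traceback is raised when `from_pretrained` finds none of the legacy weight filenames it checks for. Repositories that ship their weights only as `model.safetensors` can trigger exactly this message on a transformers install that cannot load safetensors checkpoints, in which case upgrading `transformers` and installing `safetensors` is the usual remedy. The sketch below is a hypothetical re-creation of that filename-probing logic (the function name and structure are illustrative assumptions, not the library's actual code), to show why a safetensors-only repo falls through to this error:

```python
# Hypothetical sketch of the weight-file probing that produces the OSError above.
# Names here (find_weight_file, allow_safetensors) are illustrative, not the
# real transformers internals.
LEGACY_WEIGHT_FILES = [
    "pytorch_model.bin",   # PyTorch pickle checkpoint
    "tf_model.h5",         # TensorFlow
    "model.ckpt",          # TF1-style checkpoint
    "flax_model.msgpack",  # Flax
]
SAFETENSORS_FILES = ["model.safetensors", "model.safetensors.index.json"]


def find_weight_file(repo_files, allow_safetensors=True):
    """Return the first recognized weight file in a repo file listing."""
    candidates = (SAFETENSORS_FILES if allow_safetensors else []) + LEGACY_WEIGHT_FILES
    for name in candidates:
        if name in repo_files:
            return name
    # Mirrors the error text in the traceback: only legacy names are listed.
    raise OSError(
        "repo does not appear to have a file named "
        + ", ".join(LEGACY_WEIGHT_FILES)
    )


# A safetensors-only repo resolves fine when safetensors support is available...
files = ["config.json", "model.safetensors"]
print(find_weight_file(files))  # -> model.safetensors

# ...but falls through to the OSError when it is not.
try:
    find_weight_file(files, allow_safetensors=False)
except OSError as err:
    print(err)
```

If the repo genuinely lacks safetensors weights too, the same error is correct and the model id or revision should be double-checked instead.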