Description

The command /generate tries to use the LLM's response to determine the file name. This doesn't work with thinking models, because the first token of the response is <think>. The LLM's response should be sanitized. It's almost (but not quite) a security issue.

Reproduce

Configure Jupyter AI to use local ollama with huggingface.co/lmstudio-community/DeepCoder-14B-Preview-GGUF:Q6_K:

    ollama pull huggingface.co/lmstudio-community/DeepCoder-14B-Preview-GGUF:Q6_K

Open the chat panel.
Type: /generate write "hi" in the file "allo.txt" in python
Expected behavior
Generates code.
Actual
Traceback (most recent call last):
  File "/Users/maruel/src-my/ml/venv/lib/python3.11/site-packages/nbformat/__init__.py", line 204, in write
    fp.write(s)
    ^^^^^^^^
AttributeError: 'str' object has no attribute 'write'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/maruel/src-my/ml/venv/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 229, in on_message
    await self.process_message(message)
  File "/Users/maruel/src-my/ml/venv/lib/python3.11/site-packages/jupyter_ai/chat_handlers/generate.py", line 301, in process_message
    final_path = await self._generate_notebook(prompt=message.body)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maruel/src-my/ml/venv/lib/python3.11/site-packages/jupyter_ai/chat_handlers/generate.py", line 291, in _generate_notebook
    nbformat.write(notebook, final_path)
  File "/Users/maruel/src-my/ml/venv/lib/python3.11/site-packages/nbformat/__init__.py", line 208, in write
    with Path(fp).open("w", encoding="utf8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maruel/bin/homebrew/Cellar/python@3.11/3.11.8/Frameworks/Python.framework/Versions/3.11/lib/python3.11/pathlib.py", line 1044, in open
    return io.open(self, mode, buffering, encoding, errors, newline)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 63] File name too long: '/Users/maruel/src-my/ml/notebooks/<think>\nAlright, so I need to create a short, descriptive title for this Jupyter notebook. Let me see what the content is about.\n\nFirst, looking at the sections: there\'s an Environment Setup and Writing to a File. The prompt mentions writing "hi" in "allo.txt" using Python. So it\'s all about file operations in Python.\n\nThe user wants the title to be few words and descriptive. Maybe something like "File Operations with Python." That covers both opening files and handling them, which is what the notebook focuses on. It\'s concise and directly relates to the content.\n</think>\n\n"File Operations with Python.ipynb'
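For illustration, stripping the reasoning block before building the path would avoid the crash. A minimal sketch, assuming a hypothetical sanitize_title helper (not actual jupyter-ai code):

    import re

    def sanitize_title(response: str, max_len: int = 100) -> str:
        """Turn a raw LLM response into a safe notebook title."""
        # Drop the <think>...</think> reasoning block emitted by thinking models.
        response = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
        # Collapse newlines/whitespace and strip surrounding quotes.
        title = " ".join(response.split()).strip('"').strip()
        # Stay well under the 255-byte file name limit that triggers Errno 63.
        return title[:max_len] or "Untitled"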
Context
Operating System and version: macOS
Browser and version: Latest Chrome
JupyterLab version: 4.3.6
@maruel Thanks for opening this issue! Thinking/reasoning models aren't fully supported in Jupyter AI, since we don't give special treatment to contents within <think> or <thinking> XML tags. Since different thinking models use different ways to "mark" the reasoning portion of their response, we will have to do more research to support thinking models in /generate. For now, I recommend using another model when you need to use /generate.
> The LLM's response should be sanitized. It's almost (but not quite) a security issue.

I don't think this is a security issue. XML tags in filenames, while undesirable, are generally safe. XML injection vulnerabilities are mainly a threat when you render AI responses in the browser. We use JupyterLab's Rendermime library to render AI responses in the chat panel, which automatically sanitizes the text to protect our users. JupyterLab should be doing something similar for filenames containing XML tags. Thanks for bringing this into consideration, however!
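To sketch what generic handling might look like: strip any known reasoning delimiters before using the response downstream. This is illustrative only; the tag list and helper below are hypothetical, not existing jupyter-ai code:

    import re

    # Reasoning delimiters vary by model family; this list is illustrative.
    REASONING_TAGS = ("think", "thinking")

    def strip_reasoning(text: str) -> str:
        """Remove reasoning blocks, including an unterminated one left by a
        truncated response."""
        for tag in REASONING_TAGS:
            text = re.sub(rf"<{tag}>.*?(?:</{tag}>|$)", "", text, flags=re.DOTALL)
        return text.strip()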