Fix Phi long context issue #1504
base: main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

I believe minor differences are expected on SPR. But if possible, WWB similarity should be run to see whether the difference is significant.
```python
logits_to_keep=None,
**kwargs,
):
    # Overwritten -- this model may need to switch between short and long rope, invalidating the cache in the
```
Am I correct that we have a problem when we have short and long prompts in consecutive generate calls?
We can't re-initialize inv_freqs from long_inv_freqs to short_inv_freqs and vice versa? How is this problem solved?
As discussed offline: this is handled in optimum-intel by resetting the kv-cache when the number of input tokens is equal to the long rope boundary (e.g. 4096). This is done the same way in transformers code. Tested that this works as expected in chat context with https://gist.github.com/helena-intel/b55522cda91d9d61a644f153e71f0f98 .
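The reset rule described above can be condensed into a small predicate. A minimal sketch (hypothetical helper name and arguments, not the actual optimum-intel code), assuming the reset fires the first time the total sequence length crosses original_max_position_embeddings:

```python
def needs_cache_reset(input_len: int, past_length: int, original_max_position_embeddings: int) -> bool:
    """Hypothetical condensation of the reset rule: the first time the input
    length crosses the short/long-rope switching point, a cache that was built
    with the short-rope inv_freqs must be invalidated and recomputed."""
    crossed_boundary = input_len >= original_max_position_embeddings + 1
    cache_built_with_short_rope = past_length <= original_max_position_embeddings
    return crossed_boundary and cache_built_with_short_rope


# Fires exactly once, at the boundary (e.g. 4096), then stays off:
assert needs_cache_reset(4097, 4096, 4096) is True
assert needs_cache_reset(4098, 4097, 4096) is False
```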
echarlaix
left a comment
Thanks a lot @helena-intel !!
```python
class OVPhi3ForCausalLM(OVModelForCausalLM):
    def prepare_inputs_for_generation(
```
Would you mind adding a link to the original code?
https://github.com/huggingface/transformers/blob/v4.57.0/src/transformers/models/phi3/modeling_phi3.py#L493
Done
```python
super().__enter__()
# Call OVDecoderModelPatcher.__enter__() directly to skip Phi3ModelPatcher's longrope logic
# PhiMoE has a different rotary embedding structure, longrope is not yet supported
```
Why do we need to add all these modifications to PhiMoEModelPatcher? If longrope is not yet supported, then self._model.model.rotary_emb will never be set to "longrope". If we want to be safe, we can raise an error in case that ever happens.
Initially tests failed for phi_moe, see https://github.com/huggingface/optimum-intel/actions/runs/18952102871/job/54119192964 . We should have longrope support for the MoE model too, but not in this PR. I would be happy with a simpler solution that does not enable longrope for the MoE model (but keeps it working as it is now).
I will fix this in a better way.
I added a _disable_longrope property instead of the previous code.
```python
return torch.where(seq_len <= max_pos_embeddings, short_factor, long_factor)


def long_rope(self, x, position_ids, seq_len=None):
```
Would you mind adding a link to the original code? (https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/phi3/modeling_phi3.py#L324 ?)
Done
```python
    scaling_factor = 1.0
else:
    scaling_factor = math.sqrt(1 + math.log(scale) / math.log(original_max_position_embeddings))
cos = emb.cos() * scaling_factor
```
Can't we use self.attention_scaling here? https://github.com/huggingface/transformers/blob/63fbd50fb4ff7b586ab1b59b67f7464e62f9df69/src/transformers/modeling_rope_utils.py#L519
Yes, it is a good point. @helena-intel, please use it.
Done
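For reference, the scaling rule being discussed can be reproduced in isolation. A sketch of the longrope attention-scaling formula (standalone function, not the patched code itself):

```python
import math


def longrope_attention_scaling(max_position_embeddings: int, original_max_position_embeddings: int) -> float:
    # Mirrors the longrope rule in transformers' modeling_rope_utils:
    # no extra scaling unless the context window is actually extended,
    # else sqrt(1 + log(scale) / log(original_max_position_embeddings)).
    scale = max_position_embeddings / original_max_position_embeddings
    if scale <= 1.0:
        return 1.0
    return math.sqrt(1 + math.log(scale) / math.log(original_max_position_embeddings))


# e.g. a window extended from 4096 to 131072 gives scale = 32:
print(longrope_attention_scaling(131072, 4096))  # ~1.1902
```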
```python
# Force float32 since bfloat16 loses precision on long contexts
# See https://github.com/huggingface/transformers/pull/29285
device_type = x.device.type
device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
```
`device_type` is not used. Also, should we ensure fp32 dtype?
Thanks! Added the autocast line with enabled=False.
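To illustrate why float32 matters here (an illustration of the precision issue, not the patched code): bfloat16 keeps only 8 mantissa bits, so neighboring large position indices collapse to the same value, and disabling autocast keeps the rotary angles in float32.

```python
import torch

# Around 4096 the spacing between representable bfloat16 values is 32,
# so positions 4096 and 4097 collide after a bfloat16 round-trip.
positions = torch.tensor([4096.0, 4097.0])
bf16_positions = positions.to(torch.bfloat16)
assert bf16_positions[0] == bf16_positions[1]

# With autocast explicitly disabled, the computation stays in float32.
inv_freq = torch.tensor([1e-4])
with torch.autocast(device_type="cpu", dtype=torch.bfloat16, enabled=False):
    angles = positions[:, None].float() @ inv_freq[None, :]
assert angles.dtype == torch.float32
```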
```python
return torch.where(seq_len <= max_pos_embeddings, short_factor, long_factor)


def long_rope(self, x, position_ids, seq_len=None):
```
@helena-intel, you actually patch this function https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_rope_utils.py#L442
but I don't see that short_factor from the model config is used in the patch. Please clarify.
@helena-intel, I think we need to re-write this patch more accurately to be aligned with https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_rope_utils.py#L442 for longrope
short_factor is used in the select_ext_factor function: `return torch.where(seq_len <= max_pos_embeddings, short_factor, long_factor)`.
I agree it would be clearer to rewrite it, but it is functionally working now: we see the same outputs as transformers, for both short and long context.
@rkazants I refactored the function and added more comments. I think it is clearer now, please review.
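The selection trick under discussion can be sketched in isolation: both factor sets are baked into the exported graph, and torch.where picks one at runtime from seq_len (illustrative values below, not the real config factors):

```python
import torch

max_pos_embeddings = torch.tensor(10)  # stand-in for original_max_position_embeddings
short_factor = torch.tensor([1.0, 1.0, 1.0, 1.0])
long_factor = torch.tensor([2.0, 2.5, 3.0, 3.5])


def select_ext_factor(seq_len: torch.Tensor) -> torch.Tensor:
    # Traceable alternative to a Python `if`: the condition is evaluated
    # inside the graph, so one exported model serves both rope regimes.
    return torch.where(seq_len <= max_pos_embeddings, short_factor, long_factor)


assert torch.equal(select_ext_factor(torch.tensor(8)), short_factor)
assert torch.equal(select_ext_factor(torch.tensor(12)), long_factor)
```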
```python
):
    past_length = cache_position[0]
    if past_length <= self.config.original_max_position_embeddings:
        past_key_values = None
```
Please add a link to https://github.com/huggingface/transformers/blob/main/src/transformers/models/phi3/modeling_phi3.py#L522 and a comment that this is aligned with phi3 for long context.
Also add a comment that we reset the KV cache, which means the next step will be a prefill over all tokens computed so far.
Added the link a few lines above. The comment that was there was copied verbatim from the transformers code. I modified the second line a bit to make it clearer (transformers comment references "current failure" but it is not clear what that is).
@helena-intel, also we need to create a tiny phi3 model that has small values, original_max_position_embeddings < max_position_embeddings, for example equal to 10 and 20. This is how we test the KV cache reset and the application of new scaling factors based on it. And you can easily embed this tiny model into the existing tests.

Yes, I added the model yesterday (https://huggingface.co/optimum-intel-internal-testing/tiny-random-phi-4-mini-instruct) and just added a test that fails on the main branch and passes with this PR.
```python
    f"values are not close for {dtype if dtype is not None else 'None'}, max diff = {torch.abs(ov_logits - ref_logits).max()}",
)


def test_phi3_longrope_support(self):
```
There is no need for this new test. Just add your model into SUPPORTED_ARCHITECTURES above; all required testing will be activated. You also need to add the model id into util_tests.py.

Why?
I wanted to use a model that is being used by people who reported this issue, and I figured it would be useful to have a phi-4 tiny model too. I can change it if needed.
So it should replace the existing model? https://github.com/huggingface/optimum-intel/blob/main/tests/openvino/utils_tests.py#L152
I think it's useful to test both short and long context because it is also relevant to know if short context starts failing. And long context should be tested with prompts above the threshold value so if we rely on existing tests we should always remember that the generic model input needs to exceed the long context threshold. If someone changes the existing "same output as transformers" test, or the tiny model, the test may miss issues.
I will look into that. Values probably need to be a bit higher, but they can be lower than the default. We can't just set the values to 10 and 20; the model is sensitive to parameters and it's easy to get collapsing outputs or differences between PyTorch and OpenVINO.
- Explicitly disable torch.autocast to ensure float32 precision
- Add sources for adapted code
- Use self.attention_scaling instead of manual computation
- Save and restore original _orig_max_position_embeddings
- Modify F32_CONFIG to use EXECUTION_MODE_HINT
Exclude longrope for phi3-moe with _disable_longrope
- Add more comments
- Remove superfluous select_ext_factor function
- Rename long_rope to _phi3_longrope_forward for clarity
tests/openvino/test_decoder.py
Outdated
```python
)

# Creating model inputs with more than original max position embeddings and enough variation for varied output tokens
tokens = torch.as_tensor(list(tokenizer.get_vocab().values())[: original_max_pos + 50]).unsqueeze(0)
```
The tokenizer is not really needed here; you can use torch.randint with model.config.vocab_size.
Also, shouldn't we test starting with less than max position embeddings and generating enough tokens to surpass it (to trigger cache re-computation)?
Changed the test to use randint, and now we test both scenarios: where the input tokens exceed original_max_pos, and where the generated tokens exceed it.
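The reworked input construction might look roughly like this (a sketch with made-up tiny values, not the actual test code):

```python
import torch

vocab_size = 1000      # stand-in for model.config.vocab_size
original_max_pos = 32  # tiny threshold, as in the tiny test model

torch.manual_seed(0)
# Case 1: the prompt itself already exceeds the long-rope threshold.
long_prompt = torch.randint(0, vocab_size, (1, original_max_pos + 50))
# Case 2: a short prompt, where generation crosses the threshold.
short_prompt = torch.randint(0, vocab_size, (1, original_max_pos // 2))
max_new_tokens = original_max_pos

assert long_prompt.shape[1] > original_max_pos
assert short_prompt.shape[1] + max_new_tokens > original_max_pos
```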
tests/openvino/utils_tests.py
Outdated
```python
# With this config, inference runs in f32 and optimizations that may influence accuracy are disabled
F32_CONFIG = {"EXECUTION_MODE_HINT": "ACCURACY"}
```
Do we want to change it for all models?
I think we should not change it globally. Let's have it only for phi3 models.
Reverted
```python
    model_id, export=True, ov_config=F32_CONFIG, device=OPENVINO_DEVICE
)

# Creating model inputs with more than original max position embeddings
```
Please test two cases: when the input_ids length exceeds the threshold, and when only max_new_tokens exceeds the threshold.
Done
tests/openvino/test_decoder.py
Outdated
```python
def test_phi3_longrope_support(self):
    """Test LongRoPE support for Phi3 with inputs > 4096 tokens."""
    set_seed(SEED)
    model_id = "optimum-intel-internal-testing/tiny-random-phi-4-mini-instruct"
```
Please change the model card id. It is quite confusing with phi-4 now, but this is not phi-4.
Done
- Rename tiny model to phi3
- Add test for cumulative context
- Revert F32_CONFIG change
- Set MIN_TRANSFORMERS_VERSION to 4.49 for Phi3
- Remove code specific to transformers<4.49
- Disable trust-remote-code for Phi3
IlyasMoutawwakil
left a comment
LGTM ! Thanks for the awesome fix !
rkazants
left a comment
@IlyasMoutawwakil, please let me review before merge. Thanks!
This is #1297 updated to latest main branch.
Currently, inference on Phi-3-mini and Phi-4-mini returns bad outputs (random characters) when the context gets larger than about 2000 tokens. This PR, contributed by @eaidova, fixes that. This is not my code. The original PR is no longer being updated; I'm making this a new PR to make it easier to discuss and add updates.
I saw no negative impact on inference speed. I see slightly different outputs with shorter contexts on SPR (on inference with the model exported with the PR vs the model exported with main). Any suggestions to fix that would be much appreciated.
Draft PR for now, awaiting some feedback and testing, but I hope we can merge this soon.