
chore: Add hashsum for stablelm models #7018

Closed

Conversation

teleprint-me
Contributor

@ggerganov
Owner

You have to add the models in convert-hf-to-gguf-update.py - read #6920
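For context, the entries in convert-hf-to-gguf-update.py's model list look roughly like the sketch below. This is based on my reading of #6920, not copied from the script; the `TOKENIZER_TYPE` enum here is a stand-in for the one the script defines, and the exact field names may differ.

```python
# Sketch of a models-list entry for convert-hf-to-gguf-update.py.
# TOKENIZER_TYPE is a stand-in for the enum defined in that script.
from enum import Enum, auto

class TOKENIZER_TYPE(Enum):
    SPM = auto()  # SentencePiece
    BPE = auto()  # byte-pair encoding
    WPM = auto()  # WordPiece

# Each entry names a tokenizer, its type, and the HF repo to fetch it from.
models = [
    {"name": "stablelm2", "tokt": TOKENIZER_TYPE.BPE,
     "repo": "https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b"},
]
```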

@arch-btw

arch-btw commented May 1, 2024

Related: #7024

@teleprint-me
Contributor Author

teleprint-me commented May 1, 2024

@ggerganov I can do that, but I would recommend against it. It will couple the scripts, increase complexity, and is completely unnecessary for producing the output. A supporting rationale for why it's needed would increase my confidence in this approach.

I'll read it again (for the nth time), but I'm no less confident in my stance.

@slaren
Collaborator

slaren commented May 1, 2024

The function get_vocab_base_pre is generated by convert-hf-to-gguf-update.py, you are not supposed to edit it directly.
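The mechanism slaren describes boils down to fingerprinting each tokenizer: the update script tokenizes a fixed test string and hashes the result, and the generated get_vocab_base_pre matches that hash at convert time. A minimal sketch of the idea (not the actual code from convert-hf-to-gguf-update.py; the function name and token IDs below are illustrative):

```python
# Sketch: fingerprint a pre-tokenizer by hashing the token-ID sequence
# it produces for a fixed test string. Two tokenizers that split the
# test string identically yield the same fingerprint.
import hashlib

def tokenizer_fingerprint(token_ids: list[int]) -> str:
    data = str(token_ids).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

# hypothetical token IDs from two tokenizers over the same test string
ids_a = [15496, 995, 0]
ids_b = [15496, 995, 0]
print(tokenizer_fingerprint(ids_a) == tokenizer_fingerprint(ids_b))  # True
```

At convert time, the same hash is recomputed from the model's tokenizer and compared against the known fingerprints to select the right pre-tokenizer, which is why editing the generated function by hand defeats the purpose.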

@teleprint-me
Contributor Author

@slaren Thanks, I'll need to review it again with fresh eyes. I must've missed it.

@teleprint-me
Contributor Author

teleprint-me commented May 1, 2024

Should I add the mi(s|x)tral BPE or SPM? I noticed that I can't convert the original models anymore with either script, but I can convert the safetensors.

Also, there are so many qwen models. 😅

Looks like there's a repo dedicated to just the tokenizer though, https://huggingface.co/Qwen/Qwen-tokenizer/tree/main

@teleprint-me
Contributor Author

teleprint-me commented May 1, 2024

I really don't like this approach. It's going to get really bad, really fast. I do see the intent here to automate the get_vocab_base_pre method, though I'm not sure it's any better than adding entries manually. I still stand by my original stance on coupling and increased complexity: it's just extra steps to the same end goal.

@teleprint-me
Contributor Author

Don't merge this yet. I have some ideas.

Contributor

github-actions bot commented May 7, 2024

📈 llama.cpp server for bench-server-baseline on Standard_NC4as_T4_v3 for phi-2-q4_0: 550 iterations 🚀

  • Concurrent users: 8, duration: 10m
  • HTTP request: avg=8497.66ms p(95)=20211.57ms fails=, finish reason: stop=481 truncated=69
  • Prompt processing (pp): avg=104.57tk/s p(95)=423.35tk/s
  • Token generation (tg): avg=33.11tk/s p(95)=46.9tk/s
  • ggml-org/models/phi-2/ggml-model-q4_0.gguf parallel=8 ctx-size=16384 ngl=33 batch-size=2048 ubatch-size=256 pp=1024 pp+tg=2048 branch=add-stablelm-hash commit=d9eaa44d1604843ae2d2b6c0060d17981fb4c3f6

prompt_tokens_seconds

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 550 iterations; y-axis: llamacpp:prompt_tokens_seconds]
predicted_tokens_seconds

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 550 iterations; y-axis: llamacpp:predicted_tokens_seconds]

kv_cache_usage_ratio

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 550 iterations; y-axis: llamacpp:kv_cache_usage_ratio]
requests_processing

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 550 iterations; y-axis: llamacpp:requests_processing]

4 participants