
Can AI Code?

A cute robot working on a laptop

A self-evaluating interview for AI coding models.

Key Ideas

  • Interview questions written by humans, test taken by AI
  • Inference scripts for all common API providers and CUDA-enabled quantization runtimes
  • Sandbox environment (Docker-based) for validating untrusted Python and NodeJS code
  • Evaluate effects of prompting techniques and sampling parameters on LLM coding performance
  • Evaluate LLM coding performance degradation due to quantization

News

5/16 Evaluate rombodawg/Everyone-Coder-33b-v2-Base (FP16), rombodawg/DeepMagic-Coder-7b-Alt (FP16), tiiuae/falcon-11B (FP16 and NF4). Something appears to be wrong with falcon-11b.

5/15 Evaluate Llama3-8B BF16 GGUFs.

5/12 Evaluate Yi-1.5 6B, 9B and 34B (FP16).

5/11 Evaluate ajibawa-2023/Code-Llama-3-8B (FP16), bigcode/starcoder2-15b-instruct-v0.1 and sft (FP16 and AWQ), HuggingFaceH4/starchat2-15b-v0.1 (FP16), rombodawg/Llama-3-8B-Instruct-Coder-v2 (FP16), Mixtral-Instruct-8x22B (AWQ, GPTQ, EXL2-4bit), WizardLM2-8x22B (AWQ, EXL2-4bit).

5/7 Evaluate ibm-granite/granite-code-instruct 3B (FP16, NF4), 8B (FP16, NF4), 20B (FP16, NF4) and 34B (FP16).

5/6 Evaluate Qwen/CodeQwen1.5-7B-Chat (FP16, AWQ, Q8), openchat/openchat-3.5-0106-gemma (FP16), NousResearch/Hermes-2-Pro-Llama-3-8B (FP16).

5/6 Evaluate CodeGemma-7b-It (FP16, NF4, Q6_K_M) for instruction following, CodeGemma-2b and 7b for Completion and FIM (FP16, NF4, Q8).

5/5 Evaluate Phi-3 (transformers and GGUF) - there seems to be something wrong with the GGUF; even at FP16, performance is lower than with transformers.

Test Suites

junior-v2 is a multi-language (Python, JavaScript) suite of 12 tests created for this project to test small LLM coding performance. This project provides all necessary components to execute this evaluation.

🚧 humaneval is a Python-only suite of 164 tests created by OpenAI. This project provides template scripts to prepare and execute the humaneval interview, as well as result-extraction scripts to feed their evaluator. See https://github.com/openai/human-eval for more information.

Results data

All model answers and evaluation results are now included inside this repository! Install a recent release of Streamlit (pip install streamlit==1.23), then run streamlit run app.py or streamlit run compare-app.py to launch the results webapps locally.

Results HumanEval

🚧 humaneval/ development work is currently paused; there are other projects that are much further along.

See https://github.com/my-other-github-account/llm-humaneval-benchmarks and https://github.com/abacaj/code-eval for large lists of Humaneval LLM benchmark results.

Repository Structure

Interviews

  • junior-v2/*.yaml - junior coder interview questions (stable)
  • senior/*.yaml - senior coder interview questions (WIP)

Prepare

  • prompts/*.txt - LLM prompt templates for the various models
  • prepare.py - Applies templates to the questions, turning them into language- and model-specific prompts suitable for the interview

Prompts

See prompts/ for all prompts referenced in the leaderboard.

Interview

  • params/*.json - Sampling hyper-parameter sets (used by all interview scripts)
  • interview-*.py - Interview scripts

Parameters

See params/ for all parameter sets referenced in the leaderboard.

Evaluate

  • evaluate.py - Runs the generated code inside the sandbox and grades each answer against its Checks

Compare

  • app.py - Streamlit webapp for exploring results
  • compare-app.py - Streamlit webapp for comparing results across models

Interviewers: API

API Runtime            | Script
LiteLLM (OpenAI, etc.) | interview-litellm.py
OobaBooga/KoboldCpp    | interview-oobabooga.py
Huggingface Inference  | interview-hfinference.py
Gradio (HF Spaces)     | interview-gradio.py

Interviewers: CUDA (Local)

Quantization Type     | Script                | Dependency
GGUF                  | interview-llamacpp.py | llamacpp or ggml binary
GPTQ (AutoGptQ)       | interview-cuda.py     | auto-gptq==0.6.0
GPTQ (ExLlama)        | interview-cuda.py     | exllama @ 3b013cd53c7d413cf99ca04c7c28dd5c95117c0d
EXL2, GPTQ (ExLlama2) | interview-cuda.py     | exllamav2 @ 0.0.12
HQQ                   | interview-cuda.py     | hqq @ 0.1.1
AWQ, FP16 (vLLM)      | interview-cuda.py     | vllm==0.3.0
CTranslate2           | interview-cuda.py     | ctranslate2>=3.16.0
bitsandbytes          | interview-cuda.py     | bitsandbytes==0.41.3
FP16 (Transformers)   | interview-cuda.py     | transformers==4.37.2

Running on Modal

The recommended Modal wrapper is interview_modal_cuda11.py, which builds a CUDA 11.8-based container with all of the above dependencies working. An interview_modal_cuda12.py is also provided, but AutoGPTQ and CTranslate2 are not compatible with it.

Unfortunately, the nature of Modal does not allow command-line selection of either the LLM model or the runtime engine.

To select models, open the script and uncomment the .run_function(download...) line of choice. Note that only one model can be selected at a time. To add a new model, implement a new download... function.

To select runtime, open the script and uncomment one of the RUNTIME options. Note that for transformers you must also specify QUANT.

Question Format

A set of interview questions is a folder of .yaml files. Each Question is a top-level key:

SanityList:
    Signature: "things()"
    Input: "with no inputs"
    Output: "a list with three values: the number 5, the string 'foobar', the capital city of Spain"
    Fact: "the capital city of Spain is Madrid"
    Description: "List function, see if the model can combine input facts with internal knowledge."
    Checks:
        input_name:
            assert: "f.name"
            eq: "things"

In this example SanityList is the name of the interview question.

The first four fields are used by prepare.py to create the interview:

  • Signature is the desired function signature
  • Input describes the function inputs
  • Output describes the function outputs
  • Fact is optional and provides any context that is required to correctly perform the task

These 4 variables along with language (either python or javascript) are used to expand templates in prompts/.
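To make the expansion concrete, here is a simplified sketch of the prepare step; the actual templates in prompts/ and the logic in prepare.py may use different placeholder syntax and wording:

# Simplified sketch of template expansion, not the actual prepare.py logic.
TEMPLATE = (
    "Write a {language} function {Signature}, {Input}, returning {Output}. "
    "Note that {Fact}."
)

question = {
    "Signature": "things()",
    "Input": "with no inputs",
    "Output": "a list with three values: the number 5, the string 'foobar', "
              "the capital city of Spain",
    "Fact": "the capital city of Spain is Madrid",
}

# Expands to: "Write a python function things(), with no inputs, returning a list ..."
prompt = TEMPLATE.format(language="python", **question)
print(prompt)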

The last two fields are used by evaluate.py to judge the results:

  • Description is a human-readable explanation of why this test is useful
  • Checks defines the expected behavior of the output.

Checks and the 'f' object

Each check has a name, an assert value (Python code), and an expected eq value.

The f object represents the sandbox view of the function. Static analysis is performed on the function signature to extract the f.name and f.args fields, while f.call allows for function evaluation.
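Here is a minimal sketch of how such a check could be applied, assuming f exposes name, args and call as described above (illustrative only, not the project's evaluate.py):

# Illustrative sketch of check evaluation, not the project's evaluate.py.
from types import SimpleNamespace

def run_check(f, assert_expr, expected):
    # assert_expr is Python source referencing f (e.g. "f.name"); expected is the eq value.
    actual = eval(assert_expr, {"f": f})
    return {"assert": assert_expr, "got": actual, "expected": expected, "passed": actual == expected}

# Sandbox view of a SanityList answer: name/args come from static analysis, call runs the code.
f = SimpleNamespace(name="things", args=[], call=lambda: [5, "foobar", "Madrid"])

print(run_check(f, "f.name", "things"))
# {'assert': 'f.name', 'got': 'things', 'expected': 'things', 'passed': True}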

Output formats

All scripts output automatically named .ndjson files to the results/ directory.

Each stage outputs a superset of the fields from the stage before it, so it's possible to feed eval/interview output back into interview (to re-run the questions) or back into eval (to re-run the evaluation).

prepare

results/prepare_{interview}_{languages}_{template}.ndjson

Fields:

  • all Question fields (Signature, Input, Output, Fact, Description)
  • name
  • language
  • prompt
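For illustration, a single prepare record for the SanityList example above could look like the following (the prompt text here is hypothetical):

import json

# Hypothetical prepare record; field names match the list above, values follow the SanityList example.
record = {
    "name": "SanityList",
    "language": "python",
    "Signature": "things()",
    "Input": "with no inputs",
    "Output": "a list with three values: the number 5, the string 'foobar', the capital city of Spain",
    "Fact": "the capital city of Spain is Madrid",
    "Description": "List function, see if the model can combine input facts with internal knowledge.",
    "prompt": "Write a python function things(), with no inputs, returning a list ...",
}
print(json.dumps(record))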

interview

results/interview_{interview}_{languages}_{template}_{templateout}_{params}_{model}_{timestamp}.ndjson

Fields:

  • all prepare fields
  • model
  • params
  • answer
  • runtime

eval

results/eval_{interview}_{languages}_{template}_{templateout}_{params}_{model}_{timestamp}.ndjson

Fields:

  • all interview fields
  • status
  • passed
  • total
  • checks
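For example, an eval results file can be consumed like this (the filename below is hypothetical but follows the naming pattern above):

import json

# Hypothetical filename following the eval_{interview}_{languages}_{template}_{templateout}_{params}_{model}_{timestamp} pattern.
path = "results/eval_junior-v2_python-javascript_prompt_templateout_params_model_1716000000.ndjson"

with open(path) as fh:
    rows = [json.loads(line) for line in fh if line.strip()]

# Each row carries the interview fields plus status, passed, total and checks.
for row in rows:
    print(f"{row['name']} [{row['language']}]: {row['passed']}/{row['total']} ({row['status']})")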

Roadmap / Future Work