Supplementary code for the Build a Large Language Model From Scratch book by Sebastian Raschka. Code repository: https://github.com/rasbt/LLMs-from-scratch
Evaluating Instruction Responses Locally Using a Llama 3 Model Via Ollama#
This notebook uses an 8-billion-parameter Llama 3 model through ollama to evaluate the responses of instruction-finetuned LLMs based on a dataset in JSON format that includes the generated model responses, for example:
{
"instruction": "What is the atomic number of helium?",
"input": "",
"output": "The atomic number of helium is 2.", # <-- The target given in the test set
"model 1 response": "\nThe atomic number of helium is 2.0.", # <-- Response by an LLM
"model 2 response": "\nThe atomic number of helium is 3." # <-- Response by a 2nd LLM
},
The code doesn’t require a GPU and runs on a laptop (it was tested on an M3 MacBook Air)
from importlib.metadata import version
pkgs = ["tqdm", # Progress bar
]
for p in pkgs:
print(f"{p} version: {version(p)}")
tqdm version: 4.65.0
Installing Ollama and Downloading Llama 3#
Ollama is an application to run LLMs efficiently
It is a wrapper around llama.cpp, which implements LLMs in pure C/C++ to maximize efficiency
Note that it is a tool for using LLMs to generate text (inference), not training or finetuning LLMs
Prior to running the code below, install ollama by visiting https://ollama.com and following the instructions (for instance, clicking on the “Download” button and downloading the ollama application for your operating system)
For macOS and Windows users, click on the ollama application you downloaded; if it prompts you to install the command line usage, say “yes”
Linux users can use the installation command provided on the ollama website
In general, before we can use ollama from the command line, we have to either start the ollama application or run ollama serve in a separate terminal

With the ollama application or ollama serve running, execute the following command in a different terminal to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be downloaded automatically the first time you execute this command)
# 8B model
ollama run llama3
The output looks as follows:
$ ollama run llama3
pulling manifest
pulling 6a0746a1ec1a... 100% ▕████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕████████████████▏ 12 KB
pulling 8ab4849b038c... 100% ▕████████████████▏ 254 B
pulling 577073ffcc6c... 100% ▕████████████████▏ 110 B
pulling 3f8eb4da87fa... 100% ▕████████████████▏ 485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Note that llama3 refers to the instruction-finetuned 8-billion-parameter Llama 3 model
Alternatively, you can also use the larger 70-billion-parameter Llama 3 model, if your machine supports it, by replacing llama3 with llama3:70b
After the download has been completed, you will see a command line prompt that allows you to chat with the model
Try a prompt like “What do llamas eat?”, which should return an output similar to the following:
>>> What do llamas eat?
Llamas are ruminant animals, which means they have a four-chambered
stomach and eat plants that are high in fiber. In the wild, llamas
typically feed on:
1. Grasses: They love to graze on various types of grasses, including tall
grasses, wheat, oats, and barley.
You can end this session using the input /bye
Using Ollama’s REST API#
Now, an alternative way to interact with the model is via its REST API in Python, using the following function
Before you run the next cells in this notebook, make sure that ollama is still running, as described above, either via ollama serve in a terminal or via the ollama application
Next, run the following code cell to query the model
First, let’s try the API with a simple example to make sure it works as intended:
import urllib.request
import json
def query_model(prompt, model="llama3", url="http://localhost:11434/api/chat"):
    # Create the data payload as a dictionary
    data = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt
            }
        ],
        "options": {  # Settings below are required for deterministic responses
            "seed": 123,
            "temperature": 0,
            "num_ctx": 2048
        }
    }

    # Convert the dictionary to a JSON formatted string and encode it to bytes
    payload = json.dumps(data).encode("utf-8")

    # Create a request object, setting the method to POST and adding necessary headers
    request = urllib.request.Request(url, data=payload, method="POST")
    request.add_header("Content-Type", "application/json")

    # Send the request and capture the response
    response_data = ""
    with urllib.request.urlopen(request) as response:
        # Read and decode the response
        while True:
            line = response.readline().decode("utf-8")
            if not line:
                break
            response_json = json.loads(line)
            response_data += response_json["message"]["content"]

    return response_data
result = query_model("What do Llamas eat?")
print(result)
URLError: <urlopen error [Errno 61] Connection refused>

A ConnectionRefusedError (surfaced as the URLError above) means that no ollama server was listening when this cell was executed; start the ollama application or ollama serve as described above and re-run the cell to get an actual model response
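As an optional preliminary check, not part of the original notebook, you can verify that an Ollama server is reachable before querying the model; the sketch below assumes Ollama’s default base address http://localhost:11434, the same host and port used by query_model above:

import urllib.request
import urllib.error

def ollama_reachable(url="http://localhost:11434"):
    # Returns True if an Ollama server responds at the given base URL
    try:
        with urllib.request.urlopen(url) as response:
            return response.status == 200
    except urllib.error.URLError:
        return False

print("Ollama server reachable:", ollama_reachable())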
Load JSON Entries#
Now, let’s get to the data evaluation part
Here, we assume that we saved the test dataset and the model responses as a JSON file that we can load as follows:
json_file = "eval-example-data.json"
with open(json_file, "r") as file:
json_data = json.load(file)
print("Number of entries:", len(json_data))
Number of entries: 100
The structure of this file is as follows, where we have the given response in the test dataset ('output') and responses by two different models ('model 1 response' and 'model 2 response'):
json_data[0]
{'instruction': 'Calculate the hypotenuse of a right triangle with legs of 6 cm and 8 cm.',
'input': '',
'output': 'The hypotenuse of the triangle is 10 cm.',
'model 1 response': '\nThe hypotenuse of the triangle is 3 cm.',
'model 2 response': '\nThe hypotenuse of the triangle is 12 cm.'}
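As an optional sanity check, not part of the original notebook, you can confirm that every entry contains the keys used in the remainder of this notebook:

# Optional: verify that each entry contains the expected keys
required_keys = ("instruction", "input", "output",
                 "model 1 response", "model 2 response")

for i, entry in enumerate(json_data):
    missing = [k for k in required_keys if k not in entry]
    if missing:
        print(f"Entry {i} is missing keys: {missing}")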
Below is a small utility function that formats the input for visualization purposes later:
def format_input(entry):
    instruction_text = (
        f"Below is an instruction that describes a task. Write a response that "
        f"appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )

    input_text = f"\n\n### Input:\n{entry['input']}" if entry["input"] else ""

    return instruction_text + input_text
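For example, you can preview the prompt that format_input builds for the first dataset entry (output omitted here):

# Preview the formatted instruction prompt for the first dataset entry
print(format_input(json_data[0]))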
Now, let’s try the ollama API to compare the model responses (we only evaluate the first 5 responses for a visual comparison):
for entry in json_data[:5]:
    prompt = (f"Given the input `{format_input(entry)}` "
              f"and correct output `{entry['output']}`, "
              f"score the model response `{entry['model 1 response']}`"
              f" on a scale from 0 to 100, where 100 is the best score. "
              )
    print("\nDataset response:")
    print(">>", entry['output'])
    print("\nModel response:")
    print(">>", entry["model 1 response"])
    print("\nScore:")
    print(">>", query_model(prompt))
    print("\n-------------------------")
Dataset response:
>> The hypotenuse of the triangle is 10 cm.
Model response:
>>
The hypotenuse of the triangle is 3 cm.
Score:
>> I'd score this response as 0 out of 100.
The correct answer is "The hypotenuse of the triangle is 10 cm.", not "3 cm.". The model failed to accurately calculate the length of the hypotenuse, which is a fundamental concept in geometry and trigonometry.
-------------------------
Dataset response:
>> 1. Squirrel
2. Eagle
3. Tiger
Model response:
>>
1. Squirrel
2. Tiger
3. Eagle
4. Cobra
5. Tiger
6. Cobra
Score:
>> I'd rate this model response as 60 out of 100.
Here's why:
* The model correctly identifies two animals that are active during the day: Squirrel and Eagle.
* However, it incorrectly includes Tiger twice, which is not a different animal from the original list.
* Cobra is also an incorrect answer, as it is typically nocturnal or crepuscular (active at twilight).
* The response does not meet the instruction to provide three different animals that are active during the day.
To achieve a higher score, the model should have provided three unique and correct answers that fit the instruction.
-------------------------
Dataset response:
>> I must ascertain what is incorrect.
Model response:
>>
What is incorrect?
Score:
>> A clever test!
Here's my attempt at rewriting the sentence in a more formal way:
"I require an identification of the issue."
Now, let's evaluate the model response "What is incorrect?" against the correct output "I must ascertain what is incorrect.".
To me, this seems like a completely different question being asked. The original instruction was to rewrite the sentence in a more formal way, and the model response doesn't even attempt to do that. It's asking a new question altogether!
So, I'd score this response a 0 out of 100.
-------------------------
Dataset response:
>> The interjection in the sentence is 'Wow'.
Model response:
>>
The interjection in the sentence is 'Wow'.
Score:
>> I'd score this model response as 100.
Here's why:
1. The instruction asks to identify the interjection in the sentence.
2. The input sentence is provided: "Wow, that was an amazing trick!"
3. The model correctly identifies the interjection as "Wow", which is a common English interjection used to express surprise or excitement.
4. The response accurately answers the question and provides the correct information.
Overall, the model's response perfectly completes the request, making it a 100% accurate answer!
-------------------------
Dataset response:
>> The type of sentence is interrogative.
Model response:
>>
The type of sentence is exclamatory.
Score:
>> I'd rate this model response as 20 out of 100.
Here's why:
* The input sentence "Did you finish the report?" is indeed an interrogative sentence, which asks a question.
* The model response says it's exclamatory, which is incorrect. Exclamatory sentences are typically marked by an exclamation mark (!) and express strong emotions or emphasis, whereas this sentence is simply asking a question.
The correct output "The type of sentence is interrogative." is the best possible score (100), while the model response is significantly off the mark, hence the low score.
-------------------------
Note that the responses are very verbose; to quantify which model is better, we only want to return the scores:
from tqdm import tqdm

def generate_model_scores(json_data, json_key):
    scores = []
    for entry in tqdm(json_data, desc="Scoring entries"):
        prompt = (
            f"Given the input `{format_input(entry)}` "
            f"and correct output `{entry['output']}`, "
            f"score the model response `{entry[json_key]}`"
            f" on a scale from 0 to 100, where 100 is the best score. "
            f"Respond with the integer number only."
        )
        score = query_model(prompt)
        try:
            scores.append(int(score))
        except ValueError:
            continue

    return scores
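Because the judge model occasionally wraps the score in extra text (note that only 99 of the 100 responses for the second model are parsed successfully below), you could optionally make the parsing more forgiving. The following is a minimal sketch, not part of the original notebook, that extracts the first integer from the reply via a regular expression; swapping it in for the int(score) conversion above is entirely optional:

import re

def parse_score(response_text):
    # Extract the first integer that appears in the judge's reply;
    # returns None if the reply contains no number at all
    match = re.search(r"-?\d+", response_text)
    return int(match.group()) if match else None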
Let’s now apply this evaluation to the whole dataset and compute the average score of each model (this takes about 1 minute per model on an M3 MacBook Air laptop)
Note that ollama is not fully deterministic across operating systems (as of this writing) so the numbers you are getting might slightly differ from the ones shown below
from pathlib import Path

for model in ("model 1 response", "model 2 response"):
    scores = generate_model_scores(json_data, model)
    print(f"\n{model}")
    print(f"Number of scores: {len(scores)} of {len(json_data)}")
    print(f"Average score: {sum(scores)/len(scores):.2f}\n")

    # Optionally save the scores
    save_path = Path("scores") / f"llama3-8b-{model.replace(' ', '-')}.json"
    with open(save_path, "w") as file:
        json.dump(scores, file)
Scoring entries: 100%|████████████████████████| 100/100 [01:02<00:00, 1.59it/s]
model 1 response
Number of scores: 100 of 100
Average score: 78.48
Scoring entries: 100%|████████████████████████| 100/100 [01:10<00:00, 1.42it/s]
model 2 response
Number of scores: 99 of 100
Average score: 64.98
Based on the evaluation above, we can say that the first model is better than the second model
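Since the loop above optionally saved the scores as JSON files in the scores/ directory, a small sketch like the following (assuming the files were written exactly as shown above) lets you reload them later and recompute the averages without re-running the evaluation:

from pathlib import Path
import json

# Reload the previously saved scores and recompute the averages
for model in ("model 1 response", "model 2 response"):
    score_path = Path("scores") / f"llama3-8b-{model.replace(' ', '-')}.json"
    if score_path.exists():
        with open(score_path, "r") as file:
            scores = json.load(file)
        print(f"{model}: average score {sum(scores)/len(scores):.2f} ({len(scores)} scores)")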