Questions

NOTE: Any time you edit a question info.json file on a local copy of PrairieLearn, you need to click “Load from disk” to reload the changes. Edits to HTML or Python files can be picked up by reloading the page. You might need to generate a new variant of a question to run new Python code.

NOTE: New-style PrairieLearn questions are marked with "type": "v3". This documentation only describes new-style questions, although old-style v2 questions are still supported in the code.

Directory structure

Questions are all stored inside the questions directory (or any subfolder) for a course. Each question is a single directory that contains all the files for that question. The name of the full question directory relative to questions is the QID (the "question ID") for that question. For example, here are three different questions:

questions
|
|-- fossilFuelsRadio          # first question, id is "fossilFuelsRadio"
|   |
|   +-- info.json             # metadata for the fossilFuelsRadio question
|   +-- server.py             # secret server-side code (optional)
|   `-- question.html         # HTML template for the question
|
|-- addVectors                # second question, id is "addVectors"
|   |
|   +-- info.json             # metadata for the addVectors question
|   +-- server.py
|   +-- question.html
|   +-- notes.docx            # more files, like notes on how the question works
|   +-- solution.docx         # these are secret (can't be seen by students)
|   |
|   +-- clientFilesQuestion/  # Files accessible to the client (web browser)
|   |   `-- fig1.png          # A client file (an image)
|   |
|   +-- tests/                # external grading files (see other doc)
|       `-- ...
|
`-- subfolder                 # a subfolder we can put questions in -- this itself can't be a question
    |
    `-- nestedQuestion        # third question, id is "subfolder/nestedQuestion"
        |
        +-- info.json         # metadata for the "subfolder/nestedQuestion" question
        `-- question.html

PrairieLearn assumes independent questions; nothing ties them together. However, each question could have multiple parts (inputs that are validated together).

Example questions are in the exampleCourse/questions directory inside PrairieLearn.

Question info.json

The info.json file for each question defines properties of the question. For example:

info.json
{
  "uuid": "cbf5cbf2-6458-4f13-a418-aa4d2b1093ff",
  "title": "Newton's third law",
  "topic": "Forces",
  "tags": ["secret", "Fa18"],
  "type": "v3",
  "comment": "You can add comments to JSON files using this property."
}
| Property | Type | Description |
| --- | --- | --- |
| uuid | string | Unique identifier. (Required; no default) |
| type | enum | Type of the question. Must be "v3" for new-style questions. (Required; no default) |
| title | string | The title of the question (e.g., "Addition of vectors in Cartesian coordinates"). (Required; no default) |
| topic | string | The category of question (e.g., "Vectors", "Energy"). Like the chapter in a textbook. (Required; no default) |
| tags | array | Optional extra tags associated with the question (e.g., ["secret", "concept"]). (Optional; default: no tags) |
| gradingMethod | enum | The grading method used for auto-grading this question. Valid values: Internal, External, or Manual (for manual-only questions). (Optional; default: Internal) |
| singleVariant | boolean | Whether the question is not randomized and only generates a single variant. (Optional; default: false) |
| showCorrectAnswer | boolean | Whether the question should display the answer panel. (Optional; default: true) |
| partialCredit | boolean | Whether the question will give partial points for fractional scores. (Optional; default: true) |
| externalGradingOptions | object | Options for externally graded questions. See the external grading docs. (Optional; default: none) |
| dependencies | object | External JavaScript or CSS dependencies to load. See below. (Optional; default: {}) |
| sharePublicly | boolean | Whether the question should be available for anyone to preview or use in their course. (Optional) |
| shareSourcePublicly | boolean | Whether the source code of the question should be available. (Optional) |
| sharingSets | array | Sharing sets which the question belongs to. (Optional) |

See the reference for infoQuestion.json for more information about what can be added to this file.

Question Dependencies

Your question can load client-side assets such as scripts or stylesheets from different sources. A full list of dependencies will be compiled based on the question's needs and any dependencies needed by page elements, then they will be deduplicated and loaded onto the page.

These dependencies are specified in the info.json file, and can be configured as follows:

info.json
{
  "dependencies": {
    "nodeModulesScripts": ["three/build/three.min.js"],
    "clientFilesQuestionScripts": ["my-question-script.js"],
    "clientFilesQuestionStyles": ["my-question-style.css"],
    "clientFilesCourseStyles": ["courseStylesheet1.css", "courseStylesheet2.css"]
  }
}

The different types of dependency properties available are summarized in this table:

| Property | Description |
| --- | --- |
| nodeModulesStyles | The styles required by this question, relative to [PrairieLearn directory]/node_modules. |
| nodeModulesScripts | The scripts required by this question, relative to [PrairieLearn directory]/node_modules. |
| clientFilesQuestionStyles | The styles required by this question, relative to the question's clientFilesQuestion directory. |
| clientFilesQuestionScripts | The scripts required by this question, relative to the question's clientFilesQuestion directory. |
| clientFilesCourseStyles | The styles required by this question, relative to [course directory]/clientFilesCourse. |
| clientFilesCourseScripts | The scripts required by this question, relative to [course directory]/clientFilesCourse. |

Question Sharing

Any question that is marked with "sharePublicly": true or "shareSourcePublicly": true will be considered and displayed as being published for free use under the CC-BY-NC license. Questions may also be privately shared with individual courses using sharing sets, as explained in the sharing documentation. The sharing sets a question belongs to are specified as a list of strings; these must match sharing sets declared in the course configuration.

info.json
{
  "sharingSets": ["python-exercises"]
}

Question question.html

The question.html is a template used to render the question to the student. A complete question.html example looks like:

question.html
<pl-question-panel>
  <p>
    A particle of mass $m = {{params.m}}\rm\ kg$ is observed to have acceleration $a =
    {{params.a}}\rm\ m/s^2$.
  </p>
  <p>What is the total force $F$ currently acting on the particle?</p>
</pl-question-panel>

<p>
  <pl-number-input
    answers-name="F"
    comparison="sigfig"
    digits="2"
    label="$F =$"
    suffix="$\rm m/s^2$"
  ></pl-number-input>
</p>

The question.html is regular HTML, with four special features:

  1. Any text in double-curly-braces (like {{params.m}}) is substituted with variable values. If you use triple-braces (like {{{params.html}}}), then raw HTML is substituted (don't use this unless you know you need it); see the sketch after this list. This uses Mustache templating.

  2. Special HTML elements (like <pl-number-input>) enable input and formatted output. See the list of PrairieLearn elements. Note that all submission elements must have unique answers-name attributes. This is necessary for questions to be graded properly.

  3. A special <markdown> tag allows you to write Markdown inline in questions.

  4. LaTeX equations are available within HTML by using $x^2$ for inline equations, and $$x^2$$ or \[x^2\] for display equations.
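Here is a minimal sketch of the triple-brace substitution; the parameter name prompt_html is hypothetical, and question.html would reference it as {{{params.prompt_html}}} so the <em> tag is inserted unescaped:

server.py
def generate(data):
    # Hypothetical parameter holding a raw HTML fragment; a double-brace
    # reference ({{params.prompt_html}}) would escape the <em> tag instead.
    data["params"]["prompt_html"] = "the <em>net</em> force"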

Question server.py

The server.py file for each question creates randomized question variants by generating random parameters and the corresponding correct answer. The server.py functions are:

| Function | Return object | Modifiable data keys | Unmodifiable data keys | Description |
| --- | --- | --- | --- | --- |
| generate() | | correct_answers, params | options, variant_seed | Generate the parameters and true answers for a new random question variant. Set data["params"][name] and data["correct_answers"][name] for any variables as needed. Modify the data dictionary in-place. |
| prepare() | | answers_names, correct_answers, params | options, variant_seed | Final question preparation after element code has run. Can modify data as necessary. Modify the data dictionary in-place. |
| render() | html (string) | | correct_answers, editable, feedback, format_errors, options, panel, params, partial_scores, raw_submitted_answers, score, submitted_answers, variant_seed, num_valid_submissions | Render the HTML for one panel and return it as a string. |
| parse() | | format_errors, submitted_answers, correct_answers, feedback | options, params, raw_submitted_answers, variant_seed | Parse the data["submitted_answers"][var] data entered by the student, modifying this variable. Modify the data dictionary in-place. |
| grade() | | correct_answers, feedback, format_errors, params, partial_scores, score, submitted_answers | options, raw_submitted_answers, variant_seed | Grade data["submitted_answers"][var] to determine a score. Store the score and any feedback in data["partial_scores"][var]["score"] and data["partial_scores"][var]["feedback"]. Modify the data dictionary in-place. |
| file() | object (string, bytes-like, file-like) | | correct_answers, filename, options, params, variant_seed | Generate a file object dynamically in lieu of a physical file. Trigger via type="dynamic" in the question element (e.g., pl-figure, pl-file-download). Access the requested filename via data["filename"]. If file() returns nothing, an empty string will be used. |

A complete question.html and server.py example looks like:

question.html
<pl-question-panel>
  <!-- params.x is defined by data["params"]["x"] in server.py's `generate()`. -->
  <!-- params.operation is defined by data["params"]["operation"] in server.py's `generate()`. -->
  If $x = {{params.x}}$ and $y$ is {{params.operation}} $x$, what is $y$?
</pl-question-panel>

<!-- y is defined by data["correct_answers"]["y"] in server.py's `generate()`. -->
<pl-number-input answers-name="y" label="$y =$"></pl-number-input>
<pl-submission-panel> {{feedback.y}} </pl-submission-panel>
server.py
import random
import math

def generate(data):
    # Generate random parameters for the question and store them in the data["params"] dict:
    data["params"]["x"] = random.randint(5, 10)
    data["params"]["operation"] = random.choice(["double", "triple"])

    # Also compute the correct answer (if there is one) and store in the data["correct_answers"] dict:
    if data["params"]["operation"] == "double":
        data["correct_answers"]["y"] = 2 * data["params"]["x"]
    else:
        data["correct_answers"]["y"] = 3 * data["params"]["x"]

def prepare(data):
    # This function will run after all elements have run `generate()`.
    # We can alter any of the element data here, but this is rarely needed.
    pass

def parse(data):
    # data["raw_submitted_answers"][NAME] is the exact raw answer submitted by the student.
    # data["submitted_answers"][NAME] is the answer parsed by elements (e.g., strings converted to numbers).
    # data["format_errors"][NAME] is the answer format error (if any) from elements.
    # We can modify or delete format errors if we have custom logic (rarely needed).
    # If there are format errors then the submission is "invalid" and is not graded.
    # To provide feedback but keep the submission "valid", data["feedback"][NAME] can be used.

    # As an example, we will reject negative numbers for "y":
    # check we don't already have a format error
    if "y" not in data["format_errors"] and data["submitted_answers"]["y"] < 0:
        data["format_errors"]["y"] = "Negative numbers are not allowed"

def grade(data):
    # All elements will have already graded their answers (if any) before this point.
    # data["partial_scores"][NAME]["score"] is the individual element scores (0 to 1).
    # data["score"] is the total score for the question (0 to 1).
    # We can modify or delete any of these if we have a custom grading method.
    # This function only runs if `parse()` did not produce format errors, so we can assume all data is valid.

    # grade(data) can also set data["format_errors"][NAME] if there is any reason to mark the question
    # invalid during grading time. This will cause the submission to not use up one of the student's attempts on exams.
    # You are encouraged, though, to put any checks for invalid data in `parse(data)` instead, since that
    # function is also called when the student hits "Save only", in manually graded questions, and in
    # assessments without real-time grading.

    # As an example, we will give half points for incorrect answers larger than "x",
    # only if not already correct. Use math.isclose to avoid possible floating point errors.
    if math.isclose(data["score"], 0.0) and data["submitted_answers"]["y"] > data["params"]["x"]:
        data["partial_scores"]["y"]["score"] = 0.5
        data["score"] = 0.5
        data["feedback"]["y"] = "Your value for $y$ is larger than $x$, but incorrect."

Question Data Storage

All persistent data related to a question variant is stored under different entries in the data dictionary. This dictionary is stored in JSON format by PrairieLearn, and as a result, everything in data must be JSON serializable. Some types in Python are natively JSON serializable, such as strings, lists, and dicts, while others are not, such as complex numbers, numpy ndarrays, and pandas DataFrames.

To account for this, the prairielearn Python library (usually imported as pl) provides the functions to_json and from_json (part of conversion_utils.py), which respectively serialize and deserialize various objects for storage as part of question data. Please refer to the docstrings of those functions for more information. Here is a simple example:

server.py
import numpy as np
import prairielearn as pl

def generate(data):
    data["params"]["numpy_array"] = pl.to_json(np.array([1.2, 3.5, 5.1]))

def grade(data):
    my_array = pl.from_json(data["params"]["numpy_array"])  # restores the numpy ndarray

The pl.to_json function supports keyword-only options for different types of encodings (e.g. pl.to_json(var, df_encoding_version=2)). These options have been added to allow for new encoding behavior while still retaining backwards compatibility with existing usage.

  • df_encoding_version controls the encoding of Pandas DataFrames. Encoding a DataFrame df with pl.to_json(df, df_encoding_version=2) allows for missing and datetime values, whereas pl.to_json(df, df_encoding_version=1) (the default) does not. However, df_encoding_version=1 supports complex numbers, while df_encoding_version=2 does not.
  • np_encoding_version controls the encoding of Numpy values. With np_encoding_version=1, only np.float64 and np.complex128 can be serialized by pl.to_json, and their types are erased after deserialization (they become native Python float and complex, respectively). It is recommended to set np_encoding_version=2, which supports serialization of all numpy scalars and does not erase types on deserialization. A short sketch of both options follows.
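The sketch below is minimal and the parameter names df and gain are hypothetical:

server.py
import numpy as np
import pandas as pd
import prairielearn as pl

def generate(data):
    # A DataFrame containing missing values needs df_encoding_version=2 to round-trip.
    df = pd.DataFrame({"x": [1.0, None]})
    data["params"]["df"] = pl.to_json(df, df_encoding_version=2)

    # np_encoding_version=2 preserves the numpy scalar type across serialization.
    data["params"]["gain"] = pl.to_json(np.float32(0.5), np_encoding_version=2)

def grade(data):
    df = pl.from_json(data["params"]["df"])      # back to a pandas DataFrame
    gain = pl.from_json(data["params"]["gain"])  # back to an np.float32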

Accessing files on disk

From within server.py functions, directories can be accessed as:

data["options"]["question_path"]                      # on-disk location of the current question directory
data["options"]["client_files_question_path"]         # on-disk location of clientFilesQuestion/
data["options"]["client_files_question_url"]          # URL location of clientFilesQuestion/ (only in render() function)
data["options"]["client_files_question_dynamic_url"]  # URL location of dynamically-generated question files (only in render() function)
data["options"]["client_files_course_path"]           # on-disk location of clientFilesCourse/
data["options"]["client_files_course_url"]            # URL location of clientFilesCourse/ (only in render() function)
data["options"]["server_files_course_path"]           # on-disk location of serverFilesCourse/

Generating dynamic files

You can dynamically generate file objects in server.py. These files never appear physically on the disk. They are generated in file() and returned as strings, bytes-like objects, or file-like objects. A complete question.html and server.py example using a dynamically generated fig.png looks like:

question.html
<p>Here is a dynamically-rendered figure showing a line of slope $a = {{params.a}}$:</p>
<img src="{{options.client_files_question_dynamic_url}}/fig.png" />
server.py
import random
import io
import matplotlib.pyplot as plt

def generate(data):
    data["params"]["a"] = random.choice([0.25, 0.5, 1, 2, 4])

def file(data):
    # We should look at data["filename"], generate the corresponding file,
    # and return the contents of the file as a string, bytes-like, or file-like object.
    # We can access data["params"].
    # As an example, we will generate the "fig.png" figure.

    if data["filename"] == "fig.png":                # check for the appropriate filename
        plt.plot([0, data["params"]["a"]], [0, 1])   # plot a line with slope "a"
        buf = io.BytesIO()                           # make a bytes object (a buffer)
        plt.savefig(buf, format="png")               # save the figure data into the buffer
        return buf

You can also use this functionality in file-based elements (pl-figure, pl-file-download) by setting type="dynamic".
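For example, the same dynamically generated figure could be embedded with a pl-figure element instead of a raw <img> tag (a minimal sketch):

question.html
<pl-figure file-name="fig.png" type="dynamic"></pl-figure>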

The singleVariant option for non-randomized questions

While it is recommended that all questions contain random parameters, sometimes it is impractical to do this. For questions that don't have a meaningful amount of randomization in them, the info.json file should set "singleVariant": true. This has the following effects:

  • On Homework-type assessments, each student will only ever be given one variant of the question, which they can repeatedly attempt without limit. The correct answer will never be shown to students.
  • On Exam-type assessments, all questions are effectively single-variant, so the singleVariant option has no effect.

The partialCredit option

By default, all questions award partial credit. For example, if there are two numeric answers in a question and only one of them is correct, the student will be awarded 50% of the available points.

To disable partial credit for a question, set "partialCredit": false in the info.json file for the question. This will mean that the question will either give 0% or 100%, and it will only give 100% if every element on the page is fully correct. Some question elements also provide more fine-grained control over partial credit.

In general, it is strongly recommended to leave partial credit enabled for all questions.

Using Markdown in questions

HTML and custom elements are great for flexibility and expressiveness. However, they're not great for working with large amounts of text, formatting text, and so on. Markdown is a lightweight plaintext markup syntax that's ideal for authoring simple but rich text. To enable this, PrairieLearn adds a special <markdown> tag to questions. When a <markdown> block is encountered, its contents are converted to HTML. Here's an example question.html that utilizes this element:

question.html
<markdown>
# Hello, world!

This is some **Markdown** text.
</markdown>

That question would be rendered like this:

<h1>Hello, world!</h1>
<p>This is some <strong>Markdown</strong> text.</p>

Warning

Markdown treats indented text as a code block, so the content inside <markdown> tags should not be indented to line up with the surrounding HTML. In the first example below the heading renders normally; in the second, the indentation causes it to be rendered as a literal code block.

<div>
  <markdown>
# Hello, world!
  </markdown>
</div>
<div>
  <markdown>
    # Hello, world!
  </markdown>
</div>

A few special behaviors have been added to enable Markdown to work better within the PrairieLearn ecosystem, as described below.

Markdown code blocks

Fenced code blocks (those using triple-backticks ```) are rendered as <pl-code> elements, which will then be rendered as usual by PrairieLearn. These blocks support specifying language and highlighted lines, which are then passed to the resulting <pl-code> element. Consider the following Markdown:

question.html
<markdown>
```cpp{1-2,4}
int i = 1;
int j = 2;
int k = 3;
int m = 4;
```
</markdown>

This will be rendered to the following <pl-code> element (which itself will eventually be rendered to standard HTML):

<pl-code language="cpp" highlight-lines="1-2,4">
int i = 1;
int j = 2;
int k = 3;
int m = 4;
</pl-code>

Escaping <markdown> tags

Under the hood, PrairieLearn is doing some very simple parsing to determine what pieces of a question to process as Markdown: it finds an opening <markdown> tag and processes everything up to the closing </markdown> tag. But what if you want to have a literal <markdown> or </markdown> tag in your question? PrairieLearn defines a special escape syntax to enable this. If you have <markdown#> or </markdown#> in a Markdown block, they will be rendered as <markdown> and </markdown> respectively (but will not be used to find regions of text to process as Markdown). You can use more hashes to produce different strings: for instance, to have <markdown###> show up in the output, write <markdown####> in your question.

Using LaTeX in questions (math mode)

PrairieLearn supports LaTeX equations in questions. You can view a full list of supported MathJax commands.

Inline equations can be written using $x^2$ or \(x^2\), and display equations can be written using $$x^2$$ or \[x^2\]. For example:

question.html
<p>Here is some inline math: $x^2$. Here is some display math: $$x^2$$</p>
<p>What is the total force $F$ currently acting on the particle?</p>

<markdown>
# LaTeX works in Markdown too!

$$\phi = \frac{1+\sqrt{5}}{2}$$
</markdown>

Using a dollar sign ($) without triggering math mode

Dollar signs by default delimit either inline ($ x $) or display ($$ x $$) math environments.

To escape either math environment, consider using PrairieLearn's <markdown> tag and inline code syntax.

<markdown>
What happens if we use a `$` to reference the spreadsheet cell location `$A$1`?
</markdown>

In scenarios where it does not make sense to use the code environment, consider disabling math entirely by adding the mathjax_ignore class to an HTML element.

<div class="mathjax_ignore">
  Mary has $5 to spend. If each apple costs $2 dollars and a banana costs $1 dollar, then how many
  pieces of fruit can Mary get?
</div>

<div>$x = 1$ and I have <span class="mathjax_ignore">$</span>5 dollars.</div>

Rendering panels from question.html

When a question is displayed to a student, there are three "panels" that will be shown at different stages: the "question" panel, the "submission" panel, and the "answer" panel. These display the question prompt, the student's submitted answer, and the correct answer, respectively.

All three panels display the same question.html template, but elements render differently in each panel. For example, the <pl-number-input> element displays an input box in the "question" panel, the submitted answer in the "submission" panel, and the correct answer in the "answer" panel.

Text in question.html can be set to only display in the "question" panel by wrapping it in the <pl-question-panel> element. This is useful for the question prompt, which doesn't need to be repeated in the "submission" and "answer" panels. There are also elements that only render in the other two panels.
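Here is a minimal sketch of panel-specific content; the answers-name v is hypothetical:

question.html
<pl-question-panel>
  <p>The question prompt appears only in the question panel.</p>
</pl-question-panel>

<pl-number-input answers-name="v" label="$v =$"></pl-number-input>

<pl-submission-panel>
  <p>This text appears only alongside the student's submission.</p>
</pl-submission-panel>

<pl-answer-panel>
  <p>This text appears only alongside the correct answer.</p>
</pl-answer-panel>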

Hiding staff comments in question.html

Please note that HTML or JavaScript comments in your question.html source may be visible to students in the rendered page source. To leave small maintenance notes to staff in your question.html source, you may prefer to use a Mustache comment that will stay hidden. Please refer to this FAQ item.
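For example, a Mustache comment is stripped during template rendering and never reaches the browser, while an HTML comment may still appear in the page source:

question.html
<!-- This HTML comment may be visible to students in the rendered page source. -->
{{! This Mustache comment is removed during rendering and stays hidden from students. }}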

Options for grading student answers

For most elements, there are four different ways of auto-grading the student answer. This applies to elements like pl-number-input and pl-string-input that allow students to input an answer of their choosing, but not pl-multiple-choice or pl-checkbox that are much more constrained. The four ways are:

  1. Set the correct answer using the correct-answer attributes for each element in question.html. This will use the built-in grading methods for each element. Given that this option is typically used for answers with a hard-coded value, without randomization, it is not expected to be used frequently.

  2. Set data["correct_answers"][VAR_NAME] in server.py. This is for questions where you can pre-compute a single correct answer based on the (randomized) parameters.

  3. Write a custom grade(data) function in server.py that checks data["submitted_answers"][VAR_NAME] and sets scores. This can do anything, including allowing multiple correct answers, testing properties of the submitted answer for correctness, computing the correct answers of some elements based on the values of other elements, etc.

  4. Write an external grader, though this is typically applied to more complex questions like coding.

If a question uses more than one of the above options, each option overrides the one before it. Even if option 3 (custom grade function) or 4 (external grader) is used, it can still be helpful to set a correct answer so that it is shown to students as a sample of what would be accepted. If there are multiple correct answers, it is a good idea to add a note with pl-answer-panel explaining that any correct answer would be accepted and that the displayed answer is only an example. Moreover, if there is no relevant information to display in the correct answer panel (e.g., a question has multiple correct answers and is meant to be attempted until a full score is achieved), then the panel can be hidden by setting "showCorrectAnswer": false in info.json.

Custom grading best practices

Although questions with custom grading usually don't use the grading functions from individual elements, it is highly recommended that built-in elements are used for student input, as these elements include helpful parsing and feedback by default. Parsed student answers are present in the data["submitted_answers"] dictionary.

Any custom grading function for the whole question should set data["score"] to a value between 0.0 and 1.0, which will be the final score for the question. If a custom grading function only grades a specific part of a question, it should set the corresponding dictionary entry in data["partial_scores"] and then recompute the final data["score"] value for the whole question. The question_utils.py file from the prairielearn Python library provides score recomputation helpers for this, such as set_weighted_score_data, which recomputes data["score"] from the entries in data["partial_scores"].

This can be used like so:

from prairielearn import set_weighted_score_data

def grade(data):
    # update partial_scores as necessary
    # ...

    # compute total question score
    set_weighted_score_data(data)

More detailed information can be found in the docstrings for these functions. If you prefer not to show score badges for individual parts, you may unset the dictionary entries in data["partial_scores"] once data["score"] has been computed.
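A minimal sketch of hiding the per-part score badges once the total score has been computed:

server.py
from prairielearn import set_weighted_score_data

def grade(data):
    # ... adjust data["partial_scores"] as needed ...
    set_weighted_score_data(data)  # recompute data["score"] from the partial scores
    data["partial_scores"] = {}    # unset the entries so no per-part badges are shown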

To set custom feedback, the grading function should set the corresponding entry in the data["feedback"] dictionary. These feedback entries are passed in when rendering the question.html, which can be accessed by using the mustache prefix {{feedback.}}. See the above question or this demo question for examples of this. Note that the feedback set in the data["feedback"] dictionary is meant for use by custom grader code in a server.py file, while the feedback set in data["partial_scores"] is meant for use by element grader code.

For generated floating point answers, it's important to use consistent rounding when displaying numbers to students and when computing the correct answer. For example, the following is problematic:

def generate(data):
    a = 33.33337
    b = 33.33333
    data["params"]["a_for_student"] = f'{a:.2f}'
    data["params"]["b_for_student"] = f'{a:.2f}'
    # Note how the correct answer is computed with full precision,
    # but the parameters displayed to students are rounded.
    data["correct_answers"]["c"] = a - b

Instead, the numbers should be rounded at the beginning:

import numpy as np

def generate(data):
    a = np.round(33.33337, 2)
    b = np.round(33.33333, 2)
    data["params"]["a_for_student"] = f'{a:.2f}'
    data["params"]["b_for_student"] = f'{b:.2f}'
    data["correct_answers"]["c"] = a - b

Similarly, for grading functions involving floating point numbers, avoid exact comparisons with ==. Floating point calculations in Python introduce rounding error, so comparisons with == might unexpectedly fail. Instead, use math.isclose, which performs comparisons within given tolerance values. The grading_utils.py file from the prairielearn Python library also offers several functions for more specialized comparisons; more detailed information can be found in their docstrings.

Note: Data stored under the "submitted_answers" key in the data dictionary may be of varying type. Specifically, the pl-integer-input element sometimes stores very large integers as strings instead of the Python int type used in most cases. The best practice for custom grader code in this case is to always cast the data to the desired type, for example int(data["submitted_answers"][name]). See the PrairieLearn elements documentation for more detailed discussion related to specific elements.
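A minimal sketch of this defensive cast; the answers-name n and the correctness check are hypothetical:

server.py
def grade(data):
    # pl-integer-input may store a very large integer as a string, so cast before using it.
    n = int(data["submitted_answers"]["n"])
    if n == int(data["correct_answers"]["n"]):
        data["partial_scores"]["n"]["score"] = 1.0
        data["score"] = 1.0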