Getting Started

Installing

L2P can be installed with pip:

pip install l2p

Using L2P

First, import the whole L2P library, or only the modules you need (see L2P):

from l2p import *
# OR
from l2p import DomainBuilder, TaskBuilder, PromptBuilder, FeedbackBuilder

# util functions
from l2p.utils import *

L2P requires access to an LLM. Implement your LLM class by subclassing the abstract BaseLLM(ABC) class. For a quickstart, the library ships with built-in support for OpenAI's models.

First, export your API key in the shell:

export OPENAI_API_KEY='YOUR-KEY' # e.g. OPENAI_API_KEY='sk-123456'

Then instantiate the model in Python:

import os
from l2p.llm.openai import OPENAI

engine = "gpt-4o-mini"
api_key = os.environ.get('OPENAI_API_KEY')
openai_llm = OPENAI(model=engine, api_key=api_key)
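
To target a different provider, the same pattern applies: subclass BaseLLM and implement its abstract methods. The toy ABC below only illustrates the subclassing pattern; it is not the real BaseLLM interface, so check the abstract methods defined in l2p for the exact signatures:

```python
from abc import ABC, abstractmethod

class ToyBaseLLM(ABC):
    """Stand-in for L2P's BaseLLM, for illustration only."""

    @abstractmethod
    def query(self, prompt: str) -> str:
        """Send a prompt to the model and return its raw text output."""
        ...

class EchoLLM(ToyBaseLLM):
    """A trivial 'model' that echoes the prompt; replace the body
    with calls to your provider's client."""

    def query(self, prompt: str) -> str:
        return f"echo: {prompt}"

llm = EchoLLM()
print(llm.query("hello"))  # echo: hello
```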

Users can pass any prompt to their LLM, as long as the structured output prompt follows the respective function (see Templates). We recommend using the PromptBuilder class, which helps organize prompts. Here is a simple example:

from l2p.prompt_builder import PromptBuilder

role_desc = "Your role is to..."
format_desc = "You must follow this format..."
ex_desc = "Here is an example..."
task_desc = "{PLACEHOLDER}"

prompt = PromptBuilder(
    role=role_desc,
    format=format_desc,
    examples=[ex_desc],
    task=task_desc
)

print(prompt.generate_prompt())

Generated prompt:

[ROLE]: Your role is to...

------------------------------------------------
[FORMAT]: You must follow this format...

------------------------------------------------
[EXAMPLE(S)]:
Example 1:
Here is an example...

------------------------------------------------
[TASK]:
Here is the task to solve:
{PLACEHOLDER}
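
The {PLACEHOLDER} in the task slot is an ordinary Python format field, so you can substitute a concrete description before sending the prompt to the LLM. This is a plain str.format call, nothing L2P-specific:

```python
prompt_template = "[TASK]:\nHere is the task to solve:\n{domain_desc}"

# substitute the concrete description into the placeholder
filled = prompt_template.format(domain_desc="A robot arm stacks blocks.")
print(filled)
```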

Build PDDL domain components using the DomainBuilder class. This is an example of extracting PDDL types using PromptBuilder:

import os
from l2p.domain_builder import DomainBuilder
from l2p.prompt_builder import PromptBuilder
from l2p.llm.openai import OPENAI
from l2p.utils import format_types, load_file

api_key = os.environ.get('OPENAI_API_KEY')
gpt_4o_mini = OPENAI(model="gpt-4o-mini", api_key=api_key)

domain_builder = DomainBuilder()
types_prompt = PromptBuilder(
    role="You are a PDDL assistant that is helping me design :types.",
    format=load_file("templates/domain_templates/formalize_type.txt"),
    task="{domain_desc}"
)

domain_desc = "The AI agent here is a mechanical robot arm that can pick and " \
    "place the blocks. Only one block may be moved at a time: it may either " \
    "be placed on the table or placed atop another block. Because of this, " \
    "any blocks that are, at a given time, under another block cannot be moved."

# extract types via LLM
types, llm_output, validation_info = domain_builder.formalize_types(
    model=gpt_4o_mini,
    domain_desc=domain_desc,
    prompt_template=types_prompt.generate_prompt()
)

# print out types
print(format_types(types=types))

Generated types output:

{
    'block': '; A physical object that can be picked up and moved by the robot arm.',
    'table': '; A flat surface where blocks can be placed.',
    'robot_arm': '; The mechanical device capable of picking and placing blocks.'
}
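
L2P's format_types handles rendering for you; purely as an illustration of what this dictionary corresponds to in PDDL, a hand-rolled conversion to a :types section might look like:

```python
types = {
    'block': '; A physical object that can be picked up and moved by the robot arm.',
    'table': '; A flat surface where blocks can be placed.',
    'robot_arm': '; The mechanical device capable of picking and placing blocks.',
}

# render each type name followed by its inline PDDL comment
lines = [f"    {name} {comment}" for name, comment in types.items()]
pddl_types = "(:types\n" + "\n".join(lines) + "\n)"
print(pddl_types)
```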

Build PDDL problem components using the TaskBuilder class. This is an example of extracting PDDL initial states:

import os
from l2p.task_builder import TaskBuilder
from l2p.prompt_builder import PromptBuilder
from l2p.llm.openai import OPENAI
from l2p.utils import format_initial, load_file

api_key = os.environ.get('OPENAI_API_KEY')
gpt_4o_mini = OPENAI(model="gpt-4o-mini", api_key=api_key)

task_builder = TaskBuilder()
init_prompt = PromptBuilder(
    role="You are a PDDL assistant that is helping me design :init problems.",
    format=load_file("templates/task_templates/formalize_initial.txt"),
    task="{problem_desc}"
)

problem_desc = "There are four blocks currently. The blue block is on the red " \
    "which is on the yellow. The yellow and the green are on the table. I want " \
    "the red on top of the green."

# extract initial states via LLM
initial_states, llm_output, validation_info = task_builder.formalize_initial_state(
    model=gpt_4o_mini,
    problem_desc=problem_desc,
    prompt_template=init_prompt.generate_prompt()
)

print(format_initial(initial_states=initial_states))

Generated initial states:

(on blue red)
(on red yellow)
(on yellow table)
(on green table)
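
Each returned fact is an s-expression: a predicate name applied to objects. L2P already returns these as structured data, but parsing one of the printed lines back into a (name, args) pair is a small string-handling exercise, shown here purely for illustration:

```python
facts = ["(on blue red)", "(on red yellow)", "(on yellow table)", "(on green table)"]

def parse_fact(fact: str) -> tuple[str, list[str]]:
    """Split '(pred arg1 arg2)' into ('pred', ['arg1', 'arg2'])."""
    tokens = fact.strip("()").split()
    return tokens[0], tokens[1:]

for f in facts:
    print(parse_fact(f))  # e.g. ('on', ['blue', 'red'])
```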

Build LLM feedback components using the FeedbackBuilder class. This is an example of getting LLM feedback on extracted types:

import os
from l2p.feedback_builder import FeedbackBuilder
from l2p.prompt_builder import PromptBuilder
from l2p.llm.openai import OPENAI
from l2p.utils import load_file

api_key = os.environ.get('OPENAI_API_KEY')
gpt_4o_mini = OPENAI(model="gpt-4o-mini", api_key=api_key)

domain_desc = "The AI agent here is a mechanical robot arm that can pick and " \
    "place the blocks. Only one block may be moved at a time: it may either " \
    "be placed on the table or placed atop another block. Because of this, " \
    "any blocks that are, at a given time, under another block cannot be moved."

types = {
    'block': '; A physical object that can be picked up and moved by the robot arm.',
    'table': '; A flat surface where blocks can be placed.',
    'robot_arm': '; The mechanical device capable of picking and placing blocks.',
    'carpet': '; a carpet for a room.'  # unnecessary type for domain
}

feedback_builder = FeedbackBuilder()

feedback_prompt = PromptBuilder(
    role="You are a PDDL assistant that is providing feedback to :types.",
    format=load_file("templates/feedback_templates/feedback.txt"),
    task="{domain_desc} \n\n##Types\n{types}"
)

no_feedback, llm_output = feedback_builder.type_feedback(
    model=gpt_4o_mini,
    domain_desc=domain_desc,
    feedback_template=feedback_prompt.generate_prompt(),
    feedback_type="llm",
    types=types
)

print(no_feedback, llm_output)

Generated feedback:

[NO FEEDBACK]: False

[LLM OUTPUT]
### JUDGMENT
```
The type "carpet" seems unnecessary in the context of the task, as it does not relate to the actions of picking and placing blocks. Consider removing it to maintain focus on relevant types.
```
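
A common pattern is to loop formalization and feedback until the LLM reports no remaining issues, capped at a fixed number of rounds. The sketch below uses stub functions standing in for the formalize_types and type_feedback calls shown above; the stubs and their behavior are invented purely to illustrate the control flow:

```python
MAX_ROUNDS = 3

def formalize(desc: str, notes: str) -> dict:
    """Stub for a formalize_* call; a real loop would re-prompt the LLM
    with the accumulated feedback notes appended to the prompt."""
    types = {'block': '; ...', 'carpet': '; ...'}
    if "carpet" in notes:
        types.pop('carpet')  # pretend the LLM applied the feedback
    return types

def get_feedback(types: dict) -> tuple[bool, str]:
    """Stub for type_feedback; returns (no_feedback, critique)."""
    if 'carpet' in types:
        return False, 'Remove the unnecessary "carpet" type.'
    return True, ""

notes = ""
for _ in range(MAX_ROUNDS):
    types = formalize("blocksworld", notes)
    no_feedback, critique = get_feedback(types)
    if no_feedback:
        break
    notes += critique

print(types)  # {'block': '; ...'}
```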

Below are complete, runnable usage examples. This is the general setup for building domain predicates:

import os
from l2p.domain_builder import DomainBuilder
from l2p.llm.openai import OPENAI
from l2p.utils import format_expression, load_file

domain_builder = DomainBuilder()

api_key = os.environ.get('OPENAI_API_KEY')
gpt_4o_mini = OPENAI(model="gpt-4o-mini", api_key=api_key)

# retrieve prompt information
base_path = 'tests/usage/prompts/domain/'
domain_desc = load_file(f'{base_path}blocksworld_domain.txt')
predicates_prompt = load_file(f'{base_path}formalize_predicates.txt')
types = load_file(f'{base_path}types.json')
action = load_file(f'{base_path}action.json')

# extract predicates via LLM
predicates, llm_output, validation_info = domain_builder.formalize_predicates(
    model=gpt_4o_mini,
    domain_desc=domain_desc,
    prompt_template=predicates_prompt,
    types=types
)

# format key info into PDDL strings
predicate_str = "\n".join([pred["raw"].replace(":", " ; ") for pred in predicates])

print(f"PDDL domain predicates:\n{predicate_str}")

The output is:

PDDL domain predicates:
- (holding ?a - arm ?b - block) ;  true if the arm ?a is currently holding the block ?b
- (on_table ?b - block) ;  true if the block ?b is on the table
- (clear ?b - block) ;  true if the block ?b is clear (no block on top of it)
- (on_top ?b1 - block ?b2 - block) ;  true if the block ?b1 is on top of the block ?b2
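
These predicate strings slot straight into a domain's :predicates section. As an illustration only (DomainBuilder handles full domain generation in the library), a hand-rolled assembly might look like:

```python
predicates = [
    "(holding ?a - arm ?b - block)",
    "(on_table ?b - block)",
    "(clear ?b - block)",
]

# wrap the predicate declarations in a ":predicates" section
body = "\n".join(f"    {p}" for p in predicates)
predicates_block = f"(:predicates\n{body}\n)"
print(predicates_block)
```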

Here is how you would set up a PDDL problem:

import os
from l2p.llm.openai import OPENAI
from l2p.utils.pddl_types import Predicate
from l2p.task_builder import TaskBuilder
from l2p.utils import load_file

task_builder = TaskBuilder()  # initialize task builder class

api_key = os.environ.get('OPENAI_API_KEY')
llm = OPENAI(model="gpt-4o-mini", api_key=api_key)

# load in assumptions
problem_desc = load_file(r'tests/usage/prompts/problem/blocksworld_problem.txt')
task_prompt = load_file(r'tests/usage/prompts/problem/formalize_task.txt')
types = load_file(r'tests/usage/prompts/domain/types.json')
predicates_json = load_file(r'tests/usage/prompts/domain/predicates.json')
predicates: list[Predicate] = [Predicate(**item) for item in predicates_json]

# extract PDDL task specifications via LLM
objects, init, goal, llm_response, validation_info = task_builder.formalize_task(
    model=llm,
    problem_desc=problem_desc,
    prompt_template=task_prompt,
    types=types,
    predicates=predicates
)

# generate task file
pddl_problem = task_builder.generate_task(
    domain_name="blocksworld",
    problem_name="blocksworld_problem",
    objects=objects,
    initial=init,
    goal=goal
)

print(f"PDDL problem:\n{pddl_problem}")

The output is:

PDDL problem:
(define
    (problem blocksworld_problem)
    (:domain blocksworld)

    (:objects
        blue_block - block
        red_block - block
        yellow_block - block
        green_block - block
        table1 - table
    )

    (:init
        (on_top blue_block red_block)
        (on_top red_block yellow_block)
        (on_table yellow_block)
        (on_table green_block)
        (clear blue_block)
        (clear green_block)
    )

    (:goal
        (and
            (on_top red_block green_block)
            (clear green_block)
        )
    )
)
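
The generated string can be written straight to a .pddl file for a planner to consume; this is plain file I/O, and the planner invocation itself depends on your toolchain. A shortened problem string stands in for the full output above:

```python
import tempfile
from pathlib import Path

pddl_problem = "(define\n    (problem blocksworld_problem)\n    (:domain blocksworld)\n)"

# write the generated problem to disk, then hand the file to your planner
with tempfile.TemporaryDirectory() as tmp:
    out_path = Path(tmp) / "blocksworld_problem.pddl"
    out_path.write_text(pddl_problem)
    content = out_path.read_text()

print(content)
```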

*IMPORTANT* It is highly recommended to use the base templates found in Templates in your final prompts, so that these methods can properly parse LLM output into their designated Python formats.