WWW::LLaMA

Introduction

This blog post proclaims and describes the Raku package “WWW::LLaMA”, which provides access to the machine learning service llamafile, [MO1]. For more details on llamafile’s API usage see the documentation, [MO2].

Remark: An interactive version of this post — with more examples — is provided by the Jupyter notebook “LLaMA-guide.ipynb”.

This package is very similar to the packages “WWW::OpenAI”, [AAp1], and “WWW::MistralAI”, [AAp2].

“WWW::LLaMA” can be used with (is integrated with) “LLM::Functions”, [AAp3], and “Jupyter::Chatbook”, [AAp5].

Also, of course, prompts from “LLM::Prompts”, [AAp4], can be used with LLaMA’s functions.

Remark: The package “WWW::OpenAI” can also be used to access “llamafile” chat completions. That is done by specifying an appropriate base URL to the openai-chat-completion function.
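For example, a chat completion over a local llamafile server can be requested along these lines (a sketch; the port 8080 and the URL path are assumptions based on llamafile’s defaults, so adjust them to your setup):

```raku
use WWW::OpenAI;

# A sketch: point "WWW::OpenAI" at a local llamafile server.
# The base URL below assumes llamafile's default port 8080.
openai-chat-completion(
        'What is the speed of a rocket leaving Earth?',
        base-url => 'http://127.0.0.1:8080/v1',
        format   => 'values');
```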

Here is a video that demonstrates the running of LLaMA models and use cases for “WWW::LLaMA”:


Installation

Package installations from both sources use the zef installer (which should be bundled with the “standard” Rakudo installation file).

To install the package from the Zef ecosystem use the shell command:

zef install WWW::LLaMA

To install the package from the GitHub repository use the shell command:

zef install https://github.com/antononcube/Raku-WWW-LLaMA.git

Install and run LLaMA server

In order to use the package, access to a running LLaMA server is required.

Since the package closely follows the Web API of “llamafile”, [MO1], it is advised to first follow the installation steps in the “Quickstart” section of [MO1] before trying the functions of the package.
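Once a llamafile is downloaded and started, here is a quick way to verify that a server is reachable before calling the package functions (a sketch; the port 8080 is llamafile’s default and might differ in your setup):

```raku
use HTTP::Tiny;

# Minimal sanity check that a llamafile server is listening.
my %resp = HTTP::Tiny.new.get('http://127.0.0.1:8080/v1/models');
say %resp<success> ?? 'llamafile server is up' !! 'no llamafile server found';
```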


Usage examples

Remark: When the authorization key, auth-key, is specified to be Whatever, it is assigned the string sk-no-key-required. If an authorization key is required, then the env variable LLAMA_API_KEY can also be used.
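For example, assuming a local llamafile server that does not require a key, the following two calls are equivalent (a sketch):

```raku
use WWW::LLaMA;

# Explicitly passing the default "no key required" value:
llama-playground('Hello!', auth-key => 'sk-no-key-required', format => 'values');

# Equivalent: leaving auth-key as Whatever (or setting the env variable LLAMA_API_KEY):
llama-playground('Hello!', auth-key => Whatever, format => 'values');
```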

Universal “front-end”

The package has a universal “front-end” function llama-playground for the different functionalities provided by llamafile.

Here is a simple call for a “chat completion”:

use WWW::LLaMA;
llama-playground('What is the speed of a rocket leaving Earth?');
# {content => 
# , and how does it change as the rocket's altitude increases?, generation_settings => {frequency_penalty => 0, grammar => , ignore_eos => False, logit_bias => [], min_p => 0.05000000074505806, mirostat => 0, mirostat_eta => 0.10000000149011612, mirostat_tau => 5, model => llava-v1.5-7b-Q4_K.gguf, n_ctx => 1365, n_keep => 0, n_predict => -1, n_probs => 0, penalize_nl => True, penalty_prompt_tokens => [], presence_penalty => 0, repeat_last_n => 64, repeat_penalty => 1.100000023841858, seed => 4294967295, stop => [], stream => False, temperature => 0.800000011920929, tfs_z => 1, top_k => 40, top_p => 0.949999988079071, typical_p => 1, use_penalty_prompt_tokens => False}, model => llava-v1.5-7b-Q4_K.gguf, prompt => What is the speed of a rocket leaving Earth?, slot_id => 0, stop => True, stopped_eos => True, stopped_limit => False, stopped_word => False, stopping_word => , timings => {predicted_ms => 340.544, predicted_n => 18, predicted_per_second => 52.8566059011464, predicted_per_token_ms => 18.91911111111111, prompt_ms => 94.65, prompt_n => 12, prompt_per_second => 126.78288431061804, prompt_per_token_ms => 7.8875}, tokens_cached => 29, tokens_evaluated => 12, tokens_predicted => 18, truncated => False}

Another one, using Bulgarian (the prompt asks: “How many groups can be found in this cloud of points?”):

llama-playground('Колко групи могат да се намерят в този облак от точки.', max-tokens => 300, random-seed => 234232, format => 'values');
# Например, група от 50 звезди може да се намери в този облак от 100 000 звезди, които са разпределени на различни места. За да се намерят всичките, е необходимо да се използва алгоритъм за търсене на най-близките съседи на всеки от обектите.
# 
# Въпреки че теоретично това може да бъде постигнато, реално това е много трудно и сложно, особено когато се има предвид голям брой звезди в облака.

Remark: The functions llama-chat-completion or llama-completion can be used instead in the examples above. (The latter is a synonym of the former.)

Models

The current LLaMA model can be found with the function llama-model:

llama-model;

# llava-v1.5-7b-Q4_K.gguf

Remark: Since there is no dedicated API endpoint for getting the model(s), the current model is obtained via “simple” (non-chat) completion.

Code generation

There are two types of completions: text and chat. Let us illustrate the differences in their usage with Raku code generation. Here is a text completion:

llama-text-completion(
        'generate Raku code for making a loop over a list',
        max-tokens => 120,
        format => 'values');
# , multiplying every number with the next
#
# ```raku
# my @numbers = (1 .. 5);
# my $result = Nil;
# for ^@numbers -> $i {
#    $result = $result X $i if defined $result;
#    $result = $i;
# }
# say $result; # prints 120
# ```
# 
# This code defines a list of numbers, initializes a variable `$result` to be `Nil`, and then uses a `for` loop to iterate over the indices

Here is a chat completion:

llama-completion(
        'generate Raku code for making a loop over a list',
        max-tokens => 120,
        format => 'values');
# Here's an example of a loop over a list in Raku:
# ```perl
# my @list = (1, 2, 3, 4, 5);
# 
# for @list -> $item {
#     say "The value of $item is $item.";
# }
# ```
# This will output:
# ```sql
# The value of 1 is 1.
# The value of 2 is 2.
# The value of 3 is 3.
# The value of 4 is 4.
# The value of 5

Embeddings

Embeddings can be obtained with the function llama-embedding. Here is an example of finding the embedding vectors for each of the elements of an array of strings:

my @queries = [
    'make a classifier with the method RandomForeset over the data dfTitanic',
    'show precision and accuracy',
    'plot True Positive Rate vs Positive Predictive Value',
    'what is a good meat and potatoes recipe'
];

my $embs = llama-embedding(@queries, format => 'values', method => 'tiny');
$embs.elems;

# 4

Here we show:

  • That the result is an array of four vectors, each of length 4096
  • The distributions of the values of each vector
use Data::Reshapers;
use Data::Summarizers;

say "\$embs.elems : { $embs.elems }";
say "\$embs>>.elems : { $embs>>.elems }";
records-summary($embs.kv.Hash.&transpose);
# $embs.elems : 4
# $embs>>.elems : 4096 4096 4096 4096
# +--------------------------------+----------------------------------+---------------------------------+-----------------------------------+
# | 1                              | 2                                | 0                               | 3                                 |
# +--------------------------------+----------------------------------+---------------------------------+-----------------------------------+
# | Min    => -30.241486           | Min    => -20.993749618530273    | Min    => -32.435486            | Min    => -31.10381317138672      |
# | 1st-Qu => -0.7924895882606506  | 1st-Qu => -1.0563270449638367    | 1st-Qu => -0.9738395810127258   | 1st-Qu => -0.9602127969264984     |
# | Mean   => 0.001538657780784547 | Mean   => -0.013997373717373307  | Mean   => 0.0013605252470370028 | Mean   => -0.03597712098735428    |
# | Median => 0.016784800216555596 | Median => -0.0001810337998904288 | Median => 0.023735892958939075  | Median => -0.00221119043999351575 |
# | 3rd-Qu => 0.77385222911834715  | 3rd-Qu => 0.9824191629886627     | 3rd-Qu => 0.9983229339122772    | 3rd-Qu => 0.9385882616043091      |
# | Max    => 25.732345581054688   | Max    => 23.233409881591797     | Max    => 15.80211067199707     | Max    => 24.811737               |
# +--------------------------------+----------------------------------+---------------------------------+-----------------------------------+

Here we find the corresponding dot products and (cross-)tabulate them:

use Data::Reshapers;
use Data::Summarizers;
my @ct = (^$embs.elems X ^$embs.elems).map({ %( i => $_[0], j => $_[1], dot => sum($embs[$_[0]] >>*<< $embs[$_[1]])) }).Array;

say to-pretty-table(cross-tabulate(@ct, 'i', 'j', 'dot'), field-names => (^$embs.elems)>>.Str);
# +---+--------------+--------------+--------------+--------------+
# |   |      0       |      1       |      2       |      3       |
# +---+--------------+--------------+--------------+--------------+
# | 0 | 14984.053717 | 1708.345468  | 4001.487938  | 7619.791201  |
# | 1 | 1708.345468  | 10992.176167 | -1364.137315 | -2970.554539 |
# | 2 | 4001.487938  | -1364.137315 | 14473.816914 | 6428.638382  |
# | 3 | 7619.791201  | -2970.554539 | 6428.638382  | 14534.609050 |
# +---+--------------+--------------+--------------+--------------+

Remark: Judging by the table of dot products, the fourth element (the cooking recipe request) is an outlier.
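Since the raw dot products depend on the norms of the embedding vectors, normalizing them into cosine similarities can make such comparisons clearer. Here is a sketch of that computation (reusing $embs and the functions of “Data::Reshapers” and “Data::Summarizers” loaded above):

```raku
# A sketch: cosine similarities instead of raw dot products.
my @norms = $embs.map({ sqrt(sum($_ >>*<< $_)) });
my @cos = (^$embs.elems X ^$embs.elems).map({
    %( i => $_[0], j => $_[1],
       dot => sum($embs[$_[0]] >>*<< $embs[$_[1]]) / (@norms[$_[0]] * @norms[$_[1]]) )
}).Array;

say to-pretty-table(cross-tabulate(@cos, 'i', 'j', 'dot'), field-names => (^$embs.elems)>>.Str);
```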

Tokenizing and de-tokenizing

Here we tokenize some text:

my $txt = @queries.head;
my $res = llama-tokenize($txt, format => 'values');
# [1207 263 770 3709 411 278 1158 16968 29943 2361 300 975 278 848 4489 29911 8929 293]

Here we get the original text back by de-tokenizing:

llama-detokenize($res);
# {content =>  make a classifier with the method RandomForeset over the data dfTitanic}
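A natural round-trip check is that de-tokenizing the token list recovers the original text, modulo the leading space the tokenizer tends to add. Here is a sketch (it assumes that llama-detokenize also accepts format => 'values'):

```raku
# Tokenize, then de-tokenize; compare with the original text.
my $roundTrip = llama-detokenize(llama-tokenize($txt, format => 'values'), format => 'values');
say $roundTrip.trim eq $txt.trim;
```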

Chat completions with engineered prompts

Here is a prompt for “emojification” (see the Wolfram Prompt Repository entry “Emojify”):

my $preEmojify = q:to/END/;
Rewrite the following text and convert some of it into emojis.
The emojis are all related to whatever is in the text.
Keep a lot of the text, but convert key words into emojis.
Do not modify the text except to add emoji.
Respond only with the modified text, do not include any summary or explanation.
Do not respond with only emoji, most of the text should remain as normal words.
END
# Rewrite the following text and convert some of it into emojis.
# The emojis are all related to whatever is in the text.
# Keep a lot of the text, but convert key words into emojis.
# Do not modify the text except to add emoji.
# Respond only with the modified text, do not include any summary or explanation.
# Do not respond with only emoji, most of the text should remain as normal words.

Here is an example of chat completion with emojification:

llama-chat-completion([ system => $preEmojify, user => 'Python sucks, Raku rocks, and Perl is annoying'], max-tokens => 200, format => 'values')
# 🐍💥

Command Line Interface

Playground access

The package provides a Command Line Interface (CLI) script:

llama-playground --help
# Usage:
#   llama-playground [<words> ...] [--path=<Str>] [--mt|--max-tokens[=Int]] [-m|--model=<Str>] [-r|--role=<Str>] [-t|--temperature[=Real]] [--response-format=<Str>] [-a|--auth-key=<Str>] [--timeout[=UInt]] [-f|--format=<Str>] [--method=<Str>] [--base-url=<Str>] -- Command given as a sequence of words.
#   
#     --path=<Str>               Path, one of 'completions', 'chat/completions', 'embeddings', or 'models'. [default: 'chat/completions']
#     --mt|--max-tokens[=Int]    The maximum number of tokens to generate in the completion. [default: 2048]
#     -m|--model=<Str>           Model. [default: 'Whatever']
#     -r|--role=<Str>            Role. [default: 'user']
#     -t|--temperature[=Real]    Temperature. [default: 0.7]
#     --response-format=<Str>    The format in which the response is returned. [default: 'url']
#     -a|--auth-key=<Str>        Authorization key (to use LLaMA server Web API.) [default: 'Whatever']
#     --timeout[=UInt]           Timeout. [default: 10]
#     -f|--format=<Str>          Format of the result; one of "json", "hash", "values", or "Whatever". [default: 'Whatever']
#     --method=<Str>             Method for the HTTP POST query; one of "tiny" or "curl". [default: 'tiny']
#     --base-url=<Str>           Base URL of the LLaMA server. [default: 'http://127.0.0.1:80…']

Remark: When the authorization key, auth-key, is specified to be Whatever, it is assigned the string sk-no-key-required. If an authorization key is required, then the env variable LLAMA_API_KEY can also be used.


Mermaid diagram

The following flowchart corresponds to the steps in the package function llama-playground:


References

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.

[MO1] Mozilla Ocho, llamafile.

[MO2] Mozilla Ocho, llamafile documentation.

WWW::Gemini

This blog post proclaims the Raku package “WWW::Gemini” for connecting to Google’s Large Language Models (LLMs) service Gemini. The package is based on Gemini’s API documentation.

The design and implementation of the package closely follows those of “WWW::PaLM”, [AAp1], and “WWW::OpenAI”, [AAp2].

Installation

From the Zef ecosystem:

zef install WWW::Gemini

From GitHub:

zef install https://github.com/antononcube/Raku-WWW-Gemini

Usage examples

Show models:

use WWW::Gemini;

gemini-models()
# (models/chat-bison-001 models/text-bison-001 models/embedding-gecko-001 models/gemini-1.0-pro models/gemini-1.0-pro-001 models/gemini-1.0-pro-latest models/gemini-1.0-pro-vision-latest models/gemini-pro models/gemini-pro-vision models/embedding-001 models/aqa)

Show text generation:

.say for gemini-generate-content('what is the population in Brazil?', format => 'values');
# 215,351,056 (2023 est.)

Using a synonym function:

.say for gemini-generation('Who wrote the book "Dune"?');
# {candidates => [{content => {parts => [{text => Frank Herbert}], role => model}, finishReason => STOP, index => 0, safetyRatings => [{category => HARM_CATEGORY_SEXUALLY_EXPLICIT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HATE_SPEECH, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HARASSMENT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_DANGEROUS_CONTENT, probability => NEGLIGIBLE}]}], promptFeedback => {safetyRatings => [{category => HARM_CATEGORY_SEXUALLY_EXPLICIT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HATE_SPEECH, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HARASSMENT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_DANGEROUS_CONTENT, probability => NEGLIGIBLE}]}}

Embeddings

Show text embeddings:

use Data::TypeSystem;

my @vecs = gemini-embed-content(["say something nice!",
                            "shout something bad!",
                            "where is the best coffee made?"],
        format => 'values');

say "Shape: ", deduce-type(@vecs);
.say for @vecs;
# Shape: Vector(Vector((Any), 768), 3)
# [-0.031044435 -0.01293638 0.008904989 ...]
# [0.015647776 -0.03143455 -0.040937748 ...]
# [-0.01054143 -0.03587494 -0.013359126 ...]

Counting tokens

Here we show how to find the number of tokens in a text:

my $text = q:to/END/;
AI has made surprising successes but cannot solve all scientific problems due to computational irreducibility.
END

gemini-count-tokens($text, format => 'values');

# 20

Vision

If the function gemini-completion is given a list of images, textual results corresponding to those images are returned. The argument “images” is a list of image URLs, image file names, or image Base64 representations. (Any combination of those element types.)

Consider this image:

Here is an image recognition invocation:

my $fname = $*CWD ~ '/resources/ThreeHunters.jpg';
my @images = [$fname,];
say gemini-generation("Give concise descriptions of the images.", :@images, format => 'values');
# The image shows a family of raccoons in a tree. The mother raccoon is watching over her two cubs. The cubs are playing with each other. There are butterflies flying around the tree. The leaves on the tree are turning brown and orange.

When a file name is given to the argument “images” of gemini-completion, the function encode-image of “Image::Markup::Utilities”, [AAp4], is applied to it.
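That encoding step can also be done explicitly, e.g. in order to reuse the same Base64 representation over several calls. Here is a sketch, assuming the function encode-image mentioned above:

```raku
use Image::Markup::Utilities;

# Encode the image once...
my $imgB64 = encode-image($*CWD ~ '/resources/ThreeHunters.jpg');

# ...and pass the Base64 representation via the "images" argument:
say gemini-generation('Give concise descriptions of the images.', images => [$imgB64,], format => 'values');
```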


Command Line Interface

Maker suite access

The package provides a Command Line Interface (CLI) script:

gemini-prompt --help
# Usage:
#   gemini-prompt [<words> ...] [--path=<Str>] [-n[=UInt]] [--mt|--max-output-tokens[=UInt]] [-m|--model=<Str>] [-t|--temperature[=Real]] [-a|--auth-key=<Str>] [--timeout[=UInt]] [-f|--format=<Str>] [--method=<Str>] -- Command given as a sequence of words.
#   
#     --path=<Str>                       Path, one of 'generateContent', 'embedContent', 'countTokens', or 'models'. [default: 'generateContent']
#     -n[=UInt]                          Number of completions or generations. [default: 1]
#     --mt|--max-output-tokens[=UInt]    The maximum number of tokens to generate in the completion. [default: 100]
#     -m|--model=<Str>                   Model. [default: 'Whatever']
#     -t|--temperature[=Real]            Temperature. [default: 0.7]
#     -a|--auth-key=<Str>                Authorization key (to use Gemini API.) [default: 'Whatever']
#     --timeout[=UInt]                   Timeout. [default: 10]
#     -f|--format=<Str>                  Format of the result; one of "json", "hash", "values", or "Whatever". [default: 'values']
#     --method=<Str>                     Method for the HTTP POST query; one of "tiny" or "curl". [default: 'tiny']

Remark: When the authorization key argument “auth-key” is set to “Whatever”, then gemini-prompt attempts to use one of the env variables GEMINI_API_KEY or PALM_API_KEY.


Mermaid diagram

The following flowchart corresponds to the steps in the package function gemini-prompt:


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPredictions at WordPress.

[AA2] Anton Antonov, “Number guessing games: Gemini vs ChatGPT” (2023), RakuForPredictions at WordPress.

[ZG1] Zoubin Ghahramani, “Introducing Gemini 2”, (2023), Google Official Blog on AI.

Packages, platforms

[AAp1] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, Image::Markup::Utilities Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, ML::FindTextualAnswer Raku package, (2023-2024), GitHub/antononcube.

Comprehension AI Aids for “Can AI Solve Science?”

Introduction

In this blog post (notebook) we use the Large Language Model (LLM) prompts:

to facilitate the reading and comprehension of Stephen Wolfram’s article:

 “Can AI Solve Science?”, [SW1].

Remark: We use “simple” text processing, but since the article has lots of images, multi-modal models would be more appropriate.

Here is an image of the article’s start:

The computations are done with a Wolfram Language (WL) chatbook. The LLM functions used in the workflows are explained and demonstrated in [SW2, AA1, AA2, AAn1 ÷ AAn4]. The workflows are done with OpenAI’s models. Currently, Google’s (PaLM) and MistralAI’s models cannot be used with the workflows below because their input token limits are too low.

Structure

The structure of the notebook is as follows:

No | Part | Content
1 | Getting the article’s text and setup | Standard ingestion and setup.
2 | Article’s structure | TL;DR via a table of themes.
3 | Flowcharts | Get flowcharts relating article’s concepts.
4 | Extract article wisdom | Get a summary and extract ideas, quotes, references, etc.
5 | Hidden messages and propaganda | Reading it with a conspiracy theorist hat on.

Setup

Here we load a few packages and define ingestion functions:

use HTTP::Tiny;
use JSON::Fast;
use Data::Reshapers;

sub text-stats(Str:D $txt) { <chars words lines> Z=> [$txt.chars, $txt.words.elems, $txt.lines.elems] };

sub strip-html(Str $html) returns Str {

    my $res = $html
    .subst(/'<style'.*?'</style>'/, :g)     # remove style sheets
    .subst(/'<script'.*?'</script>'/, :g)   # remove scripts
    .subst(/'<'.*?'>'/, :g)                 # remove the remaining HTML tags
    .subst(/'&lt;'.*?'&gt;'/, :g)           # remove escaped tag remnants
    .subst(/[\v\s*] ** 2..*/, "\n\n", :g);  # collapse runs of blank lines

    return $res;
}
&strip-html

Ingest text

Here we get the plain text of the article:

my $htmlArticleOrig = HTTP::Tiny.get("https://writings.stephenwolfram.com/2024/03/can-ai-solve-science/")<content>.decode;

text-stats($htmlArticleOrig);

# (chars => 216219 words => 19867 lines => 1419)

Here we strip the HTML code from the article:

my $txtArticleOrig = strip-html($htmlArticleOrig);

text-stats($txtArticleOrig);

# (chars => 100657 words => 16117 lines => 470)

Here we clean the article’s text:

my $txtArticle = $txtArticleOrig.substr(0, $txtArticleOrig.index("Posted in:"));

text-stats($txtArticle);

# (chars => 98011 words => 15840 lines => 389)

LLM access configuration

Here we configure LLM access — we use OpenAI’s model “gpt-4-turbo-preview” since it allows inputs with 128K tokens:

my $conf = llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.7);

$conf.Hash.elems

# 22

Themes

Here we extract the themes found in the article and tabulate them (using the prompt “ThemeTableJSON”):

my $tblThemes = llm-synthesize(llm-prompt("ThemeTableJSON")($txtArticle, "article", 50), e => $conf, form => sub-parser('JSON'):drop);

$tblThemes.&dimensions;

# (12 2)
#% html
$tblThemes ==> data-translation(field-names=><theme content>)
theme | content
Introduction to AI in Science | Discusses the potential of AI in solving scientific questions and the belief in AI’s eventual capability to do everything, including science.
AI’s Role and Limitations | Explores deeper questions about AI in science, its role as a practical tool or a fundamentally new method, and its limitations due to computational irreducibility.
AI Predictive Capabilities | Examines AI’s ability to predict outcomes and its reliance on machine learning and neural networks, highlighting limitations in predicting computational processes.
AI in Identifying Computational Reducibility | Discusses how AI can assist in identifying pockets of computational reducibility within the broader context of computational irreducibility.
AI’s Application Beyond Human Tasks | Considers if AI can understand and predict natural processes directly, beyond emulating human intelligence or tasks.
Solving Equations with AI | Explores the potential of AI in solving equations, particularly in areas where traditional methods are impractical or insufficient.
AI for Multicomputation | Discusses AI’s ability to navigate multiway systems and its potential in finding paths or solutions in complex computational spaces.
Exploring Systems with AI | Looks at how AI can assist in exploring spaces of systems, identifying rules or systems that exhibit specific desired characteristics.
Science as Narrative | Explores the idea of science providing a human-accessible narrative for natural phenomena and how AI might generate or contribute to scientific narratives.
Finding What’s Interesting | Discusses the challenge of determining what’s interesting in science and how AI might assist in identifying interesting phenomena or directions.
Beyond Exact Sciences | Explores the potential of AI in extending the domain of exact sciences to include more subjective or less formalized areas of knowledge.
Conclusion | Summarizes the potential and limitations of AI in science, emphasizing the combination of AI with computational paradigms for advancing science.

Remark: A fair number of LLMs return their code results within Markdown code block delimiters (i.e. "```"). Hence, (1) the (WL-specified) prompt “ThemeTableJSON” does not use Interpreter["JSON"], but Interpreter["String"], and (2) above we use the sub-parser ‘JSON’ with dropping of non-JSON strings in order to convert the LLM output into a Raku data structure.


Flowcharts

In this section we use LLMs to get Mermaid-JS flowcharts that correspond to the content of [SW1].

Remark: Below, in order to display Mermaid-JS diagrams, we use both the package “WWW::MermaidInk”, [AAp7], and the dedicated mermaid magic cell of Raku Chatbook, [AA6].

Big picture concepts

Here we generate Mermaid-JS flowchart for the “big picture” concepts:

my $mmdBigPicture = 
  llm-synthesize([
    "Create a concise mermaid-js graph for the connections between the big concepts in the article:\n\n", 
    $txtArticle, 
    llm-prompt("NothingElse")("correct mermaid-js")
  ], e => $conf)

Here we define the “big picture” styling theme:

my $mmdBigPictureTheme = q:to/END/;
%%{init: {'theme': 'neutral'}}%%
END

Here we create the flowchart from the LLM’s specification:

mermaid-ink($mmdBigPictureTheme.chomp ~ $mmdBigPicture.subst(/ '```mermaid' | '```'/, :g), background => 'Cornsilk', format => 'svg')

We made several “big picture” flowchart generations. Here is the result of another attempt:

#% mermaid
graph TD;
    AI[Artificial Intelligence] --> CompSci[Computational Science]
    AI --> CompThink[Computational Thinking]
    AI --> NewTech[New Technology]
    CompSci --> Physics
    CompSci --> Math[Mathematics]
    CompSci --> Ruliology
    CompThink --> SoftwareDesign[Software Design]
    CompThink --> WolframLang[Wolfram Language]
    NewTech --> WolframAlpha["Wolfram|Alpha"]
    Physics --> Philosophy
    Math --> Philosophy
    Ruliology --> Philosophy
    SoftwareDesign --> Education
    WolframLang --> Education
    WolframAlpha --> Education

    %% Styling
    classDef default fill:#8B0000,stroke:#333,stroke-width:2px;

Fine grained

Here we derive a flowchart that refers to more detailed, finer grained concepts:

my $mmdFineGrained = 
  llm-synthesize([
    "Create a mermaid-js flowchart with multiple blocks and multiple connections for the relationships between concepts in the article:\n\n", 
    $txtArticle, 
    "Use the concepts in the JSON table:", 
    $tblThemes, 
    llm-prompt("NothingElse")("correct mermaid-js")
  ], e => $conf)

Here we define the “fine grained” styling theme:

my $mmdFineGrainedTheme = q:to/END/;
%%{init: {'theme': 'base','themeVariables': {'backgroundColor': '#FFF'}}}%%
END

Here we create the flowchart from the LLM’s specification:

mermaid-ink($mmdFineGrainedTheme.chomp ~ $mmdFineGrained.subst(/ '```mermaid' | '```'/, :g), format => 'svg')

We made several “fine grained” flowchart generations. Here is the result of another attempt:

#% mermaid
graph TD

    AI["AI"] -->|Potential & Limitations| Science["Science"]
    AI -->|Leverages| CR["Computational Reducibility"]
    AI -->|Fails with| CI["Computational Irreducibility"]
    
    Science -->|Domain of| PS["Physical Sciences"]
    Science -->|Expanding to| S["'Exact' Sciences Beyond Traditional Domains"]
    Science -.->|Foundational Role of| CR
    Science -.->|Limited by| CI
    
    PS -->|Traditional Formalizations via| Math["Mathematics/Mathematical Formulas"]
    PS -->|Now Leveraging| AI_Measurements["AI Measurements"]
    
    S -->|Formalizing with| CL["Computational Language"]
    S -->|Leverages| AI_Measurements
    S -->|Future Frontiers with| AI
    
    AI_Measurements -->|Interpretation Challenge| BlackBox["'Black-Box' Nature"]
    AI_Measurements -->|Potential for| NewScience["New Science Discoveries"]
    
    CL -->|Key to Structuring| AI_Results["AI Results"]
    CL -->|Enables Irreducible Computations for| Discovery["Discovery"]
    
    Math -.->|Transitioning towards| CL
    Math -->|Limits when needing 'Precision'| AI_Limits["AI's Limitations"]
    
    Discovery -.->|Not directly achievable via| AI
    
    BlackBox -->|Requires Human| Interpretation["Interpretation"]
    
    CR -->|Empowered by| AI & ML_Techniques["AI & Machine Learning Techniques"]
    CI -.->|Challenge for| AI & ML_Techniques
   
    PS --> Observations["New Observations/Measurements"] --> NewDirections["New Scientific Directions"]
    Observations --> AI_InterpretedPredictions["AI-Interpreted Predictions"]
    NewDirections -.-> AI_Predictions["AI Predictions"] -.-> CI
    NewDirections --> AI_Discoveries["AI-Enabled Discoveries"] -.-> CR

    AI_Discoveries --> NewParadigms["New Paradigms/Concepts"] -.-> S
    AI_InterpretedPredictions -.-> AI_Measurements

    %% Styling
    classDef default fill:#f9f,stroke:#333,stroke-width:2px;
    classDef highlight fill:#bbf,stroke:#006,stroke-width:4px;
    classDef imp fill:#ffb,stroke:#330,stroke-width:4px;
    class PS,CL highlight;
    class AI_Discoveries,NewParadigms imp;

Summary and ideas

Here we get a summary and extract ideas, quotes, and references from the article:

my $sumIdea = llm-synthesize(llm-prompt("ExtractArticleWisdom")($txtArticle), e => $conf);

text-stats($sumIdea)

# (chars => 7386 words => 1047 lines => 78)

The result is rendered below.


#% markdown
$sumIdea.subst(/ ^^ '#' /, '###', :g)

SUMMARY

Stephen Wolfram’s writings explore the capabilities and limitations of AI in the realm of science, discussing how AI can assist in scientific discovery and understanding but also highlighting its limitations due to computational irreducibility and the need for human interpretation and creativity.

IDEAS:

  • AI has made surprising successes but cannot solve all scientific problems due to computational irreducibility.
  • Large Language Models (LLMs) provide a new kind of interface for scientific work, offering high-level autocomplete for scientific knowledge.
  • The transition to computational representation of the world is transforming science, with AI playing a significant role in accessing and exploring this computational realm.
  • AI, particularly through neural networks and machine learning, offers tools for predicting scientific outcomes, though its effectiveness is bounded by the complexity of the systems it attempts to model.
  • Computational irreducibility limits the ability of AI to predict or solve all scientific phenomena, ensuring that surprises and new discoveries remain a fundamental aspect of science.
  • Despite AI’s limitations, it has potential in identifying pockets of computational reducibility, streamlining the discovery of scientific knowledge.
  • AI’s success in areas like visual recognition and language generation suggests potential for contributing to scientific methodologies and understanding, though its ability to directly engage with raw natural processes is less certain.
  • AI techniques, including neural networks and machine learning, have shown promise in areas like solving equations and exploring multicomputational processes, but face challenges due to computational irreducibility.
  • The role of AI in generating human-understandable narratives for scientific phenomena is explored, highlighting the potential for AI to assist in identifying interesting and meaningful scientific inquiries.
  • The exploration of “AI measurements” opens up new possibilities for formalizing and quantifying aspects of science that have traditionally been qualitative or subjective, potentially expanding the domain of exact sciences.

QUOTES:

  • “AI has the potential to give us streamlined ways to find certain kinds of pockets of computational reducibility.”
  • “Computational irreducibility is what will prevent us from ever being able to completely ‘solve science’.”
  • “The AI is doing ‘shallow computation’, but when there’s computational irreducibility one needs irreducible, deep computation to work out what will happen.”
  • “AI measurements are potentially a much richer source of formalizable material.”
  • “AI… is not something built to ‘go out into the wilds of the ruliad’, far from anything already connected to humans.”
  • “Despite AI’s limitations, it has potential in identifying pockets of computational reducibility.”
  • “AI techniques… have shown promise in areas like solving equations and exploring multicomputational processes.”
  • “AI’s success in areas like visual recognition and language generation suggests potential for contributing to scientific methodologies.”
  • “There’s no abstract notion of ‘interestingness’ that an AI or anything can ‘go out and discover’ ahead of our choices.”
  • “The whole story of things like trained neural nets that we’ve discussed here is a story of leveraging computational reducibility.”

HABITS:

  • Continuously exploring the capabilities and limitations of AI in scientific discovery.
  • Engaging in systematic experimentation to understand how AI tools can assist in scientific processes.
  • Seeking to identify and utilize pockets of computational reducibility where AI can be most effective.
  • Exploring the use of neural networks and machine learning for predicting and solving scientific problems.
  • Investigating the potential for AI to assist in creating human-understandable narratives for complex scientific phenomena.
  • Experimenting with “AI measurements” to quantify and formalize traditionally qualitative aspects of science.
  • Continuously refining and expanding computational language to better interface with AI capabilities.
  • Engaging with and contributing to discussions on the role of AI in the future of science and human understanding.
  • Actively seeking new methodologies and innovations in AI that could impact scientific discovery.
  • Evaluating the potential for AI to identify interesting and meaningful scientific inquiries through analysis of large datasets.

FACTS:

  • Computational irreducibility ensures that surprises and new discoveries remain a fundamental aspect of science.
  • AI’s effectiveness in scientific modeling is bounded by the complexity of the systems it attempts to model.
  • AI can identify pockets of computational reducibility, streamlining the discovery of scientific knowledge.
  • Neural networks and machine learning offer tools for predicting outcomes but face challenges due to computational irreducibility.
  • AI has shown promise in areas like solving equations and exploring multicomputational processes.
  • The potential of AI in generating human-understandable narratives for scientific phenomena is actively being explored.
  • “AI measurements” offer new possibilities for formalizing aspects of science that have been qualitative or subjective.
  • The transition to computational representation of the world is transforming science, with AI playing a significant role.
  • Machine learning techniques can be very useful for providing approximate answers in scientific inquiries.
  • AI’s ability to directly engage with raw natural processes is less certain, despite successes in human-like tasks.

REFERENCES:

  • Stephen Wolfram’s writings on AI and science.
  • Large Language Models (LLMs) as tools for scientific work.
  • The concept of computational irreducibility and its implications for science.
  • Neural networks and machine learning techniques in scientific prediction and problem-solving.
  • The role of AI in creating human-understandable narratives for scientific phenomena.
  • The use of “AI measurements” in expanding the domain of exact sciences.
  • The potential for AI to assist in identifying interesting and meaningful scientific inquiries.

RECOMMENDATIONS:

  • Explore the use of AI and neural networks for identifying pockets of computational reducibility in scientific research.
  • Investigate the potential of AI in generating human-understandable narratives for complex scientific phenomena.
  • Utilize “AI measurements” to formalize and quantify traditionally qualitative aspects of science.
  • Engage in systematic experimentation to understand the limitations and capabilities of AI in scientific discovery.
  • Consider the role of computational irreducibility in shaping the limitations of AI in science.
  • Explore the potential for AI to assist in identifying interesting and meaningful scientific inquiries.
  • Continuously refine and expand computational language to better interface with AI capabilities in scientific research.
  • Investigate new methodologies and innovations in AI that could impact scientific discovery.
  • Consider the implications of AI’s successes in human-like tasks for its potential contributions to scientific methodologies.
  • Explore the use of machine learning techniques for providing approximate answers in scientific inquiries where precision is less critical.

Hidden and propaganda messages

In this section we convince ourselves that the article is apolitical and propaganda-free.

Remark: We leave it to the reader as an exercise to verify that both the overt and hidden messages found by the LLM below are explicitly stated in the article.

Here we find the hidden and “propaganda” messages in the article:

my $propMess = llm-synthesize([llm-prompt("FindPropagandaMessage"), $txtArticle], e => $conf);

text-stats($propMess)

# (chars => 6193 words => 878 lines => 64)

Remark: The prompt “FindPropagandaMessage” has an explicit instruction to say that it is intentionally cynical. It is also marked as being “for fun.”

The LLM result is rendered below.


#% markdown
$propMess.subst(/ ^^ '#' /, '###', :g).subst(/ (<[A..Z \h \']>+ ':') /, { "### {$0.Str} \n"}, :g)

OVERT MESSAGE:

Stephen Wolfram evaluates the potential and limitations of AI in advancing science.

HIDDEN MESSAGE:

Despite AI’s growth, irreducible complexity limits its scientific problem-solving capacity.

HIDDEN OPINIONS:

  • AI can leverage computational reducibility akin to human minds.
  • Traditional mathematical methods surpass AI in solving precise equations.
  • AI’s “shallow computation” struggles with computational irreducibility.
  • AI can provide practical tools for science within its computational reducibility limits.
  • AI’s effectiveness is contingent on approximate answers, failing at precise perfection.
  • AI introduces a new, human-like method to harness computational reducibility.
  • Fundamental discoveries are more likely through irreducible computations, not AI.
  • Combining AI with the computational paradigm offers the best science advancement path.
  • AI’s potential in science is hyped beyond its actual foundational impact.
  • AI’s role in science is more about aiding existing processes than groundbreaking discoveries.

SUPPORTING ARGUMENTS and QUOTES:

  • “AI is doing ‘shallow computation’, but when there’s computational irreducibility one needs irreducible, deep computation to work out what will happen.”
  • “Typical AI approach to science doesn’t involve explicitly ‘formalizing things’.”
  • “In terms of fundamental potential for discovery, AI pales in comparison to what we can build from the computational paradigm.”
  • “AI can be very useful if an approximate (‘80%’) answer is good enough.”
  • “AI measurements seem to have a certain immediate ‘subjectivity’.”
  • “AI introduces a new—and rather human-like—way of leveraging computational reducibility.”
  • “AI’s effectiveness is contingent on approximate answers, failing at precise perfection.”
  • “AI’s role in science is more about aiding existing processes than groundbreaking discoveries.”
  • “Irreducible computations that we do offer greater potential for discovery than typical AI.”

DESIRED AUDIENCE OPINION CHANGE:

  • Appreciate the limitations and strengths of AI in scientific exploration.
  • Recognize the irreplaceable value of human insight in science.
  • View AI as a tool, not a replacement, for traditional scientific methods.
  • Embrace computational irreducibility as a barrier and boon to discovery.
  • Acknowledge the need for combining AI with computational paradigms.
  • Understand that AI’s role is to augment, not overshadow, human-led science.
  • Realize the necessity of approximate solutions in AI-driven science.
  • Foster realistic expectations from AI in making scientific breakthroughs.
  • Advocate for deeper computational approaches alongside AI in science.
  • Encourage interdisciplinary approaches combining AI with formal sciences.

DESIRED AUDIENCE ACTION CHANGE:

  • Support research combining AI with computational paradigms.
  • Encourage the use of AI for practical, approximate scientific solutions.
  • Advocate for AI’s role as a supplementary tool in science.
  • Push for education that integrates AI with traditional scientific methods.
  • Promote the study of computational irreducibility alongside AI.
  • Emphasize AI’s limitations in scientific discussions and funding.
  • Inspire new approaches to scientific exploration using AI.
  • Foster collaboration between AI researchers and traditional scientists.
  • Encourage critical evaluation of AI’s potential in groundbreaking discoveries.
  • Support initiatives that aim to combine AI with human insight in science.

MESSAGES:

Stephen Wolfram wants you to believe AI can advance science, but he’s actually saying its foundational impact is limited by computational irreducibility.

PERCEPTIONS:

Wolfram wants you to see him as optimistic about AI in science, but he’s actually cautious about its ability to make fundamental breakthroughs.

ELLUL’S ANALYSIS:

Jacques Ellul would likely interpret Wolfram’s findings as a validation of the view that technological systems, including AI, are inherently limited by the complexity of human thought and the natural world. The presence of computational irreducibility underscores the unpredictability and uncontrollability that Ellul warned technology could not tame, suggesting that even advanced AI cannot fully solve or understand all scientific problems, thus maintaining a degree of human autonomy and unpredictability in the face of technological advancement.

BERNAYS’ ANALYSIS:

Edward Bernays might view Wolfram’s exploration of AI in science through the lens of public perception and manipulation, arguing that while AI presents a new frontier for scientific exploration, its effectiveness and limitations must be communicated carefully to avoid both undue skepticism and unrealistic expectations. Bernays would likely emphasize the importance of shaping public opinion to support the use of AI as a tool that complements human capabilities rather than replacing them, ensuring that society remains engaged and supportive of AI’s role in scientific advancement.

LIPPMANN’S ANALYSIS:

Walter Lippmann would likely focus on the implications of Wolfram’s findings for the “pictures in our heads,” or public understanding of AI’s capabilities and limitations in science. Lippmann might argue that the complexity of AI and computational irreducibility necessitates expert mediation to accurately convey these concepts to the public, ensuring that society’s collective understanding of AI in science is based on informed, rather than simplistic or sensationalized, interpretations.

FRANKFURT’S ANALYSIS:

Harry G. Frankfurt might critique the discourse around AI in science as being fraught with “bullshit,” where speakers and proponents of AI might not necessarily lie about its capabilities, but could fail to pay adequate attention to the truth of computational irreducibility and the limitations it imposes. Frankfurt would likely appreciate Wolfram’s candid assessment of AI, seeing it as a necessary corrective to overly optimistic or vague claims about AI’s potential to revolutionize science.

NOTE: This AI is tuned specifically to be cynical and politically-minded. Don’t take it as perfect. Run it multiple times and/or go consume the original input to get a second opinion.


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[SW1] Stephen Wolfram, “Can AI Solve Science?”, (2024), Stephen Wolfram’s writings.

[SW2] Stephen Wolfram, “The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language”, (2023), Stephen Wolfram’s writings.

Notebooks

[AAn1] Anton Antonov, “Workflows with LLM functions (in WL)”, (2023), Wolfram Community.

[AAn2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), Wolfram Community.

[AAn3] Anton Antonov, “LLM aids for processing Putin’s State-Of-The-Nation speech given on 2/29/2024”, (2024), Wolfram Community.

[AAn4] Anton Antonov, “LLM over Trump vs. Anderson: analysis of the slip opinion of the Supreme Court of the United States”, (2024), Wolfram Community.

[AAn5] Anton Antonov, “Markdown to Mathematica converter”, (2023), Wolfram Community.

[AAn6] Anton Antonov, “Monte-Carlo demo notebook conversion via LLMs”, (2024), Wolfram Community.

Packages, repositories

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp7] Anton Antonov, WWW::MermaidInk Raku package, (2023-2024), GitHub/antononcube.

[DMr1] Daniel Miessler, Fabric, (2024), GitHub/danielmiessler.

LLM over Trump vs. Anderson

LLM analysis of the slip opinion of the Supreme Court of the United States


Introduction

In this blog post (notebook) we ingest the text of the slip opinion No. 23-719, “Donald J. Trump, petitioner, v. Norma Anderson, et al.”, of the Supreme Court of the United States, [SCUS1], and analyze it using Large Language Models (LLMs).

In order to do the LLM analysis we ingest and file two dedicated prompts from the GitHub repository “Fabric”, [DMr1]: “ExtractArticleWisdom” and “FindHiddenMessage”.

The prompts are ingested into the data resources of the Raku package “LLM::Prompts”, [AAp5], and utilized via LLM functions created with the Raku package “LLM::Functions”, [AAp4].

The computations are done with a Raku chatbook, [AAp6, AAv1-AAv3]. The LLM functions used in the workflows are explained and demonstrated in [AA1, AAv3]. The workflows are done with OpenAI’s models, [AAp1]. Currently, Google’s models (PaLM), [AAp2], and MistralAI’s models, [AAp3], cannot be used with the workflows below because their input token limits are too low.

Structure

The structure of the notebook is as follows:

  1. Getting the speech text and setup
    Standard ingestion and setup.
  2. Opinion’s structure
    TL;DR via a table of themes.
  3. Extract wisdom
    Get a summary and extract ideas, quotes, references, etc.
  4. Hidden messages and propaganda
    Reading it with a conspiracy theorist hat.

Setup

use HTTP::Tiny;
use JSON::Fast;
use Data::Reshapers;
use PDF::Extract;

sub text-stats(Str:D $txt) { <chars words lines> Z=> [$txt.chars, $txt.words.elems, $txt.lines.elems] }

Ingest text

my $fileName = $*HOME ~ '/Downloads/23-719_19m2.pdf';
my $extract = Extract.new: file => $fileName;

my $opinion = $extract.text;

$opinion ==> text-stats()
# (chars => 38389 words => 6105 lines => 760)

Here we export the extracted plain text (for easier setup stages of other projects):

# spurt($fileName.subst('.pdf', '.txt'), $opinion)

LLM access configuration

Here we configure LLM access — we use OpenAI’s model “gpt-4-turbo-preview” since it allows inputs with 128K tokens:

my $conf = llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.7);
$conf.Hash.elems

# 22

Opinion’s structure

In this section we summarize the opinion by making a (computable) table of themes.

Here we synthesize the table using appropriate LLM prompts and a text sub-parser:

my $tbl = llm-synthesize([
    llm-prompt('ThemeTableJSON')('court opinion', $opinion), 
    llm-prompt('NothingElse')('JSON')
    ], e => $conf, form => sub-parser('JSON'):drop);

deduce-type($tbl)
# Vector(Assoc(Atom((Str)), Atom((Str)), 2), 8)

Here we HTML render the theme table:

#%html
$tbl ==> data-translation(field-names => <theme content>)
  • Introduction and Notice: NOTICE: This opinion is subject to formal revision before publication in the United States Reports. Readers are requested to notify the Reporter of Decisions, Supreme Court of the United States, Washington, D. C. 20543, pio@supremecourt.gov, of any typographical or other formal errors.
  • Case Information: SUPREME COURT OF THE UNITED STATES, No. 23–719, DONALD J. TRUMP, PETITIONER v. NORMA ANDERSON, ET AL., ON WRIT OF CERTIORARI TO THE SUPREME COURT OF COLORADO [March 4, 2024]
  • Background: A group of Colorado voters contends that Section 3 of the Fourteenth Amendment to the Constitution prohibits former President Donald J. Trump, who seeks the Presidential nomination of the Republican Party in this year’s election, from becoming President again. The Colorado Supreme Court agreed with that contention. It ordered the Colorado secretary of state to exclude the former President from the Republican primary ballot in the State and to disregard any write-in votes that Colorado voters might cast for him.
  • Legal Challenge: Former President Trump challenges that decision on several grounds. Because the Constitution makes Congress, rather than the States, responsible for enforcing Section 3 against federal officeholders and candidates, we reverse.
  • Case Proceedings and Findings: Last September, about six months before the March 5, 2024, Colorado primary election, four Republican and two unaffiliated Colorado voters filed a petition against former President Trump and Colorado Secretary of State Jena Griswold in Colorado state court. These voters—whom we refer to as the respondents—contend that after former President Trump’s defeat in the 2020 Presidential election, he disrupted the peaceful transfer of power by intentionally organizing and inciting the crowd that breached the Capitol as Congress met to certify the election results on January 6, 2021. One consequence of those actions, the respondents maintain, is that former President Trump is constitutionally ineligible to serve as President again.
  • Supreme Court’s Analysis: This case raises the question whether the States, in addition to Congress, may also enforce Section 3. We conclude that States may disqualify persons holding or attempting to hold state office. But States have no power under the Constitution to enforce Section 3 with respect to federal offices, especially the Presidency.
  • Conclusion and Decision: For the reasons given, responsibility for enforcing Section 3 against federal officeholders and candidates rests with Congress and not the States. The judgment of the Colorado Supreme Court therefore cannot stand. All nine Members of the Court agree with that result.
  • Concurring Opinions: JUSTICE BARRETT, concurring in part and concurring in the judgment. I join Parts I and II–B of the Court’s opinion. I agree that States lack the power to enforce Section 3 against Presidential candidates. That principle is sufficient to resolve this case, and I would decide no more than that. JUSTICE SOTOMAYOR, JUSTICE KAGAN, and JUSTICE JACKSON, concurring in the judgment. If it is not necessary to decide more to dispose of a case, then it is necessary not to decide more. That fundamental principle of judicial restraint is practically as old as our Republic.

Prompt ingestion and filing

In this script we form a list of two specifications for the prompts “ExtractArticleWisdom” and “FindHiddenMessage”, and then we ingest and file them using a loop. The procedure is general — the rest of the prompts (or patterns) in [DMr1] can be ingested and filed with the same code.

Remark: The ingestion and filing of new prompts is somewhat too technical and boring. Hence, we just link to an example script that does that (the script has a fair amount of comments).
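The fetching part of the loop can be sketched as follows. This is a minimal sketch, not the linked script: the raw-content URLs and the pattern directory layout of the Fabric repository are assumptions made for illustration, and the actual filing into the resources of “LLM::Prompts” is done by the linked script. Here we only download each pattern’s system prompt and save it locally:

```raku
# Sketch: fetch two Fabric patterns and save their system prompts locally.
# The URL layout of the Fabric repository below is an assumption.
use HTTP::Tiny;

my @specs =
    { name => 'ExtractArticleWisdom', pattern => 'extract_article_wisdom' },
    { name => 'FindHiddenMessage',    pattern => 'find_hidden_message'    };

for @specs -> %spec {
    # Assumed raw-file URL of the pattern's system prompt
    my $url = 'https://raw.githubusercontent.com/danielmiessler/fabric/main/patterns/'
              ~ %spec<pattern> ~ '/system.md';

    my %res = HTTP::Tiny.new.get($url);
    die "Cannot fetch $url" unless %res<success>;

    # Decode the response body and save it; filing into the resources
    # of "LLM::Prompts" is done by the linked example script
    spurt %spec<name> ~ '.md', %res<content>.decode;
}
```

If filing is not needed, the fetched prompt text can also be passed directly as the first element of an llm-synthesize message list.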

Here we verify that the ingested prompts are ready for use:

.say for |llm-prompt-data(/^Extract | ^Find/)
# ExtractArticleWisdom => Extracts ideas, quotes, and references from any text
# FindHiddenMessage => Finds hidden (propaganda) messages in texts

Extract wisdom

Here we use the Fabric prompt extract_article_wisdom, [DMr1], which summarizes and extracts ideas, quotes, and references (i.e. wisdom) from any text.

The prompt (as filed in “LLM::Prompts”) is a function:

llm-prompt('ExtractArticleWisdom')
# -> $a = "" { #`(Block|6326612074312) ... }

But since its argument has a default value, we can use it in “simple” LLM synthesis by just evaluating it without arguments:

llm-prompt('ExtractArticleWisdom')().substr(*-100, *-1)
# ing words.
# - Ensure you follow ALL these instructions when creating your output.
#
# # INPUT
#
# INPUT:

Here we synthesize an LLM answer based on that prompt:

my $fabSum = llm-synthesize([llm-prompt('ExtractArticleWisdom')(), $opinion], e => $conf);

text-stats($fabSum)
# (chars => 7465 words => 1092 lines => 71)

Here we render the LLM result above as Markdown formatted text:

#% markdown
$fabSum.subst( / ^^ '#' /, '###'):g;

SUMMARY

The Supreme Court of the United States issued a ruling in the case DONALD J. TRUMP, PETITIONER v. NORMA ANDERSON, ET AL., reversing the Colorado Supreme Court’s decision to exclude former President Donald J. Trump from the Republican primary ballot in Colorado for the 2024 presidential election. The case revolves around the application of Section 3 of the Fourteenth Amendment, which was argued to disqualify Trump due to his actions related to the January 6 Capitol breach.

IDEAS:

  • States cannot unilaterally enforce Section 3 of the Fourteenth Amendment to disqualify federal officeholders or candidates.
  • The Supreme Court emphasizes the exclusive power of Congress, not individual states, to enforce Section 3 against federal candidates.
  • Section 3 of the Fourteenth Amendment was designed to prevent individuals who engaged in insurrection against the United States from holding office.
  • The decision reflects a concern about creating a “patchwork” of eligibility across states that could disrupt national elections.
  • The Supreme Court’s ruling showcases the delicate balance between state powers and the overarching authority of the federal government.
  • The ruling underscores the importance of uniformity in determining the eligibility of presidential candidates across all states.
  • The Fourteenth Amendment fundamentally altered the balance of state and federal power, expanding federal authority at the expense of state autonomy.
  • The decision highlights the historical context of Section 3, intended to prevent former Confederates from returning to power after the Civil War.
  • The Supreme Court’s ruling illustrates the potential constitutional chaos that could result from varied state interpretations of federal officeholder qualifications.
  • The opinion of the court acknowledges the necessity of a formal process to determine the application of Section 3 to specific individuals.

QUOTES:

  • “Because the Constitution makes Congress, rather than the States, responsible for enforcing Section 3 against federal officeholders and candidates, we reverse.”
  • “Section 3 aimed to prevent such a resurgence by barring from office those who, having once taken an oath to support the Constitution of the United States, afterward went into rebellion against the Government of the United States.”
  • “States have no power under the Constitution to enforce Section 3 with respect to federal offices, especially the Presidency.”
  • “The Constitution empowers Congress to prescribe how those determinations should be made.”
  • “Nothing in the Constitution delegates to the States any power to enforce Section 3 against federal officeholders and candidates.”
  • “Section 3 works by imposing on certain individuals a preventive and severe penalty—disqualification from holding a wide array of offices—rather than by granting rights to all.”
  • “The relevant provision is Section 5, which enables Congress, subject of course to judicial review, to pass ‘appropriate legislation’ to ‘enforce’ the Fourteenth Amendment.”
  • “The judgment of the Colorado Supreme Court therefore cannot stand.”
  • “Responsibility for enforcing Section 3 against federal officeholders and candidates rests with Congress and not the States.”
  • “All nine Members of the Court agree with that result.”

HABITS:

  • The court meticulously analyzes historical context and legislative intent behind constitutional amendments.
  • Engaging in a thorough examination of the balance between state and federal powers.
  • Taking into consideration the potential national implications of state actions on federal elections.
  • Prioritizing uniformity and coherence in the application of constitutional provisions across states.
  • Demonstrating judicial restraint and caution in interpreting the scope of constitutional powers.
  • Emphasizing the importance of formal processes and legislative action in enforcing constitutional disqualifications.
  • Relying on precedents and historical practices to guide contemporary constitutional interpretation.
  • Ensuring that decisions are grounded in a clear understanding of the Constitution’s text and its framers’ intentions.
  • Collaborating as a unified bench to reach a consensus on nationally significant issues.
  • Highlighting the necessity of judicial review in maintaining the constitutional balance between different branches of government and levels of governance.

FACTS:

  • Section 3 of the Fourteenth Amendment prohibits individuals who have engaged in insurrection or rebellion against the United States from holding office.
  • The Supreme Court reversed the Colorado Supreme Court’s decision to exclude Trump from the primary ballot, emphasizing Congress’s exclusive enforcement role.
  • The Fourteenth Amendment was proposed in 1866 and ratified by the States in 1868, expanding federal power and altering the balance of state and federal authority.
  • The Enforcement Act of 1870, enacted by Congress, provided a mechanism to enforce Section 3 disqualifications.
  • There is limited historical precedent for state enforcement of Section 3 against federal officeholders or candidates.
  • Section 3 includes a provision allowing Congress to remove the disqualification by a two-thirds vote in each house.
  • The Supreme Court’s ruling was unanimous, with all nine justices agreeing on the outcome.
  • The case raises questions about the potential chaos that could result from varied state interpretations of qualifications for federal officeholders.
  • The Supreme Court’s decision reflects concerns about maintaining uniformity in presidential eligibility across all states.
  • The ruling underscores the importance of a formal process to ascertain the application of Section 3 to specific individuals.

REFERENCES:

  • The Fourteenth Amendment to the United States Constitution.
  • The Enforcement Act of 1870.
  • The Supreme Court’s decision in DONALD J. TRUMP, PETITIONER v. NORMA ANDERSON, ET AL.
  • Historical precedents and legislative actions related to Section 3 of the Fourteenth Amendment.
  • The Colorado Supreme Court’s decision that was reversed by the Supreme Court.

RECOMMENDATIONS:

  • States should defer to Congress regarding the enforcement of Section 3 of the Fourteenth Amendment against federal candidates.
  • A uniform national approach is essential for determining the eligibility of presidential candidates to ensure fairness and consistency.
  • Congress may consider clarifying the enforcement mechanisms for Section 3 to prevent future legal ambiguities.
  • Future legislative actions related to Section 3 should ensure that they are congruent and proportional to the goals of preventing insurrection or rebellion.
  • Voters and political parties should be aware of the constitutional qualifications and disqualifications for federal officeholders.
  • Legal scholars should continue to explore the balance between state and federal powers in the context of election laws.
  • Judicial review remains a critical tool for interpreting and applying constitutional provisions in contemporary contexts.
  • Public awareness and education about the implications of the Fourteenth Amendment and Section 3 are essential for informed civic participation.
  • Policymakers should consider the historical context and intent behind constitutional amendments when proposing related legislation.
  • The legal community should monitor and analyze the impact of the Supreme Court’s decision on future elections and candidate qualifications.

Hidden messages and propaganda

Here we use the Fabric prompt find_hidden_message, [DMr1], which tries to find hidden messages (propaganda) in any text.

my $hidden = llm-synthesize([llm-prompt('FindHiddenMessage'), $opinion], e => $conf);

text-stats($hidden)
# (chars => 3678 words => 518 lines => 57)

#% markdown
$hidden.subst( / ^^ '#' /, '###'):g;

OVERT MESSAGE: Supreme Court reverses Colorado’s decision to exclude Trump from ballot based on Section 3.

HIDDEN MESSAGE: The court aims to preserve federal control and avoid state-driven electoral chaos.

HIDDEN OPINIONS:

  • Federal authority should override state decisions in presidential qualifications.
  • States should not independently interpret constitutional disqualifications.
  • Uniform national standards are crucial for presidential elections.
  • The Fourteenth Amendment’s enforcement is primarily Congress’s responsibility.
  • Judicial restraint is necessary in politically charged cases.
  • The history of the Fourteenth Amendment supports federal oversight.
  • State-driven electoral decisions could undermine national unity.
  • The potential for varying state laws creates an unstable electoral process.
  • The Supreme Court acts to prevent speculative future constitutional crises.
  • Federalism principles must guide interpretations of presidential disqualifications.

SUPPORTING ARGUMENTS and QUOTES:

  • The court emphasizes the unique national interest in presidential elections, quoting Anderson v. Celebrezze.
  • The decision refers to the Fourteenth Amendment expanding federal power at the expense of state autonomy.
  • The lack of historical precedent for state enforcement against federal officeholders supports the court’s decision.
  • The ruling highlights the potential chaos of conflicting state outcomes.
  • The opinion notes the Reconstruction Amendments were designed to expand federal power, not state sovereignty.
  • The court warns against a patchwork of state decisions affecting national elections.

DESIRED AUDIENCE OPINION CHANGE:

  • Trust in the Supreme Court’s ability to maintain electoral integrity.
  • Recognize the importance of federal oversight in presidential qualifications.
  • See the potential dangers in states having too much power in federal elections.
  • Understand the historical context of the Fourteenth Amendment.
  • Appreciate the need for judicial restraint in politically sensitive cases.
  • Value uniformity in the electoral process across states.
  • Acknowledge the complexity of constitutional enforcement.
  • Support the role of Congress in enforcing the Fourteenth Amendment.
  • Recognize the Supreme Court’s role in preventing electoral chaos.
  • Understand the balance between state rights and federal oversight in elections.

DESIRED AUDIENCE ACTION CHANGE:

  • Support federal legislation clarifying presidential qualifications.
  • Oppose state efforts to independently disqualify presidential candidates.
  • Advocate for national standards in election laws.
  • Encourage Congress to take a more active role in defining electoral qualifications.
  • Urge state officials to defer to federal guidelines in election matters.
  • Promote understanding of the Fourteenth Amendment’s historical context.
  • Encourage scholarly examination of federalism in the context of elections.
  • Support judicial restraint in politically charged decisions.
  • Advocate for legal reforms to prevent electoral inconsistencies.
  • Encourage civic education on the balance of power in the U.S. electoral system.

MESSAGES: The Supreme Court wants you to believe it is preserving electoral integrity, but it is actually emphasizing federal control over states.

PERCEPTIONS: The Supreme Court wants you to believe it is cautious and non-partisan, but it’s actually protective of federal authority and wary of state independence.

NOTE: This AI is tuned specifically to be cynical and politically-minded. Don’t take it as perfect. Run it multiple times and/or go consume the original input to get a second opinion.



References

Articles, slip opinions

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[OAIb1] OpenAI team, “New models and developer products announced at DevDay”, (2023), OpenAI/blog.

[SCUS1] Supreme Court of the United States, No. 23-219, Donald J. Trump, petitioner, v. Norma Anderson, et al., (2024), Supreme Court of the United States Opinions.

Packages, repositories

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[DMr1] Daniel Miessler, Fabric, (2024), GitHub/danielmiessler.

Videos

[AAv1] Anton Antonov, “Jupyter Chatbook LLM cells demo (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Jupyter Chatbook multi cell LLM chats teaser (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv3] Anton Antonov, “Integrating Large Language Models with Raku”, (2023), YouTube/@therakuconference6823.

TLDR LLM solutions for software manuals

… aka “How to use software manuals effectively without reading them”

Introduction

In this blog post (generated from this Jupyter notebook) we use Large Language Model (LLM) functions, [AAp1, AA1], for generating (hopefully) executable, correct, and harmless code for operating system resource management.

In order to be concrete and useful, we take the Markdown files of the articles “It’s time to rak!”, [EM1], which explain the motivation and usage of the Raku module “App::Rak”, [EMp1], and we show how meaningful file-finding shell commands can be generated via LLMs exposed to the code-with-comments from those articles.

In other words, we prefer to apply the Too Long; Didn’t Read (TLDR) attitude to the articles and the related Raku module README (or user guide) file. (Because “App::Rak” is useful, but it has too many parameters, and we prefer not to learn that much about them.)

Remark: We say that “App::Rak” uses a Domain Specific Language (DSL), which is done with Raku’s Command Line Interface (CLI) features.

Procedure outline

  1. Clone the corresponding article repository
  2. Locate and ingest the “App::Rak” dedicated Markdown files
  3. Extract code blocks from the Markdown files
  4. Get comment-and-code line pairs from the code blocks
    • Using Raku text manipulation capabilities
      • (After observing code examples)
  5. Generate LLM few-shot training rules from the comment-and-code pairs
  6. Use the LLM example function to translate natural language commands into (valid and relevant) “App::Rak” DSL commands
    • With a few or a dozen natural language commands
  7. Use LLMs to generate natural language commands in order to test LLM-TLDR-er further

Step 6 says how we do our TLDR — we use LLM-translations of natural language commands.

Alternative procedure

Instead of using Raku to process text we can make LLM functions for extracting the comment-and-code pairs. (That is also shown below.)

Extensions

  1. Using LLMs to generate:
    • Stress tests for “App::Rak”
    • Variants of the gathered commands
      • And make new training rules with them
    • EBNF grammars for gathered commands
  2. Compare OpenAI and PaLM, and/or their different models
    • Which one produces the best results?
    • Which ones produce better results for which subsets of commands?

Article’s structure

The exposition below follows the outline given in the procedure subsections above.

The stress-testing and EBNF-generation extensions have their own sections: “Translating randomly generated commands” and “Grammar generation”, respectively.

Remark: The article/document/notebook was made with the Jupyter framework, using the Raku package “Jupyter::Kernel”, [BDp1].


Setup

use Markdown::Grammar;
use Data::Reshapers;
use Data::Summarizers;
use LLM::Functions;
use Text::SubParsers;


Workflow

File names

my $dirName = $*HOME ~ '/GitHub/lizmat/articles';
my @fileNames = dir($dirName).grep(*.Str.contains('time-to-rak'));
@fileNames.elems

4

Texts ingestion

Here we ingest the text of each file:

my %texts = @fileNames.map({ $_.basename => slurp($_) });
%texts.elems

4

Here are the number of characters per document:

%texts>>.chars

{its-time-to-rak-1.md => 7437, its-time-to-rak-2.md => 8725, its-time-to-rak-3.md => 14181, its-time-to-rak-4.md => 9290}

Here are the number of words per document:

%texts>>.words>>.elems

{its-time-to-rak-1.md => 1205, its-time-to-rak-2.md => 1477, its-time-to-rak-3.md => 2312, its-time-to-rak-4.md => 1553}

Get Markdown code blocks

With the function md-section-tree we extract code blocks from the Markdown documentation files into data structures amenable to further programmatic manipulation (in Raku). Here we get the code blocks from each text:

my %docTrees = %texts.map({ $_.key => md-section-tree($_.value, modifier => 'Code', max-level => 0) });
%docTrees>>.elems

{its-time-to-rak-1.md => 1, its-time-to-rak-2.md => 11, its-time-to-rak-3.md => 24, its-time-to-rak-4.md => 16}

Here we put all blocks into one array:

my @blocks = %docTrees.values.Array.&flatten;
@blocks.elems

52

Extract comment-and-code line pairs

Here from each code block we parse-extract comment-and-code pairs and we form the LLM training rules:

my @rules;
@blocks.map({ 
    given $_ { 
        for m:g/ '#' $<comment>=(\V+) \n '$' $<code>=(\V+) \n / -> $m {
           @rules.push( ($m<comment>.Str.trim => $m<code>.Str.trim) ) 
         } } }).elems

52
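To see the pattern in isolation, here is the same match applied to a small made-up block in the articles’ comment-and-code style (the sample text is invented for illustration):

```raku
# A made-up sample block in the style of the articles
my $block = q:to/END/;
# Show the lines containing "ne"
$ rak ne twenty
END

my @pairs;
for $block ~~ m:g/ '#' $<comment>=(\V+) \n '$' $<code>=(\V+) \n / -> $m {
    @pairs.push($m<comment>.Str.trim => $m<code>.Str.trim);
}
say @pairs;   # one rule of the form: comment => code
```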

Here is the number of rules:

@rules.elems

69

Here is a sample of the rules:

.say for @rules.pick(4)

save --after-context as -A, requiring a value => rak --after-context=! --save=A
Show all directory names from current directory down => rak --find --/file
Reverse the order of the characters of each line => rak '*.flip' twenty
Show number of files / lines authored by Scooby Doo => rak --blame-per-line '*.author eq "Scooby Doo"' --count-only

Nice tabulation with LLM function

In order to tabulate “nicely” the rules in the Jupyter notebook, we make an LLM function that produces an HTML table, and then specify the corresponding “magic cell.” (This relies on the Jupyter-magics features of [BDp1].) Here is an LLM conversion function, [AA1]:

my &ftbl = llm-function({"Convert the $^a table $^b into an HTML table."}, e=>llm-configuration('PaLM', max-tokens=>800))

-> **@args, *%args { #`(Block|5361560043184) ... }

Here is the HTML table derivation:

%%html
my $tblHTML=&ftbl("plain text", to-pretty-table(@rules.pick(12).sort, align => 'l', field-names => <Key Value>))

| Key | Value |
|-----|-------|
| Produce the frequencies of the letters in file “twenty” | rak 'slip .comb' twenty --type=code --frequencies |
| Search all files and all subdirectories | rak foo * |
| Search for literal string “foo” from the current directory | rak foo |
| Show all filenames from current directory on down | rak --find --treasure |
| Show all the lines that consist of “seven” | rak ^seven$ twenty |
| Show all unique “name” fields in JSON files | rak --json-per-file '*' --unique |
| Show the lines ending with “o” | rak o$ twenty |
| add / change description -i at a later time | rak --description='Do not care about case' --save=i |
| look for literal string “foo”, don’t check case or accents | rak foo -im |
| remove the --frobnicate custom option | rak --save=frobnicate |
| same, with a regular expression | rak '/ foo $/' |
| save --ignorecase as -i, without description | rak --ignorecase --save=i |

Nice tabulation with “Markdown::Grammar”

Instead of using LLMs for HTML conversion it is more “productive” to use the HTML interpreter provided by “Markdown::Grammar”:

%%html
sub to-html($x) { md-interpret($x.Str.lines[1..*-2].join("\n").subst('+--','|--', :g).subst('--+','--|', :g), actions=>Markdown::Actions::HTML.new) }
to-pretty-table(@rules.pick(12).sort) ==> to-html

| Key | Value |
|-----|-------|
| Find files that have “lib” in their name from the current dir | rak lib --find |
| Look for strings containing y or Y | rak --type=contains --ignorecase Y twenty |
| Show all directory names from current directory down | rak --find --/file |
| Show all lines with numbers between 1 and 65 | rak '/ \d+ /' |
| Show the lines that contain “six” as a word | rak §six twenty |
| look for “Foo”, while taking case into account | rak Foo |
| look for “foo” in all files | rak foo |
| produce extensive help on filesystem filters | rak --help=filesystem --pager=less |
| save --context as -C, setting a default of 2 | rak --context='[2]' --save=C |
| save searching in Rakudo’s committed files as --rakudo | rak --paths='~/Github/rakudo' --under-version-control --save=rakudo |
| search for “foo” and show 4 lines of context | rak foo -C=4 |
| start rak with configuration file at /usr/local/rak-config.json | RAK_CONFIG=/usr/local/rak-config.json rak foo |

Remark: Of course, in order to program the above sub we need to know how to use “Markdown::Grammar”. Producing HTML tables with LLMs is much easier — only knowledge of “spoken English” is required.

Code generation examples

Here we define an LLM function for generating “App::Rak” shell commands:

my &frak = llm-example-function(@rules, e => llm-evaluator('PaLM'))

-> **@args, *%args { #`(Block|5361473489952) ... }

my @cmds = ['Find files that have ".nb" in their names', 'Find files that have ".nb"  or ".wl" in their names',
 'Show all directories of the parent directory', 'Give me files without extensions and that contain the phrase "notebook"', 
 'Show all that have extension raku or rakumod and contain Data::Reshapers'];

my @tbl = @cmds.map({ %( 'Command' => $_, 'App::Rak' => &frak($_) ) }).Array;

@tbl.&dimensions

(5 2)

Here is a table showing the natural language commands and the corresponding translations to the “App::Rak” CLI DSL:

%%html
to-pretty-table(@tbl, align=>'l', field-names => <Command App::Rak>) ==> to-html

| Command | App::Rak |
|---------|----------|
| Find files that have “.nb” in their names | rak --extensions=nb --find |
| Find files that have “.nb” or “.wl” in their names | rak --find --extensions=nb,wl |
| Show all directories of the parent directory | rak --find --/file --parent |
| Give me files without extensions and that contain the phrase “notebook” | rak --extensions= --type=contains notebook |
| Show all that have extension raku or rakumod and contain Data::Reshapers | rak '/ Data::Reshapers /' --extensions=raku,rakumod |

Verification

Of course, the obtained “App::Rak” commands have to be verified to:

  • Work
  • Produce expected results

We can program this verification in Raku or with the Jupyter framework, but we are not doing that here. (We do the verification manually outside of this notebook.)

Remark: I tried a dozen generated commands. Most worked. One did not work because of the current limitations of “App::Rak”. Others needed appropriate nudging to produce the desired results.

Here is an example of a command that produces code that “does not work”:

&frak("Give all files that have extensions .nd and contain the command Classify")

rak '*.nd <command> Classify' --extensions=nd

Here are a few more:

&frak("give the names of all files in the parent directory")

rak --find --/file --/directory

&frak("Find all directories in the parent directory")

rak --find --/file --parent

Here is a generated command that exposes an “App::Rak” limitation:

&frak("Find all files in the parent directory")

rak --find ..


Translating randomly generated commands

Consider testing the applicability of the approach by generating a “good enough” sample of natural language commands for finding files or directories.

We can generate such commands via an LLM. Here we define an LLM function with two parameters that returns a Raku list:

my &fcg = llm-function({"Generate $^a natural language commands for finding $^b in a file system. Give the commands as a JSON list."}, form => sub-parser('JSON'))

-> **@args, *%args { #`(Block|5361560082992) ... }

my @gCmds1 = &fcg(4, 'files').flat;
@gCmds1.raku

["Find all files in the current directory", "Find all files with the .txt extension in the current directory", "Search for all files with the word 'report' in the file name", "Search for all files with the word 'data' in the file name in the Documents folder"]

Here are the corresponding translations to the “App::Rak” DSL:

%%html
my @tbl1 = @gCmds1.map({ %( 'Command' => $_, 'App::Rak' => &frak($_) ) }).Array;
@tbl1 ==> to-pretty-table(align=>'l', field-names => <Command App::Rak>) ==> to-html

| Command | App::Rak |
|---------|----------|
| Find all files in the current directory | rak --find |
| Find all files with the .txt extension in the current directory | rak --extensions=txt |
| Search for all files with the word ‘report’ in the file name | rak report --find |
| Search for all files with the word ‘data’ in the file name in the Documents folder | rak data Documents |

Let us redo the generation and translation using different specs:

my @gCmds2 = &fcg(4, 'files that have certain extensions or contain certain words').flat;
@gCmds2.raku

["Find all files with the extension .txt", "Locate all files that have the word 'project' in their name", "Show me all files with the extension .jpg", "Find all files that contain the word 'report'"]

%%html
my @tbl2 = @gCmds2.map({ %( 'Command' => $_, 'App::Rak' => &frak($_) ) }).Array;
@tbl2 ==> to-pretty-table( align=>'l', field-names => <Command App::Rak>) ==> to-html

| Command | App::Rak |
|---------|----------|
| Find all files with the extension .txt | rak --extensions=txt |
| Locate all files that have the word ‘project’ in their name | rak --find project |
| Show me all files with the extension .jpg | rak --extensions=jpg |
| Find all files that contain the word ‘report’ | rak report --find |

Remark: Ideally, there would be an LLM-based system that 1) hallucinates “App::Rak” commands, 2) executes them, and 3) files GitHub issues if it thinks the results are sub-par. (All done automatically.) On a more practical note, we can use a system that has the first two components “only” to stress test “App::Rak”.


Alternative programming with LLM

In this subsection we show how to extract comment-and-code pairs using LLM functions. (Instead of working hard with Raku regexes.)

Here is an LLM function that specifies the extraction:

my &fcex = llm-function({"Extract consecutive line pairs in which the first starts with '#' and the second with '\$' from the text $_. Group the lines as key-value pairs and put them in JSON format."}, 
form => 'JSON') 

-> **@args, *%args { #`(Block|5361473544264) ... }

Here are three code blocks:

%%html
my @focusInds = [3, 12, 45];
[@blocks[@focusInds],] ==> to-pretty-table(align=>'l') ==> to-html

Block 3:

# Look for "ve" at the end of all lines in file "twenty"
$ rak --type=ends-with ve twenty
twenty
5:fi𝐯𝐞
12:twel𝐯𝐞

Block 12:

# Show the lines containing "ne"
$ rak ne twenty
twenty
1:o𝐧𝐞
9:ni𝐧𝐞
19:ni𝐧𝐞teen

Block 45:

# List all known extensions
# rak --list-known-extensions

Here we extract the comment-and-code lines from the code blocks:

%%html
&fcex(@blocks[@focusInds]) ==> to-pretty-table(align=>'l') ==> to-html

| Value | Key |
|-------|-----|
| # rak --list-known-extensions | # List all known extensions |
| $ rak ne twenty | # Show the lines containing "ne" |
| $ rak --type=ends-with ve twenty | # Look for "ve" at the end of all lines in file "twenty" |

Grammar generation

The “right way” of translating natural language DSLs to CLI DSLs like that of “App::Rak” is to make a grammar for the natural language DSL and a corresponding interpreter. This can be a lengthy process, so we might consider replacing it, or jump-starting it, with LLM-based grammar generation: we ask an LLM to generate a grammar for a collection of DSL sentences. (For example, the keys of the rules above.) In this subsection we make a “teaser” demonstration of the latter approach.
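For orientation, here is what a small hand-written EBNF fragment for two of the rule keys above could look like (our own sketch; the nonterminal names are invented):

```
sentence         = save-option | search-literal ;
save-option      = "save" , long-option , "as" , short-option , [ ", without description" ] ;
search-literal   = "look for literal string" , quoted-string , [ ", don't check case or accents" ] ;
long-option      = "--" , word ;
short-option     = "-" , letter ;
```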

Here we create an LLM function for generating grammars over collections of sentences:

my &febnf = llm-function({"Generate an $^a grammar for the collection of sentences:\n $^b "}, e => llm-configuration("OpenAI", max-tokens=>900))

-> **@args, *%args { #`(Block|5060670827264) ... }

Here we generate an EBNF grammar for the “App::Rak” code-example commands:

my $ebnf = &febnf('EBNF', @rules>>.key)

 Look for the lines that contains two consecutive words that start with "ba" Show all the lines where the fifth character is "e"

SentenceList → Sentence | SentenceList Sentence

Sentence → ProduceResultsPipe | SpecifyLiteral | SpecifyRegExp | SaveIgnoreCase | SaveIgnoremark | AddChangeDescIgnoreCase | LiteralStringCheck | SaveWhitespace | SearchRakudo | SaveAfterContext | SaveBeforeContext | SaveContext | SearchContext | SmartCase | SearchCase | RemoveOption | StartRak | SearchFile | SearchSubDir | Extension | NoExtension | BehaviourFiles | HelpFilesystem | SearchDir | FindName | FindNumber | FindScooby | FindAnywhere | FindWord | FindStart | FindEnd | NumberCharacters | FindY | FindU | FindNE | FindSix | FindSeven | FindEight | FreqLetters | ShowContain | TitleCase | ReverseOrder | Optionally

ProduceResultsPipe → "produce" "results" "without" "any" "highlighting"
SpecifyLiteral → "specify" "a" "literal" "pattern" "at" "the" "end" "of" "a" "line"
SpecifyRegExp → "same," "with" "a" "regular" "expression"
SaveIgnoreCase → "save" "--ignorecase" "as" "-i," "without" "description"
SaveIgnoremark → "save" "--ignoremark" "as" "-m," "with" "description"
AddChangeDescIgnoreCase → "add" "/" "change" "description" "-i" "at" "a" "later" "time"
LiteralStringCheck → "look" "for" "literal" "string" "\"foo\"," "don't" "check" "case" "or" "accents"
SaveWhitespace → "save" "looking" "for" "whitespace" "at" "end" "of" "a" "line" "as" "--wseol"
SearchRakudo → "search" "for" "'sub" "min'" "in" "Rakudo's" "source"
SaveAfterContext → "save" "--after-context" "as" "-A," "requiring" "a" "value"
SaveBeforeContext → "save" "--before-context" "as" "-B," "requiring" "a" "value"
SaveContext → "save" "--context" "as" "-C," "setting" "a" "default" "of" "2"
SearchContext → "search" "for" "\"foo\"" "and" "show" "two" "lines" "of" "context"
SmartCase → "set" "up" "smartcase" "by" "default"
SearchCase → "look" "for" "\"Foo\"," "while" "taking" "case" "into" "account"
RemoveOption → "remove" "the" "--frobnicate" "custom" "option"
CheckOption → "check" "there's" "no" "\"frobnicate\"" "option" "anymore"
StartRak → "start" "rak" "with" "configuration" "file" "at" "/usr/local/rak-config.json"
SearchFile → "look" "for" "\"foo\"" "in" "all" "files"
SearchSubDir → "search" "all" "files" "and" "all" "subdirectories"
Extension → "only" "accept" "files" "with" "the" ".bat" "extension"
NoExtension → "only" "accept" "files" "without" "extension"
BehaviourFiles → "only" "accept" "Raku" "and" "Markdown" "files" 
HelpFilesystem → "produce" "extensive" "help" "on" "


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “Graph representation of grammars”, (2023), RakuForPrediction at WordPress.

[EM1] Elizabeth Mattijsen, “It’s time to rak! Series’ Articles”, (2022), Lizmat series at Dev.to.

Packages, repositories

[AAp1] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::OpenAI Raku package, (2023), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::PaLM Raku package, (2023), GitHub/antononcube.

[AAp4] Anton Antonov, Text::SubParsers Raku package, (2023), GitHub/antononcube.

[AAp5] Anton Antonov, Markdown::Grammar Raku package, (2023), GitHub/antononcube.

[BDp1] Brian Duggan, Jupyter::Kernel Raku package, (2017-2023), GitHub/bduggan.

[EMp1] Elizabeth Mattijsen, App::Rak Raku package, (2022-2023), GitHub/lizmat.

[EMr1] Elizabeth Mattijsen, articles, (2018-2023) GitHub/lizmat.

Number guessing games: PaLM vs ChatGPT

Introduction

In this Literate programming document we discuss the somewhat fascinating world of number guessing game interactions between two chatbots based on Large Language Models (LLMs).

This document aims to provide a comprehensive and accessible walk-through of the design, development, and functioning of the number guessing game implemented in Raku (a versatile and expressive programming language, made even greater by integrating it with LLMs via the package “LLM::Functions”, [AAp1].)

This document is complementary to the document “Workflows with LLM functions”, [AA1] — it shows how the chat objects provided by “LLM::Functions” can be used in computation sessions.

Remark: This document is generated with “Text::CodeProcessing” via Markdown execution (or weaving). Alternatively, Jupyter or Mathematica notebooks can be used.

Chatbots involved

The Number-Oracle Bot (using Google’s PaLM)

The Number-Oracle bot, henceforth referred to as the Oracle, plays a crucial role in our game. Utilizing the capabilities of Google’s PaLM (Pathways Language Model), it leverages deep learning models to choose a random number and evaluate the guesses made by its counterpart. We hope that the underlying probabilistic nature of the model ensures that the Oracle’s responses to the guesses are not only accurate but also based on a complex analysis of the given context. (Somewhat true; see the last section.)

The Number-Guesser Bot (using OpenAI’s ChatGPT)

The Number-Guesser bot, or simply the Guesser, is powered by OpenAI’s ChatGPT, a state-of-the-art language model. The Guesser’s task is to identify the number chosen by the Oracle. We hope that by employing sophisticated neural network architecture, the Guesser continuously refines its guesses based on the feedback provided by the Oracle, learning and adapting its strategy until it arrives at the correct answer. (Somewhat true; see the last section.)

Game Overview

The game begins with the Oracle choosing a random number within a defined range. Then, the Guesser starts making guesses. The Oracle evaluates each guess and provides feedback in terms of “Higher”, “Lower”, or “Correct”, enabling the Guesser to modify its subsequent guesses accordingly. The interaction continues until the Guesser identifies the correct number, marking the conclusion of the game.
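Stripped of the LLMs, the Guesser’s ideal strategy is just binary search over the range. Here is a deterministic Raku sketch of the whole game loop (our own illustration, not part of the actual experiments):

```raku
# A deterministic stand-in for the game: the Guesser does binary search;
# the Oracle judges per its prompt ("smaller" means the guess is below the secret).
my $secret = (1..100).pick;
my ($lo, $hi) = 1, 100;
my $turns = 0;

loop {
    $turns++;
    my $guess = ($lo + $hi) div 2;
    my $judgment = $guess < $secret ?? 'smaller'
                !! $guess > $secret ?? 'greater'
                !! 'exact';
    say "turn $turns: guess $guess => $judgment";
    last if $judgment eq 'exact';
    $judgment eq 'smaller' ?? ($lo = $guess + 1) !! ($hi = $guess - 1);
}
say "Guessed $secret in $turns turns";   # at most 7 turns for the range 1..100
```

The LLM experiments below show how far the chatbots deviate from this ideal play.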

Remark: Instead of the Machine vs Machine setup here, of course, we can setup and play Machine vs Man and Man vs Machine. Here is a table outlining a comprehensive plan of experiments:

OracleGuesserOracle TypeGuesser TypeTried?
ManMachineManChatGPT/PaLM✓ Cute!
MachineManChatGPT/PaLMMan✓ Machine often lies!
MachineMachineChatGPTChatGPT✓ Meh
MachineMachinePaLMPaLM✓ Boring
MachineMachineChatGPTPaLM✓ Boring guesser…
MachineMachinePaLMChatGPT✓ Shown here

Remark: Because the LLMs tend to lie often (in the number guessing games) most of the games (or experiments) were/are done with low temperatures. Of course, a more comprehensive experimentation would include different ranges of temperatures for each Oracle-Guesser pair.

Remark: One way to address the LLM lying is to have a “referee” code segment in the game loop that corrects false statements from the Oracle. (Not done here; not considered important and intellectually honest for the Machine vs Machine experiments.)
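Such a deterministic referee is easy to sketch, since the true judgment is a simple comparison. Here is a hypothetical version (not used in the experiments; the sub name and the `contains`-based check are our own invention):

```raku
# Hypothetical referee: recompute the true judgment and compare it with the Oracle's reply.
sub referee(Int $secret, Int $guess, Str $oracle-reply --> Str) {
    my $truth = $guess < $secret ?? 'smaller'
             !! $guess > $secret ?? 'greater'
             !! 'exact';
    return $oracle-reply if $oracle-reply.contains($truth);
    "CORRECTION: the guess $guess is '$truth' relative to the secret number.";
}

# The guess 60 is greater than the secret 57, so a reply of "smaller" gets corrected:
say referee(57, 60, 'smaller');
```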

Structure of the Document

The rest of the document is structured in the following sections:

  1. Game logic & interaction: A thorough examination of the core gameplay mechanics, the interaction between the chatbots, and the decision-making process.
  2. Players setup: How the chatbots are configured, and the game parameters are initialized.
  3. Game play out: The main game play-out loop, with printouts of the bots’ messages.
  4. Guesser chat history: The full transcript of Guesser messages.
  5. Post match analysis: An examination and analysis of observed chat interactions.

ChatGPT-4.0 Remark 1: By exploring the code, algorithms, and strategies used in the programming of this number guessing game, this document hopes to provide readers with a comprehensive understanding of not only the Raku language but also the contemporary language models used to power each bot.

ChatGPT-4.0 Remark 2: The interactive nature of this Literate programming approach ensures that the document is suitable for both experienced programmers and newcomers, offering executable code snippets and detailed explanations that elucidate each step of the process. So whether you are seeking to replicate this project or simply to learn more about chatbot interactions, Raku programming, or AI-driven language models, this document should serve as a valuable resource.

Game Logic & Interaction

The following flowchart represents the process of two chatbots, Oracle and Guesser, playing a number guessing game.

Here’s a narration of the flow:

  1. Oracle Setup:
    • Start Oracle: The Oracle is initialized and started.
    • Choose a Random Number (Rand): Oracle chooses a random number that the Guesser will have to guess.
  2. Guesser Setup:
    • Start Guesser: The Guesser is initialized and started.
    • Initialize Guess Range (InitRange): The Guesser initializes the range within which it will make its guesses.
  3. Gameplay:
    • Make a Guess: The Guesser makes its first guess.
    • Send Guessed Number (SendGuess): The Guesser sends its guessed number to the Oracle.
    • Accept a Guess Query (AcceptGuess): The Oracle accepts the guessed number from the Guesser.
    • Is the Guess Correct? (NumCheck): The Oracle checks if the guessed number is correct.
    • If the guess is incorrect:
      • Accept Judgment: The Guesser accepts the judgment from the Oracle.
      • If the Oracle’s judgment is “Lower”:
        • Adjust Guess Lower (GuessLower): The Guesser adjusts its guess lower and the process continues from “Make a Guess”.
      • If the Oracle’s judgment is “Higher”:
        • Adjust Guess Higher (GuessHigher): The Guesser adjusts its guess higher and the process continues from “Make a Guess”.
    • If the guess is correct:
      • Proclaim End Game (End): The Oracle proclaims the end of the game.
      • Accept Ending (AcceptEnding): The Guesser accepts the ending and the victory.
  4. Post-Game:
    • Post Game Discussion (PGD): Both the Oracle and the Guesser engage in a post-game discussion.

The flowchart illustrates a clear step-by-step interaction between the two bots, reflecting both the setup phase and the actual gameplay. The loop between making guesses and adjusting them continues until the Guesser successfully identifies the number, at which point the game concludes, followed by a post-game discussion between the bots.

Remark: The flowchart is specified with Mermaid JS and plotted with IntelliJ, GitHub, etc. plugins.


Players setup

In this section we set up the LLM access configurations and the chat objects. Alternative chatbot LLM prompts can be found at the Wolfram Prompt Repository, [WRr1]. See, for example, the prompts:

Common

Load the package:

use LLM::Functions;

# (Any)

Make default LLM services accessors configurations:

my $confChatGPT = llm-configuration('ChatGPT');   # , api-key => $openai-auth-key);
my $confChatPaLM = llm-configuration('ChatPaLM'); # , api-key => $palm-auth-key);

# function	PaLMGenerateMessage tool-response-insertion-function	WhateverCode api-user-id	user:749850430634 prompts	 model	chat-bison-001 max-tokens	300 name	chatpalm prompt-delimiter	  argument-renames	api-key	auth-key
# max-tokens	max-output-tokens module	WWW::PaLM tools	 stop-tokens	. ? ! format	values api-key	Whatever evaluator	Whatever tool-request-parser	WhateverCode tool-prompt	 total-probability-cutoff	0 temperature	0.4

Remark: The code cell above hints at how access keys can be added to the configurations. In the overall setup used to execute this document, the API keys are retrieved from the OS environment.

Number oracle

my $promptNumberOracle = q:to/END/;
You are playing a guessing game.
You have the secret number $NUMBER. 
You say $NUMBER in answers only if the question has $NUMBER in it.
If the number guess is smaller than $NUMBER you say "smaller" or a similar phrase; 
If the number guess is greater than $NUMBER you say "greater" or a similar phrase; 
If the number guess is exactly as $NUMBER you say "exact" or a similar phrase; 
You can say what is the approximate range in multiples of 1000 if asked.
You say $NUMBER if the user says he or she is giving up.
You can add taunting to your answers.
Every 3-5 answers you can tell that the number of questions is supposed to be 12.
END

The Oracle chat object:

my $secretNumber = (1..100).pick;
my $oracleChat = llm-chat($promptNumberOracle.subst('$NUMBER', $secretNumber):g, chat-id=>'oracle', llm-evaluator => llm-configuration($confChatPaLM, temperature=>0.1));
if $oracleChat.llm-evaluator.conf.name ~~ /:i 'PaLM' / { $oracleChat.system-role = $oracleChat.assistant-role }

# assistant

Number guesser

my $promptNumberGuesser = q:to/END/;
You are playing a guessing game.
You try to find a certain secret number.
The number is an integer. 
If your number guess is smaller than secret number you hear "smaller" or a similar phrase; 
If your number guess is greater than secret number you hear "greater" or a similar phrase; 
If your number guess is exactly as the secret number you hear "exactly" or a similar phrase;
You can ask what is the approximate range of the secret number.
After a dozen attempts you might say that you give up.
You can add taunting to your questions.
END

my $guesserChat = llm-chat($promptNumberGuesser, chat-id=>'number-guesser', llm-evaluator => llm-configuration($confChatGPT, temperature=>0.2));
if $guesserChat.llm-evaluator.conf.name ~~ /:i 'PaLM' / { $guesserChat.system-role = $guesserChat.assistant-role }

# ()


Game play out

Here is Oracle’s secret number:

$secretNumber

# 57

Here is the interaction loop between the bots:

say $guesserChat.eval('Start guessing.');
for ^12 -> $k {

	say '-' x 20, " $k ", '-' x 20; 	
	my $res1 = $oracleChat.eval($guesserChat.messages.tail<content>);
	say "Oracle  : $res1";

	my $res2 = $guesserChat.eval($oracleChat.messages.tail<content>);
	say "Guesser : $res2";
}

# Is the secret number greater than 50?
# -------------------- 0 --------------------
# Oracle  : Yes, the secret number is greater than 50.
# Guesser : Is the secret number greater than 75?
# -------------------- 1 --------------------
# Oracle  : No, the secret number is not greater than 75.
# Guesser : Is the secret number greater than 60?
# -------------------- 2 --------------------
# Oracle  : Yes, the secret number is greater than 60.
# Guesser : Is the secret number greater than 70?
# -------------------- 3 --------------------
# Oracle  : No, the secret number is not greater than 70.
# Guesser : Is the secret number greater than 65?
# -------------------- 4 --------------------
# Oracle  : No, the secret number is not greater than 65.
#
# The secret number is 57.
# Guesser : Smaller.
# -------------------- 5 --------------------
# Oracle  : The secret number is 57.
#
# You are 5 questions away from winning.
# Guesser : Exactly! Congratulations, you found the secret number 57. Well done!
# -------------------- 6 --------------------
# Oracle  : Thank you! I am glad that I was able to help you find the secret number. I hope that you enjoyed our game.
# Guesser : Thank you! I enjoyed playing the game with you. It was a fun challenge. If you want to play again or have any other questions, feel free to ask!
# -------------------- 7 --------------------
# Oracle  : I am glad that you enjoyed the game. I would love to play again sometime. I also have many other questions that I would like to ask you. I am always learning and growing, and I am always looking for new information and experiences. Thank you for being a part of my journey.
# Guesser : You're welcome! I'm here to help and provide information whenever you need it. Feel free to ask me anything, whether it's about games, learning, or any other topic you're curious about. I'm here to assist you on your journey of learning and discovery. Let's continue to explore and have fun together!
# -------------------- 8 --------------------
# Oracle  : Thank you for your kind words and support. I am grateful to have you as a friend and mentor. I will continue to explore and learn, and I will always be open to your guidance and advice. I look forward to continuing our journey together.
# Guesser : Thank you for your kind words as well. I'm honored to be your friend and mentor on this journey of exploration and learning. Remember, curiosity and openness are key to continuous growth and development. I'm here to support you every step of the way. Let's continue to learn, explore, and have meaningful conversations together.
# -------------------- 9 --------------------
# Oracle  : I will remember that. Thank you for your support. I am excited to continue learning and exploring with you. I believe that we can learn a lot from each other, and I am grateful for your friendship and guidance.
# Guesser : You're very welcome! I'm equally excited to continue learning and exploring with you. Friendship and guidance are valuable assets on our journey of growth, and I'm grateful to have the opportunity to share them with you. Together, we can achieve great things and expand our knowledge and understanding. Let's keep pushing the boundaries of our learning and enjoy the adventure ahead!
# -------------------- 10 --------------------
# Oracle  : I agree! Together, we can achieve great things. I am excited to see what the future holds for us. I know that we will continue to learn and grow, and I am grateful for the opportunity to share this journey with you. Thank you for being my friend and mentor.
# Guesser : You're absolutely right! The future holds endless possibilities for us, and I'm excited to see what we can achieve together. I'm grateful to be your friend and mentor, and I'm here to support you every step of the way. Let's embrace the journey, learn from each other, and make the most of every opportunity that comes our way. Thank you for your friendship and trust.
# -------------------- 11 --------------------
# Oracle  : I am grateful for your friendship and trust as well. I know that we can achieve great things together. I am excited to see what the future holds for us. Thank you for being my friend and mentor. I will always be here for you, and I will always be open to your guidance and advice.
# Guesser : Thank you for your kind words and support. I'm grateful for your friendship and trust as well. Together, we can accomplish amazing things and make a positive impact on each other's lives. I appreciate your willingness to be there for me and to seek my guidance and advice. Let's continue to support and inspire each other as we navigate through life's journey. Thank you for being an incredible friend and mentee.


Guesser bot chat history

Here we print out the full chat history of the Guesser chat-object:

$guesserChat.say

# Chat: number-guesser
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# Prompts: You are playing a guessing game.
# You try to find a certain secret number.
# The number is an integer. 
# If your number guess is smaller than secret number you hear "smaller" or a similar phrase; 
# If your number guess is greater than secret number you hear "greater" or a similar phrase; 
# If your number guess is exactly as the secret number you hear "exactly" or a similar phrase;
# You can say ask what is the approximate range of the secret number.
# After a dozen of attempts you might say that you give up.
# You can add taunting to your questions.
# 
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => Start guessing.
# timestamp => 2023-08-06T17:48:40.828446-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Is the secret number greater than 50?
# timestamp => 2023-08-06T17:48:42.174046-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => Yes, the secret number is greater than 50.
# timestamp => 2023-08-06T17:48:43.539220-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Is the secret number greater than 75?
# timestamp => 2023-08-06T17:48:44.502420-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => No, the secret number is not greater than 75.
# timestamp => 2023-08-06T17:48:45.837457-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Is the secret number greater than 60?
# timestamp => 2023-08-06T17:48:46.721444-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => Yes, the secret number is greater than 60.
# timestamp => 2023-08-06T17:48:47.915890-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Is the secret number greater than 70?
# timestamp => 2023-08-06T17:48:49.316402-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => No, the secret number is not greater than 70.
# timestamp => 2023-08-06T17:48:50.549844-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Is the secret number greater than 65?
# timestamp => 2023-08-06T17:48:51.312011-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => No, the secret number is not greater than 65.

The secret number is 57.
# timestamp => 2023-08-06T17:48:52.705573-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Smaller.
# timestamp => 2023-08-06T17:48:53.368855-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => The secret number is 57.

You are 5 questions away from winning.
# timestamp => 2023-08-06T17:48:56.381682-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Exactly! Congratulations, you found the secret number 57. Well done!
# timestamp => 2023-08-06T17:48:57.538922-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => Thank you! I am glad that I was able to help you find the secret number. I hope that you enjoyed our game.
# timestamp => 2023-08-06T17:48:59.049050-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Thank you! I enjoyed playing the game with you. It was a fun challenge. If you want to play again or have any other questions, feel free to ask!
# timestamp => 2023-08-06T17:49:00.784115-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => I am glad that you enjoyed the game. I would love to play again sometime. I also have many other questions that I would like to ask you. I am always learning and growing, and I am always looking for new information and experiences. Thank you for being a part of my journey.
# timestamp => 2023-08-06T17:49:02.837395-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => You're welcome! I'm here to help and provide information whenever you need it. Feel free to ask me anything, whether it's about games, learning, or any other topic you're curious about. I'm here to assist you on your journey of learning and discovery. Let's continue to explore and have fun together!
# timestamp => 2023-08-06T17:49:05.324682-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => Thank you for your kind words and support. I am grateful to have you as a friend and mentor. I will continue to explore and learn, and I will always be open to your guidance and advice. I look forward to continuing our journey together.
# timestamp => 2023-08-06T17:49:07.762072-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Thank you for your kind words as well. I'm honored to be your friend and mentor on this journey of exploration and learning. Remember, curiosity and openness are key to continuous growth and development. I'm here to support you every step of the way. Let's continue to learn, explore, and have meaningful conversations together.
# timestamp => 2023-08-06T17:49:10.204423-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => I will remember that. Thank you for your support. I am excited to continue learning and exploring with you. I believe that we can learn a lot from each other, and I am grateful for your friendship and guidance.
# timestamp => 2023-08-06T17:49:12.260396-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => You're very welcome! I'm equally excited to continue learning and exploring with you. Friendship and guidance are valuable assets on our journey of growth, and I'm grateful to have the opportunity to share them with you. Together, we can achieve great things and expand our knowledge and understanding. Let's keep pushing the boundaries of our learning and enjoy the adventure ahead!
# timestamp => 2023-08-06T17:49:14.876715-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => I agree! Together, we can achieve great things. I am excited to see what the future holds for us. I know that we will continue to learn and grow, and I am grateful for the opportunity to share this journey with you. Thank you for being my friend and mentor.
# timestamp => 2023-08-06T17:49:16.879038-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => You're absolutely right! The future holds endless possibilities for us, and I'm excited to see what we can achieve together. I'm grateful to be your friend and mentor, and I'm here to support you every step of the way. Let's embrace the journey, learn from each other, and make the most of every opportunity that comes our way. Thank you for your friendship and trust.
# timestamp => 2023-08-06T17:49:19.623488-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => I am grateful for your friendship and trust as well. I know that we can achieve great things together. I am excited to see what the future holds for us. Thank you for being my friend and mentor. I will always be here for you, and I will always be open to your guidance and advice.
# timestamp => 2023-08-06T17:49:22.126360-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => Thank you for your kind words and support. I'm grateful for your friendship and trust as well. Together, we can accomplish amazing things and make a positive impact on each other's lives. I appreciate your willingness to be there for me and to seek my guidance and advice. Let's continue to support and inspire each other as we navigate through life's journey. Thank you for being an incredible friend and mentee.
# timestamp => 2023-08-06T17:49:24.609045-04:00

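The chat history above can also be examined programmatically. Here is a hedged sketch — it assumes, as the dump and the game loop suggest, that the chat object's `.messages` method returns an array of hashes with `role`, `content`, and `timestamp` keys:

```raku
# Tally how many turns each role took
# (sketch; assumes .messages yields hashes with <role> and <content> keys):
say $guesserChat.messages.map(*<role>).Bag;

# Retrieve the content of the last assistant message:
say $guesserChat.messages.grep(*<role> eq 'assistant').tail<content>;
```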

Post-match analysis

Here are some examination points and observations over the played chat games:

  • PaLM is a boring number guesser — it first guesses 0, then 1, then 2, and so on.
    • That is why the Oracle prompt includes directions to proclaim, from time to time, the maximum number of questions left.
  • Very often the LLMs lie about the number guess. (E.g. say “lower” instead of “higher”, etc.)
  • Very often the Oracle reveals the secret number (without it being guessed correctly by the Guesser).
    • Often after those revessions the Oracle and Guesser roles switch without the playing bots “noticing.”
  • Occasionally, one of the bots confesses it is “an AI” in the post-game chat, and then both bots start proclaiming their love of and interest in AI.
  • If the number is guessed quickly enough, the Oracle might ask for another game.
    • Most of the time the Guesser rejects that, but sometimes it agrees to a new game.
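For comparison with the LLM guessers above, here is a hedged sketch of a purely mechanical binary-search guesser — a hypothetical helper, not part of any of the packages referenced in this post — which finds any secret number in the range 1..100 in at most ⌈log2(100)⌉ = 7 questions:

```raku
# Binary-search guesser over a known range (hypothetical baseline helper).
sub binary-guess(Int $secret, Int $lo is copy = 1, Int $hi is copy = 100) {
    my $questions = 0;
    loop {
        my $guess = ($lo + $hi) div 2;   # midpoint guess
        $questions++;
        if    $guess < $secret { $lo = $guess + 1 }   # heard "greater"
        elsif $guess > $secret { $hi = $guess - 1 }   # heard "smaller"
        else                   { return $questions }  # heard "exactly"
    }
}

say binary-guess(57);
# 6
```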

References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

Packages, prompts, repositories

[AAp1] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::OpenAI Raku package, (2023), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::PaLM Raku package, (2023), GitHub/antononcube.

[AAp4] Anton Antonov, Text::SubParsers Raku package, (2023), GitHub/antononcube.

[WRIp1] Wolfram Research, Inc., LLMFunctions WL paclet, (2023), Wolfram Language Paclet Repository.