Chatbook New Magic Cells

Introduction

In this blog post (notebook) we showcase the “magic” cells recently added (May 2024) to the notebooks of “Jupyter::Chatbook”, [AA1, AAp5, AAv1].

“Jupyter::Chatbook” provides “LLM-ready” notebooks and is built on “Jupyter::Kernel”, [BDp1], created by Brian Duggan. A general principle of “Jupyter::Chatbook” is that the Raku packages used to implement the interactive service-access cells are also pre-loaded into the notebooks’ Raku contexts (i.e. at the beginning of the notebooks’ Raku sessions).
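For example, since “LLM::Functions” is pre-loaded, LLM calls can be made in a regular code cell without any preamble. Here is a minimal sketch; the `use` statement is needed only outside of a chatbook, and a configured LLM service API key is assumed:

```raku
# Not needed in a chatbook -- the package is pre-loaded there
use LLM::Functions;

# Synthesize a response with the default LLM service and model
say llm-synthesize('What is the capital of Bulgaria?');
```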

Here is a mind-map that shows the Raku packages that are “pre-loaded” and the available interactive cells:

#% mermaid, format=svg, background=SlateGray
mindmap
  (**Chatbook**)
    (Direct **LLM** access)
      OpenAI
        ChatGPT
        DALL-E
      Google
        PaLM
        Gemini
      MistralAI
      LLaMA
    (Direct **DeepL** access)
      Plain text result
      JSON result
    (**Notebook-wide chats**)
      Chat objects
        Named
        Anonymous
      Chat meta cells
      Prompt DSL expansion
    (Direct **MermaidInk** access)
      SVG result
      PNG result
    (Direct **Wolfram|Alpha** access)
      wa1["Plain text result"]
      wa2["Image result"]
      wa3["Pods result"]
    (**Pre-loaded packages**)
      LLM::Functions
      LLM::Prompts
      Text::SubParsers
      Data::Translators
      Data::TypeSystem
      Clipboard
      Text::Plot
      Image::Markup::Utilities
      WWW::LLaMA
      WWW::MermaidInk
      WWW::OpenAI
      WWW::PaLM
      WWW::Gemini
      WWW::WolframAlpha
      Lingua::Translation::DeepL

Remark: A recent improvement is that Mermaid-JS cells take arguments for the output format and background. Since the beginning of March 2024 the default output format is SVG; with it diagrams are obtained 2-3 times faster. Before March 9, 2024, “PNG” was the default (and only available) format.

The structure of the rest of the notebook:

  • DeepL
    Translation from multiple languages into multiple other languages
  • Google’s Gemini
    Replaces both PaLM and Bard
  • Wolfram|Alpha
    Computational search engine

DeepL

In this section we show magic cells for direct access to the translation service DeepL. The API key can be given as a magic cell argument; if it is not, the environment variable DEEPL_AUTH_KEY is used. See “Lingua::Translation::DeepL”, [AAp1], for more details.

#% deepl, to-lang=German, formality=less, format=text
I told you to get the frames from the other warehouse!
# Ich habe dir gesagt, du sollst die Rahmen aus dem anderen Lager holen!

#% deepl, to-lang=Russian, formality=more, format=text
I told you to get the frames from the other warehouse!
# Я же просил Вас взять рамки с другого склада!
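Since “Lingua::Translation::DeepL” is pre-loaded, the magic cells above can also be reproduced in a regular code cell. Here is a sketch, assuming the package’s `deepl-translation` function (see [AAp1]) and a valid DEEPL_AUTH_KEY:

```raku
use Lingua::Translation::DeepL;

# Same request as in the first magic cell above
say deepl-translation(
        'I told you to get the frames from the other warehouse!',
        to-lang   => 'German',
        formality => 'less');
```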

DeepL’s source languages:

#% html
deepl-source-languages().pairs>>.Str.sort.List
==> to-html(:multicolumn, columns => 4)
bulgarian BG    finnish FI       japanese JA     slovak SK
chinese ZH      french FR        latvian LV      slovenian SL
czech CS        german DE        lithuanian LT   spanish ES
danish DA       greek EL         polish PL       swedish SV
dutch NL        hungarian HU     portuguese PT   turkish TR
english EN      indonesian ID    romanian RO     ukrainian UK
estonian ET     italian IT       russian RU      (Any)

DeepL’s target languages:

#% html
deepl-target-languages().pairs>>.Str.sort.List
==> to-html(:multicolumn, columns => 4)
bulgarian BG             estonian ET       japanese JA                      russian RU
chinese simplified ZH    finnish FI        latvian LV                       slovak SK
czech CS                 french FR         lithuanian LT                    slovenian SL
danish DA                german DE         polish PL                        spanish ES
dutch NL                 greek EL          portuguese PT                    swedish SV
english EN               hungarian HU      portuguese brazilian PT-BR       turkish TR
english american EN-US   indonesian ID     portuguese non-brazilian PT-PT   ukrainian UK
english british EN-GB    italian IT        romanian RO                      (Any)

Google’s Gemini

In this section we show magic cells for direct access to the LLM service Gemini by Google. The API key can be given as a magic cell argument; if it is not, the environment variable GEMINI_API_KEY is used. See “WWW::Gemini”, [AAp2], for more details.

Using the default model

#% gemini
Which LLM you are and what is your model?
I am Gemini, a multi-modal AI language model developed by Google.

#% gemini
Up to which date you have been trained?
I have been trained on a massive dataset of text and code up until April 2023. However, I do not have real-time access to the internet, so I cannot access information beyond that date. If you have any questions about events or information after April 2023, I recommend checking a reliable, up-to-date source.
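The magic cells above correspond to direct calls of “WWW::Gemini” in a regular code cell. Here is a sketch, assuming the package exports a `gemini-generate-content` function (see [AAp2] for the actual entry points) and that GEMINI_API_KEY is set:

```raku
use WWW::Gemini;

# Ask the default Gemini model the same question as in the magic cell above
say gemini-generate-content('Which LLM you are and what is your model?');
```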

Using a specific model

In this subsection we repeat the questions above and redirect the output to be formatted as Markdown.

#% gemini > markdown, model=gemini-1.5-pro-latest
Which LLM are you? What is the name of the model you use?
I'm currently running on the Gemini Pro model.

I can't share private information that could identify me specifically, but I can tell you that I am a large language model created by Google AI.

#% gemini > markdown, model=gemini-1.5-pro-latest
Up to which date you have been trained?
I can access pretty up-to-date information, which means I don't really have a "knowledge cut-off" date like some older models.

However, it’s important to remember:

  • I am not constantly updating. My knowledge is based on a snapshot of the internet taken at a certain point in time.
  • I don’t have access to real-time information. I can’t tell you what happened this morning, or what the stock market is doing right now.
  • The world is constantly changing. Even if I had information up to a very recent date, things would still be outdated quickly!

If you need very specific and current information, it’s always best to consult reliable and up-to-date sources.


Wolfram|Alpha

In this section we show magic cells for direct access to Wolfram|Alpha (W|A) by Wolfram Research, Inc. The API key can be given as a magic cell argument; if it is not, the environment variable WOLFRAM_ALPHA_API_KEY is used. See “WWW::WolframAlpha”, [AAp3], for more details.

W|A provides several API endpoints. Currently, “WWW::WolframAlpha” gives access to three of them: simple, result, and query. In a W|A magic cell the endpoint can be specified with the argument “type” or its synonym “path”.

Simple (image output)

When using W|A’s /simple API endpoint we get images as results.

#% wolfram-alpha
Calories in 5 servings of potato salad.

Here is how the image above can be generated and saved in a regular code cell:

my $imgWA = wolfram-alpha('Calories in 5 servings of potato salad.', path => 'simple', format => 'md-image');
image-export('WA-calories.png', $imgWA)
# WA-calories.png

Result (plaintext output)

#% w|a, type=result
Biggest province in China
The biggest administrative division in China by area is Xinjiang, China. The area of Xinjiang, China is about 629869 square miles.
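The same plaintext result can be obtained in a regular code cell by passing `path => 'result'` to `wolfram-alpha`, mirroring the /simple example above:

```raku
use WWW::WolframAlpha;

# Plaintext answer via W|A's /result endpoint
say wolfram-alpha('Biggest province in China', path => 'result');
```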

Pods (Markdown output)

#% wa, path=query
GDP of China vs USA in 2023

Input interpretation

scanner: Data

China United States | GDP | nominal 2023

Results

scanner: Data

China | $17.96 trillion per year United States | $25.46 trillion per year (2022 estimates)

Relative values

scanner: Data

| visual | ratios | | comparisons United States | | 1.417 | 1 | 41.75% larger China | | 1 | 0.7055 | 29.45% smaller

GDP history

scanner: Data

Economic properties

scanner: Data

| China | United States GDP at exchange rate | $17.96 trillion per year (world rank: 2nd) | $25.46 trillion per year (world rank: 1st) GDP at parity | $30.33 trillion per year (world rank: 1st) | $25.46 trillion per year (world rank: 2nd) real GDP | $16.33 trillion per year (price-adjusted to year-2000 US dollars) (world rank: 2nd) | $20.95 trillion per year (price-adjusted to year-2000 US dollars) (world rank: 1st) GDP in local currency | ¥121 trillion per year | $25.46 trillion per year GDP per capita | $12720 per year per person (world rank: 93rd) | $76399 per year per person (world rank: 12th) GDP real growth | +2.991% per year (world rank: 131st) | +2.062% per year (world rank: 158th) consumer price inflation | +1.97% per year (world rank: 175th) | +8% per year (world rank: 91st) unemployment rate | 4.89% (world rank: 123rd highest) | 3.61% (world rank: 157th highest) (2022 estimate)

GDP components

scanner: Data

| China | United States final consumption expenditure | $9.609 trillion per year (53.49%) (world rank: 2nd) (2021) | $17.54 trillion per year (68.88%) (world rank: 1st) (2019) gross capital formation | $7.688 trillion per year (42.8%) (world rank: 1st) (2021) | $4.504 trillion per year (17.69%) (world rank: 2nd) (2019) external balance on goods and services | $576.7 billion per year (3.21%) (world rank: 1st) (2022) | -$610.5 billion per year (-2.4%) (world rank: 206th) (2019) GDP | $17.96 trillion per year (100%) (world rank: 2nd) (2022) | $25.46 trillion per year (100%) (world rank: 1st) (2022)

Value added by sector

scanner: Data

| China | United States agriculture | $1.311 trillion per year (world rank: 1st) (2022) | $223.7 billion per year (world rank: 3rd) (2021) industry | $7.172 trillion per year (world rank: 1st) (2022) | $4.17 trillion per year (world rank: 2nd) (2021) manufacturing | $4.976 trillion per year (world rank: 1st) (2022) | $2.497 trillion per year (world rank: 2nd) (2021) services, etc. | $5.783 trillion per year (world rank: 2nd) (2016) | $13.78 trillion per year (world rank: 1st) (2015)

Download and export pods images

W|A’s query-pods contain URLs to images (which expire within a day). We might want to download and save those images. Here is a way to do it:

# Pods as JSON text -- easier to extract links from
my $pods = wolfram-alpha-query('GDP of China vs USA in 2023', format => 'json');

# Extract URLs
my @urls = do with $pods.match(/ '"src":' \h* '"' (<-["]>+) '"'/, :g) {
    $/.map({ $_[0].Str })
};

# Download images as Markdown images (that can be shown in Jupyter notebooks or Markdown files)
my @imgs = @urls.map({ image-import($_, format => 'md-image') });

# Export images
for ^@imgs.elems -> $i { image-export("wa-$i.png", @imgs[$i] ) }

References

Articles

[AA1] Anton Antonov, “Jupyter::Chatbook”, (2023), RakuForPrediction at WordPress.

Packages

[AAp1] Anton Antonov, Lingua::Translation::DeepL Raku package, (2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::Gemini Raku package, (2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::WolframAlpha Raku package, (2024), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::OpenAI Raku package, (2024), GitHub/antononcube.

[AAp5] Anton Antonov, Jupyter::Chatbook Raku package, (2024), GitHub/antononcube.

[BDp1] Brian Duggan, Jupyter::Kernel Raku package, (2017), GitHub/bduggan.

Videos

[AAv1] Anton Antonov, “Integrating Large Language Models with Raku”, (2023), YouTube/@therakuconference6823.

ML::NLPTemplateEngine

This blog post proclaims and describes the Raku package “ML::NLPTemplateEngine”, which aims to create (nearly) executable code for various computational workflows.

The package’s data and implementation make up a Natural Language Processing (NLP) Template Engine (TE), [Wk1], that incorporates Question Answering Systems (QAS’), [Wk2], and Machine Learning (ML) classifiers.

The current version of the NLP-TE of the package heavily relies on Large Language Models (LLMs) for its QAS component.

Future plans involve incorporating other types of QAS implementations.

The Raku package implementation closely follows the Wolfram Language (WL) implementations in “NLP Template Engine”, [AAr1, AAv1], and the WL paclet “NLPTemplateEngine”, [AAp2, AAv2].

An alternative, more comprehensive approach to the generation of workflow code is given in [AAp2].

Problem formulation

We want to have a system (i.e. TE) that:

  1. Generates relevant, correct, executable programming code based on natural language specifications of computational workflows
  2. Can automatically recognize the workflow types
  3. Can generate code for different programming languages and related software packages

The points above are given in order of importance; the most important are placed first.

Reliability of results

One of the main reasons to re-implement the WL NLP-TE, [AAr1, AAp1], in Raku is to have a more robust way of utilizing LLMs to generate code. That goal is more or less achieved with this package, but YMMV: if incomplete or wrong results are obtained, run the NLP-TE with different LLM parameter settings or with different LLMs.
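Such a retry can be scripted in a regular code cell. The sketch below uses the `llm` argument demonstrated in the usage examples later in this post; the check for "QRMon" in the output is just an illustrative heuristic, not part of the package:

```raku
use ML::NLPTemplateEngine;

my $spec = 'Compute quantile regression with probabilities 0.4 and 0.6 for dfTempBoston.';

# First attempt with the default LLM
my $code = concretize($spec);

# Retry with another LLM service if the result looks incomplete
# (the 'QRMon' check is just an illustrative heuristic)
$code = concretize($spec, llm => 'gemini') unless $code ~~ / QRMon /;

say $code;
```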


Installation

From Zef ecosystem:

zef install ML::NLPTemplateEngine

From GitHub:

zef install https://github.com/antononcube/Raku-ML-NLPTemplateEngine.git

Usage examples

Quantile Regression (WL)

Here the template is automatically determined:

use ML::NLPTemplateEngine;

my $qrCommand = q:to/END/;
Compute quantile regression with probabilities 0.4 and 0.6, with interpolation order 2, for the dataset dfTempBoston.
END

concretize($qrCommand);
# qrObj=
# QRMonUnit[dfTempBoston]⟹
# QRMonEchoDataSummary[]⟹
# QRMonQuantileRegression[12, {0.4, 0.6}, InterpolationOrder->2]⟹
# QRMonPlot["DateListPlot"->False,PlotTheme->"Detailed"]⟹
# QRMonErrorPlots["RelativeErrors"->False,"DateListPlot"->False,PlotTheme->"Detailed"];

Remark: In the code above the template type, “QuantileRegression”, was determined using an LLM-based classifier.

Latent Semantic Analysis (R)

my $lsaCommand = q:to/END/;
Extract 20 topics from the text corpus aAbstracts using the method NNMF. 
Show statistical thesaurus with the words neural, function, and notebook.
END

concretize($lsaCommand, template => 'LatentSemanticAnalysis', lang => 'R');
# lsaObj <-
# LSAMonUnit(aAbstracts) %>%
# LSAMonMakeDocumentTermMatrix(stemWordsQ = TRUE, stopWords = Automatic) %>%
# LSAMonEchoDocumentTermMatrixStatistics(logBase = 10) %>%
# LSAMonApplyTermWeightFunctions(globalWeightFunction = "IDF", localWeightFunction = "None", normalizerFunction = "Cosine") %>%
# LSAMonExtractTopics(numberOfTopics = 20, method = "NNMF", maxSteps = 16, minNumberOfDocumentsPerTerm = 20) %>%
# LSAMonEchoTopicsTable(numberOfTerms = 20, wideFormQ = TRUE) %>%
# LSAMonEchoStatisticalThesaurus(words = c("neural", "function", "notebook"))

Random tabular data generation (Raku)

my $command = q:to/END/;
Make random table with 6 rows and 4 columns with the names <A1 B2 C3 D4>.
END

concretize($command, template => 'RandomTabularDataset', lang => 'Raku', llm => 'gemini');
# random-tabular-dataset(6, 4, "column-names-generator" => <A1 B2 C3 D4>, "form" => "Table", "max-number-of-values" => 24, "min-number-of-values" => 6, "row-names" => False)

Remark: In the code above it was specified to use Google’s Gemini LLM service.


How does it work?

The following flowchart describes the series of steps through which the NLP Template Engine processes a computation specification and executes code to obtain results:

Here’s a detailed narration of the process:

  1. Computation Specification:
    • The process begins with a “Computation spec”, which is the initial input defining the requirements or parameters for the computation task.
  2. Workflow Type Decision:
    • A decision step asks if the workflow type is specified.
  3. Guess Workflow Type:
    • If the workflow type is not specified, the system utilizes a classifier to guess the relevant workflow type.
  4. Raw Answers:
    • Regardless of how the workflow type is determined (directly specified or guessed), the system retrieves “raw answers”, which are crucial for further processing.
  5. Processing and Templating:
    • The raw answers undergo processing (“Process raw answers”) to organize or refine the data into a usable format.
    • Processed data is then utilized to “Complete computation template”, preparing for executable operations.
  6. Executable Code and Results:
    • The computation template is transformed into “Executable code”, which when run, produces the final “Computation results”.
  7. LLM-Based Functionalities:
    • The classifier and the answers finder are LLM-based.
  8. Data and Templates:
    • Code templates are selected based on the specifics of the initial spec and the processed data.
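In code terms, the steps above can be sketched as the following pipeline. All function names here are hypothetical, for illustration only; they are not the package’s internal API:

```raku
# Hypothetical sketch of the NLP-TE pipeline (function names are illustrative,
# not the package's internal API)
sub nlp-template-engine(Str $spec, :$template is copy) {
    $template //= classify-workflow-type($spec);      # 2-3: guess the workflow type
    my %raw    = find-answers($spec, :$template);     # 4:   LLM-based answers finder
    my %params = process-answers(%raw, :$template);   # 5:   refine into parameters
    return fill-template($template, %params);         # 5-6: complete the code template
}
```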

Bring your own templates

0. Load the NLP-Template-Engine package (and others):

use ML::NLPTemplateEngine;
use Data::Importers;
use Data::Summarizers;

1. Get the “training” templates data (from a CSV file you have created or changed) for a new workflow (“SendMail”):

my $url = 'https://raw.githubusercontent.com/antononcube/NLP-Template-Engine/main/TemplateData/dsQASParameters-SendMail.csv';
my @dsSendMail = data-import($url, headers => 'auto');

records-summary(@dsSendMail, field-names => <DataType WorkflowType Group Key Value>);
# +-----------------+----------------+-----------------------------+----------------------------+----------------------------------------------------------------------------------+
# | DataType        | WorkflowType   | Group                       | Key                        | Value                                                                            |
# +-----------------+----------------+-----------------------------+----------------------------+----------------------------------------------------------------------------------+
# | Questions => 48 | SendMail => 60 | All                   => 9  | ContextWordsToRemove => 12 | 0.35                                                                       => 9  |
# | Defaults  => 7  |                | Who the email is from => 4  | Threshold            => 12 | {_String..}                                                                => 8  |
# | Templates => 3  |                | What it the content   => 4  | TypePattern          => 12 | to                                                                         => 4  |
# | Shortcuts => 2  |                | What it the body      => 4  | Parameter            => 12 | _String                                                                    => 4  |
# |                 |                | What it the title     => 4  | Template             => 3  | {"to", "email", "mail", "send", "it", "recipient", "addressee", "address"} => 4  |
# |                 |                | What subject          => 4  | body                 => 1  | None                                                                       => 4  |
# |                 |                | Who to send it to     => 4  | Emailing             => 1  | body                                                                       => 3  |
# |                 |                | (Other)               => 27 | (Other)              => 7  | (Other)                                                                    => 24 |
# +-----------------+----------------+-----------------------------+----------------------------+----------------------------------------------------------------------------------+

2. Add the ingested data for the new workflow (from the CSV file) into the NLP-Template-Engine:

add-template-data(@dsSendMail);
# (ParameterTypePatterns Shortcuts Questions Templates Defaults ParameterQuestions)

3. Parse natural language specification with the newly ingested and onboarded workflow (“SendMail”):

"Send email to joedoe@gmail.com with content RandomReal[343], and the subject this is a random real call."
        ==> concretize(template => "SendMail") 
# SendMail[<|"To"->{"joedoe@gmail.com"},"Subject"->"this is a random real call","Body"->{"RandomReal[343]"},"AttachedFiles"->None|>]

4. Experiment with running the generated code!


References

Articles

[Wk1] Wikipedia entry, Template processor.

[Wk2] Wikipedia entry, Question answering.

Functions, packages, repositories

[AAr1] Anton Antonov, “NLP Template Engine”, (2021-2022), GitHub/antononcube.

[AAp1] Anton Antonov, NLPTemplateEngine WL paclet, (2023), Wolfram Language Paclet Repository.

[AAp2] Anton Antonov, DSL::Translators Raku package, (2020-2024), GitHub/antononcube.

[WRI1] Wolfram Research, FindTextualAnswer, (2018), Wolfram Language function, (updated 2020).

Videos

[AAv1] Anton Antonov, “NLP Template Engine, Part 1”, (2021), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Natural Language Processing Template Engine” presentation given at WTC-2022, (2023), YouTube/@Wolfram.

Exorcism for Exercism

Introduction

This post uses different prompts from Large Language Models (LLMs) to uncover the concealed, provocative, and propagandistic messages in the transcript of the program “10 ways to solve Scrabble Score on Exercism” by the YouTube channel Exercism.

In that program Alex and Eric explore various ways to solve the Scrabble Score programming exercise in multiple languages, including Go, F#, Python, Groovy, Ruby, Raku, Rust, MIPS assembly, and Orc. They discuss the efficiency and readability of different approaches, highlighting functional programming, pattern matching, and performance optimization techniques.

Remark: The “main” summarization prompts used are “ExtractingArticleWisdom” and “FindPropagandaMessage”.

Remark: The content of this post was generated with the computational Markdown file “LLM-content-breakdown-template.md”, which was executed (or woven) by the CLI script file-code-chunks-eval of “Text::CodeProcessing”.

Post’s structure:

  1. Themes
    Instead of a summary.
  2. Flowchart
    For programmers to look at.
  3. Habits and recommendations
    Didactic POV.
  4. Hidden and propaganda messages
    The main course.

Themes

Instead of a summary, consider this breakdown of themes:

  • Introduction: Eric and the speaker introduce the Scrabble score exercise, where points are calculated based on letter values in a word.
  • Go Solution: A basic Go implementation with a loop and case statements to increment the score based on letter values.
  • F# Solution: A functional approach in F# using pattern matching and higher-order functions like ‘sumBy’ for conciseness.
  • Python Solution: Python solution using an Enum class to define letter-score mappings and a generator expression for efficient summing.
  • Groovy Solution: Groovy implementation with a string-based map and a lambda function to find the score for each character.
  • Ruby Solution: A Ruby solution using a custom ‘MultiKeyHash’ class to enable indexing by character while maintaining string-based definitions.
  • Raku Solution: A concise Raku (Perl 6) version using the ‘map’ and ‘sum’ methods and a special operator for pairwise mapping.
  • Rust Solution: A performance-oriented Rust solution using an array for efficient lookups based on character indices.
  • MIPS Assembly Solution: MIPS assembly implementation with a loop, character comparisons, and a bitwise OR trick for lowercase conversion.
  • Orc Solution: An Orc solution using regular expressions and the ‘gsub’ function to count and sum letter occurrences.
  • Perl Solution: A Perl solution with regular expression replacements to create an equation and ‘eval’ to calculate the score.
  • Conclusion: The speakers encourage viewers to try the exercise, explore performance optimizations, and share their solutions.

Flowchart

Here is a flowchart summarizing the text:


Habits and recommendations

HABITS

  • Breaking down problems into smaller, more manageable functions for better organization and readability.
  • Using meaningful variable and function names to enhance code clarity.
  • Considering performance implications of different approaches and choosing appropriate data structures and algorithms.
  • Testing code thoroughly to ensure correctness and handle edge cases.
  • Seeking feedback from other programmers to gain different perspectives and improve code quality.
  • Exploring different programming languages and paradigms to broaden your skillset and understanding.
  • Practicing regularly to hone your coding skills and experiment with new techniques.
  • Staying up-to-date with the latest advancements in programming languages and technologies.
  • Engaging with the programming community to learn from others and share your knowledge.

RECOMMENDATIONS

  • Try the Scrabble Score exercise in different programming languages to explore various approaches and language features.
  • Experiment with functional programming techniques like higher-order functions and pattern matching for concise and expressive code.
  • Consider performance implications when choosing data structures and algorithms, especially for large-scale applications.
  • Utilize regular expressions for efficient string manipulation and pattern matching tasks.
  • Engage with the programming community to learn from others, share your knowledge, and get feedback on your code.
  • Continuously learn and explore new programming languages and technologies to expand your skillset and stay current in the field.
  • Benchmark different solutions to evaluate their performance and identify areas for optimization.
  • Prioritize code readability and maintainability to ensure long-term project success.
  • Don’t be afraid to experiment with creative and unconventional solutions to programming challenges.

Hidden and propaganda messages

OVERT MESSAGE

Two programmers are discussing different ways to solve a programming challenge involving calculating the score of a word as in the game Scrabble.

HIDDEN MESSAGE

Technology and programming are fun and interesting pursuits that are accessible to people from a variety of backgrounds and skill levels.

HIDDEN OPINIONS

  • Programming is a valuable skill to learn.
  • There are many different programming languages available, each with its strengths and weaknesses.
  • It is important to choose the right tool for the job when programming.
  • Functional programming can be a powerful tool for solving problems.
  • Performance is an important consideration when programming.
  • Code readability is important for maintainability.
  • It is important to test code to ensure it is working correctly.
  • Collaboration can help improve the quality of code.
  • It is important to be open to learning new things and experimenting with different approaches.
  • The programming community is a valuable resource for learning and support.

SUPPORTING ARGUMENTS and QUOTES

  • The video discusses different programming languages, including Go, F#, Python, Ruby, Raku, Rust, MIPS assembly, and Orc. This suggests that the speakers believe there are many different ways to approach programming and that it is valuable to be familiar with a variety of languages.
  • The speakers discuss the trade-offs between different approaches, such as performance vs. readability. This suggests they believe that it is important to consider different factors when choosing how to solve a problem.
  • The speakers encourage viewers to try the challenge themselves and share their solutions. This suggests that they believe that programming is a fun and rewarding activity that can be enjoyed by people of all skill levels.
  • “Scrabble score is such a limited domain, but if you were to try to optimize this, move the upper casing U to the left so that you don’t have to do it within the loop.”
    • This quote suggests that the speakers believe that performance is an important consideration when programming, even for small tasks.
  • “I always think through things like this if you’ve got like some program where you’re trying to calculate the Scrabble score for every word in the whole English dictionary then actually performance starts to matter because you’re running this you know millions of times how many words I not Millions but a lot of times anyway um and yeah so it’s a trade-off between if you’re just running this in a normal game it probably doesn’t matter but if you’re trying to build an online Scrabble game that’s going to have a million people simultaneously playing it this sort of performance actually matters because this is going to be costing you money as you have have to have more servers running to do more things in parallel so um yeah always tradeoffs to consider with uh with speed”
    • This quote suggests that the speakers believe that it is important to consider the context in which code will be used when making decisions about how to write it.
  • “I had to look at once once you see it once you get it it’s really nice like it’s a really clever way of doing it”
    • This quote suggests that the speakers believe that programming can be challenging but also rewarding, and that it is important to be persistent when learning new things.

DESIRED AUDIENCE OPINION CHANGE

  • Programming is an enjoyable and rewarding activity.
  • Learning to code is accessible and achievable for anyone.
  • Different programming languages offer unique approaches to problem-solving.
  • Exploring various programming techniques expands your skillset.
  • Performance optimization is crucial in real-world applications.
  • Readable code is essential for collaboration and maintainability.
  • Testing and debugging are integral parts of the programming process.
  • The programming community fosters learning and collaboration.
  • Continuous learning and experimentation are key to growth as a programmer.
  • Programming skills are valuable in today’s technology-driven world.

DESIRED AUDIENCE ACTION CHANGE

  • Try the Scrabble score programming challenge.
  • Explore different programming languages and paradigms.
  • Share your programming solutions and learn from others.
  • Engage with the programming community and seek support.
  • Consider performance implications when writing code.
  • Prioritize code readability and maintainability.
  • Implement testing and debugging practices in your workflow.
  • Embrace continuous learning and experimentation in programming.
  • Apply programming skills to solve real-world problems.
  • Consider a career path in the technology industry.

MESSAGES

The programmers want you to believe they are simply discussing solutions to a programming challenge, but they are actually promoting the idea that programming is a fun, accessible, and rewarding activity with diverse applications.

PERCEPTIONS

The programmers want you to believe they are technical experts, but they are actually enthusiastic advocates for programming education and community engagement.

PROPAGANDA ANALYSIS 0

The video subtly employs propaganda techniques to shape viewers’ perceptions of programming. By showcasing diverse solutions and emphasizing the enjoyment and accessibility of coding, the programmers aim to integrate viewers into the technological society and promote its values. The video avoids overt persuasion, instead relying on the inherent appeal of problem-solving and the allure of technical expertise to subtly influence viewers’ attitudes towards programming.

PROPAGANDA ANALYSIS 1

The video leverages the power of social proof and expert authority to promote programming. By featuring multiple programmers and highlighting their ingenuity and problem-solving skills, the video aims to create a sense of community and inspire viewers to participate. The emphasis on the accessibility and enjoyment of coding aligns with Bernays’ concept of associating desired behaviors with positive emotions and social acceptance.


For the masses

Here is an image to get attention to this post (generated with DALL-E 3 and additionally tweaked):

Propaganda in “Integrating Large Language Models with Raku”

Introduction

This post applies the Large Language Model (LLM) summarization prompt “FindPropagandaMessage” to the transcript of The Raku Conference 2023 (TRC-2023) presentation “Integrating Large Language Models with Raku” hosted by the YouTube channel The Raku Conference.

In the presentation, Anton Antonov presents “Integrating Large Language Models with Raku,” demonstrating functionalities in Visual Studio Code using a Raku Chatbook. The presentation explores using OpenAI, PaLM (Google’s large language model), and DALL-E (image generation service) through Raku, showcasing dynamic interaction with large language models, embedding them in notebooks, and generating code and markdown outputs.

Remark: The LLM results below were obtained from the “raw” transcript, which did not have punctuation.

Remark: The transcription software had problems parsing the names of the participants. Some of the names were manually corrected.

Remark: The content of this post was generated with the computational Markdown file “LLM-content-breakdown-template.md”, which was executed (or woven) by the CLI script file-code-chunks-eval of “Text::CodeProcessing”, [AAp7].

Remark: This post can be seen as an alternative or continuation of the post «Wisdom of “Integrating Large Language Models with Raku”», [AA3].


Hidden and propaganda messages

In this section we try to find whether the text is apolitical and propaganda-free.

Remark: We leave it to the reader as an exercise to verify that both the overt and hidden messages found by the LLM below are explicitly stated in the text.

Remark: The LLM prompt “FindPropagandaMessage” has an explicit instruction to say that it is intentionally cynical. It is also marked as being “For fun.”

The LLM result is rendered below.
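
Here is a sketch of how such a result can be obtained with the pre-loaded packages “LLM::Functions” and “LLM::Prompts” (the transcript file name is hypothetical):

```raku
use LLM::Functions;
use LLM::Prompts;

# Hypothetical file holding the talk's transcript
my $transcript = slurp('TRC-2023-transcript.txt');

# Apply the "FindPropagandaMessage" prompt to the transcript
my $result = llm-synthesize([llm-prompt('FindPropagandaMessage'), $transcript]);
```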


OVERT MESSAGE:

Anton Antonov demonstrates integrating large language models with Raku for dynamic interaction and enhanced functionality.

HIDDEN MESSAGE:

Embrace advanced programming techniques to ensure dominance in future tech landscapes and innovation.

HIDDEN OPINIONS:

  • Large language models are the future of programming and data analysis.
  • Raku’s flexibility makes it ideal for integrating with cutting-edge technologies.
  • Direct access to language models enhances creativity and efficiency in coding.
  • The ability to manipulate language models is crucial for modern developers.
  • Understanding and utilizing language models is key to technological innovation.
  • Integrating language models with Raku can lead to groundbreaking applications.
  • Developers should prioritize learning about language models and Raku.
  • The future of software development relies heavily on language model integration.
  • Customizable interaction with language models is a significant advancement.
  • The combination of Raku and language models will shape the next tech era.

SUPPORTING ARGUMENTS and QUOTES:

  • Antonov’s development of Raku functionalities for language model integration showcases the importance of this skill set.
  • The demonstration of dynamic interaction within a notebook solution highlights Raku’s adaptability to new tech.
  • The emphasis on direct access to language models through Raku suggests a push towards innovation in programming.
  • The creation of a framework for Raku to extend with language models indicates a future where such integration is commonplace.
  • The presentation of various use cases, including template engines and embeddings, showcases the wide applicability and potential of integrating Raku with language models.

DESIRED AUDIENCE OPINION CHANGE:

  • See Raku as a leading language for future technological developments.
  • Recognize the importance of integrating language models in programming.
  • Appreciate the innovative potential of combining Raku with language models.
  • Understand the necessity of learning about language models for modern coding.
  • Acknowledge Raku’s role in shaping the future of software development.
  • View language model integration as a key skill for developers.
  • Believe in the transformative power of technology through Raku and language models.
  • Trust in the efficiency and creativity unlocked by language model integration.
  • Support the development and use of Raku for cutting-edge applications.
  • Encourage exploration and education in language models and Raku programming.

DESIRED AUDIENCE ACTION CHANGE:

  • Start learning Raku programming for future tech innovation.
  • Integrate language models into current and future projects.
  • Explore the potential of combining Raku with language models.
  • Develop new applications using Raku and language model integration.
  • Share knowledge and insights on Raku and language models in tech communities.
  • Encourage others to learn about the power of language models and Raku.
  • Participate in projects that utilize Raku and language models.
  • Advocate for the inclusion of language model studies in tech curriculums.
  • Experiment with Raku’s functionalities for language model integration.
  • Contribute to the development of Raku packages for language model integration.

MESSAGES:

Anton Antonov wants you to believe he is demonstrating a technical integration, but he is actually advocating for a new era of programming innovation.

PERCEPTIONS:

Anton Antonov wants you to believe he is a technical presenter, but he’s actually a visionary for future programming landscapes.

ELLUL’S ANALYSIS:

Based on Jacques Ellul’s “Propaganda: The Formation of Men’s Attitudes,” Antonov’s presentation can be seen as a form of sociotechnical propaganda, aiming to shape perceptions and attitudes towards the integration of language models with Raku, thereby heralding a new direction in programming and technological development. His methodical demonstration and the strategic presentation of use cases serve not only to inform but to convert the audience to the belief that mastering these technologies is imperative for future innovation.

BERNAYS’ ANALYSIS:

Drawing from Edward Bernays’ “Propaganda” and “Engineering of Consent,” Antonov’s presentation exemplifies the engineering of consent within the tech community. By showcasing the seamless integration of Raku with language models, he subtly persuades the audience of the necessity and inevitability of embracing these technologies. His approach mirrors Bernays’ theory that public opinion can be swayed through strategic, informative presentations, leading to widespread acceptance and adoption of new technological paradigms.

LIPPMANN’S ANALYSIS:

Walter Lippmann’s “Public Opinion” suggests that the public’s perception of reality is often a constructed understanding. Antonov’s presentation plays into this theory by constructing a narrative where Raku’s integration with language models is presented as the next logical step in programming evolution. This narrative, built through careful demonstration and explanation, aims to shape the audience’s understanding and perceptions of current technological capabilities and future potentials.

FRANKFURT’S ANALYSIS:

Harry G. Frankfurt’s “On Bullshit” provides a framework for understanding the distinction between lying and bullshitting. Antonov’s presentation, through its detailed and factual approach, steers clear of bullshitting. Instead, it focuses on conveying genuine possibilities and advancements in the integration of Raku with language models. His candid discussion and demonstration of functionalities reflect a commitment to truth and potential, rather than a disregard for truth typical of bullshit.

NOTE: This AI is tuned specifically to be cynical and politically-minded. Don’t take it as perfect. Run it multiple times and/or go consume the original input to get a second opinion.


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “Day 21 – Using DALL-E models in Raku”, (2023), Raku Advent Calendar at WordPress.

Packages, repositories

[AAp1] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, WWW::Gemini Raku package, (2024), GitHub/antononcube.

[AAp7] Anton Antonov, Text::CodeProcessing Raku package, (2021-2023), GitHub/antononcube.

[DMr1] Daniel Miessler, “fabric”, (2023-2024), GitHub/danielmiessler.

Videos

[AAv1] Anton Antonov, “Integrating Large Language Models with Raku” (2023), The Raku Conference at YouTube.

Wisdom of “Integrating Large Language Models with Raku”

Introduction

This post applies various Large Language Model (LLM) summarization prompts to the transcript of The Raku Conference 2023 (TRC-2023) presentation “Integrating Large Language Models with Raku” hosted by the YouTube channel The Raku Conference.

In the presentation, Anton Antonov presents “Integrating Large Language Models with Raku,” demonstrating functionalities in Visual Studio Code using a Raku Chatbook. The presentation explores using OpenAI, PaLM (Google’s large language model), and DALL-E (image generation service) through Raku, showcasing dynamic interaction with large language models, embedding them in notebooks, and generating code and markdown outputs.

Remark: The LLM results below were obtained from the “raw” transcript, which did not have punctuation.

Remark: The transcription software had problems parsing the names of the participants. Some of the names were manually corrected.

Remark: The applied “main” LLM prompt — “ExtractingArticleWisdom” — is a modified version of a prompt (or pattern) with a similar name from “fabric”, [DMr1].

Remark: The themes table below was obtained with an LLM using the prompt “ThemeTableJSON”.

Remark: The content of this post was generated with the computational Markdown file “LLM-content-breakdown-template.md”, which was executed (or woven) by the CLI script file-code-chunks-eval of “Text::CodeProcessing”, [AAp7].
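
For example, a themes table like the one in the Themes section can be derived with a pipeline along these lines (a sketch; the variable $transcript is assumed to hold the talk’s transcript):

```raku
use LLM::Functions;
use LLM::Prompts;
use Text::SubParsers;

# $transcript is assumed to hold the raw transcript text
my @themes = |llm-synthesize(
    [llm-prompt('ThemeTableJSON'), $transcript],
    form => sub-parser('JSON'):drop);
```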

Post’s structure:

  1. Themes
    Instead of a summary.
  2. Mind-maps
    An even better summary replacement!
  3. Summary, ideas, and recommendations
    The main course.

Themes

Instead of a summary consider this table of themes:

| theme | content |
|---|---|
| Introduction | Anton Antonov introduces the presentation on integrating large language models with Raku and begins with a demonstration in Visual Studio Code. |
| Demonstration | Demonstrates using Raku chatbook in Jupyter Notebook to interact with OpenAI, PaLM, and DALL-E services for various tasks like querying information and generating images. |
| Direct Access vs. Chat Objects | Discusses the difference between direct access to web APIs and using chat objects for dynamic interaction with large language models. |
| Translation and Code Generation | Shows how to translate text and generate Raku code for solving mathematical problems using chat objects. |
| Motivation for Integrating Raku with Large Language Models | Explains the need for dynamic interaction between Raku and large language models, including notebook solutions and facilitating interaction. |
| Technical Details and Packages | Details the packages developed for interacting with large language models and the functionalities required for the integration. |
| Use Cases | Describes various use cases like template engine functionalities, embeddings, and generating documentation from tests using large language models. |
| Literate Programming and Markdown Templates | Introduces computational markdown for generating documentation and the use of Markdown templates for creating structured documents. |
| Generating Tests and Documentation | Discusses generating package documentation from tests and conversing between chat objects for training purposes. |
| Large Language Model Workflows | Covers workflows for utilizing large language models, including ‘Too Long Didn’t Read’ documentation utilization. |
| Comparison with Python and Mathematica | Compares the implementation of functionalities in Raku with Python and Mathematica, highlighting the ease of extending the Jupyter framework for Python. |
| Q&A Session | Anton answers questions about extending the Jupyter kernel and other potential limitations or features that could be integrated. |

Mind-map

Here is a mind-map showing the presentation’s structure:

Here is a mind-map summarizing the main LLMs part of the talk:


Summary, ideas, and recommendations

SUMMARY

Anton Antonov presents “Integrating Large Language Models with Raku,” demonstrating functionalities in Visual Studio Code using a Raku chatbook. The presentation explores using OpenAI, PaLM (Google’s large language model), and DALL-E (image generation service) through Raku, showcasing dynamic interaction with large language models, embedding them in notebooks, and generating code and markdown outputs.

IDEAS:

  • Integrating large language models with programming languages can enhance dynamic interaction and facilitate complex tasks.
  • Utilizing Jupyter notebooks with Raku chatbook kernels allows for versatile programming and data analysis.
  • Direct access to web APIs like OpenAI and PaLM can streamline the use of large language models in software development.
  • The ability to automatically format outputs into markdown or plain text enhances the readability and usability of generated content.
  • Embedding image generation services within programming notebooks can enrich content and aid in visual learning.
  • Creating chat objects within notebooks can simulate interactive dialogues, providing a unique approach to user input and command execution.
  • The use of prompt expansion and a database of prompts can significantly improve the efficiency of generating content with large language models.
  • Implementing literate programming techniques can facilitate the generation of comprehensive documentation and tutorials.
  • The development of computational markdown allows for the seamless integration of code and narrative, enhancing the learning experience.
  • Utilizing large language models for generating test descriptions and documentation can streamline the development process.
  • The concept of “few-shot learning” with large language models can be applied to generate specific outputs based on minimal input examples.
  • Leveraging large language models for semantic analysis and recommendation systems can offer significant advancements in text analysis.
  • The ability to translate natural language commands into programming commands can simplify complex tasks for developers.
  • Integrating language models for entity recognition and data extraction from text can enhance data analysis and information retrieval.
  • The development of frameworks for extending programming languages with large language model functionalities can foster innovation.
  • The use of large language models in generating code for solving mathematical equations demonstrates the potential for automating complex problem-solving.
  • The exploration of generating dialogues between chat objects presents new possibilities for creating interactive and dynamic content.
  • The application of large language models in generating package documentation from tests highlights the potential for improving software documentation practices.
  • The integration of language models with programming languages like Raku showcases the potential for enhancing programming environments with AI capabilities.
  • The demonstration of embedding services like image generation and language translation within programming notebooks opens new avenues for creative and technical content creation.
  • The discussion on the limitations and challenges of integrating large language models with programming environments provides insights into future development directions.

QUOTES:

  • “Integrating large language models with Raku allows for dynamic interaction and enhanced functionalities within notebooks.”
  • “Direct access to web APIs streamlines the use of large language models in software development.”
  • “Automatically formatting outputs into markdown or plain text enhances the readability and usability of generated content.”
  • “Creating chat objects within notebooks provides a unique approach to interactive dialogues and command execution.”
  • “The use of prompt expansion and a database of prompts can significantly improve efficiency in content generation.”
  • “Literate programming techniques facilitate the generation of comprehensive documentation and tutorials.”
  • “Computational markdown allows for seamless integration of code and narrative, enhancing the learning experience.”
  • “Few-shot learning with large language models can generate specific outputs based on minimal input examples.”
  • “Leveraging large language models for semantic analysis and recommendation systems offers significant advancements in text analysis.”
  • “Translating natural language commands into programming commands simplifies complex tasks for developers.”

HABITS:

  • Utilizing Visual Studio Code for programming and data analysis.
  • Embedding large language models within programming notebooks for dynamic interaction.
  • Automatically formatting outputs to enhance readability and usability.
  • Creating and utilizing chat objects for interactive programming.
  • Employing prompt expansion and maintaining a database of prompts for efficient content generation.
  • Implementing literate programming techniques for documentation and tutorials.
  • Developing and using computational markdown for integrated code and narrative.
  • Applying few-shot learning techniques with large language models for specific outputs.
  • Leveraging large language models for semantic analysis and recommendation systems.
  • Translating natural language commands into programming commands to simplify tasks.

FACTS:

  • Raku chatbook kernels in Jupyter notebooks allow for versatile programming and data analysis.
  • OpenAI, PaLM, and DALL-E are utilized for accessing large language models and image generation services.
  • Large language models can automatically format outputs into markdown or plain text.
  • Chat objects within notebooks can simulate interactive dialogues and command execution.
  • A database of prompts improves the efficiency of generating content with large language models.
  • Computational markdown integrates code and narrative, enhancing the learning experience.
  • Large language models can generate code for solving mathematical equations and other complex tasks.
  • The integration of large language models with programming languages like Raku enhances programming environments.
  • Embedding services like image generation and language translation within programming notebooks is possible.
  • The presentation explores the potential for automating complex problem-solving with AI.

REFERENCES:

RECOMMENDATIONS:

  • Explore integrating large language models with programming languages for enhanced functionalities.
  • Utilize Jupyter notebooks with Raku chatbook kernels for versatile programming tasks.
  • Take advantage of direct access to web APIs for streamlined software development.
  • Employ automatic formatting of outputs for improved readability and usability.
  • Create and utilize chat objects within notebooks for interactive programming experiences.
  • Implement literate programming techniques for comprehensive documentation and tutorials.
  • Develop computational markdown for an integrated code and narrative learning experience.
  • Apply few-shot learning techniques with large language models for generating specific outputs.
  • Leverage large language models for advanced text analysis and recommendation systems.
  • Translate natural language commands into programming commands to simplify complex tasks.

References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “Day 21 – Using DALL-E models in Raku”, (2023), Raku Advent Calendar at WordPress.

Packages, repositories

[AAp1] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, WWW::Gemini Raku package, (2024), GitHub/antononcube.

[AAp7] Anton Antonov, Text::CodeProcessing Raku package, (2021-2023), GitHub/antononcube.

[DMr1] Daniel Miessler, “fabric”, (2023-2024), GitHub/danielmiessler.

Videos

[AAv1] Anton Antonov, “Integrating Large Language Models with Raku” (2023), The Raku Conference at YouTube.

Omni-slurping with LLMing

Introduction

In this blog post we demonstrate the use of the Raku package “Data::Importers”, which offers a convenient solution for importing data from URLs and files. This package supports a variety of data types such as CSV, HTML, PDF, text, and images, making it a versatile tool for data manipulation.

One particularly interesting application of “Data::Importers” is its inclusion in workflows based on Large Language Models (LLMs). Generally speaking, having an easy way to ingest a diverse range of data formats — which is what “Data::Importers” aims to provide — makes a wide range of workflows for data processing and analysis easier to create.

In this blog post, we will demonstrate how “Data::Importers” can work together with LLMs, providing real-world examples of their combined usage in various situations. Essentially, we will illustrate the power of merging omni-slurping with LLM-ing to improve data-related activities.

The main function of “Data::Importers” is data-import. Its functionalities are incorporated into suitable overloads of the built-in slurp subroutine.
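
To illustrate, here is a minimal sketch of the two entry points (the URL is illustrative):

```raku
use Data::Importers;

# The overloaded built-in slurp works directly on URLs
my $text = slurp('https://raku.org', format => 'plaintext');

# ...and is equivalent to calling the main function data-import
my $text2 = data-import('https://raku.org', format => 'plaintext');
```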

Post’s structure:

  1. Setup
  2. HTML summary and cross tabulation
  3. PDF summary
  4. CSV filtering
  5. Image vision
  6. Image vision with re-imaging

Setup

Here we load the packages used below:

use Data::Importers;
use Data::Reshapers;
use Data::Summarizers;
use JSON::Fast;
use JavaScript::D3;

Here we configure the Jupyter notebook to display JavaScript graphics, [AAp3, AAv1]:

#% javascript

require.config({
     paths: {
     d3: 'https://d3js.org/d3.v7.min'
}});

require(['d3'], function(d3) {
     console.log(d3);
});

HTML summary and cross tabulation

A key motivation behind creating the “Data::Importers” package was to efficiently retrieve HTML pages, extract plain text, and import it into a Jupyter notebook for subsequent LLM transformations and content processing.

Here is a pipeline that gets an LLM summary of a certain recent Raku blog post:

my $htmlURL = 'https://rakudoweekly.blog/2024/03/25/2024-13-veyoring-again/';

$htmlURL
==> slurp(format => 'plaintext')
==> { say text-stats($_); $_ }()
==> llm-prompt('Summarize')()
==> llm-synthesize()
(chars => 2814 words => 399 lines => 125)

Paul Cochrane returns to the Raku community with a guide on enabling Continuous Integration on Raku projects using AppVeyor. Core developments include improvements by Elizabeth Mattijsen on Metamodel classes for faster compilation and performance. New and updated Raku modules are also featured in this week's news.

Here is another LLM pipeline that ingests the HTML page and produces an HTML table derived from the page’s content:

#% html

$htmlURL
==> slurp(format => 'plaintext')
==> { say "Contributors table:"; $_ }() 
==> {["Cross tabulate into a HTML table the contributors", 
        "and type of content with the content itself", 
        "for the following text:\n\n", 
        $_, 
        llm-prompt('NothingElse')('HTML')]}()
==> llm-synthesize(e => llm-configuration('Gemini', max-tokens => 4096, temperature => 0.65))
Contributors table:
| Contributor | Content Type | Content |
|---|---|---|
| Paul Cochrane | Tutorial | Building and testing Raku in AppVeyor |
| Dr. Raku | Tutorial | How To Delete Directories |
| Dr. Raku | Tutorial | Fun File Beginners Project |
| Dr. Raku | Tutorial | Hash Examples |
| Elizabeth Mattijsen | Development | Metamodel classes for faster compilation and performance and better stability |
| Stefan Seifert | Development | Fixed several BEGIN time lookup issues |
| Elizabeth Mattijsen | Development | Fixed an issue with =finish if there was no code |
| Samuel Chase | Shoutout | Nice shoutout! |
| Fernando Santagata | Self-awareness test | Self-awareness test |
| Paul Cochrane | Deep rabbit hole | A deep rabbit hole |
| anavarro | Question | How to obtain the Raku language documentation (Reference) offline |
| Moritz Lenz | Comment | On ^ and $ |
| LanX | Comment | The latest name |
| ilyash | Comment | Automatic parsing of args |
| emporas | Comment | Certainly looks nice |
| faiface | Comment | Went quite bad |
| Ralph Mellor | Comment | On Raku’s design decisions regarding operators |
| option | Example | An example Packer file |
| Anton Antonov | Module | Data::Importers |
| Ingy døt Net | Module | YAMLScript |
| Alexey Melezhik | Module | Sparrow6, Sparky |
| Patrick Böker | Module | Devel::ExecRunnerGenerator |
| Steve Roe | Module | PDF::Extract |

PDF summary

Another frequent use of LLMs is the processing of PDF files found (intentionally or not) while browsing the Web. (For example, arXiv.org articles, UN resolutions, or court slip opinions.)

Here is a pipeline that gets an LLM summary of an oral argument heard recently (2024-03-18) by the US Supreme Court (22-842, “NRA v. Vullo”):

'https://www.supremecourt.gov/oral_arguments/argument_transcripts/2023/22-842_c1o2.pdf'
==> slurp(format=>'text')
==> llm-prompt('Summarize')()
==> llm-synthesize(e=>llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview'))
The Supreme Court of the United States dealt with a case involving the National Rifle Association (NRA) and Maria T. Vullo, challenging actions taken by New York officials against the NRA's insurance programs. The NRA argued that their First Amendment rights were violated when New York officials, under the guidance of Maria Vullo and Governor Andrew Cuomo, used coercive tactics to persuade insurance companies and banks to sever ties with the NRA, citing the promotion of guns as the reason. These actions included a direct threat of regulatory repercussions to insurance underwriter Lloyd's and the issuance of guidance letters to financial institutions, suggesting reputational risks associated with doing business with the NRA. The court discussed the plausibility of coercion and the First Amendment claim, reflecting on precedents like Bantam Books, and the extent to which government officials can use their regulatory power to influence the actions of third parties against an organization due to its advocacy work.

CSV filtering

Here we ingest from GitHub a CSV file that has dataset metadata:

my $csvURL = 'https://raw.githubusercontent.com/antononcube/Raku-Data-ExampleDatasets/main/resources/dfRdatasets.csv';
my $dsDatasets = data-import($csvURL, headers => 'auto');

say "Dimensions   : {$dsDatasets.&dimensions}";
say "Column names : {$dsDatasets.head.keys}";
say "Type         : {deduce-type($dsDatasets)}";
Dimensions   : 1745 12
Column names : n_logical n_character n_numeric Doc Rows Cols Package Title Item CSV n_binary n_factor
Type         : Vector(Assoc(Atom((Str)), Atom((Str)), 12), 1745)

Here is a table with a row sample:

#% html
my $field-names = <Package Item Title Rows Cols>;
my $dsDatasets2 = $dsDatasets>>.Hash.Array;
$dsDatasets2 = select-columns($dsDatasets2, $field-names);
$dsDatasets2.pick(12) ==> data-translation(:$field-names)
| Package | Item | Title | Rows | Cols |
|---|---|---|---|---|
| robustbase | wagnerGrowth | Wagner’s Hannover Employment Growth Data | 63 | 7 |
| openintro | age_at_mar | Age at first marriage of 5,534 US women. | 5534 | 1 |
| AER | MurderRates | Determinants of Murder Rates in the United States | 44 | 8 |
| Stat2Data | RadioactiveTwins | Comparing Twins Ability to Clear Radioactive Particles | 30 | 3 |
| rpart | kyphosis | Data on Children who have had Corrective Spinal Surgery | 81 | 4 |
| boot | gravity | Acceleration Due to Gravity | 81 | 2 |
| survival | diabetic | Ddiabetic retinopathy | 394 | 8 |
| gap | mfblong | Internal functions for gap | 3000 | 10 |
| Ecdat | Mofa | International Expansion of U.S. MOFAs (majority-owned Foreign Affiliates in Fire (finance, Insurance and Real Estate) | 50 | 5 |
| drc | chickweed | Germination of common chickweed (_Stellaria media_) | 35 | 3 |
| MASS | Pima.tr | Diabetes in Pima Indian Women | 200 | 8 |
| MASS | shrimp | Percentage of Shrimp in Shrimp Cocktail | 18 | 1 |

Here we use an LLM to pick rows that are related to a certain subject:

my $res = llm-synthesize([
    'From the following JSON table pick the rows that are related to air pollution.', 
    to-json($dsDatasets2), 
    llm-prompt('NothingElse')('JSON')
], 
e => llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.65),
form => sub-parser('JSON'):drop)
[{Cols => 6, Item => airquality, Package => datasets, Rows => 153, Title => Air Quality Data} {Cols => 5, Item => iris, Package => datasets, Rows => 150, Title => Edgar Anderson's Iris Data} {Cols => 11, Item => mtcars, Package => datasets, Rows => 32, Title => Motor Trend Car Road Tests} {Cols => 5, Item => USPersonalExpenditure, Package => datasets, Rows => 5, Title => US Personal Expenditure Data (1940-1950)} {Cols => 4, Item => USArrests, Package => datasets, Rows => 50, Title => US Arrests for Assault (1960)}]

Here is the tabulated result:

#% html
$res ==> data-translation(:$field-names)
| Package | Item | Title | Rows | Cols |
|---|---|---|---|---|
| AER | CigarettesB | Cigarette Consumption Data | 46 | 3 |
| AER | CigarettesSW | Cigarette Consumption Panel Data | 96 | 9 |
| plm | Cigar | Cigarette Consumption | 1380 | 9 |

Image vision

One of the cooler recent LLM-service enhancements is access to AI-vision models. For example, AI-vision models are currently available through the interfaces of OpenAI, Gemini, or LLaMA.

Here we use data-import instead of (the overloaded) slurp:

#% markdown
my $imgURL2 = 'https://www.wolframcloud.com/files/04e7c6f6-d230-454d-ac18-898ee9ea603d/htmlcaches/images/2f8c8b9ee8fa646349e00c23a61f99b8748559ed04da61716e0c4cacf6e80979';
my $img2 = data-import($imgURL2, format => 'md-image');

Here is AI-vision invocation:

llm-vision-synthesize('Describe the image', $img2, e => 'Gemini')
 The image shows a blue-white sphere with bright spots on its surface. The sphere is the Sun, and the bright spots are solar flares. Solar flares are bursts of energy that are released from the Sun's surface. They are caused by the sudden release of magnetic energy that has built up in the Sun's atmosphere. Solar flares can range in size from small, localized events to large, global eruptions. The largest solar flares can release as much energy as a billion hydrogen bombs. Solar flares can have a number of effects on Earth, including disrupting radio communications, causing power outages, and triggering geomagnetic storms.

Remark: The image is taken from the Wolfram Community post “Sympathetic solar flare and geoeffective coronal mass ejection”, [JB1].

Remark: The AI vision above is done with Google’s “gemini-pro-vision”. Alternatively, OpenAI’s “gpt-4-vision-preview” can be used.
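
For instance, the corresponding OpenAI invocation would be along these lines (a sketch, reusing the image imported above):

```raku
llm-vision-synthesize('Describe the image', $img2,
    e => llm-configuration('ChatGPT', model => 'gpt-4-vision-preview'))
```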


Image vision with re-imaging

In this section we show how to import a certain statistical image, get data from the image, and make another, similar statistical graph. Similar workflows are discussed in “Heatmap plots over LLM scraped data”, [AA1]. The plots are made with “JavaScript::D3”, [AAp3].

Here we ingest an image with statistics of fuel exports:

#% markdown
my $imgURL = 'https://pbs.twimg.com/media/GG44adyX0AAPqVa?format=png&name=medium';
my $img = data-import($imgURL, format => 'md-image')

Here is a fairly non-trivial request for data extraction from the image:

my $resFuel = llm-vision-synthesize([
    'Give JSON dictionary of the Date-Country-Values data in the image', 
    llm-prompt('NothingElse')('JSON')
    ], 
    $img, form => sub-parser('JSON'):drop)
[Date-Country-Values => {Jan-22 => {Bulgaria => 56, China => 704, Croatia => 22, Denmark => 118, Finland => 94, France => 140, Germany => 94, Greece => 47, India => 24, Italy => 186, Lithuania => 142, Netherlands => 525, Poland => 165, Romania => 122, South Korea => 327, Spain => 47, Sweden => 47, Total => 3192, Turkey => 170, UK => 68, USA => 24}, Jan-24 => {Brunei => 31, China => 1151, Egypt => 70, Ghana => 35, Greece => 103, India => 1419, Korea => 116, Myanmar => 65, Netherlands => 33, Oman => 23, Total => 3381, Turkey => 305, Unknown => 30}}]

Here we modify the prompt above in order to get a dataset (an array of hashes):

my $resFuel2 = llm-vision-synthesize([
    'For data in the image give the corresponding JSON table that is an array of dictionaries each with the keys "Date", "Country", "Value".', 
    llm-prompt('NothingElse')('JSON')
    ],
    $img, 
    max-tokens => 4096,
    form => sub-parser('JSON'):drop)
[{Country => USA, Date => Jan-22, Value => 24} {Country => Turkey, Date => Jan-22, Value => 170} {Country => Croatia, Date => Jan-22, Value => 22} {Country => Sweden, Date => Jan-22, Value => 47} {Country => Spain, Date => Jan-22, Value => 47} {Country => Greece, Date => Jan-22, Value => 47} {Country => Bulgaria, Date => Jan-22, Value => 56} {Country => UK, Date => Jan-22, Value => 68} {Country => Germany, Date => Jan-22, Value => 94} {Country => Finland, Date => Jan-22, Value => 94} {Country => Denmark, Date => Jan-22, Value => 118} {Country => Romania, Date => Jan-22, Value => 122} {Country => France, Date => Jan-22, Value => 140} {Country => Lithuania, Date => Jan-22, Value => 142} {Country => Poland, Date => Jan-22, Value => 165} {Country => Italy, Date => Jan-22, Value => 186} {Country => Netherlands, Date => Jan-22, Value => 525} {Country => India, Date => Jan-22, Value => 24} {Country => Japan, Date => Jan-22, Value => 70} {Country => South Korea, Date => Jan-22, Value => 327} {Country => China, Date => Jan-22, Value => 704} {Country => Unknown, Date => Jan-24, Value => 30} {Country => Ghana, Date => Jan-24, Value => 35} {Country => Egypt, Date => Jan-24, Value => 70} {Country => Oman, Date => Jan-24, Value => 23} {Country => Turkey, Date => Jan-24, Value => 305} {Country => Netherlands, Date => Jan-24, Value => 33} {Country => Greece, Date => Jan-24, Value => 103} {Country => Brunei, Date => Jan-24, Value => 31} {Country => Myanmar, Date => Jan-24, Value => 65} {Country => Korea, Date => Jan-24, Value => 116} {Country => China, Date => Jan-24, Value => 1151} {Country => India, Date => Jan-24, Value => 1419}]

Here is how the obtained dataset looks:

#% html
$resFuel2>>.Hash ==> data-translation()
| Value | Date   | Country     |
|-------|--------|-------------|
| 24    | Jan-22 | USA         |
| 170   | Jan-22 | Turkey      |
| 22    | Jan-22 | Croatia     |
| 47    | Jan-22 | Sweden      |
| 47    | Jan-22 | Spain       |
| 47    | Jan-22 | Greece      |
| 56    | Jan-22 | Bulgaria    |
| 68    | Jan-22 | UK          |
| 94    | Jan-22 | Germany     |
| 94    | Jan-22 | Finland     |
| 118   | Jan-22 | Denmark     |
| 122   | Jan-22 | Romania     |
| 140   | Jan-22 | France      |
| 142   | Jan-22 | Lithuania   |
| 165   | Jan-22 | Poland      |
| 186   | Jan-22 | Italy       |
| 525   | Jan-22 | Netherlands |
| 24    | Jan-22 | India       |
| 70    | Jan-22 | Japan       |
| 327   | Jan-22 | South Korea |
| 704   | Jan-22 | China       |
| 30    | Jan-24 | Unknown     |
| 35    | Jan-24 | Ghana       |
| 70    | Jan-24 | Egypt       |
| 23    | Jan-24 | Oman        |
| 305   | Jan-24 | Turkey      |
| 33    | Jan-24 | Netherlands |
| 103   | Jan-24 | Greece      |
| 31    | Jan-24 | Brunei      |
| 65    | Jan-24 | Myanmar     |
| 116   | Jan-24 | Korea       |
| 1151  | Jan-24 | China       |
| 1419  | Jan-24 | India       |

Here we rename or otherwise transform the columns of the dataset above in order to prepare it for creating a heatmap plot (we also show the deduced type):

my $k = 1;
my @fuelDataset = $resFuel2.map({ 
    my %h = $_.clone; 
    %h<z> = log10(%h<Value>); 
    %h<y> = %h<Country>; 
    %h<x> = %h<Date>; 
    %h<label> = %h<Value>;
    %h.grep({ $_.key ∈ <x y z label> }).Hash }).Array;

deduce-type(@fuelDataset);
Vector(Struct([label, x, y, z], [Int, Str, Str, Num]), 33)

Here is the heatmap plot:

#%js
js-d3-heatmap-plot(@fuelDataset,
                width => 700,
                height => 500,
                color-palette => 'Reds',
                plot-labels-color => 'White',
                plot-labels-font-size => 18,
                tick-labels-color => 'steelblue',
                tick-labels-font-size => 12,
                low-value => 0,
                high-value => 3.5,
                margins => {left => 100, right => 0},
                mesh => 0.01,
                title => 'Russia redirecting seaborne crude amid sanctions, 1000 b/d')

Here are the corresponding totals:

group-by($resFuel2, 'Date').map({ $_.key => $_.value.map(*<Value>).sum })
(Jan-24 => 3381 Jan-22 => 3192)

References

Articles

[AA1] Anton Antonov, “Heatmap plots over LLM scraped data”, (2024), RakuForPrediction at WordPress.

[JB1] Jeffrey Bryant​, “Sympathetic solar flare and geoeffective coronal mass ejection​”, (2024), Wolfram Community.

Packages

[AAp1] Anton Antonov, Data::Importers Raku package, (2024), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, JavaScript::D3 Raku package, (2022-2024), GitHub/antononcube.

Videos

[AAv1] Anton Antonov, “Random mandalas generation (with D3.js via Raku)”, (2022), Anton Antonov’s YouTube channel.

WWW::LLaMA

Introduction

This blog post proclaims and describes the Raku package “WWW::LLaMA”, which provides access to the machine learning service llamafile, [MO1]. For more details of llamafile’s API usage see the documentation, [MO2].

Remark: An interactive version of this post — with more examples — is provided by the Jupyter notebook “LLaMA-guide.ipynb”.

This package is very similar to the packages “WWW::OpenAI”, [AAp1], and “WWW::MistralAI”, [AAp2].

“WWW::LLaMA” can be used with (is integrated with) “LLM::Functions”, [AAp3], and “Jupyter::Chatbook”, [AAp5].

Also, of course, prompts from “LLM::Prompts”, [AAp4], can be used with LLaMA’s functions.

Remark: The package “WWW::OpenAI” can also be used to access “llamafile” chat completions. That is done by specifying an appropriate base URL to the openai-chat-completion function.
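For example, such a call might look like the following sketch. (The URL is an assumption — llamafile serves an OpenAI-compatible API on a local port, commonly 8080; verify against your local server setup.)

```raku
# Sketch: access a local llamafile server through "WWW::OpenAI"
# by overriding the base URL of the chat-completion endpoint.
use WWW::OpenAI;

my $res = openai-chat-completion(
        'What is the speed of a rocket leaving Earth?',
        base-url => 'http://127.0.0.1:8080/v1',   # assumed local llamafile port
        format   => 'values');
say $res;
```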

Here is a video that demonstrates running LLaMA models and use cases for “WWW::LLaMA”:


Installation

Package installations from both sources use the zef installer (which should be bundled with the “standard” Rakudo installation file.)

To install the package from Zef ecosystem use the shell command:

zef install WWW::LLaMA

To install the package from the GitHub repository use the shell command:

zef install https://github.com/antononcube/Raku-WWW-LLaMA.git

Install and run LLaMA server

In order to use the package, access to a LLaMA server is required.

Since the package closely follows the Web API of “llamafile”, [MO1], it is advisable to first follow the installation steps in the “Quickstart” section of [MO1] before trying the functions of the package.


Usage examples

Remark: When the authorization key, auth-key, is specified to be Whatever, it is assigned the string sk-no-key-required. If an authorization key is required, the env variable LLAMA_API_KEY can also be used.
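For instance, the key can also be passed explicitly (a sketch; the key value below is the no-auth placeholder mentioned in the remark above):

```raku
# Sketch: pass the authorization key explicitly instead of relying
# on the LLAMA_API_KEY env variable or the Whatever default.
use WWW::LLaMA;

llama-playground('Say hello.',
        auth-key => 'sk-no-key-required',
        format   => 'values');
```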

Universal “front-end”

The package has a universal “front-end” function llama-playground for the different functionalities provided by llamafile.

Here is a simple call for a “chat completion”:

use WWW::LLaMA;
llama-playground('What is the speed of a rocket leaving Earth?');
# {content => 
# , and how does it change as the rocket's altitude increases?, generation_settings => {frequency_penalty => 0, grammar => , ignore_eos => False, logit_bias => [], min_p => 0.05000000074505806, mirostat => 0, mirostat_eta => 0.10000000149011612, mirostat_tau => 5, model => llava-v1.5-7b-Q4_K.gguf, n_ctx => 1365, n_keep => 0, n_predict => -1, n_probs => 0, penalize_nl => True, penalty_prompt_tokens => [], presence_penalty => 0, repeat_last_n => 64, repeat_penalty => 1.100000023841858, seed => 4294967295, stop => [], stream => False, temperature => 0.800000011920929, tfs_z => 1, top_k => 40, top_p => 0.949999988079071, typical_p => 1, use_penalty_prompt_tokens => False}, model => llava-v1.5-7b-Q4_K.gguf, prompt => What is the speed of a rocket leaving Earth?, slot_id => 0, stop => True, stopped_eos => True, stopped_limit => False, stopped_word => False, stopping_word => , timings => {predicted_ms => 340.544, predicted_n => 18, predicted_per_second => 52.8566059011464, predicted_per_token_ms => 18.91911111111111, prompt_ms => 94.65, prompt_n => 12, prompt_per_second => 126.78288431061804, prompt_per_token_ms => 7.8875}, tokens_cached => 29, tokens_evaluated => 12, tokens_predicted => 18, truncated => False}

Another call, using Bulgarian (asking how many groups can be found in a given cloud of points):

llama-playground('Колко групи могат да се намерят в този облак от точки.', max-tokens => 300, random-seed => 234232, format => 'values');
# Например, група от 50 звезди може да се намери в този облак от 100 000 звезди, които са разпределени на различни места. За да се намерят всичките, е необходимо да се използва алгоритъм за търсене на най-близките съседи на всеки от обектите.
# 
# Въпреки че теоретично това може да бъде постигнато, реално това е много трудно и сложно, особено когато се има предвид голям брой звезди в облака.

Remark: The functions llama-chat-completion or llama-completion can be used instead in the examples above. (The latter is a synonym of the former.)

Models

The current LLaMA model can be found with the function llama-model:

llama-model;

# llava-v1.5-7b-Q4_K.gguf

Remark: Since there is no dedicated API endpoint for getting the model(s), the current model is obtained via “simple” (non-chat) completion.

Code generation

There are two types of completions: text and chat. Let us illustrate the differences in their usage with Raku code generation. Here is a text completion:

llama-text-completion(
        'generate Raku code for making a loop over a list',
        max-tokens => 120,
        format => 'values');
# , multiplying every number with the next
#
# ```raku
# my @numbers = (1 .. 5);
# my $result = Nil;
# for ^@numbers -> $i {
#    $result = $result X $i if defined $result;
#    $result = $i;
# }
# say $result; # prints 120
# ```
# 
# This code defines a list of numbers, initializes a variable `$result` to be `Nil`, and then uses a `for` loop to iterate over the indices

Here is a chat completion:

llama-completion(
        'generate Raku code for making a loop over a list',
        max-tokens => 120,
        format => 'values');
# Here's an example of a loop over a list in Raku:
# ```perl
# my @list = (1, 2, 3, 4, 5);
# 
# for @list -> $item {
#     say "The value of $item is $item.";
# }
# ```
# This will output:
# ```sql
# The value of 1 is 1.
# The value of 2 is 2.
# The value of 3 is 3.
# The value of 4 is 4.
# The value of 5

Embeddings

Embeddings can be obtained with the function llama-embedding. Here is an example of finding the embedding vectors for each of the elements of an array of strings:

my @queries = [
    'make a classifier with the method RandomForeset over the data dfTitanic',
    'show precision and accuracy',
    'plot True Positive Rate vs Positive Predictive Value',
    'what is a good meat and potatoes recipe'
];

my $embs = llama-embedding(@queries, format => 'values', method => 'tiny');
$embs.elems;

# 4

Here we show:

  • That the result is an array of four vectors, each of length 4096
  • The distributions of the values of each vector

use Data::Reshapers;
use Data::Summarizers;

say "\$embs.elems : { $embs.elems }";
say "\$embs>>.elems : { $embs>>.elems }";
records-summary($embs.kv.Hash.&transpose);
# $embs.elems : 4
# $embs>>.elems : 4096 4096 4096 4096
# +--------------------------------+----------------------------------+---------------------------------+-----------------------------------+
# | 1                              | 2                                | 0                               | 3                                 |
# +--------------------------------+----------------------------------+---------------------------------+-----------------------------------+
# | Min    => -30.241486           | Min    => -20.993749618530273    | Min    => -32.435486            | Min    => -31.10381317138672      |
# | 1st-Qu => -0.7924895882606506  | 1st-Qu => -1.0563270449638367    | 1st-Qu => -0.9738395810127258   | 1st-Qu => -0.9602127969264984     |
# | Mean   => 0.001538657780784547 | Mean   => -0.013997373717373307  | Mean   => 0.0013605252470370028 | Mean   => -0.03597712098735428    |
# | Median => 0.016784800216555596 | Median => -0.0001810337998904288 | Median => 0.023735892958939075  | Median => -0.00221119043999351575 |
# | 3rd-Qu => 0.77385222911834715  | 3rd-Qu => 0.9824191629886627     | 3rd-Qu => 0.9983229339122772    | 3rd-Qu => 0.9385882616043091      |
# | Max    => 25.732345581054688   | Max    => 23.233409881591797     | Max    => 15.80211067199707     | Max    => 24.811737               |
# +--------------------------------+----------------------------------+---------------------------------+-----------------------------------+

Here we find the corresponding dot products and (cross-)tabulate them:

use Data::Reshapers;
use Data::Summarizers;
my @ct = (^$embs.elems X ^$embs.elems).map({ %( i => $_[0], j => $_[1], dot => sum($embs[$_[0]] >>*<< $embs[$_[1]])) }).Array;

say to-pretty-table(cross-tabulate(@ct, 'i', 'j', 'dot'), field-names => (^$embs.elems)>>.Str);
# +---+--------------+--------------+--------------+--------------+
# |   |      0       |      1       |      2       |      3       |
# +---+--------------+--------------+--------------+--------------+
# | 0 | 14984.053717 | 1708.345468  | 4001.487938  | 7619.791201  |
# | 1 | 1708.345468  | 10992.176167 | -1364.137315 | -2970.554539 |
# | 2 | 4001.487938  | -1364.137315 | 14473.816914 | 6428.638382  |
# | 3 | 7619.791201  | -2970.554539 | 6428.638382  | 14534.609050 |
# +---+--------------+--------------+--------------+--------------+

Remark: Note that the fourth element (the cooking recipe request) is an outlier, judging by the table of dot products.
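The outlier stands out even more after normalizing the dot products into cosine similarities. Here is a sketch of that computation (it reuses $embs and the tabulation functions from above):

```raku
# Sketch: cosine similarities instead of raw dot products.
# Diagonal entries become 1; small or negative off-diagonal entries
# indicate dissimilar queries -- the cooking recipe request should stand out.
my @norms = $embs.map({ sqrt(sum($_ >>*<< $_)) });
my @cos = (^$embs.elems X ^$embs.elems).map({
    %( i => $_[0], j => $_[1],
       dot => sum($embs[$_[0]] >>*<< $embs[$_[1]]) / (@norms[$_[0]] * @norms[$_[1]]) )
}).Array;

say to-pretty-table(cross-tabulate(@cos, 'i', 'j', 'dot'), field-names => (^$embs.elems)>>.Str);
```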

Tokenizing and de-tokenizing

Here we tokenize some text:

my $txt = @queries.head;
my $res = llama-tokenize($txt, format => 'values');
# [1207 263 770 3709 411 278 1158 16968 29943 2361 300 975 278 848 4489 29911 8929 293]

Here we get the original text back by de-tokenizing:

llama-detokenize($res);
# {content =>  make a classifier with the method RandomForeset over the data dfTitanic}

Chat completions with engineered prompts

Here is a prompt for “emojification” (see the Wolfram Prompt Repository entry “Emojify”):

my $preEmojify = q:to/END/;
Rewrite the following text and convert some of it into emojis.
The emojis are all related to whatever is in the text.
Keep a lot of the text, but convert key words into emojis.
Do not modify the text except to add emoji.
Respond only with the modified text, do not include any summary or explanation.
Do not respond with only emoji, most of the text should remain as normal words.
END
# Rewrite the following text and convert some of it into emojis.
# The emojis are all related to whatever is in the text.
# Keep a lot of the text, but convert key words into emojis.
# Do not modify the text except to add emoji.
# Respond only with the modified text, do not include any summary or explanation.
# Do not respond with only emoji, most of the text should remain as normal words.

Here is an example of chat completion with emojification:

llama-chat-completion([ system => $preEmojify, user => 'Python sucks, Raku rocks, and Perl is annoying'], max-tokens => 200, format => 'values')
# 🐍💥

Command Line Interface

Playground access

The package provides a Command Line Interface (CLI) script:

llama-playground --help
# Usage:
#   llama-playground [<words> ...] [--path=<Str>] [--mt|--max-tokens[=Int]] [-m|--model=<Str>] [-r|--role=<Str>] [-t|--temperature[=Real]] [--response-format=<Str>] [-a|--auth-key=<Str>] [--timeout[=UInt]] [-f|--format=<Str>] [--method=<Str>] [--base-url=<Str>] -- Command given as a sequence of words.
#   
#     --path=<Str>               Path, one of 'completions', 'chat/completions', 'embeddings', or 'models'. [default: 'chat/completions']
#     --mt|--max-tokens[=Int]    The maximum number of tokens to generate in the completion. [default: 2048]
#     -m|--model=<Str>           Model. [default: 'Whatever']
#     -r|--role=<Str>            Role. [default: 'user']
#     -t|--temperature[=Real]    Temperature. [default: 0.7]
#     --response-format=<Str>    The format in which the response is returned. [default: 'url']
#     -a|--auth-key=<Str>        Authorization key (to use LLaMA server Web API.) [default: 'Whatever']
#     --timeout[=UInt]           Timeout. [default: 10]
#     -f|--format=<Str>          Format of the result; one of "json", "hash", "values", or "Whatever". [default: 'Whatever']
#     --method=<Str>             Method for the HTTP POST query; one of "tiny" or "curl". [default: 'tiny']
#     --base-url=<Str>           Base URL of the LLaMA server. [default: 'http://127.0.0.1:80…']

Remark: When the authorization key, auth-key, is specified to be Whatever, it is assigned the string sk-no-key-required. If an authorization key is required, the env variable LLAMA_API_KEY can also be used.


Mermaid diagram

The following flowchart corresponds to the steps in the package function llama-playground:


References

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.

[MO1] Mozilla Ocho, llamafile.

[MO2] Mozilla Ocho, llamafile documentation.

WWW::Gemini

This blog post proclaims the Raku package “WWW::Gemini” for connecting with Google’s Large Language Models (LLMs) service Gemini. It is based on Gemini’s API documentation.

The design and implementation of the package closely follows those of “WWW::PaLM”, [AAp1], and “WWW::OpenAI”, [AAp2].

Installation

From Zef ecosystem:

zef install WWW::Gemini

From GitHub:

zef install https://github.com/antononcube/Raku-WWW-Gemini

Usage examples

Show models:

use WWW::Gemini;

gemini-models()
# (models/chat-bison-001 models/text-bison-001 models/embedding-gecko-001 models/gemini-1.0-pro models/gemini-1.0-pro-001 models/gemini-1.0-pro-latest models/gemini-1.0-pro-vision-latest models/gemini-pro models/gemini-pro-vision models/embedding-001 models/aqa)

Show text generation:

.say for gemini-generate-content('what is the population in Brazil?', format => 'values');
# 215,351,056 (2023 est.)

Using a synonym function:

.say for gemini-generation('Who wrote the book "Dune"?');
# {candidates => [{content => {parts => [{text => Frank Herbert}], role => model}, finishReason => STOP, index => 0, safetyRatings => [{category => HARM_CATEGORY_SEXUALLY_EXPLICIT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HATE_SPEECH, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HARASSMENT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_DANGEROUS_CONTENT, probability => NEGLIGIBLE}]}], promptFeedback => {safetyRatings => [{category => HARM_CATEGORY_SEXUALLY_EXPLICIT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HATE_SPEECH, probability => NEGLIGIBLE} {category => HARM_CATEGORY_HARASSMENT, probability => NEGLIGIBLE} {category => HARM_CATEGORY_DANGEROUS_CONTENT, probability => NEGLIGIBLE}]}}

Embeddings

Show text embeddings:

use Data::TypeSystem;

my @vecs = gemini-embed-content(["say something nice!",
                            "shout something bad!",
                            "where is the best coffee made?"],
        format => 'values');

say "Shape: ", deduce-type(@vecs);
.say for @vecs;
# Shape: Vector(Vector((Any), 768), 3)
# [-0.031044435 -0.01293638 0.008904989 ...]
# [0.015647776 -0.03143455 -0.040937748 ...]
# [-0.01054143 -0.03587494 -0.013359126 ...]

Counting tokens

Here we show how to find the number of tokens in a text:

my $text = q:to/END/;
AI has made surprising successes but cannot solve all scientific problems due to computational irreducibility.
END

gemini-count-tokens($text, format => 'values');

# 20

Vision

If the function gemini-completion is given a list of images, textual results corresponding to those images are returned. The argument “images” is a list of image URLs, image file names, or image Base64 representations. (Any combination of those element types.)

Consider this image:

Here is an image recognition invocation:

my $fname = $*CWD ~ '/resources/ThreeHunters.jpg';
my @images = [$fname,];
say gemini-generation("Give concise descriptions of the images.", :@images, format => 'values');
# The image shows a family of raccoons in a tree. The mother raccoon is watching over her two cubs. The cubs are playing with each other. There are butterflies flying around the tree. The leaves on the tree are turning brown and orange.

When a file name is given to the argument “images” of gemini-completion then the function encode-image of “Image::Markup::Utilities”, [AAp4], is applied to it.
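That encoding step can also be done manually, e.g. with the following sketch. (The function name encode-image follows the remark above; consult the “Image::Markup::Utilities” documentation for the exact interface.)

```raku
# Sketch: encode an image file into a Base64 markup string and pass it
# to gemini-generation directly, instead of giving a file name.
# (Function name assumed per the remark above; verify against the package docs.)
use Image::Markup::Utilities;
use WWW::Gemini;

my $b64 = encode-image($*CWD ~ '/resources/ThreeHunters.jpg');
say gemini-generation('Give a concise description of the image.',
        images => [$b64,],
        format => 'values');
```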


Command Line Interface

Maker suite access

The package provides a Command Line Interface (CLI) script:

gemini-prompt --help
# Usage:
#   gemini-prompt [<words> ...] [--path=<Str>] [-n[=UInt]] [--mt|--max-output-tokens[=UInt]] [-m|--model=<Str>] [-t|--temperature[=Real]] [-a|--auth-key=<Str>] [--timeout[=UInt]] [-f|--format=<Str>] [--method=<Str>] -- Command given as a sequence of words.
#   
#     --path=<Str>                       Path, one of 'generateContent', 'embedContent', 'countTokens', or 'models'. [default: 'generateContent']
#     -n[=UInt]                          Number of completions or generations. [default: 1]
#     --mt|--max-output-tokens[=UInt]    The maximum number of tokens to generate in the completion. [default: 100]
#     -m|--model=<Str>                   Model. [default: 'Whatever']
#     -t|--temperature[=Real]            Temperature. [default: 0.7]
#     -a|--auth-key=<Str>                Authorization key (to use Gemini API.) [default: 'Whatever']
#     --timeout[=UInt]                   Timeout. [default: 10]
#     -f|--format=<Str>                  Format of the result; one of "json", "hash", "values", or "Whatever". [default: 'values']
#     --method=<Str>                     Method for the HTTP POST query; one of "tiny" or "curl". [default: 'tiny']

Remark: When the authorization key argument “auth-key” is set to “Whatever”, gemini-prompt attempts to use one of the env variables GEMINI_API_KEY or PALM_API_KEY.
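Alternatively, the key can be supplied programmatically. Here is a sketch; the auth-key parameter name is assumed from the CLI help above and the convention of the sibling WWW::* packages:

```raku
# Sketch: pass the Gemini API key explicitly rather than relying on the
# GEMINI_API_KEY / PALM_API_KEY env variables.
# (auth-key parameter name assumed from the CLI signature above.)
use WWW::Gemini;

say gemini-generation('Who wrote the book "Dune"?',
        auth-key => %*ENV<GEMINI_API_KEY>,
        format   => 'values');
```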


Mermaid diagram

The following flowchart corresponds to the steps in the package function gemini-prompt:


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “Number guessing games: Gemini vs ChatGPT”, (2023), RakuForPrediction at WordPress.

[ZG1] Zoubin Ghahramani, “Introducing Gemini 2”, (2023), Google Official Blog on AI.

Packages, platforms

[AAp1] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, Image::Markup::Utilities Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, ML::FindTextualAnswer Raku package, (2023-2024), GitHub/antononcube.

LLM aids for processing Biden’s State of the Union address

… given on March 7, 2024


Introduction

In this blog post (notebook) we provide aids and computational workflows for the analysis of Joe Biden’s State of the Union address given on March 7th, 2024. We use Large Language Model (LLM) workflows and prompts for examining and understanding the speech in a systematic and reproducible manner.

The speech transcript is taken from whitehouse.gov.

The computations are done with a Raku chatbook, [AAp6, AAv1-AAv3]. The LLM functions used in the workflows are explained and demonstrated in [AA1, AAv3]. The workflows are done with OpenAI’s models, [AAp1]. Currently, Google’s models (PaLM, Gemini), [AAp2], and MistralAI’s, [AAp4], cannot be used with the workflows below because their input token limits are too low.

A similar set of workflows and prompts is described in:

The prompts of the latter are used below.

The following table — derived from Biden’s address — has the most important or provocative statements (found by an LLM):

| subject | statement |
|---|---|
| Global politics and security | Overseas, Putin of Russia is on the march, invading Ukraine and sowing chaos throughout Europe and beyond. |
| Support for Ukraine | But Ukraine can stop Putin if we stand with Ukraine and provide the weapons it needs to defend itself. |
| Domestic politics | A former American President actually said that, bowing down to a Russian leader. |
| NATO | Today, we’ve made NATO stronger than ever. |
| Democracy and January 6th | January 6th and the lies about the 2020 election, and the plots to steal the election, posed the gravest threat to our democracy since the Civil War. |
| Reproductive rights | Guarantee the right to IVF nationwide! |
| Economic recovery and policies | 15 million new jobs in just three years – that’s a record! |
| Healthcare and prescription drug costs | Instead of paying $400 a month for insulin seniors with diabetes only have to pay $35 a month! |
| Education and tuition | Let’s continue increasing Pell Grants for working- and middle-class families. |
| Tax reform | It’s time to raise the corporate minimum tax to at least 21% so every big corporation finally begins to pay their fair share. |
| Gun violence prevention | I’m demanding a ban on assault weapons and high-capacity magazines! |
| Immigration | Send me the border bill now! |
| Climate action | I am cutting our carbon emissions in half by 2030. |
| Israel and Gaza conflict | Israel has a right to go after Hamas. |
| Vision for America’s future | I see a future where the middle class finally has a fair shot and the wealthy finally have to pay their fair share in taxes. |

Structure

The structure of the notebook is as follows:

  1. Getting the speech text and setup
    Standard ingestion and setup.
  2. Themes
    TL;DR via a table of themes.
  3. Most important or provocative statements
    What are the most important or provocative statements?
  4. Summary and recommendations
    Extracting speech wisdom.
  5. Hidden and propaganda messages
    For people living in USA.

Remark: This blog post (and the corresponding notebook) is made “for completeness” of the set [AA2, AA3]. Ideally, it would be the last of those.

Remark: I am strongly considering making a Cro app (and/or a non-Cro one) that applies the described workflows and prompts to user-uploaded texts.


Getting the speech text and setup

Here we load packages and define a text statistics function and HTML stripping function:

use HTTP::Tiny;
use JSON::Fast;
use Data::Reshapers;

sub text-stats(Str:D $txt) { <chars words lines> Z=> [$txt.chars, $txt.words.elems, $txt.lines.elems] }

sub strip-html(Str $html) returns Str {

    my $res = $html
    .subst(/'<style'.*?'</style>'/, :g)
    .subst(/'<script'.*?'</script>'/, :g)
    .subst(/'<'.*?'>'/, :g)
    .subst(/'&lt;'.*?'&gt;'/, :g)
    .subst(/'&nbsp;'/, ' ', :g)
    .subst(/[\v\s*] ** 2..*/, "\n\n", :g);

    return $res;
}

Ingest text

Here we ingest the text of the speech:

my $url = 'https://www.whitehouse.gov/briefing-room/speeches-remarks/2024/03/07/remarks-of-president-joe-biden-state-of-the-union-address-as-prepared-for-delivery-2/';
my $htmlEN = HTTP::Tiny.new.get($url)<content>.decode;

$htmlEN .= subst(/ \v+ /, "\n", :g);

my $txtEN = strip-html($htmlEN);

$txtEN .= substr($txtEN.index('March 07, 2024') .. ($txtEN.index("Next Post:") - 1));

text-stats($txtEN)
# (chars => 37702 words => 6456 lines => 453)

LLM access configuration

Here we configure LLM access — we use OpenAI’s model “gpt-4-turbo-preview” since it allows inputs with 128K tokens:

my $conf = llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.7);
$conf.Hash.elems

Themes

Here we extract the themes found in the speech and tabulate them (using the prompt “ThemeTableJSON”):

my $tblThemes = llm-synthesize(llm-prompt("ThemeTableJSON")($txtEN, "article", 50), e => $conf, form => sub-parser('JSON'):drop);

$tblThemes.&dimensions;
# (16 2)

Here we tabulate the found themes:

#% html
$tblThemes ==> data-translation(field-names=><theme content>)
| theme | content |
|---|---|
| Introduction | President Joe Biden begins by reflecting on historical challenges to freedom and democracy, comparing them to current threats both domestically and internationally. |
| Foreign Policy and National Security | Biden addresses the situation in Ukraine, NATO’s strength, and the necessity of bipartisan support to confront Putin and Russia’s aggression. |
| Domestic Challenges and Democracy | He discusses the January 6th insurrection, the ongoing threats to democracy in the U.S., and the need for unity to defend democratic values. |
| Reproductive Rights | Biden criticizes the overturning of Roe v. Wade, shares personal stories to highlight the impact, and calls for legislative action to protect reproductive freedoms. |
| Economic Recovery and Policy | The President outlines his administration’s achievements in job creation, economic growth, and efforts to reduce inflation, emphasizing a middle-out economic approach. |
| Infrastructure and Manufacturing | Biden highlights investments in infrastructure, clean energy, and manufacturing, including specific projects and acts that have contributed to job creation and economic development. |
| Healthcare | He details achievements in healthcare reform, including prescription drug pricing, expansion of Medicare, and initiatives focused on women’s health research. |
| Housing and Education | The President proposes solutions for the housing crisis and outlines plans for improving education from preschool to college affordability. |
| Tax Reform and Fiscal Responsibility | Biden proposes tax reforms targeting corporations and the wealthy to ensure fairness and fund his policy initiatives, contrasting his approach with the previous administration. |
| Climate Change and Environmental Policy | He discusses significant actions taken to address climate change, emphasizing job creation in clean energy and conservation efforts. |
| Immigration and Border Security | Biden contrasts his immigration policy with his predecessor’s, advocating for a bipartisan border security bill and a compassionate approach to immigration. |
| Voting Rights and Civil Rights | The President calls for the passage of voting rights legislation and addresses issues around diversity, equality, and the protection of fundamental rights. |
| Gun Violence Prevention | He shares personal stories to underscore the urgency of gun violence prevention, celebrating past achievements and calling for further action. |
| Foreign Conflicts | Biden addresses the conflict in the Middle East, emphasizing the need for humanitarian aid and a two-state solution for Israel and Palestine. |
| Global Leadership and Competition | The President discusses the U.S.’s economic competition with China, American leadership on the global stage, and the importance of alliances. |
| Vision for America’s Future | Biden concludes with an optimistic vision for America, focusing on unity, democracy, and a future where America leads by example. |

Most important or provocative statements

Here we find important or provocative statements in the speech via an LLM synthesis:

my $imp = llm-synthesize([
    "Give the most important or provocative statements in the following speech.\n\n", 
    $txtEN,
    "Give the results as a JSON array with subject-statement pairs.",
    llm-prompt('NothingElse')('JSON')
    ], e => $conf, form => sub-parser('JSON'):drop);

$imp.&dimensions
# (15 2)

Show the important or provocative statements as an HTML table:

#% html
$imp ==> data-translation(field-names => <subject statement>)
| subject | statement |
|---|---|
| Global politics and security | Overseas, Putin of Russia is on the march, invading Ukraine and sowing chaos throughout Europe and beyond. |
| Support for Ukraine | But Ukraine can stop Putin if we stand with Ukraine and provide the weapons it needs to defend itself. |
| Domestic politics | A former American President actually said that, bowing down to a Russian leader. |
| NATO | Today, we’ve made NATO stronger than ever. |
| Democracy and January 6th | January 6th and the lies about the 2020 election, and the plots to steal the election, posed the gravest threat to our democracy since the Civil War. |
| Reproductive rights | Guarantee the right to IVF nationwide! |
| Economic recovery and policies | 15 million new jobs in just three years – that’s a record! |
| Healthcare and prescription drug costs | Instead of paying $400 a month for insulin seniors with diabetes only have to pay $35 a month! |
| Education and tuition | Let’s continue increasing Pell Grants for working- and middle-class families. |
| Tax reform | It’s time to raise the corporate minimum tax to at least 21% so every big corporation finally begins to pay their fair share. |
| Gun violence prevention | I’m demanding a ban on assault weapons and high-capacity magazines! |
| Immigration | Send me the border bill now! |
| Climate action | I am cutting our carbon emissions in half by 2030. |
| Israel and Gaza conflict | Israel has a right to go after Hamas. |
| Vision for America’s future | I see a future where the middle class finally has a fair shot and the wealthy finally have to pay their fair share in taxes. |

Summary and recommendations

Here we get a summary and extract ideas, quotes, and recommendations from the speech:

my $sumIdea = llm-synthesize(llm-prompt("ExtractArticleWisdom")($txtEN), e => $conf);

text-stats($sumIdea)
# (chars => 6166 words => 972 lines => 95)

The result is rendered below.


#% markdown
$sumIdea.subst(/ ^^ '#' /, '###', :g)

SUMMARY

President Joe Biden delivered the State of the Union Address on March 7, 2024, focusing on the challenges and opportunities facing the United States. He discussed the assault on democracy, the situation in Ukraine, domestic policies including healthcare and economy, and the need for unity and progress in addressing national and international issues.

IDEAS:

  • The historical context of challenges to freedom and democracy, drawing parallels between past and present threats.
  • The role of the United States in supporting Ukraine against Russian aggression.
  • The importance of bipartisan support for national security and democracy.
  • The need for America to take a leadership role in NATO and support its allies.
  • The dangers of political figures undermining democratic values for personal or political gain.
  • The connection between domestic policies and the strength of democracy, including reproductive rights and healthcare.
  • The economic recovery and growth under the Biden administration, emphasizing job creation and infrastructure improvements.
  • The focus on middle-class prosperity and the role of unions in economic recovery.
  • The commitment to addressing climate change and promoting clean energy jobs.
  • The significance of education, from pre-school access to college affordability, in securing America’s future.
  • The need for fair taxation and closing loopholes for the wealthy and corporations.
  • The importance of healthcare reform, including lowering prescription drug costs and expanding Medicare.
  • The commitment to protecting Social Security and Medicare from cuts.
  • The approach to immigration reform and border security as humanitarian and security issues.
  • The stance on gun violence prevention and the need for stricter gun control laws.
  • The emphasis on America’s resilience and optimism for the future.
  • The call for unity in defending democracy and building a better future for all Americans.
  • The vision of America as a land of possibilities, with a focus on progress and inclusivity.
  • The acknowledgment of America’s role in the world, including support for Israel and a two-state solution with Palestine.
  • The importance of scientific research and innovation in solving major challenges like cancer.

QUOTES:

  • “Not since President Lincoln and the Civil War have freedom and democracy been under assault here at home as they are today.”
  • “America is a founding member of NATO the military alliance of democratic nations created after World War II to prevent war and keep the peace.”
  • “History is watching, just like history watched three years ago on January 6th.”
  • “You can’t love your country only when you win.”
  • “Inflation has dropped from 9% to 3% – the lowest in the world!”
  • “The racial wealth gap is the smallest it’s been in 20 years.”
  • “I’ve been delivering real results in a fiscally responsible way.”
  • “Restore the Child Tax Credit because no child should go hungry in this country!”
  • “No billionaire should pay a lower tax rate than a teacher, a sanitation worker, a nurse!”
  • “We are the only nation in the world with a heart and soul that draws from old and new.”

HABITS:

  • Advocating for bipartisan cooperation in Congress.
  • Emphasizing the importance of education in personal growth and national prosperity.
  • Promoting the use of clean energy and sustainable practices to combat climate change.
  • Prioritizing healthcare reform to make it more affordable and accessible.
  • Supporting small businesses and entrepreneurship as engines of economic growth.
  • Encouraging scientific research and innovation, especially in healthcare.
  • Upholding the principles of fair taxation and economic justice.
  • Leveraging diplomacy and international alliances for global stability.
  • Committing to the protection of democratic values and institutions.
  • Fostering community engagement and civic responsibility.

FACTS:

  • The United States has welcomed Finland and Sweden into NATO, strengthening the alliance.
  • The U.S. economy has created 15 million new jobs in three years, a record number.
  • Inflation in the United States has decreased from 9% to 3%.
  • More people have health insurance in the U.S. today than ever before.
  • The racial wealth gap is the smallest it has been in 20 years.
  • The United States is investing more in research and development than ever before.
  • The Biden administration has made the largest investment in public safety ever through the American Rescue Plan.
  • The murder rate saw the sharpest decrease in history last year.
  • The United States is leading international efforts to provide humanitarian assistance to Gaza.
  • The U.S. has revitalized its partnerships and alliances in the Pacific region.

REFERENCES:

  • NATO military alliance.
  • Bipartisan National Security Bill.
  • Chips and Science Act.
  • Bipartisan Infrastructure Law.
  • Affordable Care Act (Obamacare).
  • Voting Rights Act.
  • Freedom to Vote Act.
  • John Lewis Voting Rights Act.
  • PRO Act for worker’s rights.
  • PACT Act for veterans exposed to toxins.

RECOMMENDATIONS:

  • Stand with Ukraine and support its defense against Russian aggression.
  • Strengthen NATO and support new member states.
  • Pass the Bipartisan National Security Bill to enhance U.S. security.
  • Guarantee reproductive rights nationwide and protect healthcare decisions.
  • Continue economic policies that promote job creation and infrastructure development.
  • Implement fair taxation for corporations and the wealthy to ensure economic justice.
  • Expand Medicare and lower prescription drug costs for all Americans.
  • Protect and strengthen Social Security and Medicare.
  • Pass comprehensive immigration reform and secure the border humanely.
  • Address climate change through significant investments in clean energy and jobs.
  • Promote education access from pre-school to college to ensure a competitive workforce.
  • Implement stricter gun control laws, including bans on assault weapons and universal background checks.
  • Support a two-state solution for Israel and Palestine and work towards peace in the Middle East.
  • Harness the promise of artificial intelligence while protecting against its perils.

Hidden and propaganda messages

In this section we try to find out whether the speech is apolitical and propaganda-free.

Remark: We leave it to the reader as an exercise to verify that both the overt and hidden messages found by the LLM below are explicitly stated in the article.

Here we find the hidden and “propaganda” messages in the article:

my $propMess = llm-synthesize([llm-prompt("FindPropagandaMessage"), $txtEN], e => $conf);

text-stats($propMess)
# (chars => 6441 words => 893 lines => 83)

Remark: The prompt “FindPropagandaMessage” has an explicit instruction to say that it is intentionally cynical. It is also marked as being “For fun.”

The LLM result is rendered below.


#% markdown
$propMess.subst(/ ^^ '#' /, '###', :g).subst(/ ^^ (<[A..Z \h \']>+ ':') /, { "### {$0.Str} \n"}, :g)

OVERT MESSAGE

President Biden emphasizes democracy, support for Ukraine, and domestic advancements in his address.

HIDDEN MESSAGE

Biden seeks to consolidate Democratic power by evoking fear of Republican governance and foreign threats.

HIDDEN OPINIONS

  • Democratic policies ensure national and global security effectively.
  • Republican opposition jeopardizes both national unity and international alliances.
  • Historical comparisons highlight current threats to democracy as unprecedented.
  • Support for Ukraine is a moral and strategic imperative for global democracy.
  • Criticism of the Supreme Court’s decisions reflects a push for legislative action on contentious issues.
  • Emphasis on job creation and economic policies aims to showcase Democratic governance success.
  • Investments in infrastructure and technology are crucial for future American prosperity.
  • Health care reforms and education investments underscore a commitment to social welfare.
  • Climate change initiatives are both a moral obligation and economic opportunity.
  • Immigration reforms are positioned as essential to American identity and values.

SUPPORTING ARGUMENTS and QUOTES

  • Comparisons to past crises underscore the urgency of current threats.
  • Criticism of Republican predecessors and Congress members suggests a need for Democratic governance.
  • References to NATO and Ukraine highlight a commitment to international democratic principles.
  • Mention of Supreme Court decisions and calls for legislative action stress the importance of Democratic control.
  • Economic statistics and policy achievements are used to argue for the effectiveness of Democratic governance.
  • Emphasis on infrastructure, technology, and climate investments showcases forward-thinking policies.
  • Discussion of health care and education reforms highlights a focus on social welfare.
  • The portrayal of immigration reforms reflects a foundational American value under Democratic leadership.

DESIRED AUDIENCE OPINION CHANGE

  • See Democratic policies as essential for both national and global security.
  • View Republican opposition as a threat to democracy and unity.
  • Recognize the urgency of supporting Ukraine against foreign aggression.
  • Agree with the need for legislative action on Supreme Court decisions.
  • Appreciate the success of Democratic economic and infrastructure policies.
  • Support Democratic initiatives on climate change as crucial for the future.
  • Acknowledge the importance of health care and education investments.
  • Value immigration reforms as core to American identity and values.
  • Trust in Democratic leadership for navigating global crises.
  • Believe in the effectiveness of Democratic governance for social welfare.

DESIRED AUDIENCE ACTION CHANGE

  • Support Democratic candidates in elections.
  • Advocate for legislative action on contentious Supreme Court decisions.
  • Endorse and rally for Democratic economic and infrastructure policies.
  • Participate in initiatives supporting climate change action.
  • Engage in advocacy for health care and education reforms.
  • Embrace and promote immigration reforms as fundamental to American values.
  • Voice opposition to Republican policies perceived as threats to democracy.
  • Mobilize for international solidarity, particularly regarding Ukraine.
  • Trust in and amplify the successes of Democratic governance.
  • Actively defend democratic principles both nationally and internationally.

MESSAGES

President Biden wants you to believe he is advocating for democracy and progress, but he is actually seeking to consolidate Democratic power and diminish Republican influence.

PERCEPTIONS

President Biden wants you to believe he is a unifier and protector of democratic values, but he’s actually a strategic politician emphasizing Democratic successes and Republican failures.

ELLUL’S ANALYSIS

According to Jacques Ellul’s “Propaganda: The Formation of Men’s Attitudes,” Biden’s address exemplifies modern political propaganda through its strategic framing of issues, historical comparisons, and appeals to democratic ideals. Ellul would likely note the address’s dual function: to solidify in-group unity (among Democratic supporters) and to subtly influence the broader public’s perceptions of domestic and international challenges. The speech leverages crises as opportunities for reinforcing the necessity of Democratic governance, illustrating Ellul’s observation that effective propaganda exploits existing tensions to achieve political objectives.

BERNAYS’ ANALYSIS

Based on Edward Bernays’ “Propaganda” and “Engineering of Consent,” Biden’s speech can be seen as an exercise in shaping public opinion towards Democratic policies and leadership. Bernays would recognize the sophisticated use of symbols (e.g., references to historical events and figures) and emotional appeals to construct a narrative that positions Democratic governance as essential for the nation’s future. The speech’s emphasis on bipartisan achievements and calls for legislative action also reflect Bernays’ insights into the importance of creating a perception of consensus and societal progress.

LIPPMANN’S ANALYSIS

Walter Lippmann’s “Public Opinion” offers a perspective on how Biden’s address attempts to manufacture consent for Democratic policies by presenting a carefully curated version of reality. Lippmann would likely point out the strategic selection of facts, statistics, and stories designed to reinforce the audience’s existing preconceptions and to guide them towards desired conclusions. The address’s focus on bipartisan accomplishments and urgent challenges serves to create an environment where Democratic solutions appear both reasonable and necessary.

FRANKFURT’S ANALYSIS

Harry G. Frankfurt’s “On Bullshit” provides a lens for criticizing the speech’s relationship with truth and sincerity. Frankfurt might argue that while the address purports to be an honest assessment of the nation’s state, it strategically blurs the line between truth and falsehood to serve political ends. The speech’s selective presentation of facts and omission of inconvenient truths could be seen as indicative of a broader political culture where the distinction between lying and misleading is increasingly irrelevant.

NOTE: This AI is tuned specifically to be cynical and politically-minded. Don’t take it as perfect. Run it multiple times and/or go consume the original input to get a second opinion.


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[AA3] Anton Antonov, “LLM aids for processing Putin’s State-Of-The-Nation speech” (2024), RakuForPrediction at WordPress.

[AA4] Anton Antonov, “Comprehension AI Aids for “Can AI Solve Science?”, (2024), RakuForPrediction at WordPress.

[OAIb1] OpenAI team, “New models and developer products announced at DevDay”, (2023), OpenAI/blog.

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::LLaMA Raku package, (2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.

[AAp6] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.

[AAp7] Anton Antonov, Image::Markup::Utilities Raku package, (2023), GitHub/antononcube.

Videos

[AAv1] Anton Antonov, “Jupyter Chatbook LLM cells demo (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Jupyter Chatbook multi cell LLM chats teaser (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv3] Anton Antonov “Integrating Large Language Models with Raku”, (2023), YouTube/@therakuconference6823.

Comprehension AI Aids for “Can AI Solve Science?”

Introduction

In this blog post (notebook) we use the Large Language Model (LLM) prompts:

  • “ThemeTableJSON”
  • “NothingElse”
  • “ExtractArticleWisdom”
  • “FindPropagandaMessage”

to facilitate the reading and comprehension of Stephen Wolfram’s article:

 “Can AI Solve Science?”, [SW1].

Remark: We use “simple” text processing, but since the article has lots of images, multi-modal models would be more appropriate.

Here is an image of the article’s start:

The computations are done with a Raku chatbook. The LLM functions used in the workflows are explained and demonstrated in [SW2, AA1, AA2, AAn1÷AAn4]. The workflows are done with OpenAI’s models. Currently, the models of Google (PaLM) and MistralAI cannot be used with the workflows below because their input token limits are too low.

Structure

The structure of the notebook is as follows:

1. Getting the article’s text and setup: Standard ingestion and setup.
2. Article’s structure: TL;DR via a table of themes.
3. Flowcharts: Get flowcharts relating the article’s concepts.
4. Extract article wisdom: Get a summary and extract ideas, quotes, references, etc.
5. Hidden messages and propaganda: Reading it with a conspiracy theorist hat on.

Setup

Here we load a few packages and define ingestion functions:

use HTTP::Tiny;
use JSON::Fast;
use Data::Reshapers;

sub text-stats(Str:D $txt) { <chars words lines> Z=> [$txt.chars, $txt.words.elems, $txt.lines.elems] };

sub strip-html(Str $html) returns Str {

    my $res = $html
    .subst(/'<style'.*?'</style>'/, :g)
    .subst(/'<script'.*?'</script>'/, :g)
    .subst(/'<'.*?'>'/, :g)
    .subst(/'&lt;'.*?'&gt;'/, :g)
    .subst(/[\v\s*] ** 2..*/, "\n\n", :g);

    return $res;
}
&strip-html
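As a quick sanity check (not part of the original workflow), the helpers above can be exercised on a tiny, made-up HTML snippet:

```raku
# Hypothetical snippet: a paragraph with inline markup and a script block.
my $sample = '<p>Hello <b>world</b></p><script>alert(1)</script>';

# The script block and all tags are stripped, leaving only the text.
say strip-html($sample);               # Hello world
say text-stats(strip-html($sample));   # (chars => 11 words => 2 lines => 1)
```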

Ingest text

Here we get the plain text of the article:

my $htmlArticleOrig = HTTP::Tiny.get("https://writings.stephenwolfram.com/2024/03/can-ai-solve-science/")<content>.decode;

text-stats($htmlArticleOrig);

# (chars => 216219 words => 19867 lines => 1419)

Here we strip the HTML code from the article:

my $txtArticleOrig = strip-html($htmlArticleOrig);

text-stats($txtArticleOrig);

# (chars => 100657 words => 16117 lines => 470)

Here we clean the article’s text:

my $txtArticle = $txtArticleOrig.substr(0, $txtArticleOrig.index("Posted in:"));

text-stats($txtArticle);

# (chars => 98011 words => 15840 lines => 389)

LLM access configuration

Here we configure LLM access — we use OpenAI’s model “gpt-4-turbo-preview” since it allows inputs with 128K tokens:

my $conf = llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.7);

$conf.Hash.elems

# 22

Themes

Here we extract the themes found in the article and tabulate them (using the prompt “ThemeTableJSON”):

my $tblThemes = llm-synthesize(llm-prompt("ThemeTableJSON")($txtArticle, "article", 50), e => $conf, form => sub-parser('JSON'):drop);

$tblThemes.&dimensions;

# (12 2)
#% html
$tblThemes ==> data-translation(field-names=><theme content>)
  • Introduction to AI in Science: Discusses the potential of AI in solving scientific questions and the belief in AI’s eventual capability to do everything, including science.
  • AI’s Role and Limitations: Explores deeper questions about AI in science, its role as a practical tool or a fundamentally new method, and its limitations due to computational irreducibility.
  • AI Predictive Capabilities: Examines AI’s ability to predict outcomes and its reliance on machine learning and neural networks, highlighting limitations in predicting computational processes.
  • AI in Identifying Computational Reducibility: Discusses how AI can assist in identifying pockets of computational reducibility within the broader context of computational irreducibility.
  • AI’s Application Beyond Human Tasks: Considers if AI can understand and predict natural processes directly, beyond emulating human intelligence or tasks.
  • Solving Equations with AI: Explores the potential of AI in solving equations, particularly in areas where traditional methods are impractical or insufficient.
  • AI for Multicomputation: Discusses AI’s ability to navigate multiway systems and its potential in finding paths or solutions in complex computational spaces.
  • Exploring Systems with AI: Looks at how AI can assist in exploring spaces of systems, identifying rules or systems that exhibit specific desired characteristics.
  • Science as Narrative: Explores the idea of science providing a human-accessible narrative for natural phenomena and how AI might generate or contribute to scientific narratives.
  • Finding What’s Interesting: Discusses the challenge of determining what’s interesting in science and how AI might assist in identifying interesting phenomena or directions.
  • Beyond Exact Sciences: Explores the potential of AI in extending the domain of exact sciences to include more subjective or less formalized areas of knowledge.
  • Conclusion: Summarizes the potential and limitations of AI in science, emphasizing the combination of AI with computational paradigms for advancing science.

Remark: A fair number of LLMs return their results within Markdown code block delimiters (like “```”). Hence, (1) the (WL-specified) prompt “ThemeTableJSON” does not use Interpreter["JSON"], but Interpreter["String"], and (2) above we use the sub-parser ‘JSON’ with dropping of non-JSON strings in order to convert the LLM output into a Raku data structure.
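To illustrate the remark above, here is a small, self-contained sketch (with made-up LLM output and explicit subst calls) of stripping Markdown code fences and parsing the remaining JSON, which is roughly what the sub-parser ‘JSON’ with dropping of non-JSON strings automates:

```raku
use JSON::Fast;

# Made-up LLM reply: a JSON payload wrapped in Markdown code fences.
my $fence = '`' x 3;                 # avoids literal fences inside this example
my $llm-output = "Here is the table:\n$fence" ~ "json\n" ~
    '[ { "theme": "Intro", "content": "Overview." } ]' ~ "\n$fence\n";

# Drop everything outside the fenced payload, then parse what remains.
# (Scalar variables in Raku regexes match their contents literally.)
my $json = $llm-output.subst(/ .* $fence 'json' /, '').subst(/ $fence .* /, '');
my @rows = |from-json($json);
say @rows[0]<theme>;   # Intro
```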


Flowcharts

In this section we use LLMs to get Mermaid-JS flowcharts that correspond to the content of [SW1].

Remark: Below, in order to display Mermaid-JS diagrams, we use both the package “WWW::MermaidInk”, [AAp7], and the dedicated mermaid magic cell of the Raku Chatbook, [AA6].

Big picture concepts

Here we generate Mermaid-JS flowchart for the “big picture” concepts:

my $mmdBigPicture = 
  llm-synthesize([
    "Create a concise mermaid-js graph for the connections between the big concepts in the article:\n\n", 
    $txtArticle, 
    llm-prompt("NothingElse")("correct mermaid-js")
  ], e => $conf)

Here we define the “big picture” styling theme:

my $mmdBigPictureTheme = q:to/END/;
%%{init: {'theme': 'neutral'}}%%
END

Here we create the flowchart from the LLM’s specification:

mermaid-ink($mmdBigPictureTheme.chomp ~ $mmdBigPicture.subst(/ '```mermaid' | '```'/, :g), background => 'Cornsilk', format => 'svg')

We made several “big picture” flowchart generations. Here is the result of another attempt:

#% mermaid
graph TD;
    AI[Artificial Intelligence] --> CompSci[Computational Science]
    AI --> CompThink[Computational Thinking]
    AI --> NewTech[New Technology]
    CompSci --> Physics
    CompSci --> Math[Mathematics]
    CompSci --> Ruliology
    CompThink --> SoftwareDesign[Software Design]
    CompThink --> WolframLang[Wolfram Language]
    NewTech --> WolframAlpha["Wolfram|Alpha"]
    Physics --> Philosophy
    Math --> Philosophy
    Ruliology --> Philosophy
    SoftwareDesign --> Education
    WolframLang --> Education
    WolframAlpha --> Education

    %% Styling
    classDef default fill:#8B0000,stroke:#333,stroke-width:2px;

Fine grained

Here we derive a flowchart that refers to more detailed, finer grained concepts:

my $mmdFineGrained = 
  llm-synthesize([
    "Create a mermaid-js flowchart with multiple blocks and multiple connections for the relationships between concepts in the article:\n\n", 
    $txtArticle, 
    "Use the concepts in the JSON table:", 
    $tblThemes, 
    llm-prompt("NothingElse")("correct mermaid-js")
  ], e => $conf)

Here we define the “fine grained” styling theme:

my $mmdFineGrainedTheme = q:to/END/;
%%{init: {'theme': 'base','themeVariables': {'backgroundColor': '#FFF'}}}%%
END

Here we create the flowchart from the LLM’s specification:

mermaid-ink($mmdFineGrainedTheme.chomp ~ $mmdFineGrained.subst(/ '```mermaid' | '```'/, :g), format => 'svg')

We made several “fine grained” flowchart generations. Here is the result of another attempt:

#% mermaid
graph TD

    AI["AI"] -->|Potential & Limitations| Science["Science"]
    AI -->|Leverages| CR["Computational Reducibility"]
    AI -->|Fails with| CI["Computational Irreducibility"]
    
    Science -->|Domain of| PS["Physical Sciences"]
    Science -->|Expanding to| S["'Exact' Sciences Beyond Traditional Domains"]
    Science -.->|Foundational Role of| CR
    Science -.->|Limited by| CI
    
    PS -->|Traditional Formalizations via| Math["Mathematics/Mathematical Formulas"]
    PS -->|Now Leveraging| AI_Measurements["AI Measurements"]
    
    S -->|Formalizing with| CL["Computational Language"]
    S -->|Leverages| AI_Measurements
    S -->|Future Frontiers with| AI
    
    AI_Measurements -->|Interpretation Challenge| BlackBox["'Black-Box' Nature"]
    AI_Measurements -->|Potential for| NewScience["New Science Discoveries"]
    
    CL -->|Key to Structuring| AI_Results["AI Results"]
    CL -->|Enables Irreducible Computations for| Discovery["Discovery"]
    
    Math -.->|Transitioning towards| CL
    Math -->|Limits when needing 'Precision'| AI_Limits["AI's Limitations"]
    
    Discovery -.->|Not directly achievable via| AI
    
    BlackBox -->|Requires Human| Interpretation["Interpretation"]
    
    CR -->|Empowered by| AI & ML_Techniques["AI & Machine Learning Techniques"]
    CI -.->|Challenge for| AI & ML_Techniques
   
    PS --> Observations["New Observations/Measurements"] --> NewDirections["New Scientific Directions"]
    Observations --> AI_InterpretedPredictions["AI-Interpreted Predictions"]
    NewDirections -.-> AI_Predictions["AI Predictions"] -.-> CI
    NewDirections --> AI_Discoveries["AI-Enabled Discoveries"] -.-> CR

    AI_Discoveries --> NewParadigms["New Paradigms/Concepts"] -.-> S
    AI_InterpretedPredictions -.-> AI_Measurements

    %% Styling
    classDef default fill:#f9f,stroke:#333,stroke-width:2px;
    classDef highlight fill:#bbf,stroke:#006,stroke-width:4px;
    classDef imp fill:#ffb,stroke:#330,stroke-width:4px;
    class PS,CL highlight;
    class AI_Discoveries,NewParadigms imp;

Summary and ideas

Here we get a summary and extract ideas, quotes, and references from the article:

my $sumIdea = llm-synthesize(llm-prompt("ExtractArticleWisdom")($txtArticle), e => $conf);

text-stats($sumIdea)

# (chars => 7386 words => 1047 lines => 78)

The result is rendered below.


#% markdown
$sumIdea.subst(/ ^^ '#' /, '###', :g)

SUMMARY

Stephen Wolfram’s writings explore the capabilities and limitations of AI in the realm of science, discussing how AI can assist in scientific discovery and understanding but also highlighting its limitations due to computational irreducibility and the need for human interpretation and creativity.

IDEAS:

  • AI has made surprising successes but cannot solve all scientific problems due to computational irreducibility.
  • Large Language Models (LLMs) provide a new kind of interface for scientific work, offering high-level autocomplete for scientific knowledge.
  • The transition to computational representation of the world is transforming science, with AI playing a significant role in accessing and exploring this computational realm.
  • AI, particularly through neural networks and machine learning, offers tools for predicting scientific outcomes, though its effectiveness is bounded by the complexity of the systems it attempts to model.
  • Computational irreducibility limits the ability of AI to predict or solve all scientific phenomena, ensuring that surprises and new discoveries remain a fundamental aspect of science.
  • Despite AI’s limitations, it has potential in identifying pockets of computational reducibility, streamlining the discovery of scientific knowledge.
  • AI’s success in areas like visual recognition and language generation suggests potential for contributing to scientific methodologies and understanding, though its ability to directly engage with raw natural processes is less certain.
  • AI techniques, including neural networks and machine learning, have shown promise in areas like solving equations and exploring multicomputational processes, but face challenges due to computational irreducibility.
  • The role of AI in generating human-understandable narratives for scientific phenomena is explored, highlighting the potential for AI to assist in identifying interesting and meaningful scientific inquiries.
  • The exploration of “AI measurements” opens up new possibilities for formalizing and quantifying aspects of science that have traditionally been qualitative or subjective, potentially expanding the domain of exact sciences.

QUOTES:

  • “AI has the potential to give us streamlined ways to find certain kinds of pockets of computational reducibility.”
  • “Computational irreducibility is what will prevent us from ever being able to completely ‘solve science’.”
  • “The AI is doing ‘shallow computation’, but when there’s computational irreducibility one needs irreducible, deep computation to work out what will happen.”
  • “AI measurements are potentially a much richer source of formalizable material.”
  • “AI… is not something built to ‘go out into the wilds of the ruliad’, far from anything already connected to humans.”
  • “Despite AI’s limitations, it has potential in identifying pockets of computational reducibility.”
  • “AI techniques… have shown promise in areas like solving equations and exploring multicomputational processes.”
  • “AI’s success in areas like visual recognition and language generation suggests potential for contributing to scientific methodologies.”
  • “There’s no abstract notion of ‘interestingness’ that an AI or anything can ‘go out and discover’ ahead of our choices.”
  • “The whole story of things like trained neural nets that we’ve discussed here is a story of leveraging computational reducibility.”

HABITS:

  • Continuously exploring the capabilities and limitations of AI in scientific discovery.
  • Engaging in systematic experimentation to understand how AI tools can assist in scientific processes.
  • Seeking to identify and utilize pockets of computational reducibility where AI can be most effective.
  • Exploring the use of neural networks and machine learning for predicting and solving scientific problems.
  • Investigating the potential for AI to assist in creating human-understandable narratives for complex scientific phenomena.
  • Experimenting with “AI measurements” to quantify and formalize traditionally qualitative aspects of science.
  • Continuously refining and expanding computational language to better interface with AI capabilities.
  • Engaging with and contributing to discussions on the role of AI in the future of science and human understanding.
  • Actively seeking new methodologies and innovations in AI that could impact scientific discovery.
  • Evaluating the potential for AI to identify interesting and meaningful scientific inquiries through analysis of large datasets.

FACTS:

  • Computational irreducibility ensures that surprises and new discoveries remain a fundamental aspect of science.
  • AI’s effectiveness in scientific modeling is bounded by the complexity of the systems it attempts to model.
  • AI can identify pockets of computational reducibility, streamlining the discovery of scientific knowledge.
  • Neural networks and machine learning offer tools for predicting outcomes but face challenges due to computational irreducibility.
  • AI has shown promise in areas like solving equations and exploring multicomputational processes.
  • The potential of AI in generating human-understandable narratives for scientific phenomena is actively being explored.
  • “AI measurements” offer new possibilities for formalizing aspects of science that have been qualitative or subjective.
  • The transition to computational representation of the world is transforming science, with AI playing a significant role.
  • Machine learning techniques can be very useful for providing approximate answers in scientific inquiries.
  • AI’s ability to directly engage with raw natural processes is less certain, despite successes in human-like tasks.

REFERENCES:

  • Stephen Wolfram’s writings on AI and science.
  • Large Language Models (LLMs) as tools for scientific work.
  • The concept of computational irreducibility and its implications for science.
  • Neural networks and machine learning techniques in scientific prediction and problem-solving.
  • The role of AI in creating human-understandable narratives for scientific phenomena.
  • The use of “AI measurements” in expanding the domain of exact sciences.
  • The potential for AI to assist in identifying interesting and meaningful scientific inquiries.

RECOMMENDATIONS:

  • Explore the use of AI and neural networks for identifying pockets of computational reducibility in scientific research.
  • Investigate the potential of AI in generating human-understandable narratives for complex scientific phenomena.
  • Utilize “AI measurements” to formalize and quantify traditionally qualitative aspects of science.
  • Engage in systematic experimentation to understand the limitations and capabilities of AI in scientific discovery.
  • Consider the role of computational irreducibility in shaping the limitations of AI in science.
  • Explore the potential for AI to assist in identifying interesting and meaningful scientific inquiries.
  • Continuously refine and expand computational language to better interface with AI capabilities in scientific research.
  • Investigate new methodologies and innovations in AI that could impact scientific discovery.
  • Consider the implications of AI’s successes in human-like tasks for its potential contributions to scientific methodologies.
  • Explore the use of machine learning techniques for providing approximate answers in scientific inquiries where precision is less critical.

Hidden and propaganda messages

In this section we convince ourselves that the article is apolitical and propaganda-free.

Remark: We leave it to the reader as an exercise to verify that both the overt and hidden messages found by the LLM below are explicitly stated in the article.

Here we find the hidden and “propaganda” messages in the article:

my $propMess = llm-synthesize([llm-prompt("FindPropagandaMessage"), $txtArticle], e => $conf);

text-stats($propMess)

# (chars => 6193 words => 878 lines => 64)

Remark: The prompt “FindPropagandaMessage” has an explicit instruction to say that it is intentionally cynical. It is also marked as being “For fun.”

The LLM result is rendered below.


#% markdown
$propMess.subst(/ ^^ '#' /, '###', :g).subst(/ (<[A..Z \h \']>+ ':') /, { "### {$0.Str} \n"}, :g)
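For readers unfamiliar with the two `subst` calls above: the first demotes top-level `#` headers to level-3 markdown headers, and the second turns ALL-CAPS labels ending with “:” into level-3 headers. Here is a minimal, self-contained Raku sketch of the same transformation on a made-up sample string (for illustration only):

```raku
# Sample fragment in the style of the LLM output (made up for illustration).
my $s = "# Report\nHIDDEN MESSAGE:\nSome text.";

# 1) Demote a '#' at the start of a line to '###'.
# 2) Turn runs of uppercase letters, spaces, and apostrophes that end
#    with ':' (e.g. "HIDDEN MESSAGE:") into level-3 markdown headers.
say $s
    .subst(/ ^^ '#' /, '###', :g)
    .subst(/ (<[A..Z \h \']>+ ':') /, { "### {$0.Str} \n" }, :g);
```

Note that the second regex can also fire inside mixed-case lines that happen to contain an uppercase run followed by “:”, which is the source of the occasional rendering artifact in the output below.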

OVERT MESSAGE:

Stephen Wolfram evaluates the potential and limitations of AI in advancing science.

HIDDEN MESSAGE:

Despite AI’s growth, irreducible complexity limits its scientific problem-solving capacity.

HIDDEN OPINIONS:

  • AI can leverage computational reducibility akin to human minds.
  • Traditional mathematical methods surpass AI in solving precise equations.
  • AI’s “shallow computation” struggles with computational irreducibility.
  • AI can provide practical tools for science within its computational reducibility limits.
  • AI’s effectiveness is contingent on approximate answers, failing at precise perfection.
  • AI introduces a new, human-like method to harness computational reducibility.
  • Fundamental discoveries are more likely through irreducible computations, not AI.
  • Combining AI with the computational paradigm offers the best science advancement path.
  • AI’s potential in science is hyped beyond its actual foundational impact.
  • AI’s role in science is more about aiding existing processes than groundbreaking discoveries.

SUPPORTING ARGUMENTS and QUOTES:

  • “AI is doing ‘shallow computation’, but when there’s computational irreducibility one needs irreducible, deep computation to work out what will happen.”
  • “Typical AI approach to science doesn’t involve explicitly ‘formalizing things’.”
  • “In terms of fundamental potential for discovery, AI pales in comparison to what we can build from the computational paradigm.”
  • “AI can be very useful if an approximate (‘80%’) answer is good enough.”
  • “AI measurements seem to have a certain immediate ‘subjectivity’.”
  • “AI introduces a new—and rather human-like—way of leveraging computational reducibility.”
  • “AI’s effectiveness is contingent on approximate answers, failing at precise perfection.”
  • “AI’s role in science is more about aiding existing processes than groundbreaking discoveries.”
  • “Irreducible computations that we do offer greater potential for discovery than typical AI.”

DESIRED AUDIENCE OPINION CHANGE:

  • Appreciate the limitations and strengths of AI in scientific exploration.
  • Recognize the irreplaceable value of human insight in science.
  • View AI as a tool, not a replacement, for traditional scientific methods.
  • Embrace computational irreducibility as a barrier and boon to discovery.
  • Acknowledge the need for combining AI with computational paradigms.
  • Understand that AI’s role is to augment, not overshadow, human-led science.
  • Realize the necessity of approximate solutions in AI-driven science.
  • Foster realistic expectations from AI in making scientific breakthroughs.
  • Advocate for deeper computational approaches alongside AI in science.
  • Encourage interdisciplinary approaches combining AI with formal sciences.

DESIRED AUDIENCE ACTION CHANGE:

  • Support research combining AI with computational paradigms.
  • Encourage the use of AI for practical, approximate scientific solutions.
  • Advocate for AI’s role as a supplementary tool in science.
  • Push for education that integrates AI with traditional scientific methods.
  • Promote the study of computational irreducibility alongside AI.
  • Emphasize AI’s limitations in scientific discussions and funding.
  • Inspire new approaches to scientific exploration using AI.
  • Foster collaboration between AI researchers and traditional scientists.
  • Encourage critical evaluation of AI’s potential in groundbreaking discoveries.
  • Support initiatives that aim to combine AI with human insight in science.

MESSAGES:

Stephen Wolfram wants you to believe AI can advance science, but he’s actually saying its foundational impact is limited by computational irreducibility.

PERCEPTIONS:

Wolfram wants you to see him as optimistic about AI in science, but he’s actually cautious about its ability to make fundamental breakthroughs.

ELLUL’S ANALYSIS:

Jacques Ellul would likely interpret Wolfram’s findings as a validation of the view that technological systems, including AI, are inherently limited by the complexity of human thought and the natural world. The presence of computational irreducibility underscores the unpredictability and uncontrollability that Ellul warned technology could not tame, suggesting that even advanced AI cannot fully solve or understand all scientific problems, thus maintaining a degree of human autonomy and unpredictability in the face of technological advancement.

BERNAYS’ ANALYSIS:

Edward Bernays might view Wolfram’s exploration of AI in science through the lens of public perception and manipulation, arguing that while AI presents a new frontier for scientific exploration, its effectiveness and limitations must be communicated carefully to avoid both undue skepticism and unrealistic expectations. Bernays would likely emphasize the importance of shaping public opinion to support the use of AI as a tool that complements human capabilities rather than replacing them, ensuring that society remains engaged and supportive of AI’s role in scientific advancement.

LIPPMANN’S ANALYSIS:

Walter Lippmann would likely focus on the implications of Wolfram’s findings for the “pictures in our heads,” or public understanding of AI’s capabilities and limitations in science. Lippmann might argue that the complexity of AI and computational irreducibility necessitates expert mediation to accurately convey these concepts to the public, ensuring that society’s collective understanding of AI in science is based on informed, rather than simplistic or sensationalized, interpretations.

FRANKFURT’S ANALYSIS:

Harry G. Frankfurt might critique the discourse around AI in science as being fraught with “bullshit,” where speakers and proponents of AI might not necessarily lie about its capabilities, but could fail to pay adequate attention to the truth of computational irreducibility and the limitations it imposes. Frankfurt would likely appreciate Wolfram’s candid assessment of AI, seeing it as a necessary corrective to overly optimistic or vague claims about AI’s potential to revolutionize science.

NOTE: This AI is tuned specifically to be cynical and politically-minded. Don’t take it as perfect. Run it multiple times and/or go consume the original input to get a second opinion.


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[SW1] Stephen Wolfram, “Can AI Solve Science?”, (2024), Stephen Wolfram’s writings.

[SW2] Stephen Wolfram, “The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language”, (2023), Stephen Wolfram’s writings.

Notebooks

[AAn1] Anton Antonov, “Workflows with LLM functions (in WL)”, (2023), Wolfram Community.

[AAn2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), Wolfram Community.

[AAn3] Anton Antonov, “LLM aids for processing Putin’s State-Of-The-Nation speech given on 2/29/2024”, (2024), Wolfram Community.

[AAn4] Anton Antonov, “LLM over Trump vs. Anderson: analysis of the slip opinion of the Supreme Court of the United States”, (2024), Wolfram Community.

[AAn5] Anton Antonov, “Markdown to Mathematica converter”, (2023), Wolfram Community.

[AAn6] Anton Antonov, “Monte-Carlo demo notebook conversion via LLMs”, (2024), Wolfram Community.

Packages, repositories

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp7] Anton Antonov, WWW::MermaidInk Raku package, (2023-2024), GitHub/antononcube.

[DMr1] Daniel Miessler, Fabric, (2024), GitHub/danielmiessler.