Comprehension AI Aids for “Can AI Solve Science?”

Introduction

In this blog post (notebook) we use Large Language Model (LLM) prompts (such as "ThemeTableJSON", "ExtractArticleWisdom", and "FindPropagandaMessage") to facilitate the reading and comprehension of Stephen Wolfram's article:

 “Can AI Solve Science?”, [SW1].

Remark: We use "simple" text processing, but since the article has lots of images, multi-modal models would be more appropriate.

Here is an image of the article's start:

The computations are done with a Raku chatbook. The LLM functions used in the workflows are explained and demonstrated in [SW2, AA1, AA2, AAn1 ÷ AAn4]. The workflows are done with OpenAI's models. Currently, Google's (PaLM) and MistralAI's models cannot be used with the workflows below because their input token limits are too low.

Structure

The structure of the notebook is as follows:

  1. Getting the article's text and setup
    Standard ingestion and setup.
  2. Article's structure
    TL;DR via a table of themes.
  3. Flowcharts
    Get flowcharts relating the article's concepts.
  4. Extract article wisdom
    Get a summary and extract ideas, quotes, references, etc.
  5. Hidden messages and propaganda
    Reading it with a conspiracy theorist hat on.

Setup

Here we load a few packages and define ingestion functions:

use HTTP::Tiny;
use JSON::Fast;
use Data::Reshapers;

sub text-stats(Str:D $txt) { <chars words lines> Z=> [$txt.chars, $txt.words.elems, $txt.lines.elems] };

sub strip-html(Str $html) returns Str {

    my $res = $html
    .subst(/'<style'.*?'</style>'/, :g)
    .subst(/'<script'.*?'</script>'/, :g)
    .subst(/'<'.*?'>'/, :g)
    .subst(/'&lt;'.*?'&gt;'/, :g)
    .subst(/[\v\s*] ** 2..*/, "\n\n", :g);

    return $res;
}
&strip-html
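
As a quick sanity check, here is how the two helpers behave on a small, made-up HTML fragment (a hypothetical example, not part of the article ingestion):

my $sample = '<p>Can <b>AI</b> solve science?</p><script>var x = 1;</script>';

# Remove the script element and the remaining tags
strip-html($sample);

# Can AI solve science?

text-stats(strip-html($sample));

# (chars => 21 words => 4 lines => 1)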

Ingest text

Here we get the plain text of the article:

my $htmlArticleOrig = HTTP::Tiny.new.get("https://writings.stephenwolfram.com/2024/03/can-ai-solve-science/")<content>.decode;

text-stats($htmlArticleOrig);

# (chars => 216219 words => 19867 lines => 1419)

Here we strip the HTML code from the article:

my $txtArticleOrig = strip-html($htmlArticleOrig);

text-stats($txtArticleOrig);

# (chars => 100657 words => 16117 lines => 470)

Here we clean the article’s text:

my $txtArticle = $txtArticleOrig.substr(0, $txtArticleOrig.index("Posted in:"));

text-stats($txtArticle);

# (chars => 98011 words => 15840 lines => 389)

LLM access configuration

Here we configure LLM access — we use OpenAI’s model “gpt-4-turbo-preview” since it allows inputs with 128K tokens:

my $conf = llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.7);

$conf.Hash.elems

# 22
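
To see what the configuration object holds, we can list the keys of its Hash form (shown as a sketch; the exact key names depend on the version of "LLM::Functions"):

.say for $conf.Hash.keys.sort;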

Themes

Here we extract the themes found in the article and tabulate them (using the prompt “ThemeTableJSON”):

my $tblThemes = llm-synthesize(llm-prompt("ThemeTableJSON")($txtArticle, "article", 50), e => $conf, form => sub-parser('JSON'):drop);

$tblThemes.&dimensions;

# (12 2)
#% html
$tblThemes ==> data-translation(field-names=><theme content>)
| theme | content |
|---|---|
| Introduction to AI in Science | Discusses the potential of AI in solving scientific questions and the belief in AI’s eventual capability to do everything, including science. |
| AI’s Role and Limitations | Explores deeper questions about AI in science, its role as a practical tool or a fundamentally new method, and its limitations due to computational irreducibility. |
| AI Predictive Capabilities | Examines AI’s ability to predict outcomes and its reliance on machine learning and neural networks, highlighting limitations in predicting computational processes. |
| AI in Identifying Computational Reducibility | Discusses how AI can assist in identifying pockets of computational reducibility within the broader context of computational irreducibility. |
| AI’s Application Beyond Human Tasks | Considers if AI can understand and predict natural processes directly, beyond emulating human intelligence or tasks. |
| Solving Equations with AI | Explores the potential of AI in solving equations, particularly in areas where traditional methods are impractical or insufficient. |
| AI for Multicomputation | Discusses AI’s ability to navigate multiway systems and its potential in finding paths or solutions in complex computational spaces. |
| Exploring Systems with AI | Looks at how AI can assist in exploring spaces of systems, identifying rules or systems that exhibit specific desired characteristics. |
| Science as Narrative | Explores the idea of science providing a human-accessible narrative for natural phenomena and how AI might generate or contribute to scientific narratives. |
| Finding What’s Interesting | Discusses the challenge of determining what’s interesting in science and how AI might assist in identifying interesting phenomena or directions. |
| Beyond Exact Sciences | Explores the potential of AI in extending the domain of exact sciences to include more subjective or less formalized areas of knowledge. |
| Conclusion | Summarizes the potential and limitations of AI in science, emphasizing the combination of AI with computational paradigms for advancing science. |

Remark: A fair number of LLMs return their results within Markdown code block delimiters (like "```"). Hence, (1) the (WL-specified) prompt "ThemeTableJSON" does not use Interpreter["JSON"] but Interpreter["String"], and (2) above we use the sub-parser 'JSON' with dropping of non-JSON strings in order to convert the LLM output into a Raku data structure.
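
To illustrate what that sub-parser effectively does, here is a rough, stand-alone sketch over a made-up, fenced LLM answer: the Markdown fence markers are dropped and the remaining string is parsed as JSON (using the already loaded "JSON::Fast"). The real sub-parser is more general; this is only an approximation:

# Hypothetical LLM answer wrapped in a Markdown code fence
my $llmAnswer = '```json' ~ "\n" ~ '[{"theme": "Introduction", "content": "AI in science."}]' ~ "\n" ~ '```';

# Drop the fence markers and parse the rest as JSON
my @rows = from-json($llmAnswer.subst(/ '```json' | '```' /, :g).trim);

@rows.map(*<theme>);

# (Introduction)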


Flowcharts

In this section we use LLMs to get Mermaid-JS flowcharts that correspond to the content of [SW1].

Remark: Below, in order to display Mermaid-JS diagrams, we use both the package "WWW::MermaidInk", [AAp7], and the dedicated mermaid magic cell of Raku Chatbook, [AAp6].

Big picture concepts

Here we generate Mermaid-JS flowchart for the “big picture” concepts:

my $mmdBigPicture = 
  llm-synthesize([
    "Create a concise mermaid-js graph for the connections between the big concepts in the article:\n\n", 
    $txtArticle, 
    llm-prompt("NothingElse")("correct mermaid-js")
  ], e => $conf)

Here we define “big picture” styling theme:

my $mmdBigPictureTheme = q:to/END/;
%%{init: {'theme': 'neutral'}}%%
END

Here we create the flowchart from LLM’s specification:

mermaid-ink($mmdBigPictureTheme.chomp ~ $mmdBigPicture.subst(/ '```mermaid' | '```'/, :g), background => 'Cornsilk', format => 'svg')

We made several “big picture” flowchart generations. Here is the result of another attempt:

#% mermaid
graph TD;
    AI[Artificial Intelligence] --> CompSci[Computational Science]
    AI --> CompThink[Computational Thinking]
    AI --> NewTech[New Technology]
    CompSci --> Physics
    CompSci --> Math[Mathematics]
    CompSci --> Ruliology
    CompThink --> SoftwareDesign[Software Design]
    CompThink --> WolframLang[Wolfram Language]
    NewTech --> WolframAlpha["Wolfram|Alpha"]
    Physics --> Philosophy
    Math --> Philosophy
    Ruliology --> Philosophy
    SoftwareDesign --> Education
    WolframLang --> Education
    WolframAlpha --> Education

    %% Styling
    classDef default fill:#8B0000,stroke:#333,stroke-width:2px;

Fine grained

Here we derive a flowchart that refers to more detailed, finer grained concepts:

my $mmdFineGrained = 
  llm-synthesize([
    "Create a mermaid-js flowchart with multiple blocks and multiple connections for the relationships between concepts in the article:\n\n", 
    $txtArticle, 
    "Use the concepts in the JSON table:", 
    $tblThemes, 
    llm-prompt("NothingElse")("correct mermaid-js")
  ], e => $conf)

Here we define “fine grained” styling theme:

my $mmdFineGrainedTheme = q:to/END/;
%%{init: {'theme': 'base','themeVariables': {'backgroundColor': '#FFF'}}}%%
END

Here we create the flowchart from LLM’s specification:

mermaid-ink($mmdFineGrainedTheme.chomp ~ $mmdFineGrained.subst(/ '```mermaid' | '```'/, :g), format => 'svg')

We made several “fine grained” flowchart generations. Here is the result of another attempt:

#% mermaid
graph TD

    AI["AI"] -->|Potential & Limitations| Science["Science"]
    AI -->|Leverages| CR["Computational Reducibility"]
    AI -->|Fails with| CI["Computational Irreducibility"]
    
    Science -->|Domain of| PS["Physical Sciences"]
    Science -->|Expanding to| S["'Exact' Sciences Beyond Traditional Domains"]
    Science -.->|Foundational Role of| CR
    Science -.->|Limited by| CI
    
    PS -->|Traditional Formalizations via| Math["Mathematics/Mathematical Formulas"]
    PS -->|Now Leveraging| AI_Measurements["AI Measurements"]
    
    S -->|Formalizing with| CL["Computational Language"]
    S -->|Leverages| AI_Measurements
    S -->|Future Frontiers with| AI
    
    AI_Measurements -->|Interpretation Challenge| BlackBox["'Black-Box' Nature"]
    AI_Measurements -->|Potential for| NewScience["New Science Discoveries"]
    
    CL -->|Key to Structuring| AI_Results["AI Results"]
    CL -->|Enables Irreducible Computations for| Discovery["Discovery"]
    
    Math -.->|Transitioning towards| CL
    Math -->|Limits when needing 'Precision'| AI_Limits["AI's Limitations"]
    
    Discovery -.->|Not directly achievable via| AI
    
    BlackBox -->|Requires Human| Interpretation["Interpretation"]
    
    CR -->|Empowered by| AI & ML_Techniques["AI & Machine Learning Techniques"]
    CI -.->|Challenge for| AI & ML_Techniques
   
    PS --> Observations["New Observations/Measurements"] --> NewDirections["New Scientific Directions"]
    Observations --> AI_InterpretedPredictions["AI-Interpreted Predictions"]
    NewDirections -.-> AI_Predictions["AI Predictions"] -.-> CI
    NewDirections --> AI_Discoveries["AI-Enabled Discoveries"] -.-> CR

    AI_Discoveries --> NewParadigms["New Paradigms/Concepts"] -.-> S
    AI_InterpretedPredictions -.-> AI_Measurements

    %% Styling
    classDef default fill:#f9f,stroke:#333,stroke-width:2px;
    classDef highlight fill:#bbf,stroke:#006,stroke-width:4px;
    classDef imp fill:#ffb,stroke:#330,stroke-width:4px;
    class PS,CL highlight;
    class AI_Discoveries,NewParadigms imp;

Summary and ideas

Here we get a summary and extract ideas, quotes, and references from the article:

my $sumIdea = llm-synthesize(llm-prompt("ExtractArticleWisdom")($txtArticle), e => $conf);

text-stats($sumIdea)

# (chars => 7386 words => 1047 lines => 78)

The result is rendered below.


#% markdown
$sumIdea.subst(/ ^^ '#' /, '###', :g)

SUMMARY

Stephen Wolfram’s writings explore the capabilities and limitations of AI in the realm of science, discussing how AI can assist in scientific discovery and understanding but also highlighting its limitations due to computational irreducibility and the need for human interpretation and creativity.

IDEAS:

  • AI has made surprising successes but cannot solve all scientific problems due to computational irreducibility.
  • Large Language Models (LLMs) provide a new kind of interface for scientific work, offering high-level autocomplete for scientific knowledge.
  • The transition to computational representation of the world is transforming science, with AI playing a significant role in accessing and exploring this computational realm.
  • AI, particularly through neural networks and machine learning, offers tools for predicting scientific outcomes, though its effectiveness is bounded by the complexity of the systems it attempts to model.
  • Computational irreducibility limits the ability of AI to predict or solve all scientific phenomena, ensuring that surprises and new discoveries remain a fundamental aspect of science.
  • Despite AI’s limitations, it has potential in identifying pockets of computational reducibility, streamlining the discovery of scientific knowledge.
  • AI’s success in areas like visual recognition and language generation suggests potential for contributing to scientific methodologies and understanding, though its ability to directly engage with raw natural processes is less certain.
  • AI techniques, including neural networks and machine learning, have shown promise in areas like solving equations and exploring multicomputational processes, but face challenges due to computational irreducibility.
  • The role of AI in generating human-understandable narratives for scientific phenomena is explored, highlighting the potential for AI to assist in identifying interesting and meaningful scientific inquiries.
  • The exploration of “AI measurements” opens up new possibilities for formalizing and quantifying aspects of science that have traditionally been qualitative or subjective, potentially expanding the domain of exact sciences.

QUOTES:

  • “AI has the potential to give us streamlined ways to find certain kinds of pockets of computational reducibility.”
  • “Computational irreducibility is what will prevent us from ever being able to completely ‘solve science’.”
  • “The AI is doing ‘shallow computation’, but when there’s computational irreducibility one needs irreducible, deep computation to work out what will happen.”
  • “AI measurements are potentially a much richer source of formalizable material.”
  • “AI… is not something built to ‘go out into the wilds of the ruliad’, far from anything already connected to humans.”
  • “Despite AI’s limitations, it has potential in identifying pockets of computational reducibility.”
  • “AI techniques… have shown promise in areas like solving equations and exploring multicomputational processes.”
  • “AI’s success in areas like visual recognition and language generation suggests potential for contributing to scientific methodologies.”
  • “There’s no abstract notion of ‘interestingness’ that an AI or anything can ‘go out and discover’ ahead of our choices.”
  • “The whole story of things like trained neural nets that we’ve discussed here is a story of leveraging computational reducibility.”

HABITS:

  • Continuously exploring the capabilities and limitations of AI in scientific discovery.
  • Engaging in systematic experimentation to understand how AI tools can assist in scientific processes.
  • Seeking to identify and utilize pockets of computational reducibility where AI can be most effective.
  • Exploring the use of neural networks and machine learning for predicting and solving scientific problems.
  • Investigating the potential for AI to assist in creating human-understandable narratives for complex scientific phenomena.
  • Experimenting with “AI measurements” to quantify and formalize traditionally qualitative aspects of science.
  • Continuously refining and expanding computational language to better interface with AI capabilities.
  • Engaging with and contributing to discussions on the role of AI in the future of science and human understanding.
  • Actively seeking new methodologies and innovations in AI that could impact scientific discovery.
  • Evaluating the potential for AI to identify interesting and meaningful scientific inquiries through analysis of large datasets.

FACTS:

  • Computational irreducibility ensures that surprises and new discoveries remain a fundamental aspect of science.
  • AI’s effectiveness in scientific modeling is bounded by the complexity of the systems it attempts to model.
  • AI can identify pockets of computational reducibility, streamlining the discovery of scientific knowledge.
  • Neural networks and machine learning offer tools for predicting outcomes but face challenges due to computational irreducibility.
  • AI has shown promise in areas like solving equations and exploring multicomputational processes.
  • The potential of AI in generating human-understandable narratives for scientific phenomena is actively being explored.
  • “AI measurements” offer new possibilities for formalizing aspects of science that have been qualitative or subjective.
  • The transition to computational representation of the world is transforming science, with AI playing a significant role.
  • Machine learning techniques can be very useful for providing approximate answers in scientific inquiries.
  • AI’s ability to directly engage with raw natural processes is less certain, despite successes in human-like tasks.

REFERENCES:

  • Stephen Wolfram’s writings on AI and science.
  • Large Language Models (LLMs) as tools for scientific work.
  • The concept of computational irreducibility and its implications for science.
  • Neural networks and machine learning techniques in scientific prediction and problem-solving.
  • The role of AI in creating human-understandable narratives for scientific phenomena.
  • The use of “AI measurements” in expanding the domain of exact sciences.
  • The potential for AI to assist in identifying interesting and meaningful scientific inquiries.

RECOMMENDATIONS:

  • Explore the use of AI and neural networks for identifying pockets of computational reducibility in scientific research.
  • Investigate the potential of AI in generating human-understandable narratives for complex scientific phenomena.
  • Utilize “AI measurements” to formalize and quantify traditionally qualitative aspects of science.
  • Engage in systematic experimentation to understand the limitations and capabilities of AI in scientific discovery.
  • Consider the role of computational irreducibility in shaping the limitations of AI in science.
  • Explore the potential for AI to assist in identifying interesting and meaningful scientific inquiries.
  • Continuously refine and expand computational language to better interface with AI capabilities in scientific research.
  • Investigate new methodologies and innovations in AI that could impact scientific discovery.
  • Consider the implications of AI’s successes in human-like tasks for its potential contributions to scientific methodologies.
  • Explore the use of machine learning techniques for providing approximate answers in scientific inquiries where precision is less critical.

Hidden and propaganda messages

In this section we convince ourselves that the article is apolitical and propaganda-free.

Remark: We leave it to the reader as an exercise to verify that both the overt and hidden messages found by the LLM below are explicitly stated in the article.

Here we find the hidden and “propaganda” messages in the article:

my $propMess = llm-synthesize([llm-prompt("FindPropagandaMessage"), $txtArticle], e => $conf);

text-stats($propMess)

# (chars => 6193 words => 878 lines => 64)

Remark: The prompt "FindPropagandaMessage" has an explicit instruction to say that it is intentionally cynical. It is also marked as being "For fun."

The LLM result is rendered below.


#% markdown
$propMess.subst(/ ^^ '#' /, '###', :g).subst(/ (<[A..Z \h \']>+ ':') /, { "### {$0.Str} \n"}, :g)
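
The first subst above demotes the Markdown headers; the second one wraps the ALL-CAPS labels produced by the prompt (e.g. "HIDDEN MESSAGE:") into Markdown headings. Here is a minimal illustration of the second rule on a made-up line:

"HIDDEN MESSAGE: Despite AI's growth ...".subst(/ (<[A..Z \h \']>+ ':') /, { "### {$0.Str} \n" }, :g);

# ### HIDDEN MESSAGE: 
#  Despite AI's growth ...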

OVERT MESSAGE:

Stephen Wolfram evaluates the potential and limitations of AI in advancing science.

HIDDEN MESSAGE:

Despite AI’s growth, irreducible complexity limits its scientific problem-solving capacity.

HIDDEN OPINIONS:

  • AI can leverage computational reducibility akin to human minds.
  • Traditional mathematical methods surpass AI in solving precise equations.
  • AI’s “shallow computation” struggles with computational irreducibility.
  • AI can provide practical tools for science within its computational reducibility limits.
  • AI’s effectiveness is contingent on approximate answers, failing at precise perfection.
  • AI introduces a new, human-like method to harness computational reducibility.
  • Fundamental discoveries are more likely through irreducible computations, not AI.
  • Combining AI with the computational paradigm offers the best science advancement path.
  • AI’s potential in science is hyped beyond its actual foundational impact.
  • AI’s role in science is more about aiding existing processes than groundbreaking discoveries.

SUPPORTING ARGUMENTS and QUOTES:

  • “AI is doing ‘shallow computation’, but when there’s computational irreducibility one needs irreducible, deep computation to work out what will happen.”
  • “Typical AI approach to science doesn’t involve explicitly ‘formalizing things’.”
  • “In terms of fundamental potential for discovery, AI pales in comparison to what we can build from the computational paradigm.”
  • “AI can be very useful if an approximate (‘80%’) answer is good enough.”
  • “AI measurements seem to have a certain immediate ‘subjectivity’.”
  • “AI introduces a new—and rather human-like—way of leveraging computational reducibility.”
  • “AI’s effectiveness is contingent on approximate answers, failing at precise perfection.”
  • “AI’s role in science is more about aiding existing processes than groundbreaking discoveries.”
  • “Irreducible computations that we do offer greater potential for discovery than typical AI.”

DESIRED AUDIENCE OPINION CHANGE:

  • Appreciate the limitations and strengths of AI in scientific exploration.
  • Recognize the irreplaceable value of human insight in science.
  • View AI as a tool, not a replacement, for traditional scientific methods.
  • Embrace computational irreducibility as a barrier and boon to discovery.
  • Acknowledge the need for combining AI with computational paradigms.
  • Understand that AI’s role is to augment, not overshadow, human-led science.
  • Realize the necessity of approximate solutions in AI-driven science.
  • Foster realistic expectations from AI in making scientific breakthroughs.
  • Advocate for deeper computational approaches alongside AI in science.
  • Encourage interdisciplinary approaches combining AI with formal sciences.

DESIRED AUDIENCE ACTION CHANGE:

  • Support research combining AI with computational paradigms.
  • Encourage the use of AI for practical, approximate scientific solutions.
  • Advocate for AI’s role as a supplementary tool in science.
  • Push for education that integrates AI with traditional scientific methods.
  • Promote the study of computational irreducibility alongside AI.
  • Emphasize AI’s limitations in scientific discussions and funding.
  • Inspire new approaches to scientific exploration using AI.
  • Foster collaboration between AI researchers and traditional scientists.
  • Encourage critical evaluation of AI’s potential in groundbreaking discoveries.
  • Support initiatives that aim to combine AI with human insight in science.

MESSAGES:

Stephen Wolfram wants you to believe AI can advance science, but he’s actually saying its foundational impact is limited by computational irreducibility.

PERCEPTIONS:

Wolfram wants you to see him as optimistic about AI in science, but he’s actually cautious about its ability to make fundamental breakthroughs.

ELLUL’S ANALYSIS:

Jacques Ellul would likely interpret Wolfram’s findings as a validation of the view that technological systems, including AI, are inherently limited by the complexity of human thought and the natural world. The presence of computational irreducibility underscores the unpredictability and uncontrollability that Ellul warned technology could not tame, suggesting that even advanced AI cannot fully solve or understand all scientific problems, thus maintaining a degree of human autonomy and unpredictability in the face of technological advancement.

BERNAYS’ ANALYSIS:

Edward Bernays might view Wolfram’s exploration of AI in science through the lens of public perception and manipulation, arguing that while AI presents a new frontier for scientific exploration, its effectiveness and limitations must be communicated carefully to avoid both undue skepticism and unrealistic expectations. Bernays would likely emphasize the importance of shaping public opinion to support the use of AI as a tool that complements human capabilities rather than replacing them, ensuring that society remains engaged and supportive of AI’s role in scientific advancement.

LIPPMANN’S ANALYSIS:

Walter Lippmann would likely focus on the implications of Wolfram’s findings for the “pictures in our heads,” or public understanding of AI’s capabilities and limitations in science. Lippmann might argue that the complexity of AI and computational irreducibility necessitates expert mediation to accurately convey these concepts to the public, ensuring that society’s collective understanding of AI in science is based on informed, rather than simplistic or sensationalized, interpretations.

FRANKFURT’S ANALYSIS:

Harry G. Frankfurt might critique the discourse around AI in science as being fraught with “bullshit,” where speakers and proponents of AI might not necessarily lie about its capabilities, but could fail to pay adequate attention to the truth of computational irreducibility and the limitations it imposes. Frankfurt would likely appreciate Wolfram’s candid assessment of AI, seeing it as a necessary corrective to overly optimistic or vague claims about AI’s potential to revolutionize science.

NOTE: This AI is tuned specifically to be cynical and politically-minded. Don’t take it as perfect. Run it multiple times and/or go consume the original input to get a second opinion.


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[SW1] Stephen Wolfram, “Can AI Solve Science?”, (2024), Stephen Wolfram’s writings.

[SW2] Stephen Wolfram, “The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language”, (2023), Stephen Wolfram’s writings.

Notebooks

[AAn1] Anton Antonov, “Workflows with LLM functions (in WL)”, (2023), Wolfram Community.

[AAn2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), Wolfram Community.

[AAn3] Anton Antonov, “LLM aids for processing Putin’s State-Of-The-Nation speech given on 2/29/2024”, (2024), Wolfram Community.

[AAn4] Anton Antonov, “LLM over Trump vs. Anderson: analysis of the slip opinion of the Supreme Court of the United States”, (2024), Wolfram Community.

[AAn5] Anton Antonov, “Markdown to Mathematica converter”, (2023), Wolfram Community.

[AAn6] Anton Antonov, “Monte-Carlo demo notebook conversion via LLMs”, (2024), Wolfram Community.

Packages, repositories

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp7] Anton Antonov, WWW::MermaidInk Raku package, (2023-2024), GitHub/antononcube.

[DMr1] Daniel Miessler, Fabric, (2024), GitHub/danielmiessler.

LLM aids for processing Putin’s State-Of-The-Nation speech

… given on February 29, 2024


Introduction

In this blog post (notebook) we provide aids and computational workflows for the analysis of Vladimir Putin’s State Of The Nation speech given on February 29th, 2024. We use Large Language Models (LLMs). We walk through various steps involved in examining and understanding the speech in a systematic and reproducible manner.

The speech transcript (in Russian) is taken from kremlin.ru.

The computations are done with a Raku chatbook, [AAp6, AAv1 ÷ AAv3]. The LLM functions used in the workflows are explained and demonstrated in [AA1, AAv3]. The workflows are done with OpenAI’s models [AAp1]. Currently, the models of Google (PaLM), [AAp2], and MistralAI, [AAp3], cannot be used with the workflows below because their input token limits are too low.

Remark: An important feature of the LLM workflows (and underlying models) is that although the speech transcript is in Russian, the LLM results are in English.

A similar set of workflows is described in “LLM aids for processing of the first Carlson-Putin interview”, [AA2], and it has been reused to a large degree below.

The following table — derived from Putin’s speech — should be of great interest to people living in Western countries (with governments that want to fight Russia):

| name | russian_name | type | status | description | damage |
|---|---|---|---|---|---|
| Kinzhal | Кинжал | Hypersonic Airborne Missile System | Operational | A hypersonic missile capable of striking targets with high precision over long distances at speeds exceeding Mach 5. | High |
| Zircon | Циркон | Hypersonic Cruise Missile | Operational | A sea-based hypersonic cruise missile designed to attack naval and ground targets. | High |
| Avangard | Авангард | Hypersonic Glide Vehicle | Operational | Mounted on an intercontinental ballistic missile, it can carry a nuclear payload and maneuver at high speeds to evade missile defense systems. | Very High |
| Peresvet | Пересвет | Laser Weapon System | Operational | A laser system purportedly designed to counter aerial threats and possibly to disable satellites and other space assets. | Variable |
| Burevestnik | Буревестник | Nuclear-Powered Cruise Missile | In Testing | Claimed to have virtually unlimited range thanks to its nuclear power source, designed for strategic bombing missions. | Potentially Very High |
| Poseidon | Посейдон | Nuclear-Powered Underwater Drone | In Development | An autonomous underwater vehicle intended to carry nuclear warheads to create radioactive tsunamis near enemy coastlines. | Catastrophic |
| Sarmat | Сармат | Intercontinental Ballistic Missile | Operational | A heavy missile intended to replace the aging Soviet-era Voevoda, capable of carrying multiple nuclear warheads. | Catastrophic |

Structure

The structure of the notebook is as follows:

  1. Getting the speech text and setup
    Standard ingestion and setup.
  2. Summary
    The speech in brief.
  3. Themes
    TL;DR via a table of themes.
  4. Important parts
    What are the most important parts or most provocative statements?
  5. Talking to the West
    LLM pretends to be Putin and addresses the West.
  6. Weapons tabulation
    For people living in the West.

Getting the speech text and setup

Here we load packages and define a text statistics function:

use HTTP::Tiny;
use JSON::Fast;
use Data::Reshapers;

sub text-stats(Str:D $txt) { <chars words lines> Z=> [$txt.chars, $txt.words.elems, $txt.lines.elems] }

Ingest text

Here we ingest the text of the speech:

my $url = 'https://raw.githubusercontent.com/antononcube/RakuForPrediction-blog/main/Data/Putin-State-of-the-Nation-Address-2024-02-29-Russian.txt';
my $txtRU = HTTP::Tiny.new.get($url)<content>.decode;

$txtRU .= subst(/ \v+ /, "\n", :g);
text-stats($txtRU)
# (chars => 93212 words => 12797 lines => 290)

LLM access configuration

Here we configure LLM access — we use OpenAI’s model “gpt-4-turbo-preview” since it allows inputs with 128K tokens:

my $conf = llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.7);
$conf.Hash.elems

#  22

LLM functions

Here we define an LLM translation function:

my &fTrans = llm-function({"Translate from $^a to $^b the following text:\n $^c"}, e => $conf)
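
For example, the translation function takes the source language, the target language, and the text as positional arguments (an illustrative call; the exact wording of the LLM answer will vary):

&fTrans('Russian', 'English', 'Это важная речь.');

# For example: "This is an important speech."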

Here we make a function for extracting significant parts from the speech:

my &fProv = llm-function({"Which are the top $^a most $^b in the following speech? Answer in English.\n\n" ~ $txtRU}, e => $conf)

Summary

Here we summarize the speech via an LLM synthesis:

my $summary = llm-synthesize([
    "Summarize the following speech within 300 words.\n\n", 
    $txtRU,
    ], e => $conf);

text-stats($summary)
# (chars => 1625 words => 217 lines => 5)

Show the summary in Markdown format:

#% markdown
$summary

In his address, Vladimir Putin emphasized the vision for Russia’s future, focusing on strategic tasks crucial for the country’s long-term development. He highlighted the importance of direct engagement with citizens, including workers, educators, scientists, and military personnel, acknowledging their role in shaping government actions and initiatives. Putin expressed gratitude to various professionals and emphasized plans for large-scale investments in social services, demographics, the economy, science, technology, and infrastructure.

Putin stressed the need for a more equitable tax system, supporting families and businesses investing in development and innovation, while closing loopholes for tax evasion. He announced significant financial support for regional development, infrastructure modernization, and environmental protection. The address included plans to enhance Russia’s transportation network, including highways, airports, and the Northern Sea Route, to boost economic and tourism potential.

The speech also highlighted the importance of supporting veterans and participants of the special military operation, proposing a new professional development program “The Time of Heroes” to prepare them for leadership roles in various sectors. Putin called for a collective effort from the state, society, and business to achieve national goals, emphasizing that the success of these plans heavily relies on the courage and determination of Russian soldiers currently in combat. He concluded by expressing confidence in Russia’s future victories and successes, backed by national solidarity and resilience.


Themes

Here we make an LLM request for finding and distilling the themes of the speech:

my $llmParts = llm-synthesize([
    'Split the following speech into thematic parts:', 
    $txtRU,
    "Return the parts as a JSON array.",
    llm-prompt('NothingElse')('JSON')
    ], e => $conf, form => sub-parser('JSON'):drop);

deduce-type($llmParts)
# Vector(Assoc(Atom((Str)), Atom((Str)), 2), 10)
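
Each element of $llmParts is an association with "theme" and "content" keys (the same field names passed to data-translation below). For example, the theme of the first part:

$llmParts[0]<theme>

# Introduction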

Here we tabulate the found themes:

#%html
$llmParts ==> data-translation(field-names => <theme content>)
| theme | content |
|---|---|
| Introduction | Addressing the Federal Assembly, focusing on the future, strategic tasks, and long-term development. |
| Economic Development and Strategic Goals | Action program formed through dialogs, addressing real people’s needs, and focusing on strategic development tasks. |
| National Projects and Strategic Initiatives | Efforts in various sectors including regional development, technology, economy, and social programs. |
| Social Programs and Demographics | Initiatives aimed at supporting families, increasing birth rates, and improving living standards. |
| Education and Youth Development | Improving education systems, supporting youth, and creating opportunities for professional growth. |
| Technological Development and Innovation | Investing in new technologies, supporting startups, and enhancing Russia’s competitiveness. |
| Infrastructure Development | Improvements in transportation, utilities, and urban development to enhance quality of life. |
| Environmental Protection and Sustainability | Programs for ecological conservation, waste management, and promoting green technologies. |
| Defense and Security | Acknowledging the role of military personnel and veterans in national security and development. |
| National Unity and Future Vision | Emphasizing solidarity, resilience, and the collective effort towards Russia’s prosperity. |

Important parts

Most important statements

Here we get the most important statements:

#% markdown
&fProv(3, "important statements")

Given the extensive nature of the speech, identifying the top 3 most important statements depends on the context of what one considers “important”—whether it be strategic goals, domestic policies, military actions, or socio-economic initiatives. However, based on the broad significance and impact, the following three statements can be highlighted as critically important:

  1. Strategic Development and Sovereignty: “Самостоятельность, самодостаточность, суверенитет нужно доказывать, подтверждать каждый день. Речь идёт о нашей и только нашей ответственности за настоящее и за будущее России. Это наша родина, родина наших предков, и она нужна и дорога только нам и, конечно, потомкам, которым мы обязаны передать сильную и благополучную страну.”
    • This statement underscores the importance Putin places on Russia’s autonomy, self-sufficiency, and sovereignty, emphasizing the responsibility to maintain and strengthen these principles for the country’s future.
  2. Special Military Operation and its Heroes: “Такие, безусловно, не отступят, не подведут и не предадут. Они и должны выходить на ведущие позиции и в системе образования и воспитания молодёжи, и в общественных объединениях, в госкомпаниях, бизнесе, в государственном и муниципальном управлении, возглавлять регионы, предприятия в конечном итоге, самые крупные отечественные проекты.”
    • This part highlights the role and valor of those participating in what Russia calls the “special military operation,” suggesting they should be integrated into leading positions across all sectors of society, reflecting the operation’s significance in Putin’s vision for Russia.
  3. Long-term Development Plans and Investments: “Несмотря на сложный период, несмотря на нынешние испытания и трудности, мы намечаем долгосрочные планы. Программа, которую обозначил сегодня в Послании, носит объективный и фундаментальный характер. Это программа сильной, суверенной страны, которая уверенно смотрит в будущее. Для достижения поставленных целей у нас есть и ресурсы, и колоссальные возможности.”
    • This statement underlines Putin’s commitment to Russia’s long-term strategic development and investments despite current challenges, portraying an optimistic and determined vision for the country’s future.

These statements encapsulate key themes of sovereignty, military valor, and long-term development, which are recurrent in Putin’s address, highlighting their importance in the broader context of Russia’s direction and policies under his leadership.

Most provocative statements for the West

Here we (try to) get the most provocative statements from a Western politician’s point of view:

#% markdown
&fProv(3, "provocative statements from Western politician's point of view")

Given the extensive content of the speech, identifying the top 3 most provocative statements from a Western politician’s point of view involves subjective interpretation, as what might be considered provocative can vary depending on specific sensitivities and current geopolitical contexts. However, based on the themes and assertions made in Vladimir Putin’s speech, here are three statements or themes that could be seen as particularly provocative or significant from a Western perspective:

  1. Defense of the “Russian Spring” and actions in Crimea and Donbass: Putin’s celebration of the 10th anniversary of the “Russian Spring” and the pride in the actions of Crimea, Sevastopol, and the people of Donbass could be seen as provocative. This is because the annexation of Crimea by Russia in 2014 and the ongoing conflict in Eastern Ukraine are viewed by many Western countries as violations of international law and Ukraine’s sovereignty.
  2. Criticism of the West and Claims of Western Aggression: Putin’s assertion that the West, with its “colonial habits” of inciting national conflicts globally, aims not just to contain Russia’s development but to turn it into a dependent territory, reflects a strong criticism of Western policies. He accuses the West of wanting to bring discord and weaken Russia from within, similar to what he claims was done in Ukraine. This portrayal of the West seeking Russia’s strategic defeat could be seen as particularly provocative amid current tensions.
  3. Strategic Nuclear Forces Readiness and Advanced Weapons Development: Putin’s statement about the readiness of Russia’s strategic nuclear forces and the mention of advanced weapons systems, such as hypersonic missiles and other strategic capabilities, could be viewed as a provocative show of military strength. The emphasis on nuclear readiness and the development of weapons that can bypass missile defense systems underscore the ongoing arms race and can be seen as a direct challenge to NATO and Western military capabilities.

These points, among others in the speech, reflect Russia’s stance on key geopolitical issues, its criticism of Western policies, and its emphasis on military and strategic strength. The provocative nature of these statements lies in their challenge to the current international order and the implications for security and stability in Europe and beyond.


Talking to the West

Taking into account the content of the LLM results above, here we craft and execute a special prompt that makes the LLM “pretend” that it is V. Putin and address bellicose Western politicians:

my $westTellPOV = llm-synthesize([
    "You are the speaker of the following speech.",
    "Make a short statement addressing Western politicians based on the following full speech.",
    "The Western politicians you address are very bellicose, so, accentuate on Russia's ability:",
    "(i) to deploy and use dangerous weapons, and",
    "(ii) be unaffected by economic sanctions.",
    "Mention concrete weapons.",
    # llm-prompt('NothingElse')('Statements on weapons, sanctions, and wars in English'),
    "\n\n", 
    $txtRU,
    ], e => $conf);

text-stats($westTellPOV)
# (chars => 1651 words => 236 lines => 5)

Here we render the LLM result in Markdown format:

#% markdown
$westTellPOV

Addressing Western politicians, it is imperative to recognize that Russia stands as a formidable power, possessing a formidable arsenal and an economy resilient to sanctions. Our strategic nuclear forces remain on high alert, including the deployment of avant-garde hypersonic systems like “Kinzhal” and the “Tsirkon” hypersonic missile, which have already been proven in combat efficiency. Additionally, the “Avangard” hypersonic glide vehicles and “Peresvet” laser systems bolster our defensive capabilities, alongside the ongoing tests of the “Burevestnik” nuclear-powered cruise missile and the “Poseidon” unmanned underwater vehicle. These advanced weapons systems underscore our technological prowess and military readiness.

Moreover, Russia’s economy has demonstrated its resilience in the face of external pressures, including sanctions. Our commitment to the welfare of our citizens, the development of our country, and the protection of our sovereignty remains unwavering. The strength of our economy is further evidenced by our ability to undertake significant national projects and to invest in our social and economic infrastructure, ensuring the prosperity and security of our nation.

In light of these capabilities and our steadfast resolve, it is crucial for Western politicians to reassess their approach towards Russia. Engaging in dialogue and mutual respect is the pathway to peace and stability in the international arena. We urge Western leaders to consider the implications of their actions and to work towards constructive relations with Russia. The time is now to foster understanding and cooperation for the benefit of all.


Weapons tabulation

The statements in the speech that discuss weapons of mass destruction and weapons that give a decisive military advantage should be of great interest to Western citizens and politicians. Here we synthesize an LLM response that tabulates the mentioned weapons’ names, statuses, and descriptions:

my $weapons = llm-synthesize([
    "Briefly describe the weapons mentioned in the following speech.",
    "Give the result in English with JSON data structure that is table with the column names: name, type, status, description, damage.",
    "Make sure the descriptions provide some level of detail.",
    llm-prompt('NothingElse')('JSON'),
    "\n\n", 
    $txtRU,
    ], e => $conf, form => sub-parser('JSON'):drop);

deduce-type($weapons)
# Vector(Assoc(Atom((Str)), Atom((Str)), 5), 7)

Here we render the table:

#%html
$weapons ==> data-translation(field-names => <name type status description damage>)
| name | type | status | description | damage |
|---|---|---|---|---|
| Кинжал | Hypersonic Airborne Missile System | Operational | A hypersonic missile capable of striking targets with high precision over long distances at speeds exceeding Mach 5. | High |
| Циркон | Hypersonic Cruise Missile | Operational | A sea-based hypersonic cruise missile designed to attack naval and ground targets. | High |
| Авангард | Hypersonic Glide Vehicle | Operational | Mounted on an intercontinental ballistic missile, it can carry a nuclear payload and maneuver at high speeds to evade missile defense systems. | Very High |
| Пересвет | Laser Weapon System | Operational | A laser system purportedly designed to counter aerial threats and possibly to disable satellites and other space assets. | Variable |
| Буревестник | Nuclear-Powered Cruise Missile | In Testing | Claimed to have virtually unlimited range thanks to its nuclear power source, designed for strategic bombing missions. | Potentially Very High |
| Посейдон | Nuclear-Powered Underwater Drone | In Development | An autonomous underwater vehicle intended to carry nuclear warheads to create radioactive tsunamis near enemy coastlines. | Catastrophic |
| Сармат | Intercontinental Ballistic Missile | Operational | A heavy missile intended to replace the aging Soviet-era Voevoda, capable of carrying multiple nuclear warheads. | Catastrophic |

Remark: We could have specified in the prompt the following column names to be used: “name_english, name_russian, type, status, description, damage”. But it turns out that (with the current LLM models) the results are less reproducible. Hence, we use “name, type, status, description, damage” and adjust the table with corresponding translations below.

Since the results of the above LLM synthesis are often given in Russian, or have interlaced Russian and English names or phrases, here we translate the LLM result into English:

$weapons
==> to-json()
==> &fTrans("Russian", "English") 
==> my $weaponsEN;

deduce-type($weaponsEN);
# Vector(Assoc(Atom((Str)), Atom((Str)), 5), 7)

Here we derive a dictionary of weapon names and corresponding Wikipedia URLs:

my %urlTbl = llm-synthesize([
    "Provide a JSON dictionary of the Wikipedia hyperlinks for these weapons:",
    $weaponsEN.map(*<name>).join(', '),
    ], e => $conf, form => sub-parser('JSON'):drop);

deduce-type(%urlTbl);

Here is a direct assignment of one of the results of the code above, for which we have verified the hyperlinks:

my %urlTbl = {:Avangard("https://en.wikipedia.org/wiki/Avangard_(hypersonic_glide_vehicle)"), :Burevestnik("https://en.wikipedia.org/wiki/9M730_Burevestnik"), :Kinzhal("https://en.wikipedia.org/wiki/Kh-47M2_Kinzhal"), :Peresvet("https://en.wikipedia.org/wiki/Peresvet_(laser_weapon)"), :Poseidon("https://en.wikipedia.org/wiki/Status-6_Oceanic_Multipurpose_System"), :Sarmat("https://en.wikipedia.org/wiki/RS-28_Sarmat"), :Zircon("https://en.wikipedia.org/wiki/3M22_Zircon")};
%urlTbl.elems;

# 7

Here we craft a prompt that merges the Russian names column of the first weapons table into the translated table:

my $weaponsEN2 = llm-synthesize([
    "Take the Russian names of the first JSON table and put them in a new column in the second JSON table:",
    "1st table:\n", to-json($weapons),
    "2nd table:\n", to-json($weaponsEN)
], e => $conf, form => sub-parser('JSON'):drop);

deduce-type($weaponsEN2)
# Vector(Assoc(Atom((Str)), Atom((Str)), 6), 7)

Here we make a corresponding HTML table and modify the (English) names column to have hyperlinks:

#% html
$weaponsEN2 ==> data-translation(field-names=><name russian_name type status description damage>) ==> my $weaponsEN3;
my &reg = / '<td>' (<{%urlTbl.keys.join(' | ')}>) '</td>' /;
$weaponsEN3.subst(&reg, {
    "<td><span><a href=\"{%urlTbl{$0.Str}}\">{$0.Str}</a></span></td>"
}, :g)
| name | russian_name | type | status | description | damage |
|---|---|---|---|---|---|
| Kinzhal | Кинжал | Hypersonic Airborne Missile System | Operational | A hypersonic missile capable of striking targets with high precision over long distances at speeds exceeding Mach 5. | High |
| Zircon | Циркон | Hypersonic Cruise Missile | Operational | A sea-based hypersonic cruise missile designed to attack naval and ground targets. | High |
| Avangard | Авангард | Hypersonic Glide Vehicle | Operational | Mounted on an intercontinental ballistic missile, it can carry a nuclear payload and maneuver at high speeds to evade missile defense systems. | Very High |
| Peresvet | Пересвет | Laser Weapon System | Operational | A laser system purportedly designed to counter aerial threats and possibly to disable satellites and other space assets. | Variable |
| Burevestnik | Буревестник | Nuclear-Powered Cruise Missile | In Testing | Claimed to have virtually unlimited range thanks to its nuclear power source, designed for strategic bombing missions. | Potentially Very High |
| Poseidon | Посейдон | Nuclear-Powered Underwater Drone | In Development | An autonomous underwater vehicle intended to carry nuclear warheads to create radioactive tsunamis near enemy coastlines. | Catastrophic |
| Sarmat | Сармат | Intercontinental Ballistic Missile | Operational | A heavy missile intended to replace the aging Soviet-era Voevoda, capable of carrying multiple nuclear warheads. | Catastrophic |

References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[OAIb1] OpenAI team, “New models and developer products announced at DevDay”, (2023), OpenAI/blog.

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::LLaMA Raku package, (2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.

[AAp6] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.

[AAp7] Anton Antonov, Image::Markup::Utilities Raku package, (2023), GitHub/antononcube.

Videos

[AAv1] Anton Antonov, “Jupyter Chatbook LLM cells demo (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Jupyter Chatbook multi cell LLM chats teaser (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv3] Anton Antonov “Integrating Large Language Models with Raku”, (2023), YouTube/@therakuconference6823.