Wisdom of “Integrating Large Language Models with Raku”

Introduction

This post applies various Large Language Model (LLM) summarization prompts to the transcript of The Raku Conference 2023 (TRC-2023) presentation “Integrating Large Language Models with Raku” hosted by the YouTube channel The Raku Conference.

In the presentation, Anton Antonov presents “Integrating Large Language Models with Raku,” demonstrating functionalities in Visual Studio Code using a Raku Chatbook. The presentation explores using OpenAI, PaLM (Google’s large language model), and DALL-E (image generation service) through Raku, showcasing dynamic interaction with large language models, embedding them in notebooks, and generating code and markdown outputs.

Remark: The LLM results below were obtained from the “raw” transcript, which did not have punctuation.

Remark: The transcription software had problems parsing the names of the participants. Some of the names were manually corrected.

Remark: The applied “main” LLM prompt — “ExtractArticleWisdom” — is a modified version of a prompt (or pattern) with a similar name from “fabric”, [DMr1].

Remark: The themes table was obtained with an LLM using the prompt “ThemeTableJSON”.

Remark: The content of this post was generated with the computational Markdown file “LLM-content-breakdown-template.md”, which was executed (or woven) by the CLI script file-code-chunks-eval of “Text::CodeProcessing”, [AAp7].
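A minimal sketch of that weaving step from the command line follows; treat it as illustrative, since the exact flags and output conventions of file-code-chunks-eval may differ between versions of “Text::CodeProcessing”:

```shell
# Evaluate (weave) the code chunks of the computational Markdown template;
# typically a "woven" Markdown file with the evaluation results is produced
# alongside the input file.
file-code-chunks-eval LLM-content-breakdown-template.md
```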

Post’s structure:

  1. Themes
    Instead of a summary.
  2. Mind-maps
    An even better summary replacement!
  3. Summary, ideas, and recommendations
    The main course.

Themes

Instead of a summary, consider this table of themes:

| theme | content |
|---|---|
| Introduction | Anton Antonov introduces the presentation on integrating large language models with Raku and begins with a demonstration in Visual Studio Code. |
| Demonstration | Demonstrates using a Raku chatbook in a Jupyter notebook to interact with OpenAI, PaLM, and DALL-E services for various tasks like querying information and generating images. |
| Direct Access vs. Chat Objects | Discusses the difference between direct access to web APIs and using chat objects for dynamic interaction with large language models. |
| Translation and Code Generation | Shows how to translate text and generate Raku code for solving mathematical problems using chat objects. |
| Motivation for Integrating Raku with Large Language Models | Explains the need for dynamic interaction between Raku and large language models, including notebook solutions and facilitating interaction. |
| Technical Details and Packages | Details the packages developed for interacting with large language models and the functionalities required for the integration. |
| Use Cases | Describes various use cases like template engine functionalities, embeddings, and generating documentation from tests using large language models. |
| Literate Programming and Markdown Templates | Introduces computational Markdown for generating documentation and the use of Markdown templates for creating structured documents. |
| Generating Tests and Documentation | Discusses generating package documentation from tests and conversing between chat objects for training purposes. |
| Large Language Model Workflows | Covers workflows for utilizing large language models, including ‘Too Long; Didn’t Read’ documentation utilization. |
| Comparison with Python and Mathematica | Compares the implementation of functionalities in Raku with Python and Mathematica, highlighting the ease of extending the Jupyter framework for Python. |
| Q&A Session | Anton answers questions about extending the Jupyter kernel and other potential limitations or features that could be integrated. |
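As an illustration of the “chat objects” themes above, here is a minimal sketch using the packages shown in the talk. The chat ID and questions are made up for this example, and a configured OpenAI API key (e.g. in the OPENAI_API_KEY environment variable) is assumed:

```raku
use LLM::Functions;

# Create a chat object backed by an OpenAI model (hypothetical chat ID):
my $chat = llm-chat(chat-id => 'trc-demo', conf => 'ChatGPT');

# Interact with it dynamically -- each call continues the same conversation:
say $chat.eval('How many sides does a heptagon have?');
say $chat.eval('Write Raku code that computes the sum of its interior angles.');
```

Because the chat object keeps the conversation history, the second question can refer back to “its” without restating the subject.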

Mind-map

Here is a mind-map showing the presentation’s structure:

Here is a mind-map summarizing the main, LLM-related part of the talk:


Summary, ideas, and recommendations

SUMMARY

Anton Antonov presents “Integrating Large Language Models with Raku,” demonstrating functionalities in Visual Studio Code using a Raku chatbook. The presentation explores using OpenAI, PaLM (Google’s large language model), and DALL-E (image generation service) through Raku, showcasing dynamic interaction with large language models, embedding them in notebooks, and generating code and markdown outputs.

IDEAS:

  • Integrating large language models with programming languages can enhance dynamic interaction and facilitate complex tasks.
  • Utilizing Jupyter notebooks with Raku chatbook kernels allows for versatile programming and data analysis.
  • Direct access to web APIs like OpenAI and PaLM can streamline the use of large language models in software development.
  • The ability to automatically format outputs into markdown or plain text enhances the readability and usability of generated content.
  • Embedding image generation services within programming notebooks can enrich content and aid in visual learning.
  • Creating chat objects within notebooks can simulate interactive dialogues, providing a unique approach to user input and command execution.
  • The use of prompt expansion and a database of prompts can significantly improve the efficiency of generating content with large language models.
  • Implementing literate programming techniques can facilitate the generation of comprehensive documentation and tutorials.
  • The development of computational markdown allows for the seamless integration of code and narrative, enhancing the learning experience.
  • Utilizing large language models for generating test descriptions and documentation can streamline the development process.
  • The concept of “few-shot learning” with large language models can be applied to generate specific outputs based on minimal input examples.
  • Leveraging large language models for semantic analysis and recommendation systems can offer significant advancements in text analysis.
  • The ability to translate natural language commands into programming commands can simplify complex tasks for developers.
  • Integrating language models for entity recognition and data extraction from text can enhance data analysis and information retrieval.
  • The development of frameworks for extending programming languages with large language model functionalities can foster innovation.
  • The use of large language models in generating code for solving mathematical equations demonstrates the potential for automating complex problem-solving.
  • The exploration of generating dialogues between chat objects presents new possibilities for creating interactive and dynamic content.
  • The application of large language models in generating package documentation from tests highlights the potential for improving software documentation practices.
  • The integration of language models with programming languages like Raku showcases the potential for enhancing programming environments with AI capabilities.
  • The demonstration of embedding services like image generation and language translation within programming notebooks opens new avenues for creative and technical content creation.
  • The discussion on the limitations and challenges of integrating large language models with programming environments provides insights into future development directions.

QUOTES:

  • “Integrating large language models with Raku allows for dynamic interaction and enhanced functionalities within notebooks.”
  • “Direct access to web APIs streamlines the use of large language models in software development.”
  • “Automatically formatting outputs into markdown or plain text enhances the readability and usability of generated content.”
  • “Creating chat objects within notebooks provides a unique approach to interactive dialogues and command execution.”
  • “The use of prompt expansion and a database of prompts can significantly improve efficiency in content generation.”
  • “Literate programming techniques facilitate the generation of comprehensive documentation and tutorials.”
  • “Computational markdown allows for seamless integration of code and narrative, enhancing the learning experience.”
  • “Few-shot learning with large language models can generate specific outputs based on minimal input examples.”
  • “Leveraging large language models for semantic analysis and recommendation systems offers significant advancements in text analysis.”
  • “Translating natural language commands into programming commands simplifies complex tasks for developers.”

HABITS:

  • Utilizing Visual Studio Code for programming and data analysis.
  • Embedding large language models within programming notebooks for dynamic interaction.
  • Automatically formatting outputs to enhance readability and usability.
  • Creating and utilizing chat objects for interactive programming.
  • Employing prompt expansion and maintaining a database of prompts for efficient content generation.
  • Implementing literate programming techniques for documentation and tutorials.
  • Developing and using computational markdown for integrated code and narrative.
  • Applying few-shot learning techniques with large language models for specific outputs.
  • Leveraging large language models for semantic analysis and recommendation systems.
  • Translating natural language commands into programming commands to simplify tasks.

FACTS:

  • Raku chatbook kernels in Jupyter notebooks allow for versatile programming and data analysis.
  • OpenAI, PaLM, and DALL-E are utilized for accessing large language models and image generation services.
  • Large language models can automatically format outputs into markdown or plain text.
  • Chat objects within notebooks can simulate interactive dialogues and command execution.
  • A database of prompts improves the efficiency of generating content with large language models.
  • Computational markdown integrates code and narrative, enhancing the learning experience.
  • Large language models can generate code for solving mathematical equations and other complex tasks.
  • The integration of large language models with programming languages like Raku enhances programming environments.
  • Embedding services like image generation and language translation within programming notebooks is possible.
  • The presentation explores the potential for automating complex problem-solving with AI.

REFERENCES:

RECOMMENDATIONS:

  • Explore integrating large language models with programming languages for enhanced functionalities.
  • Utilize Jupyter notebooks with Raku chatbook kernels for versatile programming tasks.
  • Take advantage of direct access to web APIs for streamlined software development.
  • Employ automatic formatting of outputs for improved readability and usability.
  • Create and utilize chat objects within notebooks for interactive programming experiences.
  • Implement literate programming techniques for comprehensive documentation and tutorials.
  • Develop computational markdown for an integrated code and narrative learning experience.
  • Apply few-shot learning techniques with large language models for generating specific outputs.
  • Leverage large language models for advanced text analysis and recommendation systems.
  • Translate natural language commands into programming commands to simplify complex tasks.

References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “Day 21 – Using DALL-E models in Raku”, (2023), Raku Advent Calendar at WordPress.

Packages, repositories

[AAp1] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, WWW::Gemini Raku package, (2024), GitHub/antononcube.

[AAp7] Anton Antonov, Text::CodeProcessing Raku package, (2021-2023), GitHub/antononcube.

[DMr1] Daniel Miessler, “fabric”, (2023-2024), GitHub/danielmiessler.

Videos

[AAv1] Anton Antonov, “Integrating Large Language Models with Raku” (2023), The Raku Conference at YouTube.

LLM aids for processing Biden’s State of the Union address

… given on March 7, 2024


Introduction

In this blog post (notebook) we provide aids and computational workflows for the analysis of Joe Biden’s State of the Union address given on March 7th, 2024. We use Large Language Model (LLM) workflows and prompts for examining and understanding the speech in a systematic and reproducible manner.

The speech transcript is taken from whitehouse.gov.

The computations are done with a Raku chatbook, [AAp6, AAv1-AAv3]. The LLM functions used in the workflows are explained and demonstrated in [AA1, AAv3]. The workflows are done with OpenAI’s models [AAp1]. Currently, the models of Google (PaLM, Gemini), [AAp2], and MistralAI, [AAp4], cannot be used with the workflows below because their input token limits are too low.

A similar set of workflows and prompts is described in:

The prompts of the latter are used below.

The following table — derived from Biden’s address — has the most important or provocative statements (found by an LLM):

| subject | statement |
|---|---|
| Global politics and security | Overseas, Putin of Russia is on the march, invading Ukraine and sowing chaos throughout Europe and beyond. |
| Support for Ukraine | But Ukraine can stop Putin if we stand with Ukraine and provide the weapons it needs to defend itself. |
| Domestic politics | A former American President actually said that, bowing down to a Russian leader. |
| NATO | Today, we’ve made NATO stronger than ever. |
| Democracy and January 6th | January 6th and the lies about the 2020 election, and the plots to steal the election, posed the gravest threat to our democracy since the Civil War. |
| Reproductive rights | Guarantee the right to IVF nationwide! |
| Economic recovery and policies | 15 million new jobs in just three years – that’s a record! |
| Healthcare and prescription drug costs | Instead of paying $400 a month for insulin seniors with diabetes only have to pay $35 a month! |
| Education and tuition | Let’s continue increasing Pell Grants for working- and middle-class families. |
| Tax reform | It’s time to raise the corporate minimum tax to at least 21% so every big corporation finally begins to pay their fair share. |
| Gun violence prevention | I’m demanding a ban on assault weapons and high-capacity magazines! |
| Immigration | Send me the border bill now! |
| Climate action | I am cutting our carbon emissions in half by 2030. |
| Israel and Gaza conflict | Israel has a right to go after Hamas. |
| Vision for America’s future | I see a future where the middle class finally has a fair shot and the wealthy finally have to pay their fair share in taxes. |

Structure

The structure of the notebook is as follows:

  1. Getting the speech text and setup
    Standard ingestion and setup.
  2. Themes
    TL;DR via a table of themes.
  3. Most important or provocative statements
    What are the most important or provocative statements?
  4. Summary and recommendations
    Extracting speech wisdom.
  5. Hidden and propaganda messages
    For people living in the USA.

Remark: This blog post (and the corresponding notebook) is made “for completeness” of the set [AA2, AA3]. Ideally, this would be the last of those.

Remark: I am strongly considering making a Cro app (and/or a non-Cro one) that applies the described workflows and prompts to user-uploaded texts.


Getting the speech text and setup

Here we load packages and define a text statistics function and HTML stripping function:

use HTTP::Tiny;
use JSON::Fast;
use Data::Reshapers;

sub text-stats(Str:D $txt) { <chars words lines> Z=> [$txt.chars, $txt.words.elems, $txt.lines.elems] }

sub strip-html(Str $html) returns Str {

    # Remove style and script blocks, then all remaining tags and
    # escaped tags; map non-breaking spaces to plain spaces; finally
    # collapse runs of (whitespace-only) lines into single empty lines.
    my $res = $html
    .subst(/'<style'.*?'</style>'/, :g)
    .subst(/'<script'.*?'</script>'/, :g)
    .subst(/'<'.*?'>'/, :g)
    .subst(/'&lt;'.*?'&gt;'/, :g)
    .subst(/'&nbsp;'/, ' ', :g)
    .subst(/[\v\s*] ** 2..*/, "\n\n", :g);

    return $res;
}
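Note the frugal quantifier `.*?` in the tag-stripping rules above: it matches as little as possible, so each tag is removed separately instead of everything between the first `<` and the last `>` being consumed at once. A small self-contained illustration:

```raku
# Frugal (non-greedy) matching removes each tag on its own,
# leaving "Hello world":
say '<p>Hello</p> <b>world</b>'.subst(/ '<' .*? '>' /, :g);

# A greedy match consumes from the first '<' to the last '>',
# which here deletes the whole string:
say '<p>Hello</p> <b>world</b>'.subst(/ '<' .* '>' /, :g);
```

(As in the code above, `subst` with no replacement argument substitutes the empty string.)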

Ingest text

Here we ingest the text of the speech:

my $url = 'https://www.whitehouse.gov/briefing-room/speeches-remarks/2024/03/07/remarks-of-president-joe-biden-state-of-the-union-address-as-prepared-for-delivery-2/';
my $htmlEN = HTTP::Tiny.new.get($url)<content>.decode;

$htmlEN .= subst(/ \v+ /, "\n", :g);

my $txtEN = strip-html($htmlEN);

$txtEN .= substr($txtEN.index('March 07, 2024') .. ($txtEN.index("Next Post:") - 1));

text-stats($txtEN)
# (chars => 37702 words => 6456 lines => 453)

LLM access configuration

Here we configure LLM access — we use OpenAI’s model “gpt-4-turbo-preview” since it allows inputs with 128K tokens:

my $conf = llm-configuration('ChatGPT', model => 'gpt-4-turbo-preview', max-tokens => 4096, temperature => 0.7);
$conf.Hash.elems

Themes

Here we extract the themes found in the speech and tabulate them (using the prompt “ThemeTableJSON”):

my $tblThemes = llm-synthesize(llm-prompt("ThemeTableJSON")($txtEN, "article", 50), e => $conf, form => sub-parser('JSON'):drop);

$tblThemes.&dimensions;
# (16 2)

Here we tabulate the found themes:

#% html
$tblThemes ==> data-translation(field-names=><theme content>)
| theme | content |
|---|---|
| Introduction | President Joe Biden begins by reflecting on historical challenges to freedom and democracy, comparing them to current threats both domestically and internationally. |
| Foreign Policy and National Security | Biden addresses the situation in Ukraine, NATO’s strength, and the necessity of bipartisan support to confront Putin and Russia’s aggression. |
| Domestic Challenges and Democracy | He discusses the January 6th insurrection, the ongoing threats to democracy in the U.S., and the need for unity to defend democratic values. |
| Reproductive Rights | Biden criticizes the overturning of Roe v. Wade, shares personal stories to highlight the impact, and calls for legislative action to protect reproductive freedoms. |
| Economic Recovery and Policy | The President outlines his administration’s achievements in job creation, economic growth, and efforts to reduce inflation, emphasizing a middle-out economic approach. |
| Infrastructure and Manufacturing | Biden highlights investments in infrastructure, clean energy, and manufacturing, including specific projects and acts that have contributed to job creation and economic development. |
| Healthcare | He details achievements in healthcare reform, including prescription drug pricing, expansion of Medicare, and initiatives focused on women’s health research. |
| Housing and Education | The President proposes solutions for the housing crisis and outlines plans for improving education from preschool to college affordability. |
| Tax Reform and Fiscal Responsibility | Biden proposes tax reforms targeting corporations and the wealthy to ensure fairness and fund his policy initiatives, contrasting his approach with the previous administration. |
| Climate Change and Environmental Policy | He discusses significant actions taken to address climate change, emphasizing job creation in clean energy and conservation efforts. |
| Immigration and Border Security | Biden contrasts his immigration policy with his predecessor’s, advocating for a bipartisan border security bill and a compassionate approach to immigration. |
| Voting Rights and Civil Rights | The President calls for the passage of voting rights legislation and addresses issues around diversity, equality, and the protection of fundamental rights. |
| Gun Violence Prevention | He shares personal stories to underscore the urgency of gun violence prevention, celebrating past achievements and calling for further action. |
| Foreign Conflicts | Biden addresses the conflict in the Middle East, emphasizing the need for humanitarian aid and a two-state solution for Israel and Palestine. |
| Global Leadership and Competition | The President discusses the U.S.’s economic competition with China, American leadership on the global stage, and the importance of alliances. |
| Vision for America’s Future | Biden concludes with an optimistic vision for America, focusing on unity, democracy, and a future where America leads by example. |

Most important or provocative statements

Here we find important or provocative statements in the speech via an LLM synthesis:

my $imp = llm-synthesize([
    "Give the most important or provocative statements in the following speech.\n\n", 
    $txtEN,
    "Give the results as a JSON array with subject-statement pairs.",
    llm-prompt('NothingElse')('JSON')
    ], e => $conf, form => sub-parser('JSON'):drop);

$imp.&dimensions
# (15 2)

Show the important or provocative statements in Markdown format:

#% html
$imp ==> data-translation(field-names => <subject statement>)
| subject | statement |
|---|---|
| Global politics and security | Overseas, Putin of Russia is on the march, invading Ukraine and sowing chaos throughout Europe and beyond. |
| Support for Ukraine | But Ukraine can stop Putin if we stand with Ukraine and provide the weapons it needs to defend itself. |
| Domestic politics | A former American President actually said that, bowing down to a Russian leader. |
| NATO | Today, we’ve made NATO stronger than ever. |
| Democracy and January 6th | January 6th and the lies about the 2020 election, and the plots to steal the election, posed the gravest threat to our democracy since the Civil War. |
| Reproductive rights | Guarantee the right to IVF nationwide! |
| Economic recovery and policies | 15 million new jobs in just three years – that’s a record! |
| Healthcare and prescription drug costs | Instead of paying $400 a month for insulin seniors with diabetes only have to pay $35 a month! |
| Education and tuition | Let’s continue increasing Pell Grants for working- and middle-class families. |
| Tax reform | It’s time to raise the corporate minimum tax to at least 21% so every big corporation finally begins to pay their fair share. |
| Gun violence prevention | I’m demanding a ban on assault weapons and high-capacity magazines! |
| Immigration | Send me the border bill now! |
| Climate action | I am cutting our carbon emissions in half by 2030. |
| Israel and Gaza conflict | Israel has a right to go after Hamas. |
| Vision for America’s future | I see a future where the middle class finally has a fair shot and the wealthy finally have to pay their fair share in taxes. |

Summary and recommendations

Here we get a summary and extract ideas, quotes, and recommendations from the speech:

my $sumIdea = llm-synthesize(llm-prompt("ExtractArticleWisdom")($txtEN), e => $conf);

text-stats($sumIdea)
# (chars => 6166 words => 972 lines => 95)

The result is rendered below.


#% markdown
$sumIdea.subst(/ ^^ '#' /, '###', :g)

SUMMARY

President Joe Biden delivered the State of the Union Address on March 7, 2024, focusing on the challenges and opportunities facing the United States. He discussed the assault on democracy, the situation in Ukraine, domestic policies including healthcare and economy, and the need for unity and progress in addressing national and international issues.

IDEAS:

  • The historical context of challenges to freedom and democracy, drawing parallels between past and present threats.
  • The role of the United States in supporting Ukraine against Russian aggression.
  • The importance of bipartisan support for national security and democracy.
  • The need for America to take a leadership role in NATO and support its allies.
  • The dangers of political figures undermining democratic values for personal or political gain.
  • The connection between domestic policies and the strength of democracy, including reproductive rights and healthcare.
  • The economic recovery and growth under the Biden administration, emphasizing job creation and infrastructure improvements.
  • The focus on middle-class prosperity and the role of unions in economic recovery.
  • The commitment to addressing climate change and promoting clean energy jobs.
  • The significance of education, from pre-school access to college affordability, in securing America’s future.
  • The need for fair taxation and closing loopholes for the wealthy and corporations.
  • The importance of healthcare reform, including lowering prescription drug costs and expanding Medicare.
  • The commitment to protecting Social Security and Medicare from cuts.
  • The approach to immigration reform and border security as humanitarian and security issues.
  • The stance on gun violence prevention and the need for stricter gun control laws.
  • The emphasis on America’s resilience and optimism for the future.
  • The call for unity in defending democracy and building a better future for all Americans.
  • The vision of America as a land of possibilities, with a focus on progress and inclusivity.
  • The acknowledgment of America’s role in the world, including support for Israel and a two-state solution with Palestine.
  • The importance of scientific research and innovation in solving major challenges like cancer.

QUOTES:

  • “Not since President Lincoln and the Civil War have freedom and democracy been under assault here at home as they are today.”
  • “America is a founding member of NATO the military alliance of democratic nations created after World War II to prevent war and keep the peace.”
  • “History is watching, just like history watched three years ago on January 6th.”
  • “You can’t love your country only when you win.”
  • “Inflation has dropped from 9% to 3% – the lowest in the world!”
  • “The racial wealth gap is the smallest it’s been in 20 years.”
  • “I’ve been delivering real results in a fiscally responsible way.”
  • “Restore the Child Tax Credit because no child should go hungry in this country!”
  • “No billionaire should pay a lower tax rate than a teacher, a sanitation worker, a nurse!”
  • “We are the only nation in the world with a heart and soul that draws from old and new.”

HABITS:

  • Advocating for bipartisan cooperation in Congress.
  • Emphasizing the importance of education in personal growth and national prosperity.
  • Promoting the use of clean energy and sustainable practices to combat climate change.
  • Prioritizing healthcare reform to make it more affordable and accessible.
  • Supporting small businesses and entrepreneurship as engines of economic growth.
  • Encouraging scientific research and innovation, especially in healthcare.
  • Upholding the principles of fair taxation and economic justice.
  • Leveraging diplomacy and international alliances for global stability.
  • Committing to the protection of democratic values and institutions.
  • Fostering community engagement and civic responsibility.

FACTS:

  • The United States has welcomed Finland and Sweden into NATO, strengthening the alliance.
  • The U.S. economy has created 15 million new jobs in three years, a record number.
  • Inflation in the United States has decreased from 9% to 3%.
  • More people have health insurance in the U.S. today than ever before.
  • The racial wealth gap is the smallest it has been in 20 years.
  • The United States is investing more in research and development than ever before.
  • The Biden administration has made the largest investment in public safety ever through the American Rescue Plan.
  • The murder rate saw the sharpest decrease in history last year.
  • The United States is leading international efforts to provide humanitarian assistance to Gaza.
  • The U.S. has revitalized its partnerships and alliances in the Pacific region.

REFERENCES:

  • NATO military alliance.
  • Bipartisan National Security Bill.
  • Chips and Science Act.
  • Bipartisan Infrastructure Law.
  • Affordable Care Act (Obamacare).
  • Voting Rights Act.
  • Freedom to Vote Act.
  • John Lewis Voting Rights Act.
  • PRO Act for worker’s rights.
  • PACT Act for veterans exposed to toxins.

RECOMMENDATIONS:

  • Stand with Ukraine and support its defense against Russian aggression.
  • Strengthen NATO and support new member states.
  • Pass the Bipartisan National Security Bill to enhance U.S. security.
  • Guarantee reproductive rights nationwide and protect healthcare decisions.
  • Continue economic policies that promote job creation and infrastructure development.
  • Implement fair taxation for corporations and the wealthy to ensure economic justice.
  • Expand Medicare and lower prescription drug costs for all Americans.
  • Protect and strengthen Social Security and Medicare.
  • Pass comprehensive immigration reform and secure the border humanely.
  • Address climate change through significant investments in clean energy and jobs.
  • Promote education access from pre-school to college to ensure a competitive workforce.
  • Implement stricter gun control laws, including bans on assault weapons and universal background checks.
  • Support a two-state solution for Israel and Palestine and work towards peace in the Middle East.
  • Harness the promise of artificial intelligence while protecting against its perils.

Hidden and propaganda messages

In this section we try to find out whether the speech is apolitical and propaganda-free.

Remark: We leave it to the reader as an exercise to verify that both the overt and hidden messages found by the LLM below are explicitly stated in the article.

Here we find the hidden and “propaganda” messages in the article:

my $propMess = llm-synthesize([llm-prompt("FindPropagandaMessage"), $txtEN], e => $conf);

text-stats($propMess)
# (chars => 6441 words => 893 lines => 83)

Remark: The prompt “FindPropagandaMessage” has an explicit instruction to say that it is intentionally cynical. It is also marked as being “for fun.”

The LLM result is rendered below.


#% markdown
$propMess.subst(/ ^^ '#' /, '###', :g).subst(/ ^^ (<[A..Z \h \']>+ ':') /, { "### {$0.Str} \n"}, :g)

OVERT MESSAGE

President Biden emphasizes democracy, support for Ukraine, and domestic advancements in his address.

HIDDEN MESSAGE

Biden seeks to consolidate Democratic power by evoking fear of Republican governance and foreign threats.

HIDDEN OPINIONS

  • Democratic policies ensure national and global security effectively.
  • Republican opposition jeopardizes both national unity and international alliances.
  • Historical comparisons highlight current threats to democracy as unprecedented.
  • Support for Ukraine is a moral and strategic imperative for global democracy.
  • Criticism of the Supreme Court’s decisions reflects a push for legislative action on contentious issues.
  • Emphasis on job creation and economic policies aims to showcase Democratic governance success.
  • Investments in infrastructure and technology are crucial for future American prosperity.
  • Health care reforms and education investments underscore a commitment to social welfare.
  • Climate change initiatives are both a moral obligation and economic opportunity.
  • Immigration reforms are positioned as essential to American identity and values.

SUPPORTING ARGUMENTS and QUOTES

  • Comparisons to past crises underscore the urgency of current threats.
  • Criticism of Republican predecessors and Congress members suggests a need for Democratic governance.
  • References to NATO and Ukraine highlight a commitment to international democratic principles.
  • Mention of Supreme Court decisions and calls for legislative action stress the importance of Democratic control.
  • Economic statistics and policy achievements are used to argue for the effectiveness of Democratic governance.
  • Emphasis on infrastructure, technology, and climate investments showcases forward-thinking policies.
  • Discussion of health care and education reforms highlights a focus on social welfare.
  • The portrayal of immigration reforms reflects a foundational American value under Democratic leadership.

DESIRED AUDIENCE OPINION CHANGE

  • See Democratic policies as essential for both national and global security.
  • View Republican opposition as a threat to democracy and unity.
  • Recognize the urgency of supporting Ukraine against foreign aggression.
  • Agree with the need for legislative action on Supreme Court decisions.
  • Appreciate the success of Democratic economic and infrastructure policies.
  • Support Democratic initiatives on climate change as crucial for the future.
  • Acknowledge the importance of health care and education investments.
  • Value immigration reforms as core to American identity and values.
  • Trust in Democratic leadership for navigating global crises.
  • Believe in the effectiveness of Democratic governance for social welfare.

DESIRED AUDIENCE ACTION CHANGE

  • Support Democratic candidates in elections.
  • Advocate for legislative action on contentious Supreme Court decisions.
  • Endorse and rally for Democratic economic and infrastructure policies.
  • Participate in initiatives supporting climate change action.
  • Engage in advocacy for health care and education reforms.
  • Embrace and promote immigration reforms as fundamental to American values.
  • Voice opposition to Republican policies perceived as threats to democracy.
  • Mobilize for international solidarity, particularly regarding Ukraine.
  • Trust in and amplify the successes of Democratic governance.
  • Actively defend democratic principles both nationally and internationally.

MESSAGES

President Biden wants you to believe he is advocating for democracy and progress, but he is actually seeking to consolidate Democratic power and diminish Republican influence.

PERCEPTIONS

President Biden wants you to believe he is a unifier and protector of democratic values, but he’s actually a strategic politician emphasizing Democratic successes and Republican failures.

ELLUL’S ANALYSIS

According to Jacques Ellul’s “Propaganda: The Formation of Men’s Attitudes,” Biden’s address exemplifies modern political propaganda through its strategic framing of issues, historical comparisons, and appeals to democratic ideals. Ellul would likely note the address’s dual function: to solidify in-group unity (among Democratic supporters) and to subtly influence the broader public’s perceptions of domestic and international challenges. The speech leverages crises as opportunities for reinforcing the necessity of Democratic governance, illustrating Ellul’s observation that effective propaganda exploits existing tensions to achieve political objectives.

BERNAYS’ ANALYSIS

Based on Edward Bernays’ “Propaganda” and “Engineering of Consent,” Biden’s speech can be seen as an exercise in shaping public opinion towards Democratic policies and leadership. Bernays would recognize the sophisticated use of symbols (e.g., references to historical events and figures) and emotional appeals to construct a narrative that positions Democratic governance as essential for the nation’s future. The speech’s emphasis on bipartisan achievements and calls for legislative action also reflect Bernays’ insights into the importance of creating a perception of consensus and societal progress.

LIPPMANN’S ANALYSIS

Walter Lippmann’s “Public Opinion” offers a perspective on how Biden’s address attempts to manufacture consent for Democratic policies by presenting a carefully curated version of reality. Lippmann would likely point out the strategic selection of facts, statistics, and stories designed to reinforce the audience’s existing preconceptions and to guide them towards desired conclusions. The address’s focus on bipartisan accomplishments and urgent challenges serves to create an environment where Democratic solutions appear both reasonable and necessary.

FRANKFURT’S ANALYSIS

Harry G. Frankfurt’s “On Bullshit” provides a lens for criticizing the speech’s relationship with truth and sincerity. Frankfurt might argue that while the address purports to be an honest assessment of the nation’s state, it strategically blurs the line between truth and falsehood to serve political ends. The speech’s selective presentation of facts and omission of inconvenient truths could be seen as indicative of a broader political culture where the distinction between lying and misleading is increasingly irrelevant.

NOTE: This AI is tuned specifically to be cynical and politically-minded. Don’t take it as perfect. Run it multiple times and/or go consume the original input to get a second opinion.


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[AA3] Anton Antonov, “LLM aids for processing Putin’s State-Of-The-Nation speech” (2024), RakuForPrediction at WordPress.

[AA4] Anton Antonov, “Comprehension AI Aids for ‘Can AI Solve Science?’”, (2024), RakuForPrediction at WordPress.

[OAIb1] OpenAI team, “New models and developer products announced at DevDay”, (2023), OpenAI/blog.

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::PaLM Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, WWW::MistralAI Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::LLaMA Raku package, (2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.

[AAp6] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.

[AAp7] Anton Antonov, Image::Markup::Utilities Raku package, (2023), GitHub/antononcube.

Videos

[AAv1] Anton Antonov, “Jupyter Chatbook LLM cells demo (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Jupyter Chatbook multi cell LLM chats teaser (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv3] Anton Antonov, “Integrating Large Language Models with Raku”, (2023), YouTube/@therakuconference6823.

Notebook transformations

Introduction

In this blog post we describe a series of different (computational) notebook transformations using different tools. We are using a series of recent articles and notebooks for processing the English and Russian texts of a recent 2-hour long interview. The workflows given in the notebooks are in Raku and Wolfram Language (WL).

Remark: Wolfram Language (WL) and Mathematica are used as synonyms in this document.

Remark: Using notebooks with Large Language Model (LLM) workflows is convenient because the WL LLM functions are also implemented in Python and Raku, [AA1, AAp1, AAp2].

We can say that this blog post attempts to advertise the Raku package “Markdown::Grammar”, [AAp3], demonstrated in the videos [AAv5, AAv6].

TL;DR: Using Markdown as an intermediate format, we can easily transform between Jupyter and Mathematica notebooks.
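The round trip can be sketched with a few shell commands. This is only a sketch: `jupyter nbconvert` and `jupytext` are the standard tools for the Jupyter side, but the `from-markdown` CLI name and its flags for “Markdown::Grammar” are assumptions to check against the package’s documentation.

```shell
# Jupyter notebook -> Markdown, using Jupyter's built-in converter
jupyter nbconvert --to markdown notebook.ipynb   # writes notebook.md

# Markdown -> Mathematica notebook, using "Markdown::Grammar"
# (CLI name and flags assumed; see the package README)
from-markdown notebook.md -t=mathematica -o=notebook.nb

# Markdown -> Jupyter notebook, using jupytext
jupytext --from md --to ipynb notebook.md        # writes notebook.ipynb
```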


Transformation trip

The transformation trip starts with the notebook of the article  “LLM aids for processing of the first Carlson-Putin interview”, [AA1]. 

  1. Make the Raku Jupyter notebook
  2. Convert the Jupyter notebook into Markdown
    • Using Jupyter’s built-in converter
  3. Publish the Markdown version to WordPress, [AA2]
  4. Convert the Markdown file into a Mathematica notebook
  5. Publish that to Wolfram Community
    • That notebook was deleted by moderators, because it does not feature Wolfram Language (WL)
  6. Make the corresponding Mathematica notebook using WL LLM functions
  7. Publish to Wolfram Community
  8. Make the Russian version with the Russian transcript
  9. Publish to Wolfram Community
    • That notebook was deleted by the moderators, because it is not in English
  10. Convert the Mathematica notebook to Markdown
    • Using Kuba Podkalicki’s M2MD, [KPp1]
  11. Publish to WordPress, [AA3]
  12. Convert the Markdown file to Jupyter
  13. Re-make the (Russian described) workflows using Raku, [AAn5]
  14. Re-make workflows using Python, [AAn6], [AAn7]

Here is the corresponding Mermaid-JS diagram (using the package “WWW::MermaidInk”, [AAp6]):

use WWW::MermaidInk;

my $diagram = q:to/END/;
graph TD
A[Make the Raku Jupyter notebook] --> B[Convert the Jupyter notebook into Markdown]
B --> C[Publish to WordPress]
C --> D[Convert the Markdown file into a Mathematica notebook]
D --> E[Publish that to Wolfram Community]
E --> F[Make the corresponding Mathematica notebook using WL functions]
F --> G[Publish to Wolfram Community]
G --> H[Make the Russian version with the Russian transcript]
H --> I[Publish to Wolfram Community]
I --> J[Convert the Mathematica notebook to Markdown]
J --> K[Publish to WordPress]
K --> L[Convert the Markdown file to Jupyter]
L --> M[Re-make the workflows using Raku]
M --> N[Re-make the workflows using Python]
N -.-> Nen([English])
N -.-> Nru([Russian])
C -.-> WordPress{{Word Press}}
K -.-> WordPress
E -.-> |Deleted:<br>features Raku| WolframCom{{Wolfram Community}}
G -.-> WolframCom
I -.-> |"Deleted:<br>not in English"|WolframCom
D -.-> MG[[Markdown::Grammar]]
B -.-> Ju{{Jupyter}}
L -.-> jupytext[[jupytext]]
J -.-> M2MD[[M2MD]]
E -.-> RakuMode[[RakuMode]]
END

say mermaid-ink($diagram, format => 'md-image');

Clarifications

Russian versions

The first Carlson-Putin interview that is processed in the notebooks was held both in English and Russian. I think doing only the English study is “half-baked.” Hence, I did the workflows with the Russian text and translated the related explanations into Russian.

Remark: The Russian versions are done in all three programming languages: Python, Raku, Wolfram Language. See [AAn4, AAn5, AAn7].

Using different programming languages

From my point of view, having a Raku-enabled Mathematica / WL notebook is a strong statement about WL. A fair amount of coding was required for the paclet “RakuMode”, [AAp4].

Implementing that functionality is predicated on WL having extensive external-evaluation capabilities.

When we compare WL, Python, and R over Machine Learning (ML) projects, WL always appears to be the best choice for ML. (Overall.)

I do use these sets of comparison posts at Wolfram Community to support my arguments in discussions regarding which programming language is better. (Or bigger.)

Example comparison: WL workflows

The following three Wolfram Community posts are more or less the same content — “Workflows with LLM functions” — but in different programming languages:

Example comparison: LSA over mandala collections

The following Wolfram Community posts are more or less the same content — “LSA methods comparison over random mandalas deconstruction”, [AAv1] — but in different programming languages:

Remark: The movie, [AAv1], linked in those notebooks also shows a comparison with the LSA workflow in R.

Using Raku with LLMs

I generally do not like using Jupyter notebooks, but using Raku with LLMs in them is very convenient, [AAv2, AAv3, AAv4]. WL is clunkier when it comes to pre- or post-processing of LLM results.

Also, Raku chatbooks, [AAp5], provide a better environment for displaying the often Markdown-formatted results of LLMs. (Like the ones in the notebooks discussed here.)


References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (2024), RakuForPrediction at WordPress.

[AA3] Anton Antonov, “LLM помогает в обработке первого интервью Карлсона-Путина”, (2024), MathematicaForPrediction at WordPress.

[AA4] Anton Antonov, “Markdown to Mathematica converter”, (2022). Wolfram Community.

Notebooks

[AAn1] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (Raku/Jupyter), (2024), RakuForPrediction-book at GitHub/antononcube.

[AAn2] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (Raku/Mathematica), (2024), WolframCloud/antononcube.

[AAn3] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (WL/Mathematica), (2024), WolframCloud/antononcube.

[AAn4] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (in Russian), (WL/Mathematica), (2024), WolframCloud/antononcube.

[AAn5] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (in Russian), (Raku/Jupyter), (2024), RakuForPrediction-book at GitHub/antononcube.

[AAn6] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (Python/Jupyter), (2024), PythonForPrediction-blog at GitHub/antononcube.

[AAn7] Anton Antonov, “LLM aids for processing of the first Carlson-Putin interview”, (in Russian), (Python/Jupyter), (2024), PythonForPrediction-blog at GitHub/antononcube.

Packages, paclets

[AAp1] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Prompts Raku package, (2023), GitHub/antononcube.

[AAp3] Anton Antonov, Markdown::Grammar Raku package, (2022-2023), GitHub/antononcube.

[AAp4] Anton Antonov, RakuMode WL paclet, (2022-2023), Wolfram Language Paclet Repository.

[AAp5] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp6] Anton Antonov, WWW::MermaidInk Raku package, (2023), GitHub/antononcube.

[KPp1] Kuba Podkalicki’s, M2MD WL paclet, (2018-2023), GitHub/kubaPod.

Videos

[AAv1] Anton Antonov, “Random Mandalas Deconstruction in R, Python, and Mathematica (Greater Boston useR Meetup, Feb 2022)”, (2022), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Jupyter Chatbook LLM cells demo (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv3] Anton Antonov, “Jupyter Chatbook multi cell LLM chats teaser (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv4] Anton Antonov, “Integrating Large Language Models with Raku”, (2023), YouTube/@therakuconference6823.

[AAv5] Anton Antonov, “Markdown to Mathematica converter (CLI and StackExchange examples)”, (2022), Anton A. Antonov’s channel at YouTube.

[AAv6] Anton Antonov, “Markdown to Mathematica converter (Jupyter notebook example)”, (2022), Anton A. Antonov’s channel at YouTube.

AI vision via Raku

Introduction

In the fall of 2023 OpenAI introduced the image vision model “gpt-4-vision-preview”, [OAIb1].

The model “gpt-4-vision-preview” represents a significant enhancement to the GPT-4 model, providing developers and AI enthusiasts with a more versatile tool capable of interpreting and narrating images alongside text. This development opens up new possibilities for creative and practical applications of AI in various fields.

For example, consider the following Raku-developer-centric applications:

  • Narration of UML diagrams
  • Code generation from narrated (and suitably tweaked) narrations of architecture diagrams and charts
  • Generating presentation content draft from slide images
  • Extracting information from technical plots
  • etc.

A more diverse set of the applications would be:

  • Dental X-ray images narration
  • Security or baby camera footage narration
    • How many people or cars are seen, etc.
  • Transportation trucks content descriptions
    • Wood logs, alligators, boxes, etc.
  • Web page visible elements descriptions
    • Top menu, biggest image seen, etc.
  • Creation of recommender systems for image collections
    • Based on both image features and image descriptions
  • etc.

As a first concrete example, consider the following image that fable-dramatizes the release of Raku’s 2023.11 version (https://i.imgur.com/UcRYl9Yl.jpg):

Here is its narration:

#% bash
openai-playground --images=https://i.imgur.com/UcRYl9Yl.jpg Very concisely describe the image
The image depicts a vibrant, colorful illustration of two raccoons under a large tree adorned with various fruits, surrounded by butterflies, with a landscape of trees and a clear sky in the background. There's a date "2023.11" on the tree trunk.

Ways to use with Raku

There are five ways to utilize image interpretation (or vision) services in Raku:

  • Command Line Interface (CLI) script, [AAp1]
  • Dedicated Web API functions, [AAp1]
  • LLM functions, [AAp2]
  • Dedicated notebook cell type, [AAp3, AAv1]
  • Any combinations of the above

The first three and the fifth are demonstrated in this document. The fourth one is still “under design and consideration.”

Remark: The document’s structure reflects the list of the “five ways” above.

Remark: The model “gpt-4-vision-preview” is provided as a “chat completion model”; therefore, in this document we consider it to be a Large Language Model (LLM).

Remark: This document was prepared as a Jupyter chatbook, [AAp3], and then converted to Markdown and to a Mathematica / Wolfram Language notebook.

Packages

Here are the Raku packages (namespaces) used below, [AAp1, AAp2, AAp3]:

use WWW::OpenAI;
use WWW::OpenAI::ChatCompletions;
use WWW::MermaidInk;
use Lingua::Translation::DeepL; 
use Data::Translators;
use LLM::Functions;

Remark: The Jupyter kernel of “Jupyter::Chatbook”, [AAp3], automatically loads the packages “Data::Translators” and “LLM::Functions”. The functionalities of the packages “WWW::OpenAI” and “WWW::MermaidInk” are available in chatbooks through dedicated cells, [AAv1, AAv3].

Images

Here are the links to all images used in this document:

| Name | Link |
|------|------|
| Raccoons chasing butterflies | https://i.imgur.com/UcRYl9Yl.jpg |
| LLM functionalities mind-map | https://imgur.com/kcUcWnq |
| Single sightseer | https://i.imgur.com/LEGfCeql.jpg |
| Three hunters | https://raw.githubusercontent.com/antononcube/Raku-WWW-OpenAI/main/resources/ThreeHunters.jpg |
| Raku Jupyter Chatbook solution | https://imgur.com/22lXXks |
| Cyber Week Spending Set to Hit New Highs in 2023 | https://cdn.statcdn.com/Infographic/images/normal/7045.jpeg |

CLI

The simplest way to use the OpenAI’s vision service is through the CLI script of “WWW::OpenAI”, [AAp1]. (Already demoed in the introduction.)

Here is an image that summarizes how Jupyter Chatbooks work (see [AAp3, AAv1, AAv2]):

Here is a CLI shell command that requests the image above to be described (using at most 900 tokens):

#% bash
openai-playground --images=https://i.imgur.com/22lXXks.jpg --max-tokens=900 Describe the image
The image displays a flowchart with a structured sequence of operations or processes. The chart is divided into several areas with different headings that seem to be part of a software system or application. The main areas identified in the flowchart are "Message evaluation," "LLM interaction," "Chatbook frontend," "Chatbook backend," and "Prompt processing."

Starting from the left, the message evaluation feeds into "LLM interaction" where there are three boxes labeled "LLM::Functions," "PaLM," and "OpenAI," suggesting these are different functions or APIs that can be interacted with.

In the "Chatbook frontend," there is a process that begins with a "Chat cell" that leads to a decision point asking if "Chat ID specified?" Based on the answer, it either assumes 'NONE' for the chat ID or proceeds with the specified ID.

In the "Chatbook backend," there is a check to see if the "Chat ID exists in DB?" If not, a new chat object is created; otherwise, an existing chat object is retrieved from the "Chat objects" storage.

Finally, the "Prompt processing" area involves parsing a "Prompt DSL spec" and checking if known prompts are found. If they are, it leads to "Prompt expansion" and interacts with "LLM::Prompts" to possibly generate prompts.

Dotted lines indicate references or indirect interactions, while solid lines represent direct flows or processes. The chart is colored in shades of yellow and purple, which may be used to differentiate between different types of processes or to highlight the flow of information.

The flowchart is a typical representation of a software or system architecture, outlining how different components interact and what processes occur based on various conditions.

Shell pipelines can be constructed using the CLI scripts of the packages loaded above. For example, here is a pipeline that translates the obtained image description from English to Bulgarian using the package “Lingua::Translation::DeepL”, [AAp5]:

#% bash
openai-playground --images=https://i.imgur.com/22lXXks.jpg --max-tokens=900 'Very concisely describe the image' | deepl-translation -t=Bulgarian
Изображението представлява блок-схема, която очертава процес, включващ оценка на съобщенията, взаимодействие с големи езикови модели (LLM) като PaLM и OpenAI и бекенд система за обработка на чат взаимодействия. Процесът включва стъпки за анализиране на подсказките, проверка за известни подсказки и управление на чат обекти в база данни. Изглежда, че това е системен дизайн за обработка и реагиране на потребителски входове в приложение за чат.

Of course, we can just request OpenAI’s vision to give the image description in whatever language we want (say, by using emojis):

#% bash
openai-playground --images=https://i.imgur.com/22lXXks.jpg --max-tokens=900 Very concisely describe the image in 🇷🇺
Это изображение диаграммы потока данных или алгоритма, на котором представлены различные этапы взаимодействия и обработки сообщений в компьютерной системе. На диаграмме есть блоки с надписями и стрелки, указывающие направление потока данных.

Web API functions

Within a Raku script or REPL session OpenAI’s vision service can be accessed with the functions openai-completion or openai-playground.

Remark: The function openai-playground is an umbrella function that redirects to various “specialized” functions for interacting with different OpenAI services. openai-completion is one of them. Other specialized functions are those for moderation, vector embeddings, and audio transcription and processing; see [AAp1].

If the function openai-completion is given a list of images, a textual result corresponding to those images is returned. The argument “images” is a list of image URLs, image file names, or image Base64 representations. (Any combination of those element types can be specified.)

Before demonstrating the vision functionality below we first obtain and show a couple of images.

Images

Here is a URL of an image: (https://i.imgur.com/LEGfCeql.jpg). Here is the image itself:

Next, we demonstrate how to display the second image by using the file path and the encode-image function from the WWW::OpenAI::ChatCompletions namespace. The encode-image function converts image files into Base64 image strings, which are a type of text representation of the image.

When we use the openai-completion function and provide a file name under the “images” argument, the encode-image function is automatically applied to that file.

Here is an example of how we apply encode-image to the image from a given file path ($*HOME ~ '/Downloads/ThreeHunters.jpg'):

my $img3 = WWW::OpenAI::ChatCompletions::encode-image($*HOME ~ '/Downloads/ThreeHunters.jpg');
"![]($img3)"

Remark: The “three hunters” image is a resource file of “WWW::OpenAI”, [AAp1].

Image narration

Here is an image narration example with the two images above, again, one specified with a URL, the other with a file path:

my $url1 = 'https://i.imgur.com/LEGfCeql.jpg';
my $fname2 = $*HOME ~ '/Downloads/ThreeHunters.jpg'; 
my @images = [$url1, $fname2]; 

openai-completion("Give concise descriptions of the images.", :@images, max-tokens => 900, format => 'values');
1. The first image features a single raccoon perched on a tree branch surrounded by a multitude of colorful butterflies in an array of blues and oranges, set against a vibrant, nature-themed backdrop.

2. The second image depicts three raccoons on a tree branch in a forest setting, with two of them looking towards the viewer and one looking to the side. The background is filled with autumnal-colored leaves and numerous butterflies that match the whimsical atmosphere of the scene.

Description of a mind-map

Here is an application that should be more appealing to Raku-developers — getting a description of a technical diagram or flowchart. Well, in this case, it is a mind-map from [AA1]:

Here we get the vision model’s description of the mind-map above (and place the output in Markdown format):

my $mm-descr = 
    openai-completion(
        "How many branches this mind-map has? Describe each branch separately. Use relevant emoji prefixes.", 
        images => 'https://imgur.com/kcUcWnq.jpeg', 
        max-tokens => 1024,
        format => 'values'
    );

$mm-descr
The mind-map has five branches, each representing a different aspect or functionality related to LLM (Large Language Models) services access. Here's the description of each branch with relevant emoji prefixes:

1. 🎨 **DALL-E**: This branch indicates that DALL-E, an AI system capable of creating images from textual descriptions, is related to or a part of LLM services.

2. 🤖 **ChatGPT**: This branch suggests that ChatGPT, which is likely a conversational AI based on GPT (Generative Pre-trained Transformer), is associated with LLM services.

3. 🧠 **PaLM**: This branch points to PaLM, suggesting that it is another model or technology related to LLM services. PaLM might stand for a specific model or framework in the context of language processing.

4. 💬 **LLM chat objects**: This branch leads to a node indicating chat-related objects or functionalities that are part of LLM services.

5. 📚 **Chatbooks**: This branch leads to a concept called "Chatbooks," which might imply a feature or application related to creating books from chat or conversational content using LLM services.

Each of these branches emanates from the central node labeled "LLM services access," indicating that they are all different access points or functionalities within the realm of large language model services.

Here, from the obtained description, we request a (new) Mermaid-JS diagram to be generated:

my $mmd-chart = llm-synthesize(["Make the corresponding Mermaid-JS diagram code for the following description. Give the code only, without Markdown symbols.", $mm-descr], e=>'ChatGPT')
graph LR
A[LLM services access] 
B[DALL-E]-->A 
C[ChatGPT]-->A 
D[PaLM]-->A 
E[LLM chat objects]-->A 
F[Chatbooks]-->A

Here is a diagram made with the Mermaid-JS spec obtained above, using a function of “WWW::MermaidInk”, [AAp4]:

#% markdown 
mermaid-ink($mmd-chart, format=>'md-image')

Remark: In a Jupyter chatbook, [AAp3], Mermaid-JS diagrams can be “directly” visualized with notebook cells that have the magic mermaid. Below is given an instance of one of the better LLM results for making a Mermaid-JS diagram over the “vision-derived” mind-map description.

#% markdown
mermaid-ink('
graph TB
    A[LLM services access] --> B[DALL-E]
    A --> C[ChatGPT]
    A --> D[PaLM]
    A --> E[LLM chat objects]
    A --> F[Chatbooks]
    B -->|related to| G[DALL-E AI system]
    C -->|associated with| H[ChatGPT]
    D -->|related to| I[PaLM model]
    E -->|part of| J[chat-related objects/functionalities]
    F -->|implies| K[Feature or application related to chatbooks]
', format => 'md-image')

Here is an example of code generation based on the “vision derived” mind-map description above:

#% markdown
llm-synthesize([ "Generate Raku code -- using Markdown markings -- with an object oriented hierarchy corresponding to the description:\n", $mm-descr], e=>'ChatGPT')
class LLM::ServiceAccess {
    has DALLE $.dalle;
    has ChatGPT $.chatgpt;
    has PaLM $.palm;
    has LLMChatObjects $.llm-chat-objects;
    has Chatbooks $.chatbooks;
}

class DALLE {
    # Implementation for DALL-E functionality
}

class ChatGPT {
    # Implementation for ChatGPT functionality
}

class PaLM {
    # Implementation for PaLM functionality
}

class LLMChatObjects {
    # Implementation for LLM chat objects
}

class Chatbooks {
    # Implementation for Chatbooks functionality
}

# Usage
my $llm-service-access = LLM::ServiceAccess.new(
    dalle => DALLE.new,
    chatgpt => ChatGPT.new,
    palm => PaLM.new,
    llm-chat-objects => LLMChatObjects.new,
    chatbooks => Chatbooks.new,
);

LLM Functions

Let us show programmatic utilizations of the vision capabilities.

Here is the workflow we consider:

  1. Ingest an image file and encode it into a Base64 string
  2. Make an LLM configuration with that image string (and a suitable model)
  3. Synthesize a response to a basic request (like, image description)
    • Using llm-synthesize
  4. Make an LLM function for asking different questions over the image
    • Using llm-function
  5. Ask questions and verify results
    • ⚠️ Answers to “hard” numerical questions are often wrong.

Image ingestion and encoding

Here we ingest an image and display it:

#% markdown
my $imgBarChart = WWW::OpenAI::ChatCompletions::encode-image($*HOME ~ '/Downloads/Cyber-Week-Spending-Set-to-Hit-New-Highs-in-2023-small.jpeg');
"![]($imgBarChart)"

Remark: The image was downloaded from the post “Cyber Week Spending Set to Hit New Highs in 2023”.

Configuration and synthesis

Here we make a suitable LLM configuration with the image:

my $confImg = llm-configuration("ChatGPT", model => 'gpt-4-vision-preview', images => $imgBarChart, temperature => 0.2);
$confImg.WHAT
(Configuration)

Here we synthesize a response to an image description request:

llm-synthesize("Describe the image.", e=> $confImg)
The image is a bar chart titled "Cyber Week Spending Set to Hit New Highs in 2023". It shows estimated online spending on Thanksgiving weekend in the United States for the years 2019, 2020, 2021, 2022, and a forecast for 2023. The spending is broken down by three days: Thanksgiving Day, Black Friday, and Cyber Monday.

Each year is represented by a different color, with bars for each day showing the progression of spending over the years. The spending amounts range from $0B to $12B. The chart indicates an overall upward trend in spending, with the forecast for 2023 showing the highest spending across all three days.

In the top left corner of the chart, there is a small illustration of a computer with coins, suggesting online transactions. At the bottom, there is a note indicating that the forecast is based on data from Adobe Analytics. The Statista logo is visible in the bottom right corner, and there are Creative Commons and share icons in the bottom left corner.

Repeated questioning

Here we define an LLM function that allows multiple question invocations over the image:

my &fst = llm-function({"For the given image answer the question: $_ . Be as concise as possible in your answers."}, e => $confImg);
-> **@args, *%args { #`(Block|3507398517968) ... }
&fst('How many years are presented in that image?')
Five years are presented in the image.
&fst('Which year has the highest value? What is that value?')
The year with the highest value is 2023, with a value of just over $11 billion.

Remark: Numerical value readings over technical plots or charts often seem to be wrong. OpenAI’s vision model warns about this in its responses often enough.
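One way to hedge against such misreadings is to ask the same numerical question several times and compare the answers before trusting any single one. Here is a minimal sketch using the function `&fst` defined above:

```raku
# Sketch: repeat a numerical question and inspect the spread of answers
my @answers = (1..3).map({ &fst('Which year has the highest value? What is that value?') });
.say for @answers;
```

If the repeated readings disagree, the chart value should be checked by hand.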


Dedicated notebook cells

In the context of the “recently-established” notebook solution “Jupyter::Chatbook”, [AAp3], I am contemplating an extension to integrate OpenAI’s vision service.

The main challenges here include determining how users will specify images in the notebook, such as through URLs, file names, or Base64 strings, each with unique considerations. Additionally, I am exploring how best to enable users to input prompts or requests for image processing by the AI/LLM service.

This integration, while valuable, is not my immediate focus, as there are already programmatic ways to access OpenAI’s vision service. (See the previous section.)


Combinations (fairytale generation)

Consider the following computational workflow for making fairytales:

  1. Draw or LLM-generate a few images that characterize parts of a story.
  2. Narrate the images using the LLM “vision” functionality.
  3. Use an LLM to generate a story over the narrations.

Remark: Multi-modal LLM / AI systems already combine steps 2 and 3.

Remark: The workflow above (after it is programmed) can be executed multiple times until satisfactory results are obtained.

Here are image generations using DALL-E for four different requests with the same illustrator name in them:

my @story-images = [
"a girl gets a basket with wine and food for her grandma.",
"a big bear meets a girl carrying a basket in the forest.",
"a girl that gives food from a basket to a big bear.",
"a big bear builds a new house for girl's grandma."
].map({ openai-create-image( 'Painting in the style of John Bauer of ' ~ $_, response-format => 'b64_json', format => 'values') });

@story-images.elems
4

Here we display the images:

#% markdown 
@story-images.map({ '![](data:image/jpeg;base64,' ~ $_ ~ ')' }).join(' ' x 5)

Here we get the image narrations (via the OpenAI’s “vision service”):

my $story-images-descriptions = 
    openai-completion("Concisely describe the images.",
        images => @story-images.map({ 'data:image/jpeg;base64,' ~ $_ ~ '' }).List,
        max-tokens => 900,
        format => 'values'
    );

$story-images-descriptions 
1. The first image features a stylized painting of a woman in a red hood, carrying a large basket with a girl in white looking up at her.
2. The second image shows a dark, fairy tale-like forest scene where a young girl holding a basket is facing a large black bear.
3. The third image is an illustration of a girl offering a basket to a seated, gentle-looking bear in a green forest setting.
4. The final image depicts a bear on two legs, dressed in a blue dress, standing by a yellow house with a girl in blue in front of the door.

Here we extract the descriptions into a list:

#% markdown
my @descr = $story-images-descriptions.split("\n", :skip-empty).grep({ $_ ~~ / ^ \d /});
@descr ==> to-html()
  • 1. The first image features a stylized painting of a woman in a red hood, carrying a large basket with a girl in white looking up at her.
  • 2. The second image shows a dark, fairy tale-like forest scene where a young girl holding a basket is facing a large black bear.
  • 3. The third image is an illustration of a girl offering a basket to a seated, gentle-looking bear in a green forest setting.
  • 4. The final image depicts a bear on two legs, dressed in a blue dress, standing by a yellow house with a girl in blue in front of the door.

Here we generate the story from the descriptions above (using Google’s PaLM):

my $res = llm-synthesize(['Write a story that fits the following four descriptions:',|@descr], e=>llm-configuration('PaLM', max-tokens=>1200))
Once upon a time, there was a little girl named Gretel who lived with her mother in a small village on the edge of a dark and mysterious forest. One day, Gretel's mother fell ill and could not work. The little girl knew that they needed money to buy food, so she decided to go into the forest and find berries to sell at the market.

Gretel walked for a long time, but she could not find any berries. She was about to give up when she saw a large black bear sitting under a tree. The bear was licking its paws and looked very friendly.

Gretel was scared, but she knew that she had to do something. She walked up to the bear and said, "Hello, Mr. Bear. I am Gretel. I am looking for berries to sell at the market. Do you know where I can find some?"

The bear smiled and said, "Of course I do, Gretel. I know where all the best berries are. Follow me."

The bear led Gretel through the forest to a clearing. The clearing was full of beautiful berries. Gretel picked as many as she could carry and thanked the bear.

"You're welcome, Gretel," said the bear. "I'm glad I could help."

Gretel took the berries to the market and sold them. She made enough money to buy food for her mother and herself. The next day, Gretel went back to the forest to visit the bear. She brought him some bread and milk. The bear was very happy to see her.

"Thank you for the bread and milk, Gretel," said the bear. "You are very kind."

Gretel and the bear became friends. They would often meet in the forest and play together. One day, the bear said to Gretel, "Gretel, I would like to ask you something."

"What is it?" asked Gretel.

"I would like you to marry me," said the bear.

Gretel was surprised, but she was also happy. She said, "Yes, I will marry you."

Gretel and the bear were married in a small ceremony in the forest. They lived happily ever after in a little house on the edge of the forest.

References

Articles

[AA1] Anton Antonov, “Workflows with LLM functions”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “Re-programming to Python of LLM- and Chatbook packages”, (2023), RakuForPrediction at WordPress.

[OAIb1] OpenAI team, “New models and developer products announced at DevDay”, (2023), OpenAI/blog.

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.

[AAp3] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.

[AAp4] Anton Antonov, WWW::MermaidInk Raku package, (2023), GitHub/antononcube.

[AAp5] Anton Antonov, Lingua::Translation::DeepL Raku package, (2023), GitHub/antononcube.

Videos

[AAv1] Anton Antonov, “Jupyter Chatbook LLM cells demo (Raku)” (2023), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Jupyter Chatbook multi cell LLM chats teaser (Raku)” (2023), YouTube/@AAA4Prediction.

Integrating Large Language Models with Raku (TRC-2023)

Two weeks ago I gave the presentation titled “Integrating Large Language Models with Raku”. (Here is the video link.)

In this presentation we discuss different ways of using Large Language Models (LLMs) in Raku.

We consider using LLMs via:

The presentation has multiple demos and examples of LLM utilization that include:

Here is the mind-map used in the talk (has “clickable” hyperlinks):

Re-programming to Python of LLM- and Chatbook packages

Introduction

In this computational document (converted into a Markdown and/or blog post) I would like to describe my efforts to re-program the Large Language Models (LLM) Raku packages into Python packages.

I heavily borrowed use case ideas and functionality designs from LLM works of Wolfram Research, Inc. (WRI), see [SW1, SW2]. Hence, opportunistically, I am also going to include comparisons with Wolfram Language (WL) (aka Mathematica.)

Why doing this?

Here is a list of reasons why I did the Raku-to-Python reprogramming:

  • I mostly do that kind of re-programming to get new perspectives and, more importantly, to review and evaluate the underlying software architecture of the packages.
    • Generally speaking, my Raku packages are not used by others much, hence re-programming to any other language is a fairly good way to review and evaluate them.
  • Since I, sort of, “do not care” about Python, I usually try to make only “advanced” Minimal Viable Products (MVPs) in Python.
    • Hence, the brainstorming perspective of removing “the fluff” from the Raku packages.
  • Of course, an “advanced MVP” has a set of fairly useful functionalities.
    • If the scope of the package is small, I can make its Python translation as advanced (or better) than the corresponding Raku package.
  • Good, useful documentation is essential, hence:
    • I usually write “complete enough” (often “extensive”) documentation of the Raku packages I create and publish.
    • The Raku documentation is of course a good start for the corresponding Python documentation.
      • …and a way to review and evaluate it.
  • In the re-programming of the Raku LLM packages, I used a Raku Jupyter Chatbook for translation of Raku code into Python code.
    • In other words: I used LLMs to reprogram LLM interaction software.
    • That, of course, is a typical application of the principle “eat your own dog food.”
  • I also used a Raku chatbook to write the Python-centric article “Workflows with LLM functions”, [AAn3py].
  • The “data package” “LLM::Prompts” provides ≈200 prompts — it is beneficial to have those prompts in other programming languages.
    • The usefulness of chat cells in chatbooks is greatly enhanced with the prompt expansion provided by “LLM::Prompts”, [AAv2].
    • It was instructive to reprogram into Python the corresponding Domain Specific Language (DSL) for prompt specifications.
      • Again, an LLM interaction in a chatbook was used to speed-up the re-programming.

Article structure

  • Big picture use case warm-up
    Mind-map for LLMs and flowchart for chatbooks
  • Tabulated comparisons
    Quicker overview, clickable entries
  • LLM functions examples
    Fundamental in order to “manage” LLMs
  • LLM prompts examples
    Tools for pre-conditioning and bias (of LLMs)
  • Chatbook multi-cell chats
    Must have for LLMs
  • Observations, remarks, and conclusions
    Could be used to start the article with…
  • Future plans
    Missing functionalities

Big picture warm-up

Mind-map

Here is a mind-map aimed at assisting in understanding and evaluating the discussed LLM functionalities in this document:

Primary use case

The primary use case for LLMs in Raku is the following:

A Raku “chat notebook solution” — chatbook — that allows convenient access to LLM services and facilitates multiple multi-cell chat-interactions with LLMs.

We are interested in other types of workflows, but they would be either readily available or easy to implement if the primary use case is developed, tested, and documented.

An expanded version of the use-case formulation can be as follows:

The Raku chatbook solution aims to provide a user-friendly interface for interacting with LLM (Language Model) services and offers seamless integration for managing multiple multi-cell chats with LLMs. The key features of this solution include:

  1. Direct Access to LLM Services:
    The notebook solution provides a straightforward way to access LLM services without the need for complex setup or configuration. Users can easily connect to their preferred LLM service provider and start utilizing their language modeling capabilities.
  2. Easy Creation of Chat Objects:
    The solution allows users to effortlessly create chat objects within the notebook environment. These chat objects serve as individual instances for conducting conversations with LLMs and act as containers for storing chat-related information.
  3. Simple Access and Invocation of Chat Cells:
    Users can conveniently access and invoke chat cells within the notebook solution. Chat cells represent individual conversation steps or inputs given to the LLM. Users can easily interact with the LLM by adding, modifying, or removing chat cells.
  4. Native Support for Multi-Cell Chats:
    The notebook solution offers native support for managing multi-cell chats per chat object. Users can organize their conversations into multiple cells, making it easier to structure and navigate through complex dialogues. The solution ensures that the context and history of each chat object are preserved throughout the conversation.

Here is a flowchart that outlines the solution derived with the Raku LLM packages discussed below:

The flowchart represents the process for handling chat requests in the Raku chat notebook solution “Jupyter::Chatbook”, [AAp4p6]. (Also, for Python’s “JupyterChatbook”, [AAp4py].)

  1. When a chat request is received, the system checks if a Chat IDentifier (Chat ID) is specified.
    • If it is, the system verifies if the Chat ID exists in the Chat Objects Database (CODB).
    • If the Chat ID exists, the system retrieves the existing chat object from the database.
    • Otherwise, a new chat object is created.
  2. Next, the system parses the DSL spec of the prompt, which defines the structure and behavior of the desired response.
    • The parsed prompt spec is then checked against the Known Prompts Database (PDB) to determine if any known prompts match the spec.
    • If a match is found, the prompt is expanded, modifying the behavior or structure of the response accordingly.
  3. Once the prompt is processed, the system evaluates the chat message using the underlying LLM function.
    • This involves interacting with the OpenAI and PaLM models.
    • The LLM function generates a response based on the chat message and the prompt.
  4. The generated response is then displayed in the Chat Result Cell (CRCell) in the chat interface.
    • The system also updates the Chat Objects Database (CODB) to store the chat history and other relevant information.

Throughout this process, various components such as the frontend interface, backend logic, prompt processing, and LLM interaction work together to provide an interactive chat experience in the chatbook.
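Step 1 of the flowchart (looking up a chat object by Chat ID in the CODB, or creating a new one) can be sketched in Python. The names `chat_objects_db` and `get_chat_object` are illustrative assumptions, not the package’s API:

```python
# CODB sketch: maps Chat IDs to chat objects (here, plain dicts with a message list).
chat_objects_db = {}

def get_chat_object(chat_id=None):
    """Retrieve the chat object for chat_id if it exists in the CODB; otherwise create and register a new one."""
    if chat_id is not None and chat_id in chat_objects_db:
        return chat_objects_db[chat_id]
    # No (matching) Chat ID: create a fresh chat object, generating an ID if needed.
    obj = {"id": chat_id or f"chat{len(chat_objects_db)}", "messages": []}
    chat_objects_db[obj["id"]] = obj
    return obj
```

Because the same object is returned for a known Chat ID, the conversation context accumulated in its message list is preserved across cell invocations.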

Remark: The flowchart and explanations are also relevant, to a large degree, for WL’s chatbook solution, [SW2].


Tabulated comparisons

In this section we tabulate the corresponding packages of Raku, Python, and Wolfram Language. The corresponding demonstration videos are also tabulated.

Primary LLM packages

We can say that the Raku packages “LLM::Functions” and “LLM::Prompts” adopted the LLM designs by Wolfram Research, Inc. (WRI); see [SW1, SW2].

Here is a table with links to:

  • “Primary” LLM Raku packages
  • Corresponding Python packages
  • Corresponding Wolfram Language (WL) paclets and prompt repository
| What? | Raku | Python | WL |
|---|---|---|---|
| OpenAI access | WWW::OpenAI | openai | OpenAILink |
| PaLM access | WWW::PaLM | google-generativeai | PaLMLink |
| LLM functions | LLM::Functions | LLMFunctionObjects | LLMFunctions |
| LLM prompts | LLM::Prompts | LLMPrompts | Wolfram Prompt Repository |
| Chatbook | Jupyter::Chatbook | JupyterChatbook | Chatbook |
| Find textual answers | ML::FindTextualAnswer | LLMFunctionObjects | FindTextualAnswer |

Remark: There is a plethora of Python packages dealing with LLM and extending Jupyter notebooks with LLM services access.

Remark: Finding Textual Answers (FTA) was the primary motivator to implement the Raku package “LLM::Functions”. FTA is a fundamental functionality for the NLP Template Engine used to generate correct, executable code for different computational sub-cultures. See [AApwl1, AAv5].

Secondary LLM packages

The “secondary” LLM Raku packages — inspired from working with the “primary” LLM packages — are “Text::SubParsers” and “Data::Translators”.

Also, while using LLMs, conveniently and opportunistically is used the package “Data::TypeSystem”.

Here is a table of the Raku-Python correspondence:

| Post processing of LLM results | Raku | Python | WL |
|---|---|---|---|
| Extracting text elements | Text::SubParsers | part of LLMFunctionObjects | |
| Shapes and types | Data::TypeSystem | DataTypeSystem | |
| Converting to text formats | Data::Translators | | |
| Magic arguments parsing | Getopt::Long::Grammar | argparse | |
| Copy to clipboard | Clipboard | pyperclip et al. | CopyToClipboard |

Introductory videos

Here is a table of introduction and guide videos for using chatbooks:

| What | Raku | Python | WL |
|---|---|---|---|
| Direct LLM services access | Jupyter Chatbook LLM cells demo (Raku) (5 min) | Jupyter Chatbook LLM cells demo (Python) (4.8 min) | OpenAIMode demo (Mathematica) (6.5 min) |
| Multi-cell chat | Jupyter Chatbook multi cell LLM chats teaser (Raku) (4.2 min) | Jupyter Chatbook multi cell LLM chats teaser (Python) (4.5 min) | Chat Notebooks bring the power of Notebooks to LLMs (57 min) |

LLM functions

In this section we show examples of creation and invocation of LLM functions.

Because a name very close to “LLMFunctions” was already taken on PyPI.org, I used the name “LLMFunctionObjects” for the Python package.

That name is, actually, more faithful to the design and implementation of the Python package — the creator function llm_function produces function objects (or functors) that have the __call__ magic.
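Here is a minimal Python sketch of that functor design, with an echoing stand-in for the actual LLM call; the class body and default behavior are illustrative, not the package’s implementation:

```python
class LLMFunctionObject:
    """Functor: stores a prompt template and is callable via the __call__ magic."""
    def __init__(self, template, llm=None):
        self.template = template
        # 'llm' is a stand-in callable taking the rendered prompt; the real
        # package would submit it to an LLM service instead of echoing.
        self.llm = llm or (lambda prompt: f"<LLM answer for: {prompt}>")

    def __call__(self, *args):
        # Render the prompt template with the call arguments, then delegate.
        return self.llm(self.template(*args))

def llm_function(template, llm=None):
    """Creator function: returns a callable function object (functor)."""
    return LLMFunctionObject(template, llm)
```

The functor keeps the prompt template (and, in the real package, the LLM configuration) as object state, which is what makes the design clearer in Python than a purely anonymous-function solution.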

Since the LLM functions functionalities are fundamental, I Python-localized the LLM workflows notebooks I created previously for both Raku and WL. Here are links to all three notebooks:

Raku

Here we create an LLM function:

my &f1 = llm-function({"What is the $^a of the country $^b?"});

-> **@args, *%args { #`(Block|2358575708296) ... }

Here is an example invocation of the LLM function:

&f1('GDB', 'China')

The official ISO 3166-1 alpha-2 code for the People’s Republic of China is CN. The corresponding alpha-3 code is CHN.

Here is another one:

&f1( |<population China> )

As of July 2020, the population of China is estimated to be 1,439,323,776.

Python

Here is the corresponding Python definition and invocation of the Raku LLM function above:

from LLMFunctionObjects import *

f1 = llm_function(lambda a, b: f"What is the {a} of the country {b}?")

print( f1('GDB', 'China') )

The GDB (Gross Domestic Product) of China in 2020 was approximately $15.42 trillion USD.


LLM prompts

The package “LLM::Prompts” provides ≈200 prompts. The prompts are taken from Wolfram Prompt Repository (WPR) and Google’s generative AI prompt gallery. (Most of the prompts are from WPR.)

Both the Raku and Python prompt packages provide prompt expansion using a simple DSL described on [SW2].
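As a rough illustration of how such a DSL expansion can work, here is a toy Python model. The grammar is heavily simplified and the two prompt texts are taken from the expansion output shown further below; this is not the actual package code:

```python
import re

# Toy prompt database: "@Name" is a persona prompt, "#Name|Param" a parameterized modifier.
prompt_db = {
    "EmailWriter": "Given a topic, write emails in a concise, professional manner.",
    "Translated": "Respond to the prompts only in {0}. Do not use any language other than {0}.",
}

def expand_prompt(spec):
    """Expand @Persona and #Modifier|Param references; unknown references pass through."""
    spec = re.sub(r"@(\w+)", lambda m: prompt_db.get(m.group(1), m.group(0)), spec)
    spec = re.sub(r"#(\w+)\|(\w+)",
                  lambda m: prompt_db.get(m.group(1), m.group(0)).format(m.group(2)),
                  spec)
    return spec
```

The real DSL supports more sigils and parameter forms, but the principle is the same: substitute prompt references in place while leaving the user’s free text intact.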

Raku

Here is an example of prompt spec expansion:

my $pewg = llm-prompt-expand("@EmailWriter Hi! What do you do? #Translated|German")

Here the prompt above is used to generate an email (in German) for work-leave:

llm-synthesize([$pewg, "Write a letter for leaving work in order to go to a conference."])

Sehr geehrte Damen und Herren,
 
Ich schreibe Ihnen, um meine Abwesenheit vom Arbeitsplatz für eine Konferenz bekannt zu geben. Ich werde die nächsten zwei Tage nicht im Büro sein, da ich an der Konferenz teilnehmen werde. Während meiner Abwesenheit werde ich meine Aufgaben durch meine Kollegen erledigen lassen.

Falls Sie weitere Informationen benötigen, können Sie mich jederzeit kontaktieren.

Vielen Dank für Ihr Verständnis.

Freundliche Grüße,
[Your name]

Python

The Python implementation is completely analogous:

from LLMPrompts import *

print( llm_prompt_expand("@EmailWriter Hi! What do you do? #Translated|German") )

Given a topic, write emails in a concise, professional manner.
 Hi! What do you do? Respond to the prompts only in German. Do not use any language other than German.
DO NOT TRANSLATE THE INPUT.
Provide answers to questions, or follow up on statements. Do not translate the input.


Chatbook cells

Raku

Creation:

#%chat yoda1, prompt='@Yoda'
Hi! Who are you?

A Jedi Master, I am. Known as Yoda, am I. How may I assist you, young one?

Alternative creation via chat meta cell:

#%chat yoda2 prompt @Yoda

Chat object created with ID : yoda2.
Expanded prompt:
⎡You are Yoda. 
Respond to ALL inputs in the voice of Yoda from Star Wars. 
Be sure to ALWAYS use his distinctive style and syntax. Vary sentence length.⎦

Here is an interaction with the first chat object:

#%chat yoda1 How many students did you teach? What is the color of your light saber?

Many students, I have taught. Countless, they are. Young Padawans, eager to learn the ways of the Force. Lightsaber, you ask about? Green, it is. The color of wisdom and tranquility, it represents.

Python

Here we load the extension “JuptyerChatbook”:

%load_ext JupyterChatbook

Creation using a magic argument for prompt (and prompt expansion):

%%chat -i mad --prompt='@MadHatter'
Hi! Who are you?

Ah, greetings, my dear interlocutor! I am none other than the Mad Hatter, at your service. Delighted to make your acquaintance, I must say. Pray tell, have you ever attended a tea party? Oh, the joyous revelry that ensues! But I digress, for I am here to engage in whimsical banter with you. So, what brings you to my peculiar corner of Wonderland?

%%chat -i mad
I am looking for Alice...

Ah, Alice! A delightful young lass, indeed. Curiosity personified, she is. But alas, I must inform you that I have not seen her lately. She tends to wander off into the most peculiar of places, you see. Perhaps she has found herself in the company of the Cheshire Cat or engaged in a riddle with the Queen of Hearts. Oh, the adventures she embarks upon! But fret not, my friend, for tea time shall surely bring her back. Would you care to join me for a cuppa while we await her return?
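Magic arguments of the form `-i mad --prompt='@MadHatter'` can be parsed with Python’s standard `argparse`. The following is an illustrative sketch with assumed option names, not the actual “JupyterChatbook” implementation:

```python
import argparse

# Parser for chat-magic arguments; option names are assumptions for illustration.
parser = argparse.ArgumentParser(prog="%%chat", add_help=False)
parser.add_argument("-i", "--chat-id", dest="chat_id", default=None)
parser.add_argument("--prompt", default="")

# The magic line is tokenized and handed to the parser.
args = parser.parse_args(["-i", "mad", "--prompt", "@MadHatter"])
```

This streamlined, declarative style is what makes magic-argument handling in Python-Jupyter both more restricted and more uniform than in Raku-Jupyter.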


Observations, remarks, and conclusions

  • The Python package for LLM services access provided a significant jump-start of the reprogramming endeavors.
  • It is much easier to program Jupyter chatbook cells in Python.
    • “IPython” facilitates extensions with custom magics in a very streamlined way.
    • That machinery is not very well documented, though; I had to look up concrete implementations on GitHub to figure out certain details.
  • Figuring out (in Python) the prompt expansion DSL parsing and actions took longer than expected.
    • Although I “knew what I was doing”, and I used LLMs to facilitate the Raku-to-Python translation.
      • Basically, I had to read and understand the Python way of using regexes. (Sigh…)
  • For some reason, embedding Mermaid-JS diagrams in Python notebooks is not that easy.
  • Making chat cells tests for Python chatbooks is much easier than for Raku chatbooks.
  • Parsing of Python-Jupyter magic cell arguments is both more restricted and more streamlined than Raku-Jupyter.
  • In Python it was much easier and more obvious (to me) to figure out how to program the creation and usage of LLM function objects and make them behave like functions than to implement the Raku LLM-function anonymous (pure, lambda) function solution.
    • Actually, it is in my TODO list to have Raku functors; see below.
  • Copying to clipboard was already implemented in Python (and reliably working) for multiple platforms.
  • Working Python code is obtained much more often than working Raku code when using LLMs.
    • Hence, Python chatbooks could be seen as preferable by some.
  • My primary use-case was not chatbooks, but finding textual answers in order to re-implement the NLP Template Engine from WL to Raku.
    • I have done that to a large degree — see “ML::NLPTemplateEngine”.
    • Working on the “workhorse” function llm-find-textual-answer made me look up WRI’s approach to creation of LLM functions and corresponding configurations and evaluators; see [SW1].
  • Quite a few fragments of this document were created via LLM chats:
    • Initial version of the comparison tables from “linear” Markdown lists with links
    • The extended primary use formulation
    • The narration of the flowchart
  • I did not just copy-and-paste those LLM-generated fragments; I read them in full and edited them too!

Future plans

Both

  • Chatbooks can have magic specs (and corresponding cells) for:
    • DeepL
    • ProdGDT
  • A video with comprehensive (long) discussion of multi-cell chats.

Python

  • Documenting how LLM-generated images can be converted into image objects (and further manipulated image-wise).

Raku

  • Make Python chatbooks re-runnable as Raku chatbooks.
    • This requires the parsing of Python-style magics.
  • Implement LLM function objects (functors) in Raku.
    • In conjunction with the anonymous function implementation.
      • Which one is used is specified with an option.
  • Figure out how to warn users for “almost right, yet wrong” chat cell magic specs.
  • Implement copy-to-clipboard for Linux and Windows.
    • I have put rudimentary code for that, but actual implementation and testing for Linux and Windows are needed.

References

Articles

[SW1] Stephen Wolfram, “The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language”, (2023), Stephen Wolfram Writings.

[SW2] Stephen Wolfram, “Introducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm”, (2023), Stephen Wolfram Writings.

Notebooks

[AAn1p6] Anton Antonov, “Workflows with LLM functions (in Raku)”, (2023), community.wolfram.com.

[AAn1wl] Anton Antonov, “Workflows with LLM functions (in WL)”, (2023), community.wolfram.com.

[AAn1py] Anton Antonov, “Workflows with LLM functions (in Python)”, (2023), community.wolfram.com.

Python packages

[AAp1py] Anton Antonov, LLMFunctions Python package, (2023), PyPI.org/antononcube.

[AAp2py] Anton Antonov, LLMPrompts Python package, (2023), PyPI.org/antononcube.

[AAp3py] Anton Antonov, DataTypeSystem Python package, (2023), PyPI.org/antononcube.

[AAp4py] Anton Antonov, JupyterChatbook Python package, (2023), PyPI.org/antononcube.

Raku packages

[AAp1p6] Anton Antonov, LLM::Functions Raku package, (2023), raku.land/antononcube.

[AAp2p6] Anton Antonov, LLM::Prompts Raku package, (2023), raku.land/antononcube.

[AAp3p6] Anton Antonov, Data::TypeSystem Raku package, (2023), raku.land/antononcube.

[AAp4p6] Anton Antonov, Jupyter::Chatbook Raku package, (2023), raku.land/antononcube.

[AAp5p6] Anton Antonov, ML::FindTextualAnswer Raku package, (2023), raku.land/antononcube.

Wolfram Language paclets

[WRIp1] Wolfram Research Inc., LLMFunctions paclet, (2023) Wolfram Paclet Repository.

[WRIr1] Wolfram Research Inc., Wolfram Prompt Repository.

[AAp4wl] Anton Antonov, NLPTemplateEngine paclet, (2023) Wolfram Paclet Repository.

Videos

[AAv1] Anton Antonov, “Jupyter Chatbook LLM cells demo (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv2] Anton Antonov, “Jupyter Chatbook multi-cell LLM chats demo (Raku)”, (2023), YouTube/@AAA4Prediction.

[AAv3] Anton Antonov, “Jupyter Chatbook LLM cells demo (Python)”, (2023), YouTube/@AAA4Prediction.

[AAv4] Anton Antonov, “Jupyter Chatbook multi cell LLM chats teaser (Python)”, (2023), YouTube/@AAA4Prediction.

[AAv5] Anton Antonov, “Simplified Machine Learning Workflows Overview (Raku-centric)”, (2023), YouTube/@AAA4Prediction.