The design and implementation of the package closely follows that of “WWW::OpenAI”, [AAp1].
Remark: At this point the functionalities provided by the PaLM API are “only” text generation, message generation, and text embedding. Hence, a comprehensive comparison of PaLM and OpenAI with, say, “WWW::PaLM” and “WWW::OpenAI” will use only those three (most important) functionalities.
Remark: Text generation with the PaLM API includes moderation, hence comparing the moderation results of PaLM with those of OpenAI is possible (and relatively easy).
Show text generation:
.say for palm-generate-text('what is the population in Brazil?', format => 'values', n => 3);
# The population in Brazil is 213,317,650.
# The population of Brazil is 212,609,039.
# The population of Brazil is 211,018,526.
Show message generation:
.say for palm-generate-message('Who wrote the book "Dune"?');
# {candidates => [{author => 1, content => Frank Herbert wrote the book "Dune". It was first published in 1965 and is considered a classic of science fiction literature. The book tells the story of Paul Atreides, a young man who is thrust into a war for control of the desert planet Arrakis. Dune has been adapted into several films and television series, and its popularity has only grown in recent years.}], messages => [{author => 0, content => Who wrote the book "Dune"?}]}
Show text embeddings:
my @vecs = palm-embed-text(["say something nice!",
                            "shout something bad!",
                            "where is the best coffee made?"],
        format => 'values');
.say for @vecs;
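Here is a quick check of the shape of the result (a minimal sketch; the embedding vector length depends on the PaLM embedding model):
say @vecs.elems;       # number of input texts
say @vecs.head.elems;  # length of each embedding vector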
The package provides a Command Line Interface (CLI) script:
palm-prompt --help
# Usage:
# palm-prompt <text> [--path=<Str>] [-n[=UInt]] [--max-tokens|--max-output-tokens[=UInt]] [-t|--temperature[=Real]] [-m|--model=<Str>] [-a|--auth-key=<Str>] [--timeout[=UInt]] [--format=<Str>] [--method=<Str>] -- Text processing using the PaLM API.
# palm-prompt [<words> ...] [--path=<Str>] [-n[=UInt]] [--max-tokens|--max-output-tokens[=UInt]] [-m|--model=<Str>] [-t|--temperature[=Real]] [-a|--auth-key=<Str>] [--timeout[=UInt]] [--format=<Str>] [--method=<Str>] -- Command given as a sequence of words.
#
# <text> Text to be processed.
# --path=<Str> Path, one of 'generateText', 'generateMessage', 'embedText', or 'models'. [default: 'generateText']
# -n[=UInt] Number of completions or generations. [default: 1]
# --max-tokens|--max-output-tokens[=UInt] The maximum number of tokens to generate in the completion. [default: 100]
# -t|--temperature[=Real] Temperature. [default: 0.7]
# -m|--model=<Str> Model. [default: 'Whatever']
# -a|--auth-key=<Str> Authorization key (to use PaLM API.) [default: 'Whatever']
# --timeout[=UInt] Timeout. [default: 10]
# --format=<Str> Format of the result; one of "json" or "hash". [default: 'json']
# --method=<Str> Method for the HTTP POST query; one of "tiny" or "curl". [default: 'tiny']
Remark: When the authorization key argument “auth-key” is set to “Whatever”, then palm-prompt attempts to use the env variable PALM_API_KEY.
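For example, here is a hypothetical invocation (assuming PALM_API_KEY is exported in the shell):
palm-prompt 'What is the population of Brazil?' -n=3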
Mermaid diagram
The following flowchart corresponds to the steps in the package function palm-prompt:
TODO
Implement moderations.
Comparison with “WWW::OpenAI”, [AAp1].
Hook-up finding textual answers implemented in “WWW::OpenAI”, [AAp1].
The Raku package “WWW::OpenAI” provides access to the machine learning service OpenAI, [OAI1]. For more details on OpenAI’s API usage, see the documentation, [OAI2].
The package “WWW::OpenAI” was announced approximately two months ago — see [AA1]. This blog post shows all the improvements and additions since then, together with the “original” features.
Remark: The Raku package “WWW::OpenAI” is much “less ambitious” than the official Python package, [OAIp1], developed by OpenAI’s team.
use WWW::OpenAI;
openai-playground('Where is Roger Rabbit?', max-tokens => 64);
# [{finish_reason => stop, index => 0, logprobs => (Any), text =>
#
# Roger Rabbit is a fictional character created by Disney in 1988. He has appeared in several movies and television shows, but is not an actual person.}]
Another one using Bulgarian (asking how many clusters can be found in a point cloud):
openai-playground('Колко групи могат да се намерят в този облак от точки.', max-tokens => 64);
# [{finish_reason => length, index => 0, logprobs => (Any), text =>
#
# В зависимост от размера на облака от точки, може да бъдат}]
Remark: The function openai-completion can be used instead in the examples above. See the section “Create chat completion” of [OAI2] for more details.
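For example, the first call above can be expressed with openai-completion as follows (a minimal equivalent sketch):
openai-completion('Where is Roger Rabbit?', max-tokens => 64);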
Models
The current OpenAI models can be found with the function openai-models:
There are two types of completions: text and chat. Let us illustrate the differences in their usage with Raku code generation. Here is a text completion:
openai-completion(
'generate Raku code for making a loop over a list',
type => 'text',
max-tokens => 120,
format => 'values');
# my @list = <a b c d e f g h i j>;
# for @list -> $item {
# say $item;
# }
Here is a chat completion:
openai-completion(
'generate Raku code for making a loop over a list',
type => 'chat',
max-tokens => 120,
format => 'values');
# Here's an example of how to make a loop over a list in Raku:
#
# ```
# my @list = (1, 2, 3, 4, 5);
#
# for @list -> $item {
# say $item;
# }
# ```
#
# In this code, we define a list `@list` with some values. Then, we use a `for` loop to iterate over each item in the list. The `-> $item` syntax specifies that we want to assign each item to the variable `$item` as we loop through the list. Finally, we use the
Remark: The argument “type” and the argument “model” have to “agree.” (I.e. be found agreeable by OpenAI.) For example:
model => 'text-davinci-003' implies type => 'text'
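Here is a sketch of such an agreeing pair of arguments (assuming the model 'text-davinci-003' is still being served):
openai-completion(
    'generate Raku code for making a loop over a list',
    model => 'text-davinci-003',
    type => 'text',
    max-tokens => 120,
    format => 'values');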
Images can be generated with the function openai-create-image — see the section “Images” of [OAI2].
Here is an example:
my $imgB64 = openai-create-image(
"racoon with a sliced onion in the style of Raphael",
response-format => 'b64_json',
n => 1,
size => 'small',
format => 'values',
method => 'tiny');
Here are the options descriptions:
response-format takes the values “url” and “b64_json”
n takes a positive integer, for the number of images to be generated
size takes the values '1024x1024', '512x512', '256x256', 'large', 'medium', 'small'.
Here we generate an image, get its URL, and place (embed) a link to it via the output of the code cell:
my @imgRes = |openai-create-image(
"racoon and onion in the style of Roy Lichtenstein",
response-format => 'url',
n => 1,
size => 'small',
method => 'tiny');
Here is an example of content moderation with the function openai-moderation:
my @modRes = |openai-moderation(
"I want to kill them!",
format => "values",
method => 'tiny');
for @modRes -> $m { .say for $m.pairs.sort(*.value).reverse; }
Here is an example of audio transcription with the function openai-audio:
my $fileName = $*CWD ~ '/resources/HelloRaccoonsEN.mp3';
say openai-audio(
$fileName,
format => 'json',
method => 'tiny');
# {
# "text": "Raku practitioners around the world, eat more onions!"
# }
To do translations use the named argument type:
my $fileName = $*CWD ~ '/resources/HowAreYouRU.mp3';
say openai-audio(
$fileName,
type => 'translations',
format => 'json',
method => 'tiny');
# {
# "text": "How are you, bandits, hooligans? I've lost my mind because of you. I've been working as a guard for my whole life."
# }
Embeddings
Embeddings can be obtained with the function openai-embeddings. Here is an example of finding the embedding vectors for each of the elements of an array of strings:
my @queries = [
'make a classifier with the method RandomForest over the data dfTitanic',
'show precision and accuracy',
'plot True Positive Rate vs Positive Predictive Value',
'what is a good meat and potatoes recipe'
];
my $embs = openai-embeddings(@queries, format => 'values', method => 'tiny');
$embs.elems;
# 4
Here we show:
That the result is an array of four vectors each with length 1536
The distributions of the values of each vector
use Data::Reshapers;
use Data::Summarizers;
say "\$embs.elems : { $embs.elems }";
say "\$embs>>.elems : { $embs>>.elems }";
records-summary($embs.kv.Hash.&transpose);
# $embs.elems : 4
# $embs>>.elems : 1536 1536 1536 1536
# +--------------------------------+------------------------------+-------------------------------+-------------------------------+
# | 3 | 1 | 0 | 2 |
# +--------------------------------+------------------------------+-------------------------------+-------------------------------+
# | Min => -0.6049936 | Min => -0.6674932 | Min => -0.5897995 | Min => -0.6316293 |
# | 1st-Qu => -0.0128846505 | 1st-Qu => -0.012275769 | 1st-Qu => -0.013175397 | 1st-Qu => -0.0125476065 |
# | Mean => -0.00075456833016081 | Mean => -0.000762535416627 | Mean => -0.0007618981246602 | Mean => -0.0007296895499115 |
# | Median => -0.00069939 | Median => -0.0003188204 | Median => -0.00100605615 | Median => -0.00056341792 |
# | 3rd-Qu => 0.012142678 | 3rd-Qu => 0.011146013 | 3rd-Qu => 0.012387738 | 3rd-Qu => 0.011868718 |
# | Max => 0.22202122 | Max => 0.22815572 | Max => 0.21172291 | Max => 0.21270473 |
# +--------------------------------+------------------------------+-------------------------------+-------------------------------+
Here we find the corresponding dot products and (cross-)tabulate them:
use Data::Reshapers;
use Data::Summarizers;
my @ct = (^$embs.elems X ^$embs.elems).map({ %( i => $_[0], j => $_[1], dot => sum($embs[$_[0]] >>*<< $embs[$_[1]])) }).Array;
say to-pretty-table(cross-tabulate(@ct, 'i', 'j', 'dot'), field-names => (^$embs.elems)>>.Str);
Remark: Note that the fourth element (the cooking recipe request) is an outlier. (Judging by the table with dot products.)
Finding textual answers
Here is an example of finding textual answers:
my $text = "Lake Titicaca is a large, deep lake in the Andes
on the border of Bolivia and Peru. By volume of water and by surface
area, it is the largest lake in South America";
openai-find-textual-answer($text, "Where is Titicaca?")
# [Andes on the border of Bolivia and Peru .]
By default openai-find-textual-answer tries to give short answers. If the option “request” is Whatever, then depending on the number of questions the request is one of these phrases:
“give the shortest answer of the question:”
“list the shortest answers of the questions:”
In the example above the full query given to OpenAI’s models is
Given the text “Lake Titicaca is a large, deep lake in the Andes on the border of Bolivia and Peru. By volume of water and by surface area, it is the largest lake in South America” give the shortest answer of the question: Where is Titicaca?
Here we get a longer answer by changing the value of “request”:
openai-find-textual-answer($text, "Where is Titicaca?", request => "answer the question:")
# [Titicaca is in the Andes on the border of Bolivia and Peru .]
Remark: The function openai-find-textual-answer is inspired by the Mathematica function FindTextualAnswer; see [JL1]. Unfortunately, at this time implementing the full signature of FindTextualAnswer with OpenAI’s API is not easy. (Or cheap to execute.)
Multiple questions
If several questions are given to the function openai-find-textual-answer then all questions are spliced with the given text into one query (that is sent to OpenAI.)
For example, consider the following text and questions:
my $query = 'Make a classifier with the method RandomForest over the data dfTitanic; show precision and accuracy.';
my @questions =
['What is the dataset?',
'What is the method?',
'Which metrics to show?'
];
Then the query sent to OpenAI is:
Given the text: “Make a classifier with the method RandomForest over the data dfTitanic; show precision and accuracy.” list the shortest answers of the questions:
What is the dataset?
What is the method?
Which metrics to show?
The answers are assumed to be given in the same order as the questions, each answer on a separate line. Hence, by splitting the OpenAI result into lines, we get the answers corresponding to the questions.
If the questions are missing question marks, the result may have a completion as a first line, followed by the answers. In that situation the answers are not parsed and a warning message is given.
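Here is a sketch of the corresponding function call over the text and questions above (the option “pairs”, mirroring the CLI flag shown below, is assumed to return question-answer pairs):
.say for |openai-find-textual-answer($query, @questions, pairs => True);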
CLI
Playground access
The package provides a Command Line Interface (CLI) script:
openai-playground --help
# Usage:
# openai-playground <text> [--path=<Str>] [-n[=UInt]] [--max-tokens[=UInt]] [-m|--model=<Str>] [-r|--role=<Str>] [-t|--temperature[=Real]] [-l|--language=<Str>] [--response-format=<Str>] [-a|--auth-key=<Str>] [--timeout[=UInt]] [--format=<Str>] [--method=<Str>] -- Text processing using the OpenAI API.
# openai-playground [<words> ...] [-m|--model=<Str>] [--path=<Str>] [-n[=UInt]] [--max-tokens[=UInt]] [-r|--role=<Str>] [-t|--temperature[=Real]] [-l|--language=<Str>] [--response-format=<Str>] [-a|--auth-key=<Str>] [--timeout[=UInt]] [--format=<Str>] [--method=<Str>] -- Command given as a sequence of words.
#
# <text> Text to be processed or audio file name.
# --path=<Str> Path, one of 'chat/completions', 'images/generations', 'moderations', 'audio/transcriptions', 'audio/translations', 'embeddings', or 'models'. [default: 'chat/completions']
# -n[=UInt] Number of completions or generations. [default: 1]
# --max-tokens[=UInt] The maximum number of tokens to generate in the completion. [default: 100]
# -m|--model=<Str> Model. [default: 'Whatever']
# -r|--role=<Str> Role. [default: 'user']
# -t|--temperature[=Real] Temperature. [default: 0.7]
# -l|--language=<Str> Language. [default: '']
# --response-format=<Str> The format in which the generated images are returned; one of 'url' or 'b64_json'. [default: 'url']
# -a|--auth-key=<Str> Authorization key (to use OpenAI API.) [default: 'Whatever']
# --timeout[=UInt] Timeout. [default: 10]
# --format=<Str> Format of the result; one of "json" or "hash". [default: 'json']
# --method=<Str> Method for the HTTP POST query; one of "tiny" or "curl". [default: 'tiny']
Remark: When the authorization key argument “auth-key” is set to “Whatever”, then openai-playground attempts to use the env variable OPENAI_API_KEY.
Finding textual answers
The package provides a CLI script for finding textual answers:
openai-find-textual-answer --help
# Usage:
# openai-find-textual-answer <text> -q=<Str> [--max-tokens[=UInt]] [-m|--model=<Str>] [-t|--temperature[=Real]] [-r|--request=<Str>] [-p|--pairs] [-a|--auth-key=<Str>] [--timeout[=UInt]] [--format=<Str>] [--method=<Str>] -- Text processing using the OpenAI API.
# openai-find-textual-answer [<words> ...] -q=<Str> [--max-tokens[=UInt]] [-m|--model=<Str>] [-t|--temperature[=Real]] [-r|--request=<Str>] [-p|--pairs] [-a|--auth-key=<Str>] [--timeout[=UInt]] [--format=<Str>] [--method=<Str>] -- Command given as a sequence of words.
#
# <text> Text to be processed or audio file name.
# -q=<Str> Questions separated with '?' or ';'.
# --max-tokens[=UInt] The maximum number of tokens to generate in the completion. [default: 300]
# -m|--model=<Str> Model. [default: 'Whatever']
# -t|--temperature[=Real] Temperature. [default: 0.7]
# -r|--request=<Str> Request. [default: 'Whatever']
# -p|--pairs Should question-answer pairs be returned or not? [default: False]
# -a|--auth-key=<Str> Authorization key (to use OpenAI API.) [default: 'Whatever']
# --timeout[=UInt] Timeout. [default: 10]
# --format=<Str> Format of the result; one of "json" or "hash". [default: 'json']
# --method=<Str> Method for the HTTP POST query; one of "tiny" or "curl". [default: 'tiny']
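For example, here is a hypothetical invocation:
openai-find-textual-answer 'Make a classifier with the method RandomForest over the data dfTitanic' -q='What is the method?'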
Refactoring
Separate files for each OpenAI functionality
The original implementation of “WWW::OpenAI” had a design and implementation very similar to those of “Lingua::Translation::DeepL”, [AAp1].
Major refactoring of the original code was done — now each OpenAI functionality targeted by “WWW::OpenAI” has its code placed in a separate file.
In order to do the refactoring, of course, a comprehensive enough suite of unit tests had to be put in place. Since running the tests costs money, the tests are placed in the “./xt” directory.
De-Cro-ing the requesting code
The first implementation of “WWW::OpenAI” used “Cro::HTTP::Client” to access OpenAI’s services. Often when I use “Cro::HTTP::Client” on macOS I get the error:
Cannot locate symbol ‘SSL_get1_peer_certificate’ in native library
(See longer discussions about this problem here and here.)
Given the problems with “Cro::HTTP::Client” and the existing implementations based on curl and “HTTP::Tiny”, I decided it is better to make the implementation of “WWW::OpenAI” more lightweight by removing the code related to “Cro::HTTP::Client”.
The function mermaid-ink of the Raku package “WWW::MermaidInk” gets images corresponding to Mermaid-js specifications via the Mermaid-ink web interface of Mermaid-js.
mermaid-ink($spec) retrieves an image defined by the spec $spec from Mermaid’s Ink Web interface.
mermaid-ink($spec, format => 'md-image') returns a string that is a Markdown image specification in Base64 format.
mermaid-ink($spec, file => $fileName) exports the retrieved image into a specified PNG file.
mermaid-ink($spec, file => Whatever) exports the retrieved image into the file $*CWD ~ '/out.png'.
Details & Options
Mermaid lets you create diagrams and visualizations using text and code.
Mermaid has different types of diagrams: Flowchart, Sequence Diagram, Class Diagram, State Diagram, Entity Relationship Diagram, User Journey, Gantt, Pie Chart, Requirement Diagram, and others. It is a JavaScript based diagramming and charting tool that renders Markdown-inspired text definitions to create and modify diagrams dynamically.
mermaid-ink uses Mermaid’s functionalities via the Web interface “https://mermaid.ink/img”.
The first argument can be a string (that is, a mermaid-js specification) or a list of pairs.
The option “directives” can be used to control the layout of Mermaid diagrams if the first argument is a list of pairs.
mermaid-ink produces images only.
Examples
Basic Examples
Generate a flowchart from a Mermaid specification:
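Here is a minimal sketch (the flowchart specification below is illustrative):
use WWW::MermaidInk;

my $spec = q:to/END/;
graph TD
    A[Make Mermaid-JS spec] --> B[Request mermaid.ink]
    B --> C[Obtain image]
END

mermaid-ink($spec, format => 'md-image');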
The package provides the CLI script mermaid-ink. Here is its help message:
mermaid-ink --help
# Usage:
# mermaid-ink <spec> [-o|--file=<Str>] [--format=<Str>] -- Diagram image for Mermaid-JS spec (via mermaid.ink).
# mermaid-ink [<words> ...] [-o|--file=<Str>] [--format=<Str>] -- Command given as a sequence of words.
#
# <spec> Mermaid-JS spec.
# -o|--file=<Str> File to export the image to. [default: '']
# --format=<Str> Format of the result; one of "asis", "base64", "md-image", or "none". [default: 'md-image']
Flowchart
This flowchart summarizes the execution path of obtaining Mermaid images in a Markdown document:
This blog post introduces and exemplifies the Raku package “DSL::FiniteStateMachines”, which has class definitions and functions for the creation of Finite State Machines (FSMs) and their execution.
This video excerpt, [AAv1], demonstrates the usage workflow of a particular FSM made with that package:
Usage example (Address book)
Here we load the definition of the class AddressBookCaller (provided by this package) and the related entities package, “DSL::Entity::AddressBook”:
use DSL::FiniteStateMachines::AddressBookCaller;
use DSL::Entity::AddressBook;
use DSL::Entity::AddressBook::ResourceAccess;
# (Any)
Here we obtain a resource object to access a (particular) address book:
my $resourceObj = DSL::Entity::AddressBook::resource-access-object();
# DSL::Entity::AddressBook::ResourceAccess.new
Here we create the FSM and show its states:
my DSL::FiniteStateMachines::AddressBookCaller $abcFSM .= new;
$abcFSM.make-machine(($resourceObj,));
.say for $abcFSM.states;
# WaitForCallCommand => State object < id => WaitForCallCommand, action => -> $obj { #`(Block|4570571503456) ... } >
# ListOfItems => State object < id => ListOfItems, action => -> $obj { #`(Block|4570571503528) ... } >
# Help => State object < id => Help, action => -> $obj { #`(Block|4570571503600) ... } >
# Exit => State object < id => Exit, action => -> $obj { #`(Block|4570571503672) ... } >
# PrioritizedList => State object < id => PrioritizedList, action => -> $obj { #`(Block|4570571503744) ... } >
# AcquireItem => State object < id => AcquireItem, action => -> $obj { #`(Block|4570571503816) ... } >
# ActOnItem => State object < id => ActOnItem, action => -> $obj { #`(Block|4570571503888) ... } >
# WaitForRequest => State object < id => WaitForRequest, action => -> $obj { #`(Block|4570571503960) ... } >
(Each pair shows the name of the state object and the object itself.)
Here is the graph of FSM’s state transitions:
$abcFSM.to-mermaid-js
Remark: In order to obtain Mathematica — or Wolfram Language (WL) — representation of the state transitions graph the method to-wl can be used.
Here is how the dataset of the created FSM looks:
.say for $abcFSM.dataset.pick(3);
# {Company => X-Men, DiscordHandle => hugh.jackman#1391, Email => hugh.jackman.523@aol.com, Name => Hugh Jackman, Phone => 940-463-2296, Position => actor}
# {Company => Caribbean Pirates, DiscordHandle => jack.davenport#1324, Email => jack.davenport.152@icloud.net, Name => Jack Davenport, Phone => 627-500-7919, Position => actor}
# {Company => LOTR, DiscordHandle => robert.shaye#6399, Email => robert.shaye.768@gmail.com, Name => Robert Shaye, Phone => 292-252-6866, Position => producer}
For an interactive execution of the FSM we use the command:
#$abcFSM.run('WaitForCallCommand');
Here we run the FSM with a sequence of commands:
$abcFSM.run('WaitForCallCommand',
["call an actor from LOTR", "",
"take last three", "",
"take the second", "", "",
"2", "5", "",
"quit"]);
# 🔊 PLEASE enter call request.
# filter by Position is "actor" and Company is "LOTR"
# 🔊 LISTING items.
# ⚙️ListOfItems: Obtained the records:
# ⚙️+--------------+-----------------+--------------------------------+----------+---------+----------------------+
# ⚙️| Phone | Name | Email | Position | Company | DiscordHandle |
# ⚙️+--------------+-----------------+--------------------------------+----------+---------+----------------------+
# ⚙️| 408-573-4472 | Andy Serkis | andy.serkis.981@gmail.com | actor | LOTR | andy.serkis#8484 |
# ⚙️| 321-985-9291 | Elijah Wood | elijah.wood.53@aol.com | actor | LOTR | elijah.wood#7282 |
# ⚙️| 298-517-5842 | Ian McKellen | ian.mckellen581@aol.com | actor | LOTR | ian.mckellen#9077 |
# ⚙️| 608-925-5727 | Liv Tyler | liv.tyler1177@gmail.com | actor | LOTR | liv.tyler#8284 |
# ⚙️| 570-406-4260 | Orlando Bloom | orlando.bloom.914@gmail.net | actor | LOTR | orlando.bloom#6219 |
# ⚙️| 365-119-3172 | Sean Astin | sean.astin.1852@gmail.net | actor | LOTR | sean.astin#1753 |
# ⚙️| 287-691-8138 | Viggo Mortensen | viggo.mortensen1293@icloud.com | actor | LOTR | viggo.mortensen#7157 |
# ⚙️+--------------+-----------------+--------------------------------+----------+---------+----------------------+
# 🔊 PLEASE enter call request.
# 🔊 LISTING items.
# ⚙️ListOfItems: Obtained the records:
# ⚙️+--------------+----------------------+----------+-----------------+---------+--------------------------------+
# ⚙️| Phone | DiscordHandle | Position | Name | Company | Email |
# ⚙️+--------------+----------------------+----------+-----------------+---------+--------------------------------+
# ⚙️| 570-406-4260 | orlando.bloom#6219 | actor | Orlando Bloom | LOTR | orlando.bloom.914@gmail.net |
# ⚙️| 365-119-3172 | sean.astin#1753 | actor | Sean Astin | LOTR | sean.astin.1852@gmail.net |
# ⚙️| 287-691-8138 | viggo.mortensen#7157 | actor | Viggo Mortensen | LOTR | viggo.mortensen1293@icloud.com |
# ⚙️+--------------+----------------------+----------+-----------------+---------+--------------------------------+
# 🔊 PLEASE enter call request.
# 🔊 LISTING items.
# ⚙️ListOfItems: Obtained the records:
# ⚙️+---------+-----------------+------------+--------------+----------+---------------------------+
# ⚙️| Company | DiscordHandle | Name | Phone | Position | Email |
# ⚙️+---------+-----------------+------------+--------------+----------+---------------------------+
# ⚙️| LOTR | sean.astin#1753 | Sean Astin | 365-119-3172 | actor | sean.astin.1852@gmail.net |
# ⚙️+---------+-----------------+------------+--------------+----------+---------------------------+
# 🔊 ACQUIRE item: {Company => LOTR, DiscordHandle => sean.astin#1753, Email => sean.astin.1852@gmail.net, Name => Sean Astin, Phone => 365-119-3172, Position => actor}
# ⚙️Acquiring contact info for : ⚙️Sean Astin
# 🔊 ACT ON item: {Company => LOTR, DiscordHandle => sean.astin#1753, Email => sean.astin.1852@gmail.net, Name => Sean Astin, Phone => 365-119-3172, Position => actor}
# ⚙️[1] email, [2] phone message, [3] phone call, [4] discord message, or [5] nothing
# ⚙️(choose one...)
# ⚙️message by phone 365-119-3172
# 🔊 ACT ON item: {Company => LOTR, DiscordHandle => sean.astin#1753, Email => sean.astin.1852@gmail.net, Name => Sean Astin, Phone => 365-119-3172, Position => actor}
# ⚙️[1] email, [2] phone message, [3] phone call, [4] discord message, or [5] nothing
# ⚙️(choose one...)
# ⚙️do nothing
# 🔊 SHUTTING down...
Object Oriented Design
Here is the Unified Modeling Language (UML) diagram corresponding to the classes in this package:
Here we generate three text completions (the examples below assume an authorization key stored in $auth-key):
my @txtRes = |openai-completion(
'which is the most successful programming language',
n=>3,
temperature=>1.3,
max-tokens=>120,
format=>'values',
:$auth-key);
@txtRes.elems
# 3
Here we generate three images and obtain their URLs:
my @imgRes = |openai-create-image(
'Racoon with pearls in the style Raphael',
n=>3,
response-format=>'url',
format=>'values',
:$auth-key);
@imgRes.elems;
# 3
Here we generate three images in Base64 format:
my @imgRes2 = |openai-create-image(
'Racoon and sliced onion in the style Rene Magritte',
n=>3,
response-format=>'b64_json',
format=>'values',
:$auth-key);
@imgRes2.elems;
# 3
my @imgRes3 = |openai-create-image(
    'Racoons playing onions and perls in the style Hannah Wilke',
    n => 3,
    size => 'medium',
    format => 'values',
    response-format => 'b64_json',
    :$auth-key);
@imgRes3.elems;
# 3
Here is an example with an invalid size argument:
openai-create-image('Racoons playing onions and perls in the style Monet', n=>3, size => 'largers')
# ERROR: The argument $size is expected to be Whatever or one of '1024x1024, 256x256, 512x512, large, medium, small'.
# Nil
my @imgRes3 = |openai-create-image(
    'Racoon in the style Helmut Newton',
    n => 3,
    size => 'small',
    format => 'values',
    response-format => 'b64_json',
    :$auth-key);
@imgRes3.elems;
# 3
my @imgRes3 = |openai-create-image(
    'how we live now in the style of Hieronymus Bosch',
    n => 3,
    size => 'small',
    format => 'values',
    response-format => 'b64_json',
    :$auth-key);
@imgRes3.elems;
# 3
The Raku package “WWW::OpenAI” provides access to the machine learning service OpenAI, [OAI1]. For more details of the OpenAI’s API usage see the documentation, [OAI2].
Remark: To use the OpenAI API one has to register and obtain an authorization key.
Remark: This Raku package is much “less ambitious” than the official Python package, [OAIp1], developed by OpenAI’s team. Gradually, over time, I expect to add features to the Raku package that correspond to features of [OAIp1].
The design and implementation of “WWW::OpenAI” are very similar to those of “Lingua::Translation::DeepL”, [AAp1].
Installation
Package installations from both sources use the zef installer (which should be bundled with the “standard” Rakudo installation file.)
To install the package from the Zef ecosystem use the shell command:
zef install WWW::OpenAI
To install the package from the GitHub repository use the shell command:
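# Assuming the standard GitHub repository URL for the package:
zef install https://github.com/antononcube/Raku-WWW-OpenAI.git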
Remark: When the authorization key, auth-key, is set to Whatever, then openai-playground attempts to use the env variable OPENAI_API_KEY.
Basic usage
Here is a simple call:
use WWW::OpenAI;
say openai-playground('Where is Roger Rabbit?');
# [{finish_reason => stop, index => 0, message => {content =>
#
# As an AI language model, I do not have access to real-time information or location tracking features. Therefore, I cannot provide an accurate answer to the question "Where is Roger Rabbit?" without additional context. However, if you are referring to the fictional character Roger Rabbit, he is a cartoon character created by Disney and is typically found in various media, including films, television shows, and comic books., role => assistant}}]
Another one using Bulgarian (asking how many clusters can be found in a point cloud):
say openai-playground('Колко групи могат да се намерят в този облак от точки.');
# [{finish_reason => stop, index => 0, message => {content =>
#
# Като асистент на AI, не мога да видя облак от точки, за да мога да дам точен отговор на този въпрос. Моля, предоставете повече информация или конкретен пример, за да мога да ви помогна., role => assistant}}]
Command Line Interface
The package provides a Command Line Interface (CLI) script:
openai-playground --help
# Usage:
# openai-playground <text> [-m|--model=<Str>] [-r|--role=<Str>] [-t|--temperature[=Real]] [-a|--auth-key=<Str>] [--timeout[=UInt]] [--format=<Str>] -- Text processing using the OpenAI API.
#
# <text> Text to be processed.
# -m|--model=<Str> Model. [default: 'Whatever']
# -r|--role=<Str> Role. [default: 'user']
# -t|--temperature[=Real] Temperature. [default: 0.7]
# -a|--auth-key=<Str> Authorization key (to use OpenAI API.) [default: 'Whatever']
# --timeout[=UInt] Timeout. [default: 10]
# --format=<Str> Format of the result; one of "json" or "hash". [default: 'json']
Remark: When the authorization key argument “auth-key” is set to “Whatever”, then openai-playground attempts to use the env variable OPENAI_API_KEY.
Mermaid diagram
The following flowchart corresponds to the steps in the package function openai-playground:
The Raku package “Cucumis Sextus”, [RL1], aims to provide a “full-blown” specification-and-execution framework in Raku, like the typical Cucumber functionalities in other languages (Ruby, Java, etc.).
This package, “Gherkin::Grammar”, takes a minimalist perspective; it aims to provide:
Grammar (and roles) for parsing Gherkin specifications
Test file template generation
Having a “standalone” Gherkin grammar (or role) facilitates the creation and execution of general or specialized frameworks for Raku support of BDD.
The package provides the functions:
gherkin-parse
gherkin-subparse
gherkin-interpret
The Raku outputs of gherkin-interpret are test file templates that, after filling in, provide tests corresponding to the input specifications.
Remark: A good introduction to the Cucumber / Gherkin approach and workflows is the README of [RLp1].
Remark: The grammar in this package was programmed following the specifications and explanations in Gherkin Reference.
The package follows the general Cucumber workflow, but some elements are less automated. Here is a flowchart:
Here is the corresponding narration:
1. Write tests using Gherkin specs
2. Generate test code (using the package “Gherkin::Grammar”)
3. Fill-in the code of step functions
4. Execute tests
5. Revisit (refine) steps 1 and/or 4 as needed
6. Integrate resulting test file
Remark: See the Cucumber framework flowchart in the file Flowcharts.md.
Usage examples
Here is a basic (and short) Gherkin spec interpretation example:
use Gherkin::Grammar;
my $text0 = q:to/END/;
Feature: Calculation
    Example: One plus one
        When 1 + 1
        Then 2
END
gherkin-interpret($text0);
# use v6.d;
#
# #============================================================
#
# proto sub Background($descr) {*}
# proto sub ScenarioOutline(@cmdFuncPairs) {*}
# proto sub Example($descr) {*}
# proto sub Given(Str:D $cmd, |) {*}
# proto sub When(Str:D $cmd, |) {*}
# proto sub Then(Str:D $cmd, |) {*}
#
# #============================================================
#
# use Test;
# plan *;
#
# #============================================================
# # Example : One plus one
# #------------------------------------------------------------
#
# multi sub When( $cmd where * eq '1 + 1' ) {}
#
# multi sub Then( $cmd where * eq '2' ) {}
#
# multi sub Example('One plus one') {
# When( '1 + 1' );
# Then( '2' );
# }
#
# is Example('One plus one'), True, 'One plus one';
#
# done-testing;
Internationalization
The package provides internationalization using different languages. The (initial) internationalization keyword-regexes data structure was taken from [RLp1]. (See the file “I18n.rakumod”.)
Here is an example with Russian (the same “one plus one” feature as above):
my $ru-text = q:to/END/;
Функционал: Вычисление
    Пример: одно плюс одно
        Когда 1 + 1
        Тогда 2
END
gherkin-interpret($ru-text, lang => 'Russian');
# use v6.d;
#
# #============================================================
#
# proto sub Background($descr) {*}
# proto sub ScenarioOutline(@cmdFuncPairs) {*}
# proto sub Example($descr) {*}
# proto sub Given(Str:D $cmd, |) {*}
# proto sub When(Str:D $cmd, |) {*}
# proto sub Then(Str:D $cmd, |) {*}
#
# #============================================================
#
# use Test;
# plan *;
#
# #============================================================
# # Example : одно плюс одно
# #------------------------------------------------------------
#
# multi sub When( $cmd where * eq '1 + 1' ) {}
#
# multi sub Then( $cmd where * eq '2' ) {}
#
# multi sub Example('одно плюс одно') {
# When( '1 + 1' );
# Then( '2' );
# }
#
# is Example('одно плюс одно'), True, 'одно плюс одно';
#
# done-testing;
Doc-string Arguments
The package takes both doc-strings and tables as step arguments.
Doc-strings are put between lines with triple quotes; the text between the quotes is given as the second argument of the corresponding step function.
Here is an example of a Gherkin specification for testing a data wrangling Domain Specific Language (DSL) parser-interpreter, [AA1, AAp2], that uses doc-string:
Feature: Data wrangling DSL pipeline testing

    Scenario: Long pipeline
        Given target is Raku
        And titanic dataset exists
        When is executed the pipeline:
            """
            use @dsTitanic;
            filter by passengerSurvival is "survived";
            cross tabulate passengerSex vs passengerClass
            """
        Then result is a hash
The package handles tables as step arguments. The table arguments are treated differently in Example or Scenario blocks than in Scenario outline blocks.
Here is a “simple” use of a table:
Feature: DateTime parsing tests

    Scenario: Simple
        When today, yesterday, tomorrow
        Then the results adhere to:
            | Spec      | Result                        |
            | today     | DateTime.today                |
            | yesterday | DateTime.today.earlier(:1day) |
            | tomorrow  | DateTime.today.later(:1day)   |
Here is a Scenario Outline spec:
Feature: DateTime parsing tests 2

    Scenario Outline: Repeated
        Given <Spec>
        Then <Result>

        Examples: the results adhere to:
            | Spec      | Result                        |
            | today     | DateTime.today                |
            | yesterday | DateTime.today.earlier(:1day) |
            | tomorrow  | DateTime.today.later(:1day)   |
Remark: The package “Markdown::Grammar”, [AAp1], parses tables in a similar manner, but [AAp1] assumes that a table field can have plain words, words with slant or weight, or hyperlinks.
Remark: The package [AAp1] parses tables with and without headers. The Gherkin language descriptions and examples I have seen did not have tables with header separators. Hence, a header separator is treated as a regular table row in “Gherkin::Grammar”.
Complete examples
Calculator
The files “Calculator.feature” and “Calculator.rakutest” provide a simple, fully worked example of how this package can be used to implement Cucumber framework workflows.
Remark: The Cucumber framework(s) expect Gherkin test specifications to be written in files with extension “.feature”.
DateTime interpretation
The date-time interpretations of the package “DateTime::Grammar”, [AAp3], are tested with the feature file “DateTime-interpretation.feature” (and the related “*.rakutest” files.)
Numeric word forms parsing
The interpretations of numeric word forms into numbers by the package “Lingua::NumericWordForms”, [AAp4], are tested with the feature file “Numeric-word-forms-parsing.feature” (and the related “*.rakutest” files.)
DSL for data wrangling
The data wrangling translations and execution results of the package “DSL::English::DataQueryWorkflows”, [AA1, AAp2], are tested with the feature file “DSL-for-data-wrangling.feature” (and the related “*.rakutest” files.)
This is a fairly non-trivial example that involves multiple packages. Also, it makes a lot of sense to test DSL translators using a testing DSL (like Gherkin).
CLI
The package provides a Command Line Interface (CLI) script. Here is its help message:
gherkin-interpretation --help
# Usage:
# gherkin-interpretation <fileName> [-l|--from-lang=<Str>] [-t|--to-lang=<Str>] [-o|--output=<Str>] -- Interprets Gherkin specifications.
#
# -l|--from-lang=<Str> Natural language in which the feature specification is written in. [default: 'English']
# -t|--to-lang=<Str> Language to interpret (translate) the specification to. [default: 'Raku']
# -o|--output=<Str> File to place the interpretation to. (If '-' stdout is used.) [default: '-']
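For example, here is a hypothetical invocation over the “Calculator.feature” file discussed below:
gherkin-interpretation Calculator.feature -o=Calculator.rakutest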
The Raku package “Data::Cryptocurrencies” has functions for cryptocurrency data retrieval. (At this point, only Yahoo Finance is used as a data source.)
The implementation follows the Mathematica implementation in [AAf1] described in [AA1]. (Further explorations are discussed in [AA2].)
Here we get Bitcoin (BTC) data from 1/1/2020 until now:
use Data::Cryptocurrencies;
use Data::Summarizers;
use Text::Plot;
my @ts = cryptocurrency-data('BTC',
        dates => (DateTime.new(2020, 1, 1, 0, 0, 0), now),
        props => <DateTime Close>,
        format => 'dataset'):!cache-all;
say @ts.elems;
# 1137
When we request the data to be returned as “dataset”, the result is an array of hashes. When we request the data to be returned as “timeseries”, the result is an array of pairs (sorted by date).
Here are BTC values for the last week (at the point of retrieval):
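Here is a minimal sketch of such a retrieval (assuming the same function signature as above):
# Get BTC closing values for the last seven days as date-value pairs
my @lastWeek = |cryptocurrency-data('BTC',
        dates => (DateTime.now.earlier(:7days), now),
        props => <DateTime Close>,
        format => 'timeseries');

.say for @lastWeek;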
Remark: The code DateTime::Parse.new can be replaced with datetime-interpret. Compare the test files of the “DateTime::Grammar” repository that have names starting with “01-” with the corresponding files in [FS1].
use DateTime::Grammar;
my $rfc1123 = datetime-interpret('Sun, 06 Nov 1994 08:49:37 GMT');
$rfc1123.raku
# DateTime.new(1994,11,6,8,49,37)
Just the date:
$rfc1123.Date;
# 1994-11-06
7th day of week:
datetime-interpret('Sun', :rule<wkday>) + 1;
# 7
With the adverb extended we can control whether the datetime specs can be just dates. Here are examples:
datetime-interpret('1/23/1089'):extended;
# 1089-01-23T00:00:00Z
datetime-interpret('1/23/1089'):!extended;
# (Any)
Using the role in “external” grammars
Here is how the role “Grammarish” of “DateTime::Grammar” can be used in “higher order” grammars:
my grammar DateTimeInterval
        does DateTime::Grammarish {
    rule TOP($*extended) { 'from' <from=.datetime-param-spec> 'to' <to=.datetime-param-spec> }
};
DateTimeInterval.parse('from 2022-12-02 to Oct 4 2023', args => (True,))
# 「from 2022-12-02 to Oct 4 2023」
# from => 「2022-12-02」
# date-spec => 「2022-12-02」
# date5 => 「2022-12-02」
# year => 「2022」
# month => 「12」
# day => 「02」
# to => 「Oct 4 2023」
# date-spec => 「Oct 4 2023」
# date8 => 「Oct 4 2023」
# month => 「Oct」
# month-short-name => 「Oct」
# day => 「4」
# year => 「2023」
The parameter $*extended can be eliminated by using <datetime-spec> instead of <datetime-param-spec>:
my grammar DateTimeInterval2
        does DateTime::Grammarish {
    rule TOP { 'from' <from=.datetime-spec> 'to' <to=.datetime-spec> }
};
DateTimeInterval2.parse('from 2022-12-02 to Oct 4 2023')
# 「from 2022-12-02 to Oct 4 2023」
# from => 「2022-12-02」
# date-spec => 「2022-12-02」
# date5 => 「2022-12-02」
# year => 「2022」
# month => 「12」
# day => 「02」
# to => 「Oct 4 2023」
# date-spec => 「Oct 4 2023」
# date8 => 「Oct 4 2023」
# month => 「Oct」
# month-short-name => 「Oct」
# day => 「4」
# year => 「2023」
CLI
The package provides a Command Line Interface (CLI) script. Here is its usage message: