Exploring The Generative AI Applications And Limitations
Posted on 14 March, 2023

Here's our 3.14 cents on Generative AI, based on our experiments at Robotic Online Intelligence:

1. It doesn’t know things but it can process

A Large Language Model doesn't 'know things'; it just guesses what 'might fit next'. Hence the high risk of hallucinations and confident-sounding but incorrect statements when a specific fact-based query goes against a general knowledge base.

In ChatGPT's case, as its own replies will tell you, it should be taken for what it is, not more.

But the processing of unstructured text can work really well, especially when the knowledge base is a specific text. Summarization, translation, paraphrasing, and extraction can work like magic.

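To make this concrete, here is a minimal sketch of that 'processing' mode in Python: summarizing one specific text supplied in the prompt, rather than querying the model's general 'knowledge'. The pre-1.0 openai package, the gpt-3.5-turbo model, and the prompt wording are assumptions for illustration, not our production setup.

```python
import openai  # pip install openai (pre-1.0 interface assumed here)

openai.api_key = "YOUR_API_KEY"  # placeholder

def summarize(text: str, max_words: int = 100) -> str:
    """Summarize only the supplied text; the model is not asked to 'know' anything."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model choice
        temperature=0,          # keep the output as deterministic as possible
        messages=[
            {"role": "system",
             "content": "You are a text-processing assistant. "
                        "Use ONLY the text provided by the user."},
            {"role": "user",
             "content": f"Summarize the following text in at most {max_words} words:\n\n{text}"},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()
```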
 

2. Domain-specific approach works

In the absence of ‘knowledge’, it makes sense to approximate it through domain-specific information, which needs to be prepared for particular use cases. That could mean training on your own data set or referencing a data set as context.

The models can serve well as a component or a module, supporting a broader solution via APIs.

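As a rough illustration of 'referencing a data set as context', with the model wrapped as one module behind a single function call, here is a sketch; the snippet list, model, and prompt wording are hypothetical placeholders rather than our actual pipeline.

```python
import openai  # pre-1.0 interface assumed, as in the sketch above

# Hypothetical, pre-prepared domain snippets for a particular use case.
DOMAIN_SNIPPETS = [
    "Domain fact 1 ...",
    "Domain fact 2 ...",
]

def answer_with_context(question: str, snippets: list[str]) -> str:
    """One module in a larger pipeline: the model only sees the prepared context."""
    context = "\n".join(f"- {s}" for s in snippets)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Answer strictly from the context below. If the context "
                        "does not contain the answer, say so.\n\nContext:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()

# Example: answer_with_context("Which of these facts relate to topic X?", DOMAIN_SNIPPETS)
```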
 

3. A valuable assistant, with a verification challenge

There are great cases of assistance to writers and marketers (text processing) and to software developers (thanks to the GitHub knowledge base and domain-specific training). What could address the facts-vs-fiction challenge above is the ability to point to a source. Through prompt engineering, one can undoubtedly push ChatGPT to give up sources, but it still seems to operate within the same probabilistic framework.

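One generic way to frame that 'point to a source' idea through prompt wording is sketched below; note that even with such instructions, any source the model produces is still a probabilistic guess and needs independent verification.

```python
# A hypothetical prompt template that pushes the model to attach a source to every
# claim, or to admit it has none. It does not change the underlying probabilistic
# guessing; it only makes unverifiable claims easier to spot.
VERIFICATION_PROMPT = """Answer the question below.
For every factual claim, add the source in brackets, e.g. [source: ...].
If you cannot name a verifiable source, write [source: unknown] instead of guessing.

Question: {question}
"""

def build_verification_prompt(question: str) -> str:
    return VERIFICATION_PROMPT.format(question=question)
```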
 

We are running a few experiments at Robotic Online Intelligence:

- Extraction of specific data points from a body of text into a clean table structure, to be integrated into the Kubro(TM) Information Engine for data companies; roughly analogous to an Autopilot for coding, but in our case for data extraction (see the first sketch after this list)

- Querying only the knowledge within information pre-filtered by Kubro(TM), and forcing referencing, to limit the chance of the AI 'hallucinating'; this is also driven by limits on how large a single prompt can be (see the second sketch after this list)

- Email (still the most powerful app…) as an 'interface' for queries that first go into our engine, which integrates an API call as one of the many steps in the process. For example, we forward a few PDF reports by email to our 'digital colleague', ask for a summary of the points related to a particular topic in a certain style/format, and get it quickly in the email reply

- Translation and summarization work far better than what was available earlier, e.g. for local China property news on distressed developers in Signallium(TM) China Property

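As a rough sketch of the first experiment above (extraction into a clean table structure), one way to frame it is to ask the model for JSON rows and load them downstream; the field names, model, and prompt wording are illustrative assumptions, not the actual Kubro(TM) implementation.

```python
import json
import openai  # pre-1.0 interface assumed

def extract_rows(text: str, fields: list[str]) -> list[dict]:
    """Pull the requested data points out of free text as JSON rows (one dict per record)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You extract structured data. Reply with a JSON array only, "
                        "no commentary. Use null for any requested field not in the text."},
            {"role": "user",
             "content": "Fields to extract: " + ", ".join(fields) + "\n\nText:\n" + text},
        ],
    )
    raw = response["choices"][0]["message"]["content"]
    return json.loads(raw)  # real usage needs validation and a retry on malformed JSON

# Illustrative call with hypothetical field names:
# rows = extract_rows(report_text, ["company", "bond", "maturity_date", "coupon"])
```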
 

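And a sketch of the second experiment (querying only pre-filtered information, forcing references, and capping the context size); the character budget, model, and wording are assumptions rather than our production values.

```python
import openai  # pre-1.0 interface assumed

MAX_CONTEXT_CHARS = 8_000  # crude stand-in for the real prompt-size/token limit

def answer_with_references(question: str, snippets: list[str]) -> str:
    """Answer only from numbered, pre-filtered snippets and cite them as [1], [2], ..."""
    numbered, used = [], 0
    for i, snippet in enumerate(snippets, start=1):
        block = f"[{i}] {snippet}"
        if used + len(block) > MAX_CONTEXT_CHARS:
            break  # stop adding context once the budget is reached
        numbered.append(block)
        used += len(block)
    context = "\n".join(numbered)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the numbered snippets below and cite them "
                        "like [1]. If the snippets do not contain the answer, say so.\n\n"
                        + context},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()
```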
In summary, we would curb the enthusiasm about 'true AI', but we are very hyped about the 'boring' side of text processing.

Let's see what GPT-4 will bring this week, along with Baidu's Ernie, and what Google will do next.