Deep generative models such as GPT-4 can extract knowledge from text more effectively than conventional models. They are semantically flexible and powerful, but this flexibility hampers the production of accurate, task-specific results, which is why careful prompt engineering is necessary. This work assesses different prompt engineering strategies for knowledge extraction using GPT-4 and the RED-FM relation extraction dataset. To address the evaluation problem, a new framework based on the Wikidata ontology is proposed. The results indicate that LLMs can extract an immense variety of facts from text. Including at least one example relevant to the task boosts performance two- to threefold, and highly related examples outperform random or conventional ones by a wide margin. However, adding further examples yields diminishing returns. Surprisingly, reasoning-based prompting methods fail to beat non-reasoning strategies, suggesting that knowledge extraction does not behave like a reasoning task for LLMs. Retrieval-augmented prompts, by contrast, perform very well and, combined with the other methods, further improve the extraction process. Knowledge extraction is therefore not a problem for LLMs, but framing it as a reasoning-based endeavor may be ineffective. Well-designed prompts, particularly those with examples, unlock the potential of LLMs for knowledge acquisition.
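The one-shot strategy described above can be sketched as follows; the function name, the example-selection detail, and the prompt wording are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a one-shot relation-extraction prompt: including a
# single example that is highly related to the input passage, as the findings
# above suggest, rather than a random one.

def build_prompt(passage, example=None):
    """Assemble a relation-extraction prompt, optionally with one worked example."""
    parts = [
        "Extract (subject, relation, object) triples from the passage.",
        "Use Wikidata relation labels where possible.",
    ]
    if example is not None:
        # The example would ideally be chosen for similarity to the passage,
        # e.g. via embedding-based retrieval (an assumption in this sketch).
        parts.append("Example passage: " + example["text"])
        parts.append("Example triples: " + "; ".join(
            f"({s}, {r}, {o})" for s, r, o in example["triples"]))
    parts.append("Passage: " + passage)
    parts.append("Triples:")
    return "\n".join(parts)

# Usage: a related worked example precedes the target passage.
related_example = {
    "text": "Marie Curie was born in Warsaw.",
    "triples": [("Marie Curie", "place of birth", "Warsaw")],
}
prompt = build_prompt("Alan Turing was born in London.", related_example)
```

The resulting string would then be sent to the model; the zero-shot variant is simply `build_prompt(passage)` with no example.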
Prompt Engineering, Generative Knowledge Extraction, Ontology-Based Evaluation, GPT-4, Wikidata, Process Model Comprehension, Business Process Management, Large Language Models, Generative AI.
IRE Journals:
Venkat Sharma Gaddala, "Prompt Engineering in Supply Chain Enterprise Data," Iconic Research And Engineering Journals, Volume 6, Issue 3, 2022, pp. 213-224.