Search: extract-structured-data

Last modified by admin on 2022/04/24 04:58

Results 1 - 10 of 10 Page 1

[3] Configure Fields for Data Extraction

Last modified by admin on 2023/05/14 13:20
Rendered document content
Each Pipeline defines the structure of Data fields that akaBot Vision extracts. Description: When editing this structure you have two options: Use pre-trained Data fields – akaBot Vision's Generic AI engine has been pre-trained to recognize specific Data fields and enables you to start extracting data without any additional training.
Title
[3] Configure Fields for Data Extraction
Location
Customizing Data Extract
Configuring Fields for Data Extraction
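As a rough illustration of what such an extraction structure might look like, here is a minimal Python sketch; the field names, types, and overall layout are assumptions for illustration only, not akaBot Vision's actual configuration format.

```python
# Hypothetical sketch of an extraction schema: the set of Data fields a
# pipeline should capture. Field names and types are illustrative only.
extraction_schema = {
    "pipeline": "Invoices",
    "fields": [
        {"name": "invoice_number", "type": "string", "pretrained": True},
        {"name": "issue_date",     "type": "date",   "pretrained": True},
        {"name": "total_amount",   "type": "number", "pretrained": True},
        # A custom field like this one would need additional training.
        {"name": "project_code",   "type": "string", "pretrained": False},
    ],
}

for field in extraction_schema["fields"]:
    origin = "pre-trained" if field["pretrained"] else "custom"
    print(f"{field['name']}: {field['type']} ({origin})")
```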

[4] Capture Custom Table Data

Last modified by admin on 2023/05/14 13:20
Rendered document content
A basic element in the extraction schema is the data field. However, akaBot Vision enables the capture of even more complex structures like tables. Adding a predefined table field: If you are missing …
… settings. In this tab, you can manage pre-trained data fields and select which of them should be extracted.
Title
[4] Capture Custom Table Data
Location
Customizing Data Extract
Capturing Custom Table Data in akaBot Vision
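To make the idea of a table field concrete, here is a minimal Python sketch of how a captured table could be represented as rows of column values; the column names and structure are assumptions for illustration, not akaBot Vision's actual output format.

```python
# Hypothetical representation of a captured line-item table.
# Column names and the row layout are illustrative only.
captured_table = {
    "field": "line_items",
    "columns": ["description", "quantity", "unit_price"],
    "rows": [
        ["Widget A", "2", "10.00"],
        ["Widget B", "1", "25.50"],
    ],
}

# Turn the raw rows into dictionaries keyed by column name.
items = [dict(zip(captured_table["columns"], row)) for row in captured_table["rows"]]
total = sum(int(i["quantity"]) * float(i["unit_price"]) for i in items)
print(items)
print(f"Total: {total:.2f}")
```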

[3] Validate the Data

Last modified by admin on 2023/07/05 11:13
Rendered document content
When you select a document for review, it will take you to the Validation screen. In the left panel, you can see the predefined list of Data fields that have been extracted. Press Tab to check …
… otherwise, you won't be able to export the data. If you realize that the data field was not extracted correctly, you can make …
Title
[3] Validate the Data
Location
Validate the Data

[3] Review Document

Last modified by admin on 2024/01/11 18:13
Rendered document content
After the document has been imported and the data extraction process has finished successfully, the document status will change to TOREVIEW. To Review Tab: You can see documents with this status in the "To review" tab in the user interface.
Shortcuts: When reviewing documents in akaBot Vision, there are multiple ways of moving between data fields.

[1] Automation of Fields

Last modified by admin on 2023/05/14 13:19
Rendered document content
Built-in checks – we perform Data Integrity checks based on values found on the document. Such checks …
… After saving the Extraction schema you should see the message for the required fields with no captured value. Table …
Location
Customizing Data Extract
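The snippet above refers to required fields that end up with no captured value. As a rough illustration only, here is a minimal Python sketch of that kind of check; the field names and the "required" flag are assumptions, not akaBot Vision's built-in implementation.

```python
# Hypothetical required-field check: flag required fields with no captured value.
# Field names and the "required" flag are illustrative only.
schema = [
    {"name": "invoice_number", "required": True},
    {"name": "issue_date",     "required": True},
    {"name": "note",           "required": False},
]
captured = {"invoice_number": "INV-001", "issue_date": "", "note": None}

missing = [
    f["name"]
    for f in schema
    if f["required"] and not captured.get(f["name"])
]
if missing:
    print("Required fields with no captured value:", ", ".join(missing))
```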

[1] Import Document Manually

Last modified by admin on 2023/05/14 13:09
Rendered document content
When you send a document to the pipeline, akaBot Vision will immediately start to extract the data from it. Description: During this stage, the document has an importing status. In case something went wrong during the upload stage or later during the importing stage, the document will fail.

[1] Overview

Last modified by admin on 2023/10/03 12:07
Rendered document content
… to the Postponed tab (switching to Postponed status). If they find that all the extracted data is OK, they can click the Confirm button and then this document …

HTTP Client with Body Factory

Last modified by Nhan Nguyen on 2022/05/13 03:16
Rendered document content
Misc: Public (Checkbox) - Check if you want to publicize it. Remember to consider data security …
… You can edit the name of the activity to organize and structure your code better. Ex: [35413123123] Http Request Simple Authentication

HTTP Client

Last modified by DatPT on 2023/04/19 10:26
Rendered document content
You can edit the name of the activity to organize and structure your code better. Ex: [35413123123] Http Client. Public (Checkbox) - Check if you want to publicize it. Remember to consider data security requirements before using it. OAuth1: Consumer Key (String) - The consumer key …
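For context on what an OAuth1 consumer key is used for, here is a minimal generic Python sketch using the requests-oauthlib library; it is not the akaBot HTTP Client activity itself, and the URL and credential values are placeholders.

```python
# Generic OAuth1-signed request in Python (not the akaBot activity itself).
# The URL and all credential values below are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    client_key="YOUR_CONSUMER_KEY",        # corresponds to "Consumer Key" in the activity
    client_secret="YOUR_CONSUMER_SECRET",
    resource_owner_key="YOUR_ACCESS_TOKEN",
    resource_owner_secret="YOUR_TOKEN_SECRET",
)

response = requests.get("https://api.example.com/resource", auth=auth)
print(response.status_code)
```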

[1.2] RPA Reference

Last modified by admin on 2023/05/14 13:23
Rendered document content
… the ID of the file you want to export. You can use this key from the import document activity. Extract type: you can choose DataTable/Json.
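As a purely hypothetical sketch of how an export with an extract type might be requested from an automation script, here is a short Python example; the endpoint URL, parameter names, and the authorization header are assumptions for illustration and are not akaBot Vision's documented API.

```python
# Hypothetical export request: the document ID comes from the import step,
# and the extract type chooses between a DataTable-style result and JSON.
# The endpoint, parameter names, and auth header are placeholders.
import requests

BASE_URL = "https://akabot-vision.example.com/api"   # placeholder
document_id = "doc-12345"                             # illustrative ID from the import step

response = requests.get(
    f"{BASE_URL}/documents/{document_id}/export",
    params={"extractType": "json"},                   # or a DataTable-style export
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
response.raise_for_status()
print(response.json())
```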
Created by admin on 2022/04/17 14:38