Search: extract-structured-data
Last modified by admin on 2022/04/24 04:58
Results 1 - 8 of 8
[3] Configure Fields for Data Extraction
Each Pipeline defines the structure of data fields that akaBot Vision extracts. When editing this structure you have two options: use pre-trained data fields – akaBot Vision's Generic AI engine has been pre-trained to recognize specific data fields and enables you to start extracting data without any additional training
Located in: Customizing Data Extract › …Configuring Fields for Data Extraction
[4] Capture Custom Table Data
A basic element in the extraction schema is the data field. However, akaBot Vision also enables capturing more complex structures, such as tables. Adding a predefined table field: if you are missing
…settings. In this tab, you can manage pre-trained data fields and select which of them should be extracted
Located in: Customizing Data Extract › …Capturing Custom Table Data in akaBot Vision
[3] Validate the Data
When you select a document for review, you are taken to the Validation screen. In the left panel, you can see the predefined list of data fields that have been extracted. Press Tab to check
…be able to export the data. If you realize that a data field was not extracted correctly, you can make
Located in: Validate the Data
- Raw document content
, otherwise, you won’t be able to export the data. If you realize that the data field was not extracted
…="font-family:Arial,Helvetica,sans-serif" %)In the left panel, you can see the predefined list of Data fields that have been extracted. (% style="font-family:Arial,Helvetica,sans-serif" %)[[[[image:image
[3] Review Document
After the document has been imported and the data extraction process has finished successfully, the document status changes to TOREVIEW. You can see documents with this status in the "To review" tab
…Shortcuts: when reviewing documents in akaBot Vision, there are multiple ways of moving between data fields
[1] Automation of Fields
…on: Built-in checks – we perform data integrity checks based on values found on the document. Such checks
…after saving the extraction schema, you should see the message for the required fields with no captured value. Table
Located in: Customizing Data Extract
- Raw document content
– we perform Data Integrity checks based on values found on the document. Such checks could
…saving the Extraction schema you should be seeing the message for the required fields with no captured
[1] Import Document Manually
When you send a document to the pipeline, akaBot Vision will immediately start to extract the data from it. During this stage, the document has an Importing status. If something goes wrong during the upload stage, or later during the importing stage, the document will fail
[1] Overview
…status). If they find that all the extracted data is OK, they can click the Confirm button; this document then
[1.2] RPA Reference
…You can use this key from the import document activity. Extract type: you can choose DataTable/Json
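The Extract type option mentioned above lets the RPA activity return the extracted data as either a DataTable or Json. As a minimal sketch of working with a Json-type result (the payload shape, field names, and values here are assumptions for illustration, not akaBot Vision's actual export format), the field list could be flattened into a simple name-to-value mapping:

```python
import json

# Hypothetical shape of a Json-type extraction result; the real
# akaBot Vision payload may differ. Each entry carries a field
# name and the value extracted from the document.
payload = json.loads("""
{
  "fields": [
    {"name": "invoice_number", "value": "INV-001"},
    {"name": "total_amount", "value": "1250.00"}
  ]
}
""")

# Flatten the field list into a dict keyed by field name,
# roughly analogous to one row of the DataTable extract type.
extracted = {f["name"]: f["value"] for f in payload["fields"]}
print(extracted["invoice_number"])
```

A downstream workflow could then map these keys onto the variables it needs, regardless of which extract type was chosen.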