Uploading bulk data into Salesforce using the Salesforce Bulk V2 Data Loader connector

SUMMARY
This article guides you through the step-by-step process of loading data into Salesforce using the Salesforce Bulk V2 Data Loader v51 connector in webMethods.IO Integration.

PREREQUISITES

  1. Salesforce CRM REST account credentials
  2. AWS S3 credentials
  3. webMethods.IO Integration tenant credentials

CONTENTS

  1. Preparing the CSV file containing the data to be uploaded
  2. Creating a flowservice that uploads data in bulk into the Salesforce backend

STEPS

To prepare the CSV file containing the data to be uploaded:

Step 1: Create a new CSV file (Excel can be used, saving the result as CSV). Add columns based on the business object for which the data is to be uploaded. For instance, if you are uploading bulk Contacts, the CSV must contain the mandatory LastName column.
A sample file will look like the one below.
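For reference, a minimal Contacts file could look like the following (the values are hypothetical sample data; only LastName is mandatory):

    FirstName,LastName,Email,Phone
    John,Doe,john.doe@example.com,555-0100
    Jane,Smith,jane.smith@example.com,555-0101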

Step 2: Save the file and upload it to AWS S3 using the Upload option in the S3 console (shown below); it will be fetched later in the flowservice.
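If you prefer to script the upload instead of using the console, a minimal boto3 sketch could look like this (the bucket and file names are placeholders; credentials are assumed to be configured via the AWS CLI or environment variables):

    import boto3

    # Upload the prepared CSV to S3; bucket and key names below are placeholders
    s3 = boto3.client("s3")
    s3.upload_file("contacts.csv", "my-bulk-bucket", "contacts.csv")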

To upload the bulk data using a flowservice:

Step 1: Log in to the webMethods.IO Integration tenant using your username and password. Select a project and create a new flowservice.
To fetch data from AWS S3, first configure Amazon S3 in the flowservice: choose Amazon S3 from the list of connectors and select the getObject operation from the list of actions.
Configure the account for S3 by providing the Access and Secret keys of your AWS instance.

Step 2: Click on Pipeline mapping and enter the values for “bucketName” and “objectName” from your AWS S3 instance.
bucketName : the AWS S3 bucket to which the CSV was uploaded.
objectName : the name of the CSV file as it appears in S3.
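For reference, the getObject operation behaves roughly like the following boto3 sketch (the keys, bucket, and object names are placeholders):

    import boto3

    # Credentials correspond to the Access and Secret keys configured in Step 1
    s3 = boto3.client(
        "s3",
        aws_access_key_id="YOUR_ACCESS_KEY",      # placeholder
        aws_secret_access_key="YOUR_SECRET_KEY",  # placeholder
    )

    # bucketName and objectName map to Bucket and Key respectively
    response = s3.get_object(Bucket="my-bulk-bucket", Key="contacts.csv")
    csv_bytes = response["Body"].read()  # the CSV stream used in later steps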

Step 3: Choose Salesforce Bulk V2 Data Loader v51 from the list of connectors. Select “createAndUploadJobData” from the “Type to choose action” dropdown.

Step 4: Configure the account using the Select Account option as shown below. Fill in the mandatory fields, such as Client ID, Client Secret, Access Token, Refresh Token, and Server URL, to configure the account.
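These credentials follow the standard Salesforce OAuth 2.0 refresh-token flow. A minimal sketch of the underlying token exchange is shown below (all credential values are placeholders; the login URL may differ for sandbox orgs):

    import requests

    # Exchange the refresh token for an access token (placeholder credentials)
    resp = requests.post(
        "https://login.salesforce.com/services/oauth2/token",
        data={
            "grant_type": "refresh_token",
            "client_id": "YOUR_CLIENT_ID",
            "client_secret": "YOUR_CLIENT_SECRET",
            "refresh_token": "YOUR_REFRESH_TOKEN",
        },
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]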

Step 5: Click on pipeline mapping to provide values for the mandatory fields object, operation, and stream as shown below, then Save.
object : the business object whose records are being uploaded
operation : choose ‘insert’ from the picklist
stream : the CSV file fetched from S3 (map as shown below)
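Under the hood, createAndUploadJobData corresponds to two Bulk API 2.0 REST calls: one to create the ingest job and one to upload the CSV. A rough sketch, assuming the access_token from the sketch above and the csv_bytes fetched from S3 (the instance URL is a placeholder):

    import requests

    BASE = "https://yourInstance.my.salesforce.com/services/data/v51.0"  # placeholder
    headers = {"Authorization": f"Bearer {access_token}"}

    # Create an ingest job for the chosen object and operation
    job = requests.post(
        f"{BASE}/jobs/ingest",
        headers={**headers, "Content-Type": "application/json"},
        json={"object": "Contact", "operation": "insert", "contentType": "CSV"},
    ).json()

    # Upload the CSV data (the "stream" mapped from the S3 getObject output)
    requests.put(
        f"{BASE}/jobs/ingest/{job['id']}/batches",
        headers={**headers, "Content-Type": "text/csv"},
        data=csv_bytes,
    )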

Step 6: Add a third step to close the job created by the createAndUploadJobData step. This step is necessary to complete the upload process, since Salesforce does not start processing the job until it is closed.
Add a custom action for the CloseOrAbortJob operation (shown below).

Step 7: Click on pipeline mapping for the CloseOrAbortJob operation. Provide values for jobId and state as shown below.

jobId : map from the output of the createAndUploadJobData step
state : UploadComplete (from the picklist)
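For reference, CloseOrAbortJob with state UploadComplete maps to a single Bulk API 2.0 call, continuing the sketch from Step 5 (BASE, headers, and job are carried over):

    # Mark the job UploadComplete so Salesforce queues it for processing
    requests.patch(
        f"{BASE}/jobs/ingest/{job['id']}",
        headers={**headers, "Content-Type": "application/json"},
        json={"state": "UploadComplete"},
    )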

Step 8: Run the flowservice. Once the run is successful, the result will be as shown in the image below.
Note : the job completion time depends on the size of the data being uploaded.

Step 9: The records will be available in Salesforce once the job has been processed completely.
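To verify completion programmatically rather than in the Salesforce UI, the job status can be polled through the same Bulk API, continuing the earlier sketch:

    import time

    # Poll until Salesforce finishes processing the job
    while True:
        job_info = requests.get(f"{BASE}/jobs/ingest/{job['id']}", headers=headers).json()
        if job_info["state"] in ("JobComplete", "Failed", "Aborted"):
            break
        time.sleep(5)

    print("Final job state:", job_info["state"])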
