AWS S3: Import Custom Data

Many applications can write JSON files to Amazon S3, and you can easily import this custom data into Lytics. Once imported, you can leverage the powerful insights Lytics data science provides on this custom data to drive your marketing efforts.

Integration Details

This integration uses the Amazon S3 API to read the selected JSON file. Each run of the job proceeds as follows:

  1. Query for a list of objects in the bucket selected in the configuration step.
  2. Attempt to find the selected file by matching its name against the configured prefix.
  3. If found, fetch the file.
  4. If configured to diff files, compare the file against the data imported during the previous run.
  5. Filter fields based on what was selected during configuration.
  6. Send event fields to the configured stream.
  7. Schedule the next run of the import if it is a scheduled batch.
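
For orientation, steps 1 through 3 behave roughly like the following Python sketch using boto3; the bucket name and prefix are hypothetical placeholders, and the actual job layers diffing, filtering, and scheduling on top of this.

```python
import boto3

# Hypothetical configuration values; substitute your own bucket and prefix.
BUCKET = "my-export-bucket"
PREFIX = "events_"

s3 = boto3.client("s3")

# Step 1: query for a list of objects in the configured bucket.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)

# Step 2: collect the keys whose names match the prefix.
keys = [obj["Key"] for obj in listing.get("Contents", [])]

# Step 3: if a matching file is found, fetch it.
if keys:
    body = s3.get_object(Bucket=BUCKET, Key=keys[0])["Body"].read()
    print(f"fetched {keys[0]} ({len(body)} bytes)")
```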

Fields

Fields imported from JSON files in S3 require custom data mapping. For assistance mapping your custom data to Lytics user fields, please reach out to Lytics support.
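
As a purely hypothetical example of what such a file might contain, the record below shows how top-level JSON keys arrive as event fields on the stream; none of these field names come from Lytics, and your own schema will differ.

```python
import json

# A hypothetical JSON record; all field names here are illustrative only.
record = json.loads(
    '{"user_id": "u-123", "email": "jane@example.com",'
    ' "purchase_total": 42.5, "ts": "2023-04-01T12:00:00Z"}'
)

# Each top-level key becomes an event field on the configured stream,
# which Lytics support can then help map to user fields.
for field, value in record.items():
    print(field, "->", value)
```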

Configuration

Follow these steps to set up and configure the S3 JSON import job in the Lytics platform. If you are new to creating jobs in Lytics, see the Jobs Dashboard documentation for more information.

  1. Select Amazon Web Services from the list of providers.
  2. Select the Import Custom Data job type from the list.
  3. Select the Authorization you would like to use or create a new one.
  4. Enter a Label to identify this job you are creating in Lytics.
  5. (Optional) Enter a Description for further context on your job.
  6. From the Stream box, enter or select the data stream you want to import the file(s) into.
  7. From the Bucket drop-down list, select the bucket to import from. If there is an error fetching buckets, your credentials may not have permission to list buckets; use the Bucket Name (Alt) box instead.
  8. (Optional) In the Bucket Name (Alt) box, enter the bucket name to read the file(s) from.
  9. From the File drop-down list, select the file to import. Listing files may take up to a couple of minutes after the bucket is chosen.
  10. (Optional) From the Timestamp Field drop-down list, select the name of the column in the JSON that contains the timestamp of an event. If no field is specified, the event will be timestamped with the time of the import.
  11. (Optional) Select the Keep Updated checkbox to run the import on a regular basis.
  12. Additional configuration options are available by clicking the Show Advanced Options tab.
  13. (Optional) In the Prefix text box, enter the file name prefix. You may use regular expressions for pattern matching; the prefix must match the file name up to the timestamp (see the prefix-matching sketch after these steps). A precalculated prefix derived from the selected file will be available in the drop-down.
  14. (Optional) From the Time of Day drop-down list, select the time of day for the import to be scheduled after the first import. This only applies to the daily, weekly, and monthly import frequencies. If no option is selected, the import will be scheduled based on the completion time of the last import.
  15. (Optional) From the Timezone drop-down list, select the time zone for the Time of Day.
  16. (Optional) From the File Upload Frequency drop-down list, select the frequency to run the import.
  17. Click Start Import.
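
To make the Prefix option in step 13 concrete, here is a minimal sketch of regular-expression prefix matching; the file names and the pattern are hypothetical.

```python
import re

# Hypothetical prefix pattern: it must match the file name up to the timestamp.
prefix = re.compile(r"^daily_events_")

files = [
    "daily_events_20230401.json",
    "daily_events_20230402.json",
    "weekly_summary_20230402.json",
]

# Only files whose names match the prefix are considered for import.
matches = [name for name in files if prefix.match(name)]
print(matches)  # ['daily_events_20230401.json', 'daily_events_20230402.json']
```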

NOTE: For continuous imports, files should follow the naming format prefix_timestamp.json. The workflow determines the sequence of files based on the timestamp. If no next file is received, the continuous import will stop, and a new export will need to be configured.
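
As a rough illustration of how the prefix_timestamp.json convention orders files, the sketch below sorts hypothetical file names by the embedded timestamp; the timestamp format your exporting application uses may differ.

```python
# Hypothetical file names following the prefix_timestamp.json convention.
files = [
    "events_20230403.json",
    "events_20230401.json",
    "events_20230402.json",
]

def timestamp(name: str) -> str:
    # Take the portion between the final underscore and the .json suffix.
    return name.rsplit("_", 1)[1].removesuffix(".json")

# Sorting by the embedded timestamp gives the sequence the workflow follows;
# when no file with the next timestamp arrives, the continuous import stops.
for name in sorted(files, key=timestamp):
    print(name)
```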