
AWS S3: Import Audiences & Activity Data

Import audiences and activity data into Lytics via a CSV file directly from your AWS S3 bucket.

Integration Details

This integration uses the Amazon S3 API to read the CSV file selected for the import. Each run of the job will proceed as follows:

  1. Query for a list of objects in the bucket selected in the configuration step.
  2. Read the selected CSV file.
  3. Import the fields chosen during configuration. If the job is configured to diff the files, the file is compared against the data imported in the previous run.
  4. Send the fields to the configured data stream.
  5. Schedule the next import run if the job is configured to run continuously.
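The diff behavior in step 3 can be illustrated with a short sketch. Lytics' actual implementation is not public, so the function name and logic here are illustrative only: rows from the current file are kept if they did not appear, byte for byte, in the previous file.

```python
import csv
import io

def changed_rows(previous_csv: str, current_csv: str) -> list[dict]:
    """Return only the rows in current_csv that are new or changed
    relative to previous_csv (illustrative sketch of the Diff option)."""
    prev = {tuple(row.items()) for row in csv.DictReader(io.StringIO(previous_csv))}
    return [
        row for row in csv.DictReader(io.StringIO(current_csv))
        if tuple(row.items()) not in prev
    ]

previous = "email,score\na@example.com,1\nb@example.com,2\n"
current = "email,score\na@example.com,1\nb@example.com,5\n"
print(changed_rows(previous, current))  # only the row for b@example.com changed
```

With this approach, unchanged rows are skipped entirely, which is why diffing helps when large amounts of data repeat from file to file.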

Please see Custom Data Ingestion for more information on file naming, field formatting, headers, timestamps, etc.

Fields

Once you choose the CSV file to import from your S3 bucket during configuration, Lytics will read the file and list all the fields that can be imported. You can then select the fields that you want to import.
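Conceptually, the field list comes from the file's header row. A minimal sketch of that discovery step (function name and sample data are hypothetical):

```python
import csv
import io

def importable_fields(csv_text: str, delimiter: str = ",") -> list[str]:
    """List the column names available for import, taken from the
    CSV file's header row (illustrative sketch)."""
    reader = csv.reader(io.StringIO(csv_text), delimiter=delimiter)
    return next(reader, [])

sample = "user_id,email,signup_date\n1,a@example.com,2024-01-01\n"
print(importable_fields(sample))  # ['user_id', 'email', 'signup_date']
```

If the header row is missing or the delimiter is wrong, no usable field names can be recovered, which is why a proper header row matters during configuration.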

Configuration

Follow these steps to set up and configure an AWS S3 CSV import job in Lytics. If you are new to creating jobs in Lytics, see the Jobs Dashboard documentation for more information.

  1. Select Amazon Web Services from the list of providers.
  2. Select the Import Audiences and Activity Data (S3) job type from the list.
  3. Select the Authorization you would like to use or create a new one.
  4. Enter a Label to identify this job you are creating in Lytics.
  5. (Optional) Enter a Description for further context on your job.
  6. From the Stream box, enter or select the data stream you want to import the file(s) into.
  7. From the Bucket drop-down list, select the bucket to import from. If buckets fail to load, your credentials may lack permission to list buckets; in that case, type the name of the bucket that contains the CSV file.
  8. (Optional) Using the Directory drop-down, select the folder where the CSV file is located. If the directory listing takes too long to load, you can type the folder name instead.
  9. From the File drop-down, select the file to import. Listing files may take a couple of minutes after the bucket is selected. If your credentials grant access only to a specific file, you can type the file name directly.
  10. (Optional) In the Custom Delimiter text field, enter the delimiter of the file. The default delimiter is a comma (,). For tab-delimited files, enter t.
  11. (Optional) Using the Timestamps drop-down list, select the column in the CSV file that contains the timestamp of an event. If no column is selected, events will be timestamped with the time of the import.
  12. (Optional) Using the Fields input, select the fields to import. The fields listed on the left side are available for import. If nothing is selected, all fields will be imported. If no field names appear, verify that the CSV file has an appropriate header row and that the correct delimiter is configured.
  13. (Optional) Select the Keep Updated checkbox to run this import continuously.
  14. (Optional) Select the Diff checkbox to compare file contents to the previous file contents during continuous import and import only rows that have changed. This is useful when large amounts of data remain unchanged in each file.
  15. Click the Show Advanced Options button.
  16. (Optional) In the Prefix text box, enter the file name prefix. You may use regular expressions for pattern matching. The prefix must match the file name up to the timestamp. A precalculated prefix derived from the selected file will be available as a drop-down option.
  17. (Optional) Using the Time of Day drop-down, select the time of day to start import.
  18. (Optional) Using the Timezone drop-down, select the timezone for the time of day you selected above.
  19. (Optional) Using the File Upload Frequency drop-down, select how often to check for a new file.
  20. Click Start Import.
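Step 16's prefix matching can be sketched as follows. The timestamp format and file names below are assumptions for illustration; the actual format Lytics expects depends on your files (see Custom Data Ingestion for naming conventions):

```python
import re

def matches_prefix(filename: str, prefix_pattern: str) -> bool:
    """Check whether a file name matches the configured prefix.
    Regular expressions are allowed, per step 16."""
    return re.match(prefix_pattern, filename) is not None

def derive_prefix(filename: str) -> str:
    """Sketch of a precalculated prefix: strip a trailing timestamp-like
    suffix (here assumed to look like '_YYYY-MM-DD') from the file name."""
    return re.sub(r"[_-]?\d{4}-\d{2}-\d{2}.*$", "", filename)

print(derive_prefix("audience_export_2024-01-15.csv"))  # 'audience_export'
print(matches_prefix("audience_export_2024-02-01.csv", r"audience_export"))
```

During continuous import, a stable prefix like this lets each scheduled run pick up the newest file whose name matches up to the timestamp.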