This page provides you with instructions on how to extract data from the Facebook Ads API and load it into Amazon Redshift. (If this manual process is a bit more involved than you'd prefer, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

The first step of getting your Facebook Ads data into AWS Redshift is actually pulling that data off of Facebook's servers. You can do this using the Facebook Ads Insights API, which is available to all Facebook advertisers. Note that Facebook also offers an API for managing and placing ads, but that is not relevant for extracting reporting data.
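As a rough sketch of what a pull might look like, here is a minimal Python example that calls the Insights endpoint on the Graph API. The access token, ad account ID, API version, date range, and field list below are all placeholder assumptions you would replace with your own values.

import json
import requests

# Assumptions: the token, account ID, and API version are placeholders you supply yourself.
ACCESS_TOKEN = "YOUR_MARKETING_API_TOKEN"
AD_ACCOUNT_ID = "act_1234567890"
BASE_URL = f"https://graph.facebook.com/v19.0/{AD_ACCOUNT_ID}/insights"

params = {
    "access_token": ACCESS_TOKEN,
    "level": "adset",                       # aggregate results per ad set
    "fields": "adset_name,impressions,clicks,ctr,cpc,spend",
    "time_range": json.dumps({"since": "2024-01-01", "until": "2024-01-31"}),
    "time_increment": 1,                    # one row per day
}

response = requests.get(BASE_URL, params=params)
response.raise_for_status()

for row in response.json().get("data", []):
    print(row)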
The data you extract from Facebook Ads API reports can be quite granular, allowing you to see things like impressions, clickthrough rates, and CPC broken out by time period.
Below is an example of what a response from the Facebook Ads Insights API might look like for a specific campaign over a specified time period.
"data": "impressions": "1862555", "adset_name": "My ad set", "cost_per_action_type": "action_carousel_card_name": "My Carousel Card 1", "action_type": "app_custom_event.fb_mobile_activate_app", "value": 0.093347346315861 , "action_carousel_card_name": "My Carousel Card 2", "action_type": "app_custom_event.fb_mobile_activate_app", "value": 0.38324089579301 ,.. ,
Now comes the fun part: mapping the data that comes out of each Facebook API result into a schema that can be inserted into a Redshift database. This means that, for each value in the response, you need to identify a predefined datatype (e.g. INTEGER, TIMESTAMP, etc.) and build a table that can receive it. The Facebook Ads API documentation can give you a good sense of what fields will be provided by each endpoint, along with their corresponding datatypes.
Once you have identified all of the columns you will want to insert, you can use the CREATE TABLE statement in Redshift to create a table that can receive all of this data.
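As a sketch of what this could look like, the snippet below creates an illustrative table from Python using the psycopg2 driver. The table name, column list, datatypes, and connection details are assumptions based on the sample response above, not a definitive schema.

import psycopg2

# Assumption: connection details are placeholders; supply your own cluster endpoint and credentials.
conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="your_user",
    password="your_password",
)

# Illustrative schema derived from the sample Insights response above.
create_sql = """
CREATE TABLE IF NOT EXISTS fb_ads_insights (
    adset_name   VARCHAR(255),
    date_start   DATE,
    date_stop    DATE,
    impressions  BIGINT,
    clicks       BIGINT,
    ctr          DECIMAL(18, 6),
    cpc          DECIMAL(18, 6),
    spend        DECIMAL(18, 2)
);
"""

with conn, conn.cursor() as cur:
    cur.execute(create_sql)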
With a table built, it may seem like the easiest way to add your data (especially if there isn't much of it) is to build INSERT statements to add data to your Redshift table row by row. If you have any experience with SQL, this will be your gut reaction. But beware! Redshift isn't optimized for inserting data one row at a time, and if you have any kind of high-volume data being inserted, you would be much better off loading the data into Amazon S3 and then using the COPY command to load it into Redshift.
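Here is a minimal sketch of that S3-plus-COPY approach in Python. The bucket name, file name, IAM role ARN, and connection details are all placeholder assumptions to replace with values from your own AWS account.

import boto3
import psycopg2

# Assumption: the bucket, key, and IAM role ARN below are placeholders.
s3 = boto3.client("s3")
s3.upload_file("fb_ads_insights.csv", "my-etl-bucket", "facebook/fb_ads_insights.csv")

copy_sql = """
COPY fb_ads_insights
FROM 's3://my-etl-bucket/facebook/fb_ads_insights.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
CSV
IGNOREHEADER 1;
"""

conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="your_user",
    password="your_password",
)

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)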
What happens tomorrow when you have thousands of new impressions and want those incorporated into your Redshift database as well?
The key is to build your script in such a way that it can also identify incremental updates to your data. You can set your script up as a cron job or continuous loop to keep pulling down new data as it appears.
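One hypothetical way to handle this is to keep a bookmark of the last date you synced and only request newer data on each run. Everything in the sketch below (the token, account ID, API version, field list, and file-based bookmark) is an illustrative assumption, not the only way to track incremental state.

import datetime
import json
import pathlib
import requests

# Assumption: all values below are placeholders for your own credentials and state storage.
ACCESS_TOKEN = "YOUR_MARKETING_API_TOKEN"
AD_ACCOUNT_ID = "act_1234567890"
BOOKMARK_FILE = pathlib.Path("last_synced.txt")

def last_synced_date() -> str:
    if BOOKMARK_FILE.exists():
        return BOOKMARK_FILE.read_text().strip()
    return "2024-01-01"  # initial backfill start date

def pull_new_rows() -> list:
    today = datetime.date.today().isoformat()
    params = {
        "access_token": ACCESS_TOKEN,
        "level": "adset",
        "fields": "adset_name,impressions,clicks,cpc,spend",
        "time_range": json.dumps({"since": last_synced_date(), "until": today}),
        "time_increment": 1,
    }
    url = f"https://graph.facebook.com/v19.0/{AD_ACCOUNT_ID}/insights"
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    BOOKMARK_FILE.write_text(today)  # advance the bookmark only after a successful pull
    return resp.json().get("data", [])

if __name__ == "__main__":
    rows = pull_new_rows()
    print(f"Pulled {len(rows)} new rows")

Scheduled via cron (or run in a loop), a script like this keeps your Redshift table in step with new activity in your ad account.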
Redshift is totally awesome, but sometimes you need to start smaller or optimize for different things. In this case, many people choose to get started with Postgres, which is an open source RDBMS that uses nearly identical SQL syntax to Redshift. If you're interested in seeing the relevant steps for loading this data into Postgres, check out Facebook to Postgres.
If all this sounds a bit overwhelming, don't be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn't a very high-leverage use of your time.
Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Facebook Ads data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Amazon Redshift data warehouse.
Stitch streams all of your data directly to Redshift so you can focus on analysis, not data consolidation.