Author: Kanchana Velaga

INTRODUCTION:
Cloning: Cloning is creating a copy of an existing object. The clone that is created is both writable and completely independent of its source.
Snowpipe: Snowpipe is used to load data from files as soon as they're available on the stage.
Snowpipe on a cloned database: When a database (or any containing object) is cloned, pipes that refer to an internal stage are not cloned, whereas pipes that refer to an external stage are cloned.
Pipe behavior in cloning: If the table name in the pipe's COPY statement is qualified (DB_NAME.SCHEMA_NAME.TABLE_NAME or SCHEMA_NAME.TABLE_NAME) when the pipe is created, the cloned pipe continues to load into the source table, so Snowpipe loads the same data into the source table twice.
If the table name is not qualified, each pipe resolves the name within its own database, so Snowpipe loads the data into the source table and the cloned table correctly.
For a cloned pipe, execution is paused by default (PIPE_EXECUTION_PAUSED = TRUE), so the pipe does not act on event notifications. It therefore has to be resumed by setting PIPE_EXECUTION_PAUSED to FALSE.
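As a quick way to confirm this behavior, the pipes carried over into a clone and their state can be inspected. The statements below are only a sketch; the database and pipe names follow the example used later in this post.
-- List the pipes that were carried over into the cloned database (pipes on internal stages will be missing).
SHOW PIPES IN DATABASE CLONEDSNOWPIPEDB;
-- Check an individual pipe's execution state (a cloned pipe reports a paused/stopped state until it is resumed).
SELECT SYSTEM$PIPE_STATUS('CLONEDSNOWPIPEDB.SNOWPIPESCHEMA.BLOGPIPE');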
USE CASE:
Create a database with a Snowpipe configured on an external stage.
Clone that database and check whether new files arriving on the stage get loaded into the cloned database as well.
IMPLEMENTATION:
Step 1: Create a database with a table and configure Snowpipe on that table to load data whenever a new file is available on the stage. (NOTE: To configure Snowpipe, refer to the link in the reference section.)
CREATE DATABASE SNOWPIPEDB;
CREATE OR REPLACE TABLE SNOWPIPETABLE (column1 <datatype>, column2 <datatype>, column3 <datatype>);
Configure Snowpipe on the table created above.
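For context, a minimal sketch of this configuration is shown below. The S3 URL and storage integration are hypothetical placeholders (the actual setup is covered by the link in the reference section), while the schema, stage, file format, and pipe names match those used later in this post.
-- Assumes SNOWPIPETABLE lives in SNOWPIPEDB.SNOWPIPESCHEMA, as the pipe definitions later in this post suggest.
CREATE SCHEMA IF NOT EXISTS SNOWPIPEDB.SNOWPIPESCHEMA;
USE SCHEMA SNOWPIPEDB.SNOWPIPESCHEMA;
-- Hypothetical file format and external stage; replace the URL and integration with your own.
CREATE OR REPLACE FILE FORMAT BLOGFORMATCSV TYPE = CSV SKIP_HEADER = 1;
CREATE OR REPLACE STAGE SNOWPIPESTAGE
  URL = 's3://<your-bucket>/snowpipe/'
  STORAGE_INTEGRATION = <your_storage_integration>;
-- Pipe that auto-ingests new files from the stage (this is the "BEFORE" definition discussed later in this post).
CREATE OR REPLACE PIPE BLOGPIPE AUTO_INGEST = TRUE AS
  COPY INTO SNOWPIPEDB.SNOWPIPESCHEMA.SNOWPIPETABLE
  FROM @SNOWPIPESTAGE
  FILE_FORMAT = (FORMAT_NAME = 'BLOGFORMATCSV');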
Step 2: Clone the database.
CREATE DATABASE CLONEDSNOWPIPEDB CLONE SNOWPIPEDB;
ALTER PIPE BLOGPIPE SET PIPE_EXECUTION_PAUSED = FALSE;
(The pipe in the cloned database becomes active only after this command is run.)
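Note that the unqualified name BLOGPIPE resolves in the current database and schema, so the ALTER PIPE statement above should be run in the context of the cloned database, or the pipe name can be fully qualified. A sketch:
-- Resume the cloned pipe using its fully qualified name so the pipe in the source database is left untouched.
ALTER PIPE CLONEDSNOWPIPEDB.SNOWPIPESCHEMA.BLOGPIPE SET PIPE_EXECUTION_PAUSED = FALSE;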
Step 3: Upload a new file with ten records to the cloud storage location, which is then reflected on the external stage. The data in the file should now be loaded into both the source database and the cloned database.
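One simple way to verify this is to compare what actually landed in each table (a sketch using this example's names; it assumes SNOWPIPETABLE lives in the SNOWPIPESCHEMA schema):
-- Row counts in the source table and in the cloned table after the new file was staged.
SELECT COUNT(*) FROM SNOWPIPEDB.SNOWPIPESCHEMA.SNOWPIPETABLE;
SELECT COUNT(*) FROM CLONEDSNOWPIPEDB.SNOWPIPESCHEMA.SNOWPIPETABLE;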
Source Database:

Clone Database:

Here, we can observe that the data has been loaded twice into the source table (duplicate rows), while the clone table remains empty. This is because the table name in the pipe's COPY statement was qualified (DB_NAME.SCHEMA_NAME.TABLE_NAME or SCHEMA_NAME.TABLE_NAME) when the pipe was created, so the cloned pipe still points at the source table and loads the same data into it again.
So, while configuring Snowpipe, reference only the table name in the COPY INTO statement instead of the table's fully qualified location (namespace).
BEFORE:
CREATE OR REPLACE PIPE SNOWPIPEDB.SNOWPIPESCHEMA.BLOGPIPE AUTO_INGEST=TRUE AS COPY INTO SNOWPIPEDB.SNOWPIPESCHEMA.SNOWPIPETABLE FROM @SNOWPIPESTAGE FILE_FORMAT = (FORMAT_NAME = 'BLOGFORMATCSV');
AFTER:
CREATE OR REPLACE PIPE SNOWPIPEDB.SNOWPIPESCHEMA.BLOGPIPE AUTO_INGEST=TRUE AS COPY INTO SNOWPIPETABLE FROM @SNOWPIPESTAGE FILE_FORMAT = (FORMAT_NAME = 'BLOGFORMATCSV');
Now clone the database again and resume the cloned pipe by setting PIPE_EXECUTION_PAUSED = FALSE:
CREATE OR REPLACE DATABASE CLONEDSNOWPIPEDB CLONE SNOWPIPEDB;
ALTER PIPE BLOGPIPE SET PIPE_EXECUTION_PAUSED = FALSE;
Now, whenever a new file is available at the stage, the data gets loaded properly into the source as well as the clone database.
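The loads can also be confirmed from the copy history of the cloned database (a sketch; COPY_HISTORY is a standard Snowflake table function, and the one-hour window here is arbitrary):
-- Recent Snowpipe loads into the cloned table over the last hour.
SELECT FILE_NAME, ROW_COUNT, LAST_LOAD_TIME, PIPE_NAME
FROM TABLE(CLONEDSNOWPIPEDB.INFORMATION_SCHEMA.COPY_HISTORY(
       TABLE_NAME => 'CLONEDSNOWPIPEDB.SNOWPIPESCHEMA.SNOWPIPETABLE',
       START_TIME => DATEADD(HOUR, -1, CURRENT_TIMESTAMP())));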
Source Table:

Clone Table:

CONCLUSION:
Hence, whenever a database configured with Snowpipe is cloned, follow the two steps described above, i.e., set PIPE_EXECUTION_PAUSED = FALSE on the cloned pipe and reference only the table name (not the fully qualified namespace) in the pipe's COPY statement, so that Snowpipe functions correctly for both the source and the clone.
REFERENCE:
To configure snowpipe for Amazon S3: https://docs.snowflake.com/en/user-guide/data-load-snowpipe-auto-s3.html