Most datasets you work with will come in one of three forms:
CSV files. Unless otherwise specified, datasets in CSV format are assumed to be stored on your local machine, which is common for ad-hoc analysis. The data is ingested through your browser's upload feature, which for security reasons can only be triggered by you, the user. This means that every time the model needs to ingest or evaluate your source data, you will need to upload it again. This is not a bug; it is a security feature that ensures your data stays on your local machine. It is also the most flexible way to get started on a quick model. For larger files, however, processing time will be limited by your upload connection speed, since the source data originates from your local machine.
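If your source data is spread across several files, you would typically flatten it into a single CSV before uploading. The following is a minimal sketch using pandas; the file and column names are hypothetical and stand in for your own data:

```python
import pandas as pd

# Combine your source tables into one flat file before upload.
# "orders.csv", "customers.csv", and "customer_id" are illustrative names.
orders = pd.read_csv("orders.csv")
customers = pd.read_csv("customers.csv")

# Join the tables into a single dataset.
dataset = orders.merge(customers, on="customer_id", how="left")

# Write the single CSV you will upload through the browser.
dataset.to_csv("model_dataset.csv", index=False)
```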
Database. In many cases data is stored in a database system such as Snowflake. Here the G2M platform assumes you have created and/or aggregated your dataset into a single master table. Once you connect your dataset using your database credentials, the G2M platform can retrieve the data without user intervention. Note that for security reasons your password is stored locally on your current machine, not remotely. If you want to use your model from a different machine, you will need to provide your database password again (once) on the new machine.
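As an illustration, the master-table step might be done ahead of time with the Snowflake Python connector. This is a minimal sketch, not a prescribed workflow; the account, credentials, warehouse, and table names are placeholders for your own environment:

```python
import snowflake.connector

# Placeholder credentials and object names; substitute your own.
conn = snowflake.connector.connect(
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    account="YOUR_ACCOUNT",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# Aggregate source tables into the single master table the platform will read.
conn.cursor().execute("""
    CREATE OR REPLACE TABLE MASTER_DATASET AS
    SELECT o.*, c.segment, c.region
    FROM ORDERS o
    LEFT JOIN CUSTOMERS c ON o.customer_id = c.customer_id
""")
conn.close()
```

Once a table like this exists, the G2M connection only needs to point at the resulting master table.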
Data lake. Data lakes, or cloud storage accounts, are commonly used with larger datasets, typically more than a million rows. At that scale, parsing a data file is usually much faster than running a database query. At this time, the G2M platform can parse CSV files stored in public data lakes out of the box.
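Because the file must be publicly readable, you can sanity-check access before pointing the platform at it by fetching it without any credentials. A minimal sketch using pandas; the bucket and object names are hypothetical:

```python
import pandas as pd

# A public cloud storage URL; bucket and path are illustrative only.
url = "https://my-bucket.s3.amazonaws.com/exports/master_dataset.csv"

# pandas can read a CSV directly over HTTPS; if this succeeds without
# credentials, the file is publicly readable.
dataset = pd.read_csv(url)
print(dataset.shape)  # quick sanity check of row and column counts
```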
For other file types or data configurations, please reach out to your account representative or the G2M Support team.