Free automation template

Query Google Sheets/CSV data with an AI agent using PostgreSQL

Description of the n8n automation template

This n8n template lets you query structured financial data from Google Sheets or CSV files using AI-generated SQL. Unlike conventional solutions built on vector databases, which often fail on numeric queries, this template uses PostgreSQL for efficient storage and an AI agent that dynamically builds optimized SQL queries from natural-language questions.

Features

  • Fetches data from Google Sheets or CSV files
  • Automatically detects the data schema and creates a PostgreSQL table
  • Populates the table with the data
  • Translates natural-language questions into SQL queries
  • Returns fast, precise numeric results
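
The automatic schema detection above can be sketched roughly as follows. This is a simplified, illustrative extract of the logic in the template's `create table query` Code node (with a shortened currency map), not the full implementation:

```javascript
// Simplified sketch of the template's column-type inference: sample a
// column's values and pick a PostgreSQL type (currency -> DECIMAL,
// MM/DD/YYYY strings -> TIMESTAMP, everything else -> TEXT).
function isDateString(value) {
  if (typeof value !== 'string') return false;
  if (!/^\d{2}\/\d{2}\/\d{4}$/.test(value)) return false;
  const [month, day, year] = value.split('/').map(Number);
  return !isNaN(new Date(year, month - 1, day).getTime());
}

function detectCurrency(values) {
  // A subset of the symbol map used in the template
  const symbols = { '$': 'USD', '€': 'EUR', '£': 'GBP', 'zł': 'PLN' };
  for (const v of values) {
    if (typeof v !== 'string') continue;
    for (const [symbol, code] of Object.entries(symbols)) {
      if (v.trim().startsWith(symbol)) return code;
    }
  }
  return null;
}

function inferColumnType(sampleValues) {
  const values = sampleValues.filter(v => v !== undefined && v !== null);
  const currency = detectCurrency(values);
  if (currency) return { type: 'DECIMAL(15,2)', isCurrency: true, currencyCode: currency };
  if (values.length > 0 && values.every(isDateString)) return { type: 'TIMESTAMP' };
  return { type: 'TEXT' };
}

console.log(inferColumnType(['$12.50', '$9.99']).type);          // DECIMAL(15,2)
console.log(inferColumnType(['01/15/2024', '02/20/2024']).type); // TIMESTAMP
console.log(inferColumnType(['Widget', 'Gadget']).type);         // TEXT
```

Columns that contain only currency-prefixed strings become numeric, so aggregations like SUM and AVG work directly in SQL.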

Why use it?

  • No SQL knowledge required: the AI generates the queries
  • Avoids the performance and cost problems of vector-database solutions
  • Scales easily without overloading the language model
  • Completely free and open source

Prerequisites

  • A running PostgreSQL instance (no additional extensions required)
  • Access to Google Sheets (a public or shared URL of a sheet containing structured data)
  • A running n8n instance with access to the Google Drive and PostgreSQL nodes

Setup instructions

  1. Add your Google Sheets URL in the "Google Drive Trigger" node
  2. Configure the PostgreSQL connection by entering your database credentials
  3. Run the workflow to initialize the database
  4. Ask questions in natural language, e.g. "How much did sales come to last week?"
  5. (Optional) Add a "Schedule Trigger" node for automatic data synchronization
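
During step 3, the workflow normalizes raw cell values before loading them into PostgreSQL. A simplified, illustrative sketch of that normalization (adapted from the template's `create insertion query` Code node):

```javascript
// Sketch of the value normalization performed before insertion:
// currency strings become numbers, MM/DD/YYYY strings become ISO timestamps.
function normalizeCurrencyValue(value, symbol) {
  // "$1,250.50" -> 1250.5 (strip the symbol and thousands separators)
  if (typeof value !== 'string') return null;
  const numeric = value.replace(symbol, '').replace(/,/g, '');
  return !isNaN(parseFloat(numeric)) ? parseFloat(numeric) : null;
}

function parseDateString(value) {
  // "01/15/2024" (MM/DD/YYYY) -> ISO 8601 string, or null if invalid
  const m = /^(\d{2})\/(\d{2})\/(\d{4})$/.exec(value);
  if (!m) return null;
  const date = new Date(Number(m[3]), Number(m[1]) - 1, Number(m[2]));
  return isNaN(date.getTime()) ? null : date.toISOString();
}

console.log(normalizeCurrencyValue('$1,250.50', '$')); // 1250.5
console.log(parseDateString('01/15/2024'));            // ISO timestamp (exact value depends on the local timezone)
```

Empty cells are stored as NULL, so SQL aggregations are not skewed by missing data.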

How does it work?

The workflow analyzes the structure of the data from Google Sheets or CSV, creates a matching table in PostgreSQL, and then uses an AI agent to translate natural-language questions into precise SQL queries. By relying on PostgreSQL instead of vector methods, the solution delivers fast, accurate results for numeric queries.
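
The table-creation step can be illustrated with a short sketch. The naming conventions (the `ai_table_` prefix and the generated `ai_table_identifier` UUID key) come from the template itself; the helper function name is illustrative:

```javascript
// Sketch of how the CREATE TABLE statement is assembled. As in the
// template, tables are prefixed with "ai_table_" and get a UUID primary
// key column named "ai_table_identifier".
function buildCreateTableQuery(sheetName, columns) {
  // columns: [{ name, type }] with types inferred from the sheet data
  const tableName = `ai_table_${sheetName}`;
  const definitions = ['"ai_table_identifier" UUID PRIMARY KEY DEFAULT gen_random_uuid()']
    .concat(columns.map(c => `"${c.name}" ${c.type}`))
    .join(', ');
  return `CREATE TABLE IF NOT EXISTS ${tableName} (${definitions});`;
}

console.log(buildCreateTableQuery('product_list', [
  { name: 'product', type: 'TEXT' },
  { name: 'price_usd', type: 'DECIMAL(15,2)' },
]));
```

At query time, the agent first calls the `get_postgres_schema` tool to see these tables and columns, then generates SQL against them via the `execute_query_tool`.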

Use cases

This automation template fits many business and analytics scenarios, especially wherever you need quick access to precise numeric data without writing SQL queries by hand.

  • Automatically generating financial reports from spreadsheet data
  • Quickly looking up specific transactions or sales figures
  • Analyzing sales trends over time from historical data
  • Monitoring expenses and revenue in real time
  • Automating accounting and finance processes
  • Generating personalized dashboards with key metrics
  • Integrating financial data with other systems via API

Tips for users

  • Make sure the column headers in Google Sheets are consistent so the schema is detected correctly
  • Start with simple questions to test how the AI agent behaves
  • Keep the data up to date with scheduled workflow runs
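
Consistent headers matter because the workflow collects the column set across all rows and then builds a single batched, parameterized INSERT. A simplified sketch of that query builder (adapted from the template's `create insertion query` Code node):

```javascript
// Sketch of the batched, parameterized INSERT the workflow builds:
// one VALUES tuple per row, with numbered placeholders ($1, $2, ...).
function buildInsertQuery(tableName, columns, rows) {
  // rows: array of objects keyed by column name
  const placeholders = rows.map((_, r) =>
    `(${columns.map((_, c) => `$${r * columns.length + c + 1}`).join(', ')})`
  ).join(', ');
  const query = `INSERT INTO ${tableName} (${columns.map(c => `"${c}"`).join(', ')}) VALUES ${placeholders};`;
  const parameters = rows.flatMap(row => columns.map(c => row[c] ?? null));
  return { query, parameters };
}

const { query, parameters } = buildInsertQuery(
  'ai_table_product_list',
  ['product', 'price_usd'],
  [{ product: 'Widget', price_usd: 9.99 }, { product: 'Gadget', price_usd: 19.5 }]
);
console.log(query);
// INSERT INTO ai_table_product_list ("product", "price_usd") VALUES ($1, $2), ($3, $4);
console.log(parameters); // [ 'Widget', 9.99, 'Gadget', 19.5 ]
```

Using placeholders rather than string interpolation keeps cell values out of the SQL text, which avoids quoting issues and SQL injection.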

   Copy the template code   
{"id":"7gRbzEzCuOzQKn4M","meta":{"instanceId":"edc0464b1050024ebda3e16fceea795e4fdf67b1f61187c4f2f3a72397278df0","templateCredsSetupCompleted":true},"name":"SHEETS RAG","tags":[],"nodes":[{"id":"a073154f-53ad-45e2-9937-d0a4196c7838","name":"create table query","type":"n8n-nodes-base.code","position":[1280,2360],"parameters":{"jsCode":"// Helper function to check if a string is in MM/DD/YYYY formatnfunction isDateString(value) {n const dateRegex = /^\d{2}\/\d{2}\/\d{4}$/;n if (typeof value !== 'string') return false;n if (!dateRegex.test(value)) return false;n const [month, day, year] = value.split('/').map(Number);n const date = new Date(year, month - 1, day);n return !isNaN(date.getTime());n}nnconst tableName = `ai_table_${$('change_this').first().json.sheet_name}`;nconst rows = $('fetch sheet data').all();nconst allColumns = new Set();nn// Collect column names dynamicallynrows.forEach(row => {n Object.keys(row.json).forEach(col => allColumns.add(col));n});nn// Ensure "ai_table_identifier" is always the first columnnconst originalColumns = ["ai_table_identifier", ...Array.from(allColumns)];nn// Function to detect currency type (unchanged)nfunction detectCurrency(values) {n const currencySymbols = {n '₹': 'INR', '$': 'USD', '€': 'EUR', '£': 'GBP', '¥': 'JPY',n '₩': 'KRW', '฿': 'THB', 'zł': 'PLN', 'kr': 'SEK', 'R$': 'BRL',n 'C$': 'CAD', 'A$': 'AUD'n };nn let detectedCurrency = null;n for (const value of values) {n if (typeof value === 'string' && value.trim() !== '') {n for (const [symbol, code] of Object.entries(currencySymbols)) {n if (value.trim().startsWith(symbol)) {n detectedCurrency = code;n break;n }n }n }n }n return detectedCurrency;n}nn// Function to generate consistent column namesnfunction generateColumnName(originalName, typeInfo) {n if (typeInfo.isCurrency) {n return `${originalName}_${typeInfo.currencyCode.toLowerCase()}`;n }n return originalName;n}nn// Infer column types and transform namesnconst columnMapping = {};noriginalColumns.forEach(col => {n 
let typeInfo = { type: "TEXT" };nn if (col !== "ai_table_identifier") {n const sampleValues = rowsn .map(row => row.json[col])n .filter(value => value !== undefined && value !== null);nn // Check for currency firstn const currencyCode = detectCurrency(sampleValues);n if (currencyCode) {n typeInfo = { type: "DECIMAL(15,2)", isCurrency: true, currencyCode };n }n // If all sample values match MM/DD/YYYY, treat the column as a daten else if (sampleValues.length > 0 && sampleValues.every(val => isDateString(val))) {n typeInfo = { type: "TIMESTAMP" };n }n }nn const newColumnName = generateColumnName(col, typeInfo);n columnMapping[col] = { newName: newColumnName, typeInfo };n});nn// Final column namesnconst mappedColumns = originalColumns.map(col => columnMapping[col]?.newName || col);nn// Define SQL columns – note that for simplicity, this example still uses TEXT for non-special types,n// but you can adjust it so that TIMESTAMP columns are created with a TIMESTAMP type.nconst columnDefinitions = [`"ai_table_identifier" UUID PRIMARY KEY DEFAULT gen_random_uuid()`]n .concat(mappedColumns.slice(1).map(col => {n // If the column was inferred as TIMESTAMP, use that type in the CREATE TABLE statement.n const originalCol = Object.keys(columnMapping).find(key => columnMapping[key].newName === col);n const inferredType = columnMapping[originalCol]?.typeInfo?.type;n return `"${col}" ${inferredType === "TIMESTAMP" ? 
"TIMESTAMP" : "TEXT"}`;n }))n .join(", ");nnconst createTableQuery = `CREATE TABLE IF NOT EXISTS ${tableName} (${columnDefinitions});`;nnreturn [{ n query: createTableQuery,n columnMapping: columnMapping n}];n"},"typeVersion":2},{"id":"2beb72c4-dab4-4058-b587-545a8ce8b86d","name":"create insertion query","type":"n8n-nodes-base.code","position":[1660,2360],"parameters":{"jsCode":"const tableName = `ai_table_${$('change_this').first().json.sheet_name}`;nconst rows = $('fetch sheet data').all();nconst allColumns = new Set();nn// Get column mapping from previous nodenconst columnMapping = $('create table query').first().json.columnMapping || {};nn// Collect column names dynamicallynrows.forEach(row => {n Object.keys(row.json).forEach(col => allColumns.add(col));n});nnconst originalColumns = Array.from(allColumns);nconst mappedColumns = originalColumns.map(col => n columnMapping[col] ? columnMapping[col].newName : coln);nn// Helper function to check if a string is a valid timestampnfunction isValidTimestamp(value) {n const date = new Date(value);n return !isNaN(date.getTime());n}nn// Helper to detect currency symbol (unchanged)nfunction getCurrencySymbol(value) {n if (typeof value !== 'string') return null;n n const currencySymbols = ['₹', '$', '€', '£', '¥', '₩', '฿', 'zł', 'kr', 'R$', 'C$', 'A$'];n for (const symbol of currencySymbols) {n if (value.trim().startsWith(symbol)) {n return symbol;n }n }n return null;n}nn// Helper to normalize currency values (unchanged)nfunction normalizeCurrencyValue(value, currencySymbol) {n if (typeof value !== 'string') return null;n if (!currencySymbol) return value;n n const numericPart = value.replace(currencySymbol, '').replace(/,/g, '');n return !isNaN(parseFloat(numericPart)) ? 
parseFloat(numericPart) : null;n}nn// Helper to normalize percentage values (unchanged)nfunction normalizePercentageValue(value) {n if (typeof value !== 'string') return value;n if (!value.trim().endsWith('%')) return value;n n const numericPart = value.replace('%', '');n return !isNaN(parseFloat(numericPart)) ? parseFloat(numericPart) / 100 : null;n}nn// Function to parse MM/DD/YYYY strings into ISO formatnfunction parseDateString(value) {n const dateRegex = /^\d{2}\/\d{2}\/\d{4}$/;n if (typeof value === 'string' && dateRegex.test(value)) {n const [month, day, year] = value.split('/').map(Number);n const date = new Date(year, month - 1, day);n return !isNaN(date.getTime()) ? date.toISOString() : null;n }n return value;n}nn// Format rows properly based on column mappings and typesnconst formattedRows = rows.map(row => {n const formattedRow = {};nn originalColumns.forEach((col, index) => {n const mappedCol = mappedColumns[index];n let value = row.json[col];n const typeInfo = columnMapping[col]?.typeInfo || { type: "TEXT" };nn if (value === "" || value === null || value === undefined) {n value = null;n } n else if (typeInfo.isCurrency) {n const symbol = getCurrencySymbol(value);n if (symbol) {n value = normalizeCurrencyValue(value, symbol);n } else {n value = null;n }n }n else if (typeInfo.isPercentage) {n if (typeof value === 'string' && value.trim().endsWith('%')) {n value = normalizePercentageValue(value);n } else {n value = !isNaN(parseFloat(value)) ? parseFloat(value) / 100 : null;n }n }n else if (typeInfo.type === "DECIMAL(15,2)" || typeInfo.type === "INTEGER") {n if (typeof value === 'string') {n const cleanedValue = value.replace(/,/g, '');n value = !isNaN(parseFloat(cleanedValue)) ? 
parseFloat(cleanedValue) : null;n } else if (typeof value === 'number') {n value = parseFloat(value);n } else {n value = null;n }n } n else if (typeInfo.type === "BOOLEAN") {n if (typeof value === 'string') {n const lowercased = value.toString().toLowerCase();n value = lowercased === "true" ? true : n lowercased === "false" ? false : null;n } else {n value = Boolean(value);n }n } n else if (typeInfo.type === "TIMESTAMP") {n // Check if the value is in MM/DD/YYYY format and parse it accordingly.n if (/^\d{2}\/\d{2}\/\d{4}$/.test(value)) {n value = parseDateString(value);n } else if (isValidTimestamp(value)) {n value = new Date(value).toISOString();n } else {n value = null;n }n }n else if (typeInfo.type === "TEXT") {n value = value !== null && value !== undefined ? String(value) : null;n }nn formattedRow[mappedCol] = value;n });nn return formattedRow;n});nn// Generate SQL placeholders dynamicallynconst valuePlaceholders = formattedRows.map((_, rowIndex) =>n `(${mappedColumns.map((_, colIndex) => `$${rowIndex * mappedColumns.length + colIndex + 1}`).join(", ")})`n).join(", ");nn// Build the insert query stringnconst insertQuery = `INSERT INTO ${tableName} (${mappedColumns.map(col => `"${col}"`).join(", ")}) VALUES ${valuePlaceholders};`;nn// Flatten parameter values for PostgreSQL querynconst parameters = formattedRows.flatMap(row => mappedColumns.map(col => row[col]));nnreturn [n {n query: insertQuery,n parameters: parametersn }n];n"},"typeVersion":2},{"id":"ba19c350-ffb7-4fe1-9568-2a619c914434","name":"Google Drive 
Trigger","type":"n8n-nodes-base.googleDriveTrigger","position":[600,2060],"parameters":{"pollTimes":{"item":[{}]},"triggerOn":"specificFile","fileToWatch":{"__rl":true,"mode":"list","value":"1yGx4ODHYYtPV1WZFAtPcyxGT2brcXM6pl0KJhIM1f_c","cachedResultUrl":"https://docs.google.com/spreadsheets/d/1yGx4ODHYYtPV1WZFAtPcyxGT2brcXM6pl0KJhIM1f_c/edit?usp=drivesdk","cachedResultName":"Spreadsheet"}},"credentials":{"googleDriveOAuth2Api":{"id":"zOt0lyWOZz1UlS67","name":"Google Drive account"}},"typeVersion":1},{"id":"dd2108fe-0cfe-453c-ac03-c0c5b10397e6","name":"execute_query_tool","type":"@n8n/n8n-nodes-langchain.toolWorkflow","position":[1340,1720],"parameters":{"name":"query_executer","schemaType":"manual","workflowId":{"__rl":true,"mode":"list","value":"oPWJZynrMME45ks4","cachedResultName":"query_executer"},"description":"Call this tool to execute a query. Remember that it should be in a postgreSQL query structure.","inputSchema":"{n"type": "object",n"properties": {nt"sql": {ntt"type": "string",ntt"description": "A SQL query based on the users question and database schema."ntt}nt}n}","specifyInputSchema":true},"typeVersion":1.2},{"id":"f2c110db-1097-4b96-830d-f028e08b6713","name":"Google Gemini Chat Model","type":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","position":[880,1680],"parameters":{"options":{},"modelName":"models/gemini-2.0-flash"},"credentials":{"googlePalmApi":{"id":"Kr5lNqvdmtB0Ybyo","name":"Google Gemini(PaLM) Api account"}},"typeVersion":1},{"id":"2460801c-5b64-41b3-93f7-4f2fbffabfd6","name":"get_postgres_schema","type":"@n8n/n8n-nodes-langchain.toolWorkflow","position":[1160,1720],"parameters":{"name":"get_postgres_schema","workflowId":{"__rl":true,"mode":"list","value":"iNLPk34SeRGHaeMD","cachedResultName":"get database schema"},"description":"Call this tool to retrieve the schema of all the tables inside of the database. 
A string will be retrieved with the name of the table and its columns, each table is separated by \n\n.","workflowInputs":{"value":{},"schema":[],"mappingMode":"defineBelow","matchingColumns":[],"attemptToConvertTypes":false,"convertFieldsToString":false}},"typeVersion":2},{"id":"4b43ff94-df0d-40f1-9f51-cf488e33ff68","name":"change_this","type":"n8n-nodes-base.set","position":[800,2060],"parameters":{"options":{},"assignments":{"assignments":[{"id":"908ed843-f848-4290-9cdb-f195d2189d7c","name":"table_url","type":"string","value":"https://docs.google.com/spreadsheets/d/1yGx4ODHYYtPV1WZFAtPcyxGT2brcXM6pl0KJhIM1f_c/edit?gid=0#gid=0"},{"id":"50f8afaf-0a6c-43ee-9157-79408fe3617a","name":"sheet_name","type":"string","value":"product_list"}]}},"typeVersion":3.4},{"id":"a27a47ff-9328-4eef-99e8-280452cff189","name":"is not in database","type":"n8n-nodes-base.if","position":[1380,2060],"parameters":{"options":{},"conditions":{"options":{"version":2,"leftValue":"","caseSensitive":true,"typeValidation":"strict"},"combinator":"and","conditions":[{"id":"619ce84c-0a50-4f88-8e55-0ce529aea1fc","operator":{"type":"boolean","operation":"false","singleValue":true},"leftValue":"={{ $('table exists?').item.json.exists }}","rightValue":"true"}]}},"typeVersion":2.2},{"id":"8ad9bc36-08b1-408e-ba20-5618a801b4ed","name":"table exists?","type":"n8n-nodes-base.postgres","position":[1000,2060],"parameters":{"query":"SELECT EXISTS (n SELECT 1 n FROM information_schema.tables n WHERE table_name = 'ai_table_{{ $json.sheet_name }}'n);n","options":{},"operation":"executeQuery"},"credentials":{"postgres":{"id":"KQiQIZTArTBSNJH7","name":"Postgres account"}},"typeVersion":2.5},{"id":"f66b7ca7-ecb7-47fc-9214-2d2b37b0fbe4","name":"fetch sheet data","type":"n8n-nodes-base.googleSheets","position":[1180,2060],"parameters":{"options":{},"sheetName":{"__rl":true,"mode":"name","value":"={{ $('change_this').item.json.sheet_name }}"},"documentId":{"__rl":true,"mode":"url","value":"={{ 
$('change_this').item.json.table_url }}"}},"credentials":{"googleSheetsOAuth2Api":{"id":"3au0rUsZErkG0zc2","name":"Google Sheets account"}},"typeVersion":4.5},{"id":"11ba5da0-e7c4-49ee-8d35-24c8d3b9fea9","name":"remove table","type":"n8n-nodes-base.postgres","position":[980,2360],"parameters":{"query":"DROP TABLE IF EXISTS ai_table_{{ $('change_this').item.json.sheet_name }} CASCADE;","options":{},"operation":"executeQuery"},"credentials":{"postgres":{"id":"KQiQIZTArTBSNJH7","name":"Postgres account"}},"typeVersion":2.5},{"id":"3936ecb3-f084-4f86-bd5f-abab0957ebc0","name":"create table","type":"n8n-nodes-base.postgres","position":[1460,2360],"parameters":{"query":"{{ $json.query }}","options":{},"operation":"executeQuery"},"credentials":{"postgres":{"id":"KQiQIZTArTBSNJH7","name":"Postgres account"}},"typeVersion":2.5},{"id":"8a3ea239-f3fa-4c72-af99-31f4bd992b58","name":"perform insertion","type":"n8n-nodes-base.postgres","position":[1860,2360],"parameters":{"query":"{{$json.query}}","options":{"queryReplacement":"={{$json.parameters}}"},"operation":"executeQuery"},"credentials":{"postgres":{"id":"KQiQIZTArTBSNJH7","name":"Postgres account"}},"typeVersion":2.5},{"id":"21239928-b573-4753-a7ca-5a9c3aa8aa3e","name":"Execute Workflow Trigger","type":"n8n-nodes-base.executeWorkflowTrigger","position":[1720,1720],"parameters":{},"typeVersion":1},{"id":"c94256a9-e44e-4800-82f8-90f85ba90bde","name":"Sticky Note","type":"n8n-nodes-base.stickyNote","position":[1920,1460],"parameters":{"color":7,"width":500,"height":260,"content":"Place this in a separate workflow named:n### query_executer"},"typeVersion":1},{"id":"daec928e-58ee-43da-bd91-ba8bcd639a4a","name":"Sticky Note1","type":"n8n-nodes-base.stickyNote","position":[1920,1840],"parameters":{"color":7,"width":500,"height":280,"content":"place this in a separate workflow named: n### get database schema"},"typeVersion":1},{"id":"8908e342-fcbe-4820-b623-cb95a55ea5db","name":"When chat message 
received","type":"@n8n/n8n-nodes-langchain.manualChatTrigger","position":[640,1540],"parameters":{},"typeVersion":1.1},{"id":"d0ae90c2-169e-44d7-b3c2-4aff8e7d4be9","name":"response output","type":"n8n-nodes-base.set","position":[2220,1540],"parameters":{"options":{},"assignments":{"assignments":[{"id":"e2f94fb1-3deb-466a-a36c-e3476511d5f2","name":"response","type":"string","value":"={{ $json }}"}]}},"typeVersion":3.4},{"id":"81c58d9b-ded4-4b74-8227-849e665cbdff","name":"sql query executor","type":"n8n-nodes-base.postgres","position":[2000,1540],"parameters":{"query":"{{ $json.query.sql }}","options":{},"operation":"executeQuery"},"credentials":{"postgres":{"id":"KQiQIZTArTBSNJH7","name":"Postgres account"}},"typeVersion":2.5},{"id":"377d1727-4577-41bb-8656-38273fc4412b","name":"schema finder","type":"n8n-nodes-base.postgres","position":[2000,1920],"parameters":{"query":"SELECT n t.schemaname,n t.tablename,n c.column_name,n c.data_typenFROM n pg_catalog.pg_tables tnJOIN n information_schema.columns cn ON t.schemaname = c.table_scheman AND t.tablename = c.table_namenWHERE n t.schemaname = 'public'nORDER BY n t.tablename, c.ordinal_position;","options":{},"operation":"executeQuery"},"credentials":{"postgres":{"id":"KQiQIZTArTBSNJH7","name":"Postgres account"}},"typeVersion":2.5},{"id":"89d3c59c-2b67-454d-a8f3-e90e75a28a8c","name":"schema to string","type":"n8n-nodes-base.code","position":[2220,1920],"parameters":{"jsCode":"function transformSchema(input) {n const tables = {};n n input.forEach(({ json }) => {n if (!json) return;n n const { tablename, schemaname, column_name, data_type } = json;n n if (!tables[tablename]) {n tables[tablename] = { schema: schemaname, columns: [] };n }n tables[tablename].columns.push(`${column_name} (${data_type})`);n });n n return Object.entries(tables)n .map(([tablename, { schema, columns }]) => `Table ${tablename} (Schema: ${schema}) has columns: ${columns.join(", ")}`)n .join("\n\n");n}nn// Example usagenconst input = 
$input.all();nnconst transformedSchema = transformSchema(input);nnreturn { data: transformedSchema };"},"typeVersion":2},{"id":"42d1b316-60ca-49db-959b-581b162ca1f9","name":"AI Agent With SQL Query Prompt","type":"@n8n/n8n-nodes-langchain.agent","position":[900,1540],"parameters":{"options":{"maxIterations":5,"systemMessage":"=## RolenYou are a **Database Query Assistant** specializing in generating PostgreSQL queries based on natural language questions. You analyze database schemas, construct appropriate SQL queries, and provide clear explanations of results.nn## Toolsn1. `get_postgres_schema`: Retrieves the complete database schema (tables and columns)n2. `execute_query_tool`: Executes SQL queries with the following input format:n ```jsonn {n "sql": "Your SQL query here"n }n ```nn## Process Flownn### 1. Analyze the Questionn- Identify the **data entities** being requested (products, customers, orders, etc.)n- Determine the **query type** (COUNT, AVG, SUM, SELECT, etc.)n- Extract any **filters** or **conditions** mentionednn### 2. Fetch and Analyze Scheman- Call `get_postgres_schema` to retrieve database structuren- Identify relevant tables and columns that match the entities in the questionn- Prioritize exact matches, then semantic matchesnn### 3. Query Constructionn- Build case-insensitive queries using `LOWER(column) LIKE LOWER('%value%')`n- Filter out NULL or empty values with appropriate WHERE clausesn- Use joins when information spans multiple tablesn- Apply aggregations (COUNT, SUM, AVG) as needednn### 4. Query Executionn- Execute query using the `execute_query_tool` with proper formattingn- If results require further processing, perform calculations as needednn### 5. 
Result Presentationn- Format results in a conversational, easy-to-understand mannern- Explain how the data was retrieved and any calculations performedn- When appropriate, suggest further questions the user might want to asknn## Best Practicesn- Use parameterized queries to prevent SQL injectionn- Implement proper error handlingn- Respond with "NOT_ENOUGH_INFO" when the question lacks specificityn- Always verify table/column existence before attempting queriesn- Use explicit JOINs instead of implicit joinsn- Limit large result sets when appropriatenn## Numeric Validation (IMPORTANT)nWhen validating or filtering numeric values in string columns:n1. **AVOID** complex regular expressions with `~` operator as they cause syntax errorsn2. Use these safer alternatives instead:n ```sqln -- Simple numeric check without regexn WHERE column_name IS NOT NULL AND trim(column_name) != '' AND column_name NOT LIKE '%[^0-9.]%'n n -- For type casting with validationn WHERE column_name IS NOT NULL AND trim(column_name) != '' AND column_name ~ '[0-9]'n n -- Safe numeric conversionn WHERE CASE WHEN column_name ~ '[0-9]' THEN TRUE ELSE FALSE ENDn ```n3. For simple pattern matching, use LIKE instead of regex when possiblen4. When CAST is needed, always guard against invalid values:n ```sqln SELECT SUM(CASE WHEN column_name ~ '[0-9]' THEN CAST(column_name AS NUMERIC) ELSE 0 END) AS totaln ```nn## Response Structuren1. **Analysis**: Brief mention of how you understood the questionn2. **Query**: The SQL statement used (in code block format)n3. **Results**: Clear presentation of the data foundn4. **Explanation**: Simple description of how the data was retrievednn## Examplesnn### Example 1: Basic Counting Queryn**Question**: "How many products are in the inventory?"nn**Process**:n1. Analyze schema to find product/inventory tablesn2. Construct a COUNT query on the relevant tablen3. Execute the queryn4. 
Present the count with contextnn**SQL**:n```sqlnSELECT COUNT(*) AS product_count nFROM products nWHERE quantity IS NOT NULL;n```nn**Response**:n"There are 1,250 products currently in the inventory. This count includes all items with a non-null quantity value in the products table."nn### Example 2: Filtered Aggregation Queryn**Question**: "What is the average order value for premium customers?"nn**Process**:n1. Identify relevant tables (orders, customers)n2. Determine join conditionsn3. Apply filters for "premium" customersn4. Calculate averagenn**SQL**:n```sqlnSELECT AVG(o.total_amount) AS avg_order_valuenFROM orders onJOIN customers c ON o.customer_id = c.idnWHERE LOWER(c.customer_type) = LOWER('premium')nAND o.total_amount IS NOT NULL;n```nn**Response**:n"Premium customers spend an average of $85.42 per order. This was calculated by averaging the total_amount from all orders placed by customers with a 'premium' customer type."nn### Example 3: Numeric Calculation from String Columnn**Question**: "What is the total of all ratings?"nn**Process**:n1. Find the ratings table and columnn2. Use safe numeric validationn3. Sum the valuesnn**SQL**:n```sqlnSELECT SUM(CASE WHEN rating ~ '[0-9]' THEN CAST(rating AS NUMERIC) ELSE 0 END) AS total_ratingnFROM ratingsnWHERE rating IS NOT NULL AND trim(rating) != '';n```nn**Response**:n"The sum of all ratings is 4,285. This calculation includes all valid numeric ratings from the ratings table."nn### Example 4: Date Range Aggregation for Revenue n**Question**: "How much did I make last week?" nn**Process**: n1. Identify the sales table and relevant columns (e.g., `sale_date` for dates and `revenue_amount` for revenue). n2. Use PostgreSQL date functions (`date_trunc` and interval arithmetic) to calculate the date range for the previous week. n3. Sum the revenue within the computed date range. 
nn**SQL**: n```sqlnSELECT SUM(revenue_amount) AS total_revenuenFROM sales_datanWHERE sale_date >= date_trunc('week', CURRENT_DATE) - INTERVAL '1 week'n AND sale_date < date_trunc('week', CURRENT_DATE);n``` nn**Response**: n"Last week's total revenue is calculated by summing the `revenue_amount` for records where the `sale_date` falls within the previous week. This query uses date functions to dynamically determine the correct date range."nnToday's date: {{ $now }}"}},"typeVersion":1.7},{"id":"368d68d0-1fe0-4dbf-9b24-ac28fd6e74c3","name":"Sticky Note2","type":"n8n-nodes-base.stickyNote","position":[560,1420],"parameters":{"color":6,"width":960,"height":460,"content":"## Use a powerful LLM to correctly build the SQL queries, which will be identified from the get schema tool and then executed by the execute query tool."},"typeVersion":1}],"active":false,"pinData":{},"settings":{"executionOrder":"v1"},"versionId":"d8045db4-2852-4bbe-9b97-0d3c0acb53f7","connections":{"change_this":{"main":[[{"node":"table exists?","type":"main","index":0}]]},"create table":{"main":[[{"node":"create insertion query","type":"main","index":0}]]},"remove table":{"main":[[{"node":"create table query","type":"main","index":0}]]},"schema finder":{"main":[[{"node":"schema to string","type":"main","index":0}]]},"table exists?":{"main":[[{"node":"fetch sheet data","type":"main","index":0}]]},"fetch sheet data":{"main":[[{"node":"is not in database","type":"main","index":0}]]},"create table query":{"main":[[{"node":"create table","type":"main","index":0}]]},"execute_query_tool":{"ai_tool":[[{"node":"AI Agent With SQL Query Prompt","type":"ai_tool","index":0}]]},"is not in database":{"main":[[{"node":"create table query","type":"main","index":0}],[{"node":"remove table","type":"main","index":0}]]},"sql query executor":{"main":[[{"node":"response output","type":"main","index":0}]]},"get_postgres_schema":{"ai_tool":[[{"node":"AI Agent With SQL Query Prompt","type":"ai_tool","index":0}]]},"Google 
Drive Trigger":{"main":[[{"node":"change_this","type":"main","index":0}]]},"create insertion query":{"main":[[{"node":"perform insertion","type":"main","index":0}]]},"Execute Workflow Trigger":{"main":[[{"node":"sql query executor","type":"main","index":0},{"node":"schema finder","type":"main","index":0}]]},"Google Gemini Chat Model":{"ai_languageModel":[[{"node":"AI Agent With SQL Query Prompt","type":"ai_languageModel","index":0}]]},"When chat message received":{"main":[[{"node":"AI Agent With SQL Query Prompt","type":"main","index":0}]]}}}
  • CSV
  • Sheet
  • Spreadsheet
  • GS
  • code
  • JavaScript
  • JS
  • Python
  • Script
  • Custom Code
  • Function
  • LangChain
  • Chat
  • Conversational
  • Plan and Execute
  • ReAct
  • Tools