SQL Table Transformer Macro for Confluence User Guide

Description

The macro provides a seamless and powerful way to analyze and manipulate tables using SQL. The macro supports various table formats as input (i.e., Table Sources):

  • conventional Confluence tables, nested into the macro container
  • HTML-formatted table data, nested into the macro container
  • CSV-formatted table data, nested into the macro container
  • flat JSON-formatted table data, nested into the macro container
  • JIRA-formatted table data, nested into the macro container
  • tables of the above-mentioned formats that can be downloaded from one or more URLs
  • linked tables from one or more remote DBMS (Oracle, Postgres, H2, MySQL, etc.). You can mix DBMS too (for example, link one or more tables from Oracle and one or more tables from Postgres) and execute common SQL queries over all those tables as if they were in the same database.
  • other Table Sources, nested in the macro container, such as the Excerpt Include macro, JIRA Issues/Filter macro, Attachments macro, etc.

The macro leverages SQL as a "universal" transformation operator that is applied to one or more Table Sources to get the desired result. Just configure your Table Sources (put table data or table source URLs into the macro container), express your desired transformation with one or more regular SQL queries, and the macro will do the rest. The results are built from the result sets returned by the SQL queries and presented in a format of your choice (for example, as conventional Confluence tables or in other supported formats - CSV, JIRA Table, JSON Array, JSON Resultset). The results can be rendered on a Confluence page or further used as input data for some other (parent) macro (such as, for example, Table Filter and Charts for Confluence by Stiltsoft).

1.1. How It Works

It is as simple as can be! Here is a trivial use case:

  • install the macro to your Confluence instance
  • add the macro container to a Confluence page (you can also put it inside your other favorite table macro)
  • put one or more tables into the container
  • you are ready to transform your tables: open the macro in Edit mode and put this trivial SQL query into the "SQL Query" input parameter field: SELECT * FROM table_1
  • to see the result, click Refresh Preview or save the macro and the page: you should see your whole original table (the top one in the macro container) brought back by the SQL query
  • modify the "SQL Query" as you please to achieve the desired transformation or analysis results.

For an overview diagram that illustrates how the whole thing works, refer to the Technical Details section of this document.
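For instance, building on the trivial query above, a slightly richer transformation might look like this (a sketch only; the column names A and C match the sample Table 1 used in the examples later in this guide):

SELECT A AS "Key Value", C AS "Result"
   FROM table_1
   WHERE CAST(C AS INT) > 2
   ORDER BY CAST(A AS INT) DESC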

1.2. Capabilities

The capabilities include (but are not limited to):

  • analytics of one table, analytics of several tables together (cross-table analysis)
  • pre-rotation (input tables)
  • post-rotation (output tables)
  • transformation using arbitrary SQL, such as:
    • filtering
    • sorting (including multiple criteria)
    • column adding (such as computed)
    • column hiding
    • column renaming
    • columns reordering
    • cell patterns replacement / translation / highlighting
    • aggregation
    • math / computation
    • merging
    • splitting
    • finding table intersections and differences
    • user-defined functions and stored procedures
  • declaring Java functions as source code that can be immediately used by SQL
  • dynamic data preparation for charts and reports
  • all combinations of the above capabilities (as long as supported by SQL and the H2 DB Engine)
  • nested macro (sequential) transformations
  • using the result tables within other table filtering macros, such as "Pivot Table", "Table Filter" and "Charts from Table" that come with Table Filter and Charts for Confluence (by Stiltsoft)
  • conversion of tables (or SQL query result sets) to CSV, JSON or JIRA Table formats (i.e., bringing output results in Confluence table, CSV, JIRA table or JSON formats)
  • fetching remote tables by a URL or by a list of URLs for displaying in the document or for further analysis. The following formats of the remote tables are supported (and can be mixed in the list of URLs): HTML, CSV, JIRA Table, JSON.
  • fetching tables from CSV attachment files, with the ability to fetch from: a) an arbitrary space and page; b) multiple versions, and perform an immediate comparison.
  • fetching data from page attachments (arbitrary space, page; arbitrary versions)
  • etc.

Ultimately, though, the capabilities are defined by the capabilities of SQL as supported by the H2 Database Engine.
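For example, several of the capabilities listed above (filtering, aggregation, math) can be combined in a single H2 query like this one (a sketch only; the column names A and C follow the sample Table 1 used in the examples later in this document):

SELECT
    COUNT(*) AS "Rows",
    SUM(CAST(C AS INT)) AS "Sum of C",
    MAX(CAST(A AS INT)) AS "Max A"
FROM table_1
WHERE CAST(A AS INT) > 1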

  • No need to have or set up a DB (the macro seamlessly uses its own temporary in-memory DB)
  • Your original tables are not changed in any way (all the results you get are generated dynamically using temporary in-memory tables)
  • You are NOT limited in the number of tables you want to analyze within the same container. However, be aware that processing large tables takes time, affects document loading performance and may tax server resources.
  • You are NOT limited in the size of the tables, the number of columns and rows, or the cell content (though, as recommended above, be reasonable and avoid overly large tables)
  • You can use all the power of the SQL grammar as supported by H2 (see: H2 SQL Grammar)
  • You can use all the power of H2 Database Engine (here is the list of H2 Built-in Functions)

1.3. Supported Table Sources

  1. Conventional HTML Confluence tables (such as those added using the "Insert Table" macro).
  2. JIRA Issues/Filter macro (i.e., tables created from a JQL query to JIRA) (only from v1.0.19 of the SQL-Powered Table Transformer macro)
  3. Conventional HTML Confluence tables (or other formats consumable by the SQL-Powered Table Transformer) that might be brought into the container by other inner macros. For example:
    1. a table excerpt from another part of the document or from another space/document (using the "Excerpt Include" or "ConfiDoc" macros)
    2. tables that are brought by Jira Issues Filter macro
    3. tables that are brought by Office Excel or Office Word macros
    4. tables that are brought by attachments macro
    5. tables that are brought by Play SQL Query macro (use the POOR mode)
    6. etc.
  4. HTML, CSV, JSON (from v1.0.35) or JIRA Table formatted data, that can be fetched by the macro from a URL or a list of URLs (only from v1.0.25 of the SQL-Powered Table Transformer macro).
  5. Ad-hoc tables that you may want to create using one of the SQL statements in the SQL Query input parameter (see also the sketch after this list). For example:
    1. CREATE TABLE my_csv AS SELECT * FROM CSVREAD('file_name.csv')
    2. CREATE TABLE my_tab AS SELECT * FROM table_1 WHERE ...
    3. etc.
  6. Generally, the results of one or more inner macro executions that deliver their results in a format consumable by the SQL-Powered Table Transformer.
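As an illustration of item 5, an ad-hoc table created from an attached CSV file can be joined with a container table (a sketch only, with the Query Splitter set to ';'; the attachment name sample.csv and its columns "Id" and "Owner" are hypothetical, and the Wiki-style path requires the "Resolve Wiki Paths in Input URLs" parameter described later in this guide):

CREATE TABLE my_csv AS SELECT * FROM CSVREAD(':^sample.csv');
SELECT t.*, c."Owner"
   FROM table_1 t
   JOIN my_csv c ON t.A = c."Id"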

1.4. Known Supported External Wrapper Macros

If needed, you can nest the SQL-Powered Table Transformer macro into an external wrapper macro. Known external wrapper macros that should be supported are:

1.5. Youtube Channel

For video presentations please visit "V&B" channel on Youtube: https://www.youtube.com/channel/UCKgmD0zAqUPEhSXShqey-0w

1.6. Current Limitations

1.6.1. Input Tables

The tables that you include into the macro container shall be:

1.6.1.1. Before Macro v1.0.28

  • a simple matrix, that is:
    • no merged cells
    • ONLY one horizontal header row at the top of the table
    • no vertical headers on the left

      Please note that:
    • if there is no header at the top of the table, a header will be created automatically (header titles will follow the internal logic of the H2 engine)
    • any additional horizontal or vertical headers will be treated as data cells (which may lead to unexpected behavior)

1.6.1.2. Starting with Macro v1.0.28

  • not necessarily a simple matrix, that is:
    • the table can have merged cells (both horizontal and vertical)




1.6.1.3. Supported / Unsupported Tables Cheat Sheet

(In the original, each row of this cheat sheet is illustrated with an example image; only the textual information is reproduced here.)

Single header on top, no merged cells
  • Before v1.0.28: YES
  • From v1.0.28: YES

Single header on top with some merged cells
  • Before v1.0.28: NO
  • From v1.0.28: YES - spanned header cells will be split and auto-named

Vertical headers
  • Before v1.0.28: NO
  • From v1.0.28: YES - all cells of the vertical header column (if the top row is the header: except the top cell) will be treated as data cells for the corresponding rows.
    NOTE: you can use the "Pre-rotate Input Tables" input parameter to pre-rotate the input tables so that the 1st row becomes the header.

Cells merged horizontally
  • Before v1.0.28: NO
  • From v1.0.28: YES - spanned cells will be split and will have the same values

Cells merged vertically
  • Before v1.0.28: NO
  • From v1.0.28: YES - spanned cells will be split and will have the same values

Cells merged both horizontally and vertically
  • Before v1.0.28: NO
  • From v1.0.28: YES - spanned cells will be split and will have the same values

More than one horizontal header
  • Before v1.0.28: NO
  • From v1.0.28: YES
    For versions v1.0.28 - v1.0.33: headers other than the one in the top row will be treated as data rows.
    For versions >= v1.0.34: top adjacent horizontal headers will be normalized to 1 horizontal header (column titles will be concatenated); other horizontal headers will be treated as data rows.

No horizontal header
  • Before v1.0.28: NO
  • From v1.0.28: YES - the header row will be auto-created


1.7. Custom User Macros (Wrappers)

A Custom User Macro is a wrapper built on top of the SQL-Powered Table Transformer macro. Each wrapper provides a simplified custom configuration that implements one or more use cases, giving a simpler user experience to users who are interested in those specific use cases.

  • Split Table into Several wrapper macro (introduced in v1.0.40) allows splitting a (large) Confluence table into several tables. See: Split Table into Several Macro
  • Table from CSV URL wrapper macro (introduced in v1.0.34) helps you quickly add a table built from reasonably small CSV data located at some URL into your Confluence document. See: Table from CSV URL Macro
  • Table Scraper from URL wrapper macro (introduced in v1.0.35) helps you create tables from remote HTML tables located on a web page. See: Table Scraper from URL Macro

2. Quick Start

Prerequisites: SQL-Powered Table Transformer Macro for Confluence should be installed on your Confluence instance.

  1. Locate and insert the macro container into your document. To do that, open the document in Edit view, then click Insert => Other Macros (the "Select macro" dialogue should appear). In the "Select macro" dialogue, start typing "SQL-based Table Transformer" into the quick search field.


  2. Click the macro. You will see the following Insert dialogue (don't worry about the 'Table "TABLE_1" not found' warning, we will add some tables into the container later):


  3. Click the Insert button. You will see the just inserted macro container in your document.


  4. Add or copy one or more tables inside the container. Basically, include all the tables that you need to analyze together within the container.
    You may also include outer tables (from other pages), for example using the Excerpt Include macro. In this case, make sure the "Remove Surrounding Panel" setting of the Excerpt Include macro is enabled.


  5. Click the macro container and select Edit to open the Edit Macro dialogue. In the Edit Macro dialogue, provide your query in the "SQL Query" input parameter (for example, the sketch after these steps).
  6. Optionally set other Input Parameters.
  7. Click Refresh Preview and check the result.
  8. If pleased, save the macro and the page.
  9. Verify your results on the saved page.
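For step 5, a typical first query over two tables placed in the container might look like this (a sketch only; the joining column ID is an assumption about your tables, not a requirement of the macro):

SELECT t1.*, t2.*
   FROM table_1 t1
   JOIN table_2 t2 ON t1.ID = t2.ID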

3. Input Parameters

Status "Experimental" means: the functionality is considered experimental (not yet officially accepted). It can be used, however, it can be updated or even removed in future.

Each input parameter below is described with its Name (and Key where applicable), Type, whether it is Mandatory, its Default value, its Description and its support Status.

Input Body Type
Type: selector | Mandatory: yes | Default: Confluence Wiki

Select the type of content that you are going to place inside the macro container (for ex., if you want to parse direct HTML code, set to "Direct HTML"). Leave default if unsure.

Supported options:

  • Confluence Wiki - container content is in regular Confluence Wiki markup format.
  • Direct HTML - container content is HTML code (you can select tables to be placed into the DB using Tables Selector parameter)
  • Direct Text - container content is multi-line text (such as a log). The macro will attempt to treat the text as a one-column table: the first line of the text becomes the column header and all other lines become the rows of this column. Note: this option has been introduced starting from v1.0.31. If you want to put multiple pieces of text data into the container, separate them with "_CONTAINER_BODY_FILE_DELIMITER_" (in this case, the tables will be named TABLE_1 for the 1st piece of text data, TABLE_2 for the 2nd, etc.)
  • Direct CSV - container content is CSV text (TABLE_1 will be created using the content from the container). If you want to put multiple pieces of CSV data into container - separate them with "_CONTAINER_BODY_FILE_DELIMITER_" (in this case, tables will be named TABLE_1 for 1st CSV data, TABLE_2 for 2nd CSV data, etc.)
  • Direct JIRA Table - container content is JIRA Table format (that is, plain text, using "|" delimiter). Usage of multiple tables separator and tables naming are similar to the description of "Direct CSV" option.
  • Direct JSON - (supported from v1.0.35) container content is JSON. Usage of multiple tables separator and tables naming are similar to the description of "Direct CSV" option.
  • List of URLs - container content is a list of URLs where the data shall be taken from. Put one URL on each line. The macro will fetch data from each of the URLs, try to guess what the data format is, and then put the data into DB tables. The following formats are supported:
    • HTML - the data, located by the URL, is in HTML format (you can select tables to be placed into the DB using Tables Selector parameter)
    • CSV - the data, located by the URL, is a CSV
    • JIRA Table - the data, located by the URL, is a table in JIRA format
    • JSON - the data, located by the URL, is a JSON (supported from v1.0.35)
    • Plain Text - the data, located by the URL, is plain text (supported from v1.0.35)
    If the macro does not guess the type properly, you can specify the format to be used by appending to the URL a query parameter named "handle_as".
    Corresponding possible values for "handle_as" are "HTML", "CSV", "JIRA_Table", "JSON", "Plain_TEXT".
    Example: http://www.mydomain.com/path_to_data_file?handle_as=HTML
    The "handle_as" parameter value will override the auto-detected type.


    By default, when fetching URLs content, it will be handled as "UTF-8" charset. However, starting from v1.0.29, if needed, this can be overridden by adding "use_charset" parameter to the URL. For example: http://www.mydomain.com/path_to_data_file?use_charset=CP1251


    Often the links and images within the fetched URL use relative paths to sources (that is, href attributes for links and src attributes for images). To make such links work and images load properly you can specify the "set_base_uri" parameter in the URL, so that the paths would become resolved to absolute. For example, let's say there is an image <img src="/images/img.png"> within the http://www.mydomain.com/path_to_data_file document. Modify the url as follows: http://www.mydomain.com/path_to_data_file?set_base_uri=http://www.mydomain.com and the image code within your result will become <img src="http://www.mydomain.com/images/img.png"> and will load properly. Full list of elements that will be set to their absolute paths are:
    All <a>, <area>, <link>, etc. elements that might have href attribute (except <base> element) will have their href attribute set to the absolute value.
    All <audio>, <embed>, <iframe>, <img>, <input>, <script>, <source>, <track>, <video> elements, etc., that might have src attribute will have their src attribute set to the absolute value.

    From v1.0.34 you can also use ?set_base_target=<> to force all anchors to use the desired target attribute. For example, to force all links to be opened in a new window use: ?set_base_target=_blank


    From v1.0.39:
    - you can use ?set_path_to_json_element=<> to indicate the path to the specific JSON array that you need to be selected in the JSON located at this URL. Use JSON Pointer format for the value, for example, "/foo". The parameter value will override the value of the global "Path to JSON Element" input parameter.
    - you can use ?input_csv_separator=<> to indicate the CSV separator used in the data, located by this URL. This parameter overrides the global "Input CSV Separator" parameter for this specific url.

    - you can use ?get_file_from_zip=<file name>. The parameter indicates that the data, located by this URL, is compressed using zip (such as a zipped archive file) and you want to extract and process only the specific file.
    Notes:
    1. unzipping of only one file is supported per each URL.
    2. If your file is located within a directory you should use the following format of the get_file_from_zip=<dir_1>/.../<dir_n>/<desired file name>
    3. You can also use filename_charset=<> parameter to indicate the charset that is used for the name entry encoding within the zipped file (by default 'UTF-8' will be used). 

    4. use_charset parameter will be ignored if get_file_from_zip is NOT null.
    5. for the zipped file data only UTF-8 encoding is supported.


    If needed, you can also use Wiki-formatted paths to attachment files OR Confluence pages; however, to instruct the macro to parse Wiki paths, the following must be done:
    1. the URL in the macro container must be surrounded with ' (single quotes)
    2. the parameter "Resolve Wiki Paths in Input URLs" must be enabled.
      An example of a Wiki path formatted URL:
      'SPC:Document Name^' - path to the page, named "Document Name" in the space "SPC"
      'SPC:Document Name^#5' - path to version 5 of the page, named "Document Name" in the space "SPC"

      'SPC:Document Name^attachmentFileName.csv' - path to the attachment "attachmentFileName.csv" of the page, named "Document Name" in the space "SPC"
      'SPC:Document Name^attachmentFileName.csv#5' - path to version 5 of the attachment "attachmentFileName.csv" of the page, named "Document Name" in the space "SPC"
      For further details see "Resolve Wiki Paths in Input URLs" input parameter description.
supported from v1.0.25

SQL Query
Type: text | Mandatory: yes | Default: SELECT * FROM table_1

A valid SQL query or queries to be executed against your temporary in-memory DB (that is, against your tables). Use table_1, table_2, ..., table_N as table names (where 1...N are the indexes of the tables in the container, in order of appearance). If needed, use the "SHOW TABLES" SQL query to get the names of the tables. Separate multiple queries with the Query Splitter that you can indicate below.

Example: if you put 2 tables into the macro container, then you can do:

SELECT * FROM table_1, table_2
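With the Query Splitter parameter set to ';' (see below), several statements can also be chained in the same "SQL Query" field, mirroring the DELETE-then-SELECT example later in this guide (a sketch; the column C matches the sample Table 1 used in the examples):

DELETE FROM table_1 WHERE CAST(C AS INT) > 9;
SELECT * FROM table_1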

Starting from some version the macro supports Helper Snippets within the SQL Query editor. The full list of the available snippets is provided in the "Helper Snippets" section of this document.

supported
Query Splitter
Type: text | Mandatory: no | Default: <empty>

Indicates the splitter to be used to split a multiple-query string (for example, put ';' here and use ';' to separate the queries in the SQL Query). If empty, no split will be made and your SQL Query will be executed as a whole.

Example: ;

supported
Show Queries
Type: checkbox | Mandatory: no | Default: disabled

Enable this checkbox if you want to see the SQL Query in the output (the related query will appear next to each of the macro results in the preview and in the page view).

Example: with the option enabled, the query text (e.g., SELECT * FROM table_1 WHERE A=1) is displayed above its result table; with the option disabled, only the result table is displayed.
supported
Format SQL with Code-Block
Type: checkbox | Mandatory: no | Default: disabled

Enable to format the displayed SQL with the "code-block" macro (makes sense only if the "Show Queries" parameter is enabled).

Example: with the option enabled, the displayed query is rendered inside a code-block; with it disabled, the query is displayed as plain text.
supported
Treat Empty Cells as nulls
Type: checkbox | Mandatory: no | Default: enabled

Helps with handling empty cells. Confluence uses '&nbsp;' values in table cells that look empty. Enable this checkbox to treat values equal to '&nbsp;' as nulls - this allows easier selection of empty values in SQL (i.e., as: ... WHERE col_name IS NULL).

Example (using a table with columns A, D, E, LABELS where one row has an empty E cell):

  • enabled - IS NULL works fine: SELECT * FROM table_1 WHERE E IS NULL returns the row with the empty E cell.
  • disabled - IS NULL will not work; instead you have to compare with '&nbsp;' (note: '&nbsp;' appears as ' ' in the query): SELECT * FROM table_1 WHERE E=' '

supported
Show Comments as Titles
Type: checkbox | Mandatory: no | Default: disabled

An SQL Query may include comments formatted as /*comment text*/. If you enable this checkbox, the first comment in each of the queries will be shown in the output as the result title.

Example: for the query /*My Pretty Result*/ SELECT * FROM table_1 WHERE A=1, enabling the option displays "My Pretty Result" as the title above the result; with the option disabled, no title is displayed.
supported
Strip Comments from Query
Type: checkbox | Mandatory: no | Default: disabled

Can be used to strip comments from the SQL Queries before showing them in the output (if Show Queries is enabled).

Example: for the query /*My Pretty Result*/ SELECT * FROM table_1 WHERE A=1, enabling the option displays the query without the /*My Pretty Result*/ comment; with the option disabled, the comment is displayed as part of the query.
supported

Resolve Wiki Paths in Query
(from v1.0.25 renamed to Resolve Wiki Paths in Input URLs)
Type: checkbox | Mandatory: no | Default: disabled

Some H2 functions, such as CSVREAD and FILE_READ, deal with file paths. This parameter enables support and auto-resolution of Wiki-formatted paths to attachments in SQL Queries, so that those paths can be understood by the functions. Likewise, a Wiki-formatted path in an input URL will be parsed if the parameter is enabled.

Enable to auto-resolve Wiki-formatted paths (space:page^attachment#version) in your SQL Query and input URLs.

A Wiki path consists of the following elements:

1. spaceKey (optional; default: current page's space key) - the space key where the page with the attachment is located. Empty means the current page's space key. Since v1.0.17: ' can be escaped as ''. Example: ~bob.smith
2. : (mandatory) - delimiter
3. pageTitle (optional; default: current page's title) - the page title where the attachment is located. Empty means the current page title. Since v1.0.17: ' can be escaped as ''. Example: Bob''s Page
4. ^ (mandatory) - delimiter
5. attachmentFilename (mandatory) - the file name of the attachment (as seen in the page attachments list). Since v1.0.17: ' can be escaped as ''. Example: sample.csv
6. # (optional) - delimiter
7. attachmentVersionNumber (optional; default: current attachment version) - version of the attachment file to be retrieved. 0 means the current version. Example: 3


Possible formats of Wiki paths:

  • spaceKey:pageTitle^attachmentFilename#attachmentVersionNumber
    Example SQL Query: CALL CSVREAD('~bob.smith:Bob's Page^sample.csv#0')
    Meaning: no assumptions - the attachment is obtained by the spaceKey, pageTitle, attachment file name and version number specified (0 means the latest).
    Supported since v1.0.16
  • :pageTitle^attachmentFilename#attachmentVersionNumber
    Example SQL Query: CALL CSVREAD(':Bob's Page^sample.csv#0')
    Meaning: assumes the current page's space - the attachment is obtained from the current page's space by pageTitle, attachment file name and version number specified (0 means the latest).
    Supported since v1.0.16
  • :^attachmentFilename#attachmentVersionNumber
    Example SQL Query: CALL CSVREAD(':^sample.csv#0')
    Meaning: assumes the current page's space and current page - the attachment is obtained by attachment file name and version number specified (0 means the latest).
    Supported since v1.0.16
  • :^attachmentFilename
    Example SQL Query: CALL CSVREAD(':^sample.csv')
    Meaning: assumes the current page's space and current page - the attachment is obtained by attachment file name, latest version.
    Supported since v1.0.17

supported from v1.0.16

renamed in v1.0.25

SQL Compatibility Mode
Type: selector | Mandatory: no | Default: empty

Allows adjusting SQL compatibility mode to some extent as supported by H2 DB Engine.

The compatibility mode can be also set ad hoc within SQL as needed by the following query "SET MODE <mode>;".

See more details on the implemented compatibility at H2 DB Engine site.

Examples: <empty> (H2 regular mode), DB2, Derby, HSQLDB, MSSQLServer, MySQL, Oracle, PostgreSQL, Ignite

supported
Force All Tables Processing
Type: checkbox | Mandatory: no | Default: disabled

By default, for optimization purposes, the macro predicts which tables are required for your SQL Query and avoids processing those it thinks are not required (to do that, it looks for table names like 'table_xx' within the provided SQL Query string), saving time and server resources. However, this optimization does not work well in all cases. Enabling this parameter switches the optimization off and forces processing of all tables: all tables in the container will always be fully processed by the macro, even if the query mentions only some of them. That said, enabling this option is generally not recommended, in particular if the container holds tables that are not used in your query - in that case enabling the option wastes time on unnecessary calculations and wastes server resources.

Nevertheless, there are cases where you have to enable it, i.e., when you expect your SQL Query to return information that requires all your tables to be processed while the SQL Query itself does not directly contain all the table names. Some known examples are:

  • SHOW TABLES
  • SHOW SCHEMAS
supported
Bring Results As
Type: selector | Mandatory: yes | Default: Table

Select the format of the output results.

Supported options:

  • Table - results will be presented as conventional Confluence tables
  • JSON Array - (only from v1.0.36) results will be formatted as flat JSON array
  • ResultSet JSON - results will be presented in format of JSON string (see also "Path(s) to Element" parameter)
  • Table CSV - results will be presented in CSV format (can be copied/pasted into a file and imported into MS Excel or OpenOffice Calc)
  • CSV as List - (only from v1.0.37) the result will be taken from the Table CSV option, then converted and presented as an unordered or ordered HTML list (to toggle between ordered and unordered presentation use the "Ordered List" checkbox).
  • JIRA Table - (only from v1.0.22) the result will be presented in the table format supported by Atlassian Jira (can be copied into a Jira issue)
  • Documentation - (only from v1.0.40) the result will be presented in documentation format (each column will have its own section in the document)
Examples of the output formats for the query SELECT * FROM table_1 WHERE A=1 (titled "My Pretty Result"):

Table: a regular Confluence table with the header A, B, C and the row 1, 2, 2.

JSON Array:
[{
    "a": "1",
    "b": "2",
    "c": "2"
}]

ResultSet JSON:
{
  "metaData": [
    "A",
    "B",
    "C"
  ],
  "resultSet": [
    [
      "1",
      "2",
      "2"
    ]
  ],
  "locator": {
    "headerFromNum": {
      "0": "A",
      "1": "B",
      "2": "C"
    },
    "headerFromName": {
      "a": 0,
      "b": 1,
      "c": 2
    },
    "throwExceptions": true,
    "NOT_IN_HEADER": -3467651
  }
}

Table CSV:
A,B,C
1,2,2

CSV as List:
A , B , C
  • 1 , 2 , 2

JIRA Table:
|| A || B || C ||
| 1 | 2 | 2 |

Documentation (for the query SELECT A, C FROM table_1 WHERE A>4): each result row becomes its own section, with one block per column:
A
7
C
9

A
10
C
12


supported from v1.0.20
Result Format Options
Key: resultFormatOption | Type: selector | Mandatory: no | Default: empty

Used only with "Bring Results As" set to "Documentation".

Select the desired options of result formatting.

Supported values are:

  • <empty> - formatting of sections (that is, rows) will be done using simple HTML
  • Tables from Rows - each section (row) will be formatted as a separate table
supported from v1.0.40
Column Index to Use as Header
Key: columnIndexToUseAsHeader | Type: text | Mandatory: no | Default: <empty>

Used only when "Bring Results As" is set to "Documentation". Indicate the index of the column to be used as the section header (1 means the 1st column). If empty, no header will be used.

supported from v1.0.40
Tag to Use for Header
Key: tagToUseForHeader | Type: selector | Mandatory: no | Default: h1

Used only when "Bring Results As" is set to "Documentation" and "Column Index to Use as Header" indicated a column.

Select the html tag for the section headers.

Supported values are:

  • h1, h2, ... h6 - the section header will be wrapped with <h1>...</h1>, or <h2>...</h2>, or ..., or <h6>...</h6>
  • strong - the section header will be wrapped with <strong>...</strong>
  • p - the section header will be wrapped with <p>...</p>
  • none - no tag will be used
supported from v1.0.40
Keep Header Column Within Section
Key: keepHeaderColumnWithinSection | Type: boolean | Mandatory: no | Default: false

Used only when "Bring Results As" is set to "Documentation" and "Column Index to Use as Header" indicated a column.

Enable to keep the column, used for header, within the section data.

supported from v1.0.40
Display Column Name Within Header
Key: displayColumnNameWithinHeader | Type: boolean | Mandatory: no | Default: false

Used only when "Bring Results As" is set to "Documentation" and "Column Index to Use as Header" indicated a column.

Enable to display the column name within header (such as "<col name>: <col value>").

supported from v1.0.40
Insert Page Breaks
Key: insertPageBreaks | Type: boolean | Mandatory: no | Default: false

Used only when "Bring Results As" is set to "Documentation".

Enable to insert page breaks before each section (i.e., in PDF or Word export each section will be on a separate page).

supported from v1.0.40
Wrap Result with Table
Key: wrapWithTable | Type: boolean | Mandatory: no | Default: false

Used only when "Bring Results As" is set to "Documentation".

Enable to additionally wrap the output with table (each section will be presented as a row of that table).

Example (SELECT * FROM table_1 WHERE A>4, "Bring Results As" set to "Documentation", column A used as the header with the column name displayed): with the option disabled, the sections ("A: 7" with B 8, C 9; "A: 10" with B 11, C 12) are output as plain document content; with the option enabled, the same sections become rows of a wrapping table with Header and Content columns.

supported from v1.0.40
Ordered List
Type: checkbox | Mandatory: no | Default: disabled

Used only with "Bring Results As" set to "CSV as List".

Enable to present the result as ordered list, disable to present the result as unordered list.

supported from v1.0.37
Auto-merge Cells of Result Tables
Type: checkbox | Mandatory: no | Default: disabled

Used only with "Bring Results As" set to "Table".

Enable to auto-merge equal adjacent cells (NOTE: performed after possible Post-rotate). Select "Auto-merge Cells Strategy" as well.

Example (SELECT * FROM table_1, "Auto-merge Cells Strategy" set to Merge Within Rows): with the option disabled, the result table is output as is; with the option enabled, equal adjacent cells within a row (the two cells with value 2 in the row 1, 2, 2 and the two cells with value 4 in the row 4, 4, 6) are merged into one.
supported from v1.0.28
Auto-merge Cells Strategy
Type: selector | Mandatory: no | Default: Merge Within Rows

Used only when "Auto-merge Cells of Result Tables" is enabled. Select the method to merge the cells of the Result Tables.

Supported options:

  • Merge Within Rows - merge will be performed within each individual row
  • Merge Within Columns - merge will be performed within each individual column
supported from v1.0.28
Include CSV Column Names
Type: checkbox | Mandatory: no | Default: enabled
Used only with "Bring Results As" set to one of the CSV formats. Enable to include column names in the result.
supported from v1.0.22

Remove Spaces from CSV
Type: checkbox | Mandatory: no | Default: disabled
Used only with "Bring Results As" set to one of the CSV formats. Enable to remove spaces from the result.
supported from v1.0.22

Quote All CSV Values
Type: checkbox | Mandatory: no | Default: disabled
Used only with "Bring Results As" set to one of the CSV formats. Enable to quote all CSV values in the result.
supported from v1.0.22

CSV Separator
Type: char | Mandatory: no | Default: ,

Used only with "Bring Results As" set to one of the CSV formats. Indicate which separator to use in the CSV (for a special symbol, paste it or use its HTML code).

For example, to separate with a tabulator, set it to "&#9;" (the tabulator HTML code).

supported from v1.0.22

CSV Quote Character
Type: char | Mandatory: no | Default: " (double quote)

Used only with "Bring Results As" set to one of the CSV formats. Indicate which quote character to use in the CSV (for a special symbol, paste it or use its HTML code).

For example, if you don't want values wrapped in quotes, set it to "&#8203;" (the zero-width space HTML code).

supported from v1.0.22

CSV Escape Character
Type: char | Mandatory: no | Default: " (double quote)

Used only with "Bring Results As" set to one of the CSV formats. Indicate which escape character to use in the CSV (for a special symbol, paste it or use its HTML code).

For example, if you don't want to escape anything, set it to "&#8203;" (the zero-width space HTML code).

supported from v1.0.22

CSV Line End
Type: text | Mandatory: no | Default: &#13;
Used only with "Bring Results As" set to one of the CSV formats. Indicate the line ending to use in the CSV (for special symbols use HTML codes).
supported from v1.0.22
Path(s) to Element
Type: text | Mandatory: no | Default: empty

Used only together with "Bring Results As" set to "ResultSet JSON". Indicate specific path to the element in the JSON to be returned (such as, "resultSet", "metaData", "resultSet[1][1]", etc.). Use | splitter to specify path for each of the queries (order will correspond to the order of the queries). Examples:

  • empty - whole JSON will be returned as the result for each of the queries
  • resultSet[1] - only element 1 of the resultSet will be returned for the 1st query; empty is assumed for all other queries
  • resultSet[1][2]|metaData - only element 2 of the element 1 of the resultSet will be returned for the 1st query; only metaData element will be returned for the second query; empty is assumed for all other queries
Example (for SELECT * FROM table_1 WHERE A=1):

  • empty - the whole ResultSet JSON is returned
  • resultSet - only the resultSet element is returned:
[
  [
    "1",
    "2",
    "2"
  ]
]
  • resultSet[1][2] - only the single value is returned: 2

supported from v1.0.20
Strip HTML Tags
Type: checkbox | Mandatory: no | Default: disabled

Makes sense if "Escape HTML" is disabled: Enable to remove HTML tags from the result.

Example, for the query:

SELECT
   '<a href="http://dummy.com">My Link</a>'
AS "My value with HTML"

  • disabled: the HTML is kept - in a Table result, the "My value with HTML" cell renders as the clickable link "My Link"; in a ResultSet JSON result, the element resultSet[1][1] contains <a href="http://dummy.com">My Link</a>.
  • enabled: the tags are removed and only the text "My Link" remains in the cell or in resultSet[1][1].
supported from v1.0.20
Escape HTML
Type: checkbox | Mandatory: no | Default: disabled
Enable to escape HTML in the results so that HTML code is displayed in the browser.
supported from v1.0.23

Preformat Result
Type: checkbox | Mandatory: no | Default: disabled

Used only when "Wrap Result With Code-Block" is disabled: enable to preformat the result.

supported from v1.0.22

Wrap Result With Expand
Type: checkbox | Mandatory: no | Default: disabled

Enable to wrap the result with the "expand" macro.

Example (SELECT * FROM table_1): with the option enabled, the result is hidden behind a "Click to expand result..." element; with it disabled, the result table is displayed directly.

supported from v1.0.20
Wrap Result With Code-Block
Type: checkbox | Mandatory: no | Default: enabled
Used only when "Bring Results As" is NOT set to "Table". Enable to format the result with the "code-block" macro.
supported from v1.0.20

Result Code-Block Wrapper Language
Type: selector | Mandatory: no | Default: empty
Used only when the "Wrap Result With Code-Block" parameter is enabled. Select the syntax highlighting used by the Code-Block wrapper.
supported from v1.0.20

Collapse Code-Blocks
Type: checkbox | Mandatory: no | Default: disabled

Enable to make Code-Blocks (if any used within the macro) Collapsible.

Example: with the option enabled, each code-block is collapsed behind an "Expand source" link; with it disabled, the code-blocks (e.g., the displayed SQL query) are shown expanded.

supported from v1.0.20
Show Code-Block Title
Type: checkbox | Mandatory: no | Default: disabled

Enable to show generic titles for Code-Blocks (if any used within the macro).

Example: with the option enabled, a generic title (such as "SQL Query") is displayed above each code-block; with it disabled, the code-blocks have no title.

supported from v1.0.20
Show Code-Block Line Numbers
Type: checkbox | Mandatory: no | Default: disabled

Enable to show line numbers in Code-Blocks (if any used within the macro).

Example: with the option enabled, line numbers are displayed within the code-blocks (e.g., lines 1-3 of the displayed SQL query and lines 1-2 of a Table CSV result); with it disabled, no line numbers are displayed.

supported from v1.0.20
Tables Selector
Type: text | Mandatory: no | Default: version-dependent (see below)

Before v1.0.30:

div.table-wrap > table, div[id~=jira-issues-] > table

From v1.0.30:

div.table-wrap > table, div[id~=jira-issues-] > table, table.attachments

From v1.0.33:

div.table-wrap > table, div[id~=jira-issues-] > table, table.attachments, .office-document > div > table

The parameter uses the CSS Query selector format and defines the tables to be parsed within the container content. For example, if you want to parse some external HTML, you may need to specify a custom tables selector. If in doubt, leave the default. If empty, the defaults will be applied, i.e., selection of conventional Confluence tables and JIRA Issues/Filter macro-generated tables.

A table selector can be obtained, for example, in the following way:

  • in Chrome browser: on a html page with the table, right click on the table header and click "Inspect". In the Elements tab of the source panel locate the needed <table ...> tag. Right click on the tag and select: Copy -> Copy selector. The copied selector can be used as Tables Selector input parameter.

If you need multiple selectors - separate those with comma.

supported from v1.0.24
Allow Inner Tables Selection
Type: checkbox | Mandatory: no | Default: enabled
Used only when "Input Body Type" is set to "Confluence Wiki": enable to allow selection of inner tables (such as, if a cell of your table contains another table, then both tables will be selected for processing).
supported from v1.0.40

Show Customization
Type: checkbox | Mandatory: no | Default: disabled

Enable to present all customized input parameters with values (i.e. those, that differ from the defaults).

This parameter is useful mostly for demonstration purposes and/or while tuning the macro.

Example: with the option enabled, a "Customization Info" block listing the customized parameters is displayed before the query and its results, e.g.:

{
  "showCodeBlockTitle": "true",
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "SELECT * \n   FROM table_1\n   WHERE A\u003d1",
  "wrapQueryWithCodeBlockMacro": "true",
  "resultFormat": "Table CSV",
  "showCodeBlockLineNumbers": "true"
}

With the option disabled, only the query and its results are displayed.
supported from v1.0.25
Pre-rotate Input Tables
Type: selector | Mandatory: no | Default: 0

Select the angle (in degrees) to pre-rotate the input tables clockwise before running SQL against them. For example, this can be useful when an input table has a vertical header in the 1st column instead of a horizontal header in the 1st row - rotation may help fix this. Note, however, that the SQL Query, except generic ones (like "SELECT * FROM table_1"), will require refactoring.

Supported values: 0 - don't rotate, 90 (degrees), 180 (degrees), 270 (degrees)

Example (SELECT * FROM table_1): with 0, the table is processed as is; with 90, 180 or 270, the table is rotated clockwise by the selected angle before the query runs (for 90/270 the columns become rows and vice versa; for 180 the row and column order is reversed).

Note: when Input Type is set to "Direct Text", there is not much sense in setting the "Pre-rotate Input Tables" parameter to anything other than 0. However, if "Pre-rotate Input Tables" is set to a value other than 0, rotation will still be attempted.

supported from v1.0.28
Exclude Headers
Type: checkbox | Mandatory: no | Default: disabled

Enable to skip the headers of the input tables. This can be useful if you pre-rotate input tables: in that case, if a left or right column or the bottom row becomes the top row after rotation, its cell values will be treated as column names in the temp DB table.

Example: consider a source table whose top row contains old headers (Old Header 1, Old Header 2, Old Header 3) and whose first column contains the intended headers (New Header 1, New Header 2) followed by the values. After rotating it 90 degrees CW (using the "Pre-rotate Input Tables" input parameter), the old header cells show up as a redundant data column next to the values. Enabling the "Exclude Headers" parameter in addition removes the old redundant headers, so SELECT * FROM table_1 returns just the New Header columns with their values.

supported from v1.0.28
Output Columns Order Options
Type: selector | Mandatory: no | Default: empty

Used only when Bring Results As is set to "Table", "JIRA Table" or "Documentation". Select an option for Columns order in the output tables. If empty, then no additional columns ordering will be applied.

NOTE: these options are applied prior to "Post-rotate Output Tables".

supported from v1.0.28
Post-rotate Output Tables
Type: selector | Mandatory: no | Default: 0

Used only when "Bring Results As" is set to "Table" or, starting from v1.0.33, "Table CSV" or "JIRA Table" or, starting from v1.0.37, "CSV as List". Select the angle to post-rotate all output tables clockwise.

NOTE: the following example is given for the option "Bring Results As" set to "Table".

Example (SELECT * FROM table_1): with 0, the result table is output as is; with 90, 180 or 270, the result table is rotated clockwise by the selected angle after the query runs.

supported from v1.0.28
Base URI
Type: text | Mandatory: no | Default: empty

Used only when "Input Body Type" is set to "Direct HTML". Defines the base URI to be used for the HTML.

For example, the HTML code contains an embedded video:

<table>
  <tr>
    <th>Embedded Video</th>
  </tr>
  <tr>
    <td><iframe width="420" height="315" src="embed/tgbNymZ7vqY"></iframe></td>
  </tr>
</table>

Set the Base URI input parameter to "http://www.youtube.com" so that the video source is resolved to an absolute path and displays correctly within the results.

supported from v1.0.29
Path to JSON Element
Type: text | Mandatory: no | Default: empty
Used only when "Input Body Type" is set to "Direct JSON" or "List of URLs" (for URLs that bring JSON). Points to the desired JSON array in the input JSON that needs to be selected and used for further processing. If empty or "/", the top JSON element (the whole JSON) will be used. The value needs to be specified in JSON Pointer format. Examples: "/foo", "foo/0/foo1".
supported from v1.0.39

Input CSV Separator
Type: char | Mandatory: no | Default: ,

Used only when "Input Body Type" is set to "Direct CSV" or "List of URLs" (for URLs that bring CSV). Defines the separator used within the input CSVs. For a special symbol, paste it or use its HTML code.

For the "List of URLs" option, it can be overridden per URL by using the input_csv_separator query parameter.

supported from v1.0.39
Case Sensitive Column Names
Type: checkbox | Mandatory: no | Default: false

Enable to use strict case sensitivity for column names (exactly as in the source data).

supported from v1.0.39
Output Layout
Type: selector | Mandatory: no | Default: Standard

Select the desired layout of the output.

Supported values: Standard - the standard Confluence layout is used for the results; Floating Left - the results will float left next to each other (the first result on the left); Floating Right - the results will float right next to each other (the first on the right).

supported from v1.0.40

3.1. Per SQL Query Configuration

Per SQL Query configuration is available through JSON added into a comment of each SQL Query (that is, like /* {...} */). The following configuration properties can be used:

You can display a helper snippet that brings all the possible properties with some example values by starting to type "custom..." within the SQL Query editor and then selecting the "customize output per SQL" snippet.
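For instance, a query preceded by an inline per-query configuration (using properties from the table below) might look like this sketch:

/*
{"hide_table_header": false, "bring_result_as": "JIRA Table", "hide_this_sql": true}
*/
SELECT * FROM table_1 WHERE A=1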

Each property below is described with its Name, Type, Default value, Description, an Example and its Status.
hide_table_header
Type: boolean | Default: false

Relevant only if "Bring Results As" set to "Table"

Controls whether to display or not the result's table header:

  • true: table header will not be displayed
  • false: table header will be displayed
Example: with /* {"hide_table_header":true} */ before SELECT * FROM table_1 WHERE A=1, the result table is output without its header row (only the data row 1, 2, 2); with {"hide_table_header":false}, the header row A, B, C is displayed.

Experimental from v1.0.23

use_confluence_class
Type: boolean | Default: true

Relevant only if "Bring Results As" set to "Table"

Controls whether to use or not standard Confluence CSS class for the result's table:

  • true: standard CSS classes will be used
  • false: standard CSS classes will not be used
Example: with {"use_confluence_class":true}, the result table is rendered with the standard Confluence table styling; with {"use_confluence_class":false}, it is rendered without it.

Experimental from v1.0.23

hide_this_sql
Type: boolean | Default: false

Overrides "Show Queries" Input Parameter for the SQL Query

Controls whether to hide or not this particular SQL Query in the results:

  • true: the SQL Query will be hidden
  • false: the SQL Query will be displayed or not, depending on the "Show Queries" Input Parameter setting

Experimental from v1.0.24
auto_merge_cells_of_result_table
Type: boolean | Default: false

Overrides the global "Auto-merge Cells of Result Tables" Input Parameter.

Controls whether to apply the Auto-merge Cells of Result Tables functionality for this particular SQL Query results regardless of the global "Auto-merge Cells of Result Tables" Input Parameter setting


Experimental from v1.0.28
post_rotate_output_tables
Type: string | Default: null

Overrides the global "Post-rotate Output Tables" Input Parameter.

Specifies the angle to post-rotate the output tables of this particular SQL Query.

Example: with {"post_rotate_output_tables":"0"} the result of SELECT * FROM table_1 is left as is, "90" rotates it 90 degrees clockwise, and "-90" rotates it 90 degrees counter-clockwise.
supported from v1.0.28
output_columns_order_options
Type: string | Default: null

Overrides the global "Output Columns Order Options" Input Parameter.

Specifies the order of columns in the output tables for this particular SQL Query.


supported from v1.0.28
bring_result_as
Type: string | Default: null

Overrides the global "Bring Results As" Input Parameter.

Defines the format of the results output. Possible values are the same as for "Bring Results As" global Input Parameter. From v1.0.37 an additional possible value is "Nothing". If "Nothing" is set, the query will be executed, but no results will be displayed.


supported from v1.0.36
result_format_option
Type: string | Default: null

Overrides the global "Result Format Options" Input Parameter.

For now used only when "Bring Results As" set to "Documentation". With this setting the results are formatted as document and each row of the result table is represented as a separate document section.

Possible values:

  • "Tables from Rows" -  each of the document sections will be formatted as a separate table.
  • "", null or absent - each of the document sections will be formatted using simple html tags.

Unsupported values will be treated as the default.


supported from v1.0.40
column_index_to_be_used_as_header
Type: string | Default: null

Overrides the global "Column Index to Use as Header" Input Parameter.

For now used only when "Bring Results As" set to "Documentation". With this setting the results are formatted as document and each row of the result table is represented as a separate document section.

The parameter defines the column of the result table whose value needs to be used as section title.


supported from v1.0.40
keep_header_column_within_section
Type: boolean | Default: false

Overrides the global "Keep Header Column Within Section" Input Parameter.

If the column for the title is specified (see column_index_to_be_used_as_header), then:

  • if keep_header_column_within_section is set to false, then the column itself will not appear within the document section (be used only as section title).
  • if keep_header_column_within_section is set to true, then the column itself will appear within the document section (be used both as section title and appear within the document section).

supported from v1.0.40
display_column_name_within_header
Type: boolean | Default: false

Overrides the global "Display Column Name Within Header" Input Parameter.

For now used only when "Bring Results As" set to "Documentation" and column_index_to_be_used_as_header Per SQL Query Configuration parameter set to some index.

If set to true, then the section title will additionally contain column name as follows: <col name>: <col value>


supported from v1.0.40
notes
Type: JSON array of objects | Default: null

Allows defining an array of arbitrary notes that will be shown along with the results of the SQL Query.

Each element of the array may include the following properties:

  • "showAs" (string) - defines the format to be used for the notes, defined within this element. Supported values are:
    • "horizontal_table" - the notes will be displayed over the results as table with horizontal headers
    • "vertical_table" - the notes will be displayed over the results as table with vertical headers
    • "unordered_list" - the notes will be displayed over the results as unordered list
  • "title" (string) - defines the title for the notes, defined within this element
  • "items" (array of objects) - defines the notes to be displayed. Each element may include:
    • "name" (string) - the name of the note
    • "value" (string) - the value of the note (the text)
/*
{
    "notes": [
        {
            "title": "Columns Legend:",
            "showAs": "horizontal_table",
            "items": [
                {"name": "A", "value": "Description of column A"},
                {"name": "B", "value": "Description of column B"},
                {"name": "C", "value": "Description of column C"}
            ]
        },
        {
            "title": "Values Legend:",
            "showAs": "unordered_list",
            "items": [
                {"name": "1", "value": "Meaning of value 1"},
                {"name": "2", "value": "Meaning of value 2"},
                {"name": "4", "value": "Meaning of value 4"}
            ]
        }
    ]
}
*/
SELECT * FROM table_1
The notes are rendered above the query result: first a "Columns Legend:" table (with headers A, B, C and their descriptions), then a "Values Legend:" unordered list (1: Meaning of value 1, 2: Meaning of value 2, 4: Meaning of value 4), followed by the result table itself.
supported from v1.0.41

4. Helper Snippets

Starting from some version, the macro supports Helper Snippets - pre-configured pieces of code that can be quickly added to the SQL Query editor by starting to type the name of the snippet in the editor. The following snippets are available in the current version of the macro:

Snippet Name - Description

  • attachment_csvread - a snippet to bring the content of the latest version of a file attached to the current page (using CSVREAD)
  • attachment_fileread - a snippet to bring the content of the latest version of a file attached to the current page (using FILE_READ)
  • bin - a snippet that creates an ad-hoc function BIN that converts decimal to binary. NOTE: set the Query Splitter to ";;" and use this splitter between the SQL queries.
  • columns - a snippet to show the columns of an in-memory DB table
  • customize - a snippet of possible (JSON-formatted) "Per SQL Query Configuration" properties that can be inserted immediately before an SQL query to additionally instruct how to process this specific query. See the "Per SQL Query Configuration" section of this document.
  • get_match - a snippet that creates an ad-hoc function GET_MATCH that gets the match from a string by a regular expression. NOTE: set the Query Splitter to ";;" and use this splitter between the SQL queries.
  • h2info - a snippet to show information on the H2 engine used by the macro
  • highlight - a snippet that creates an ad-hoc function HIGHLIGHT that highlights text patterns. NOTE: set the Query Splitter to ";;" and use this splitter between the SQL queries.
  • link_h2_table - a snippet to link a table from a remote H2 database. See the instructions within the snippet comments.
  • link_oracle_table - a snippet to link a table from a remote Oracle DB. See the instructions within the snippet comments.
  • link_postgres_table - a snippet to link a table from a remote Postgres DB. See the instructions within the snippet comments.
  • rownumber_1 - a snippet to add a row-number column to an in-memory DB table (option 1). NOTE: set the Query Splitter to ";;" and use this splitter between the SQL queries.
  • rownumber_2 - a snippet to add a row-number column to an in-memory DB table (option 2).
  • s* - a snippet to bring all data from an in-memory DB table
  • schemas - a snippet to show the in-memory DB schemas
  • split_table - a snippet that helps to split a table into several tables by rows. The provided snippet splits your input table into 4 output tables, but it can easily be updated for any number of output tables. NOTE: you can also use the "Split Table into Several" Custom User Macro instead, which provides the same functionality in a more convenient way but may have fewer options.
  • strip_html - a snippet that creates an ad-hoc function STRIP_HTML that strips HTML code from a string. NOTE: set the Query Splitter to ";;" and use this splitter between the SQL queries.
  • tables - a snippet to show the in-memory DB tables
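For illustration, an ad-hoc function in the spirit of the snippets above could be declared and used as follows (a sketch only, not the actual snippet text; it assumes the Query Splitter is set to ";;" and relies on H2's CREATE ALIAS with inline Java source):

CREATE ALIAS IF NOT EXISTS SHOUT AS $$
String shout(String s) { return s == null ? null : s.toUpperCase() + "!"; }
$$;;
SELECT A, SHOUT(B) AS "Shouted B" FROM table_1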

5. Some Basic Examples

Below we provide some examples that show how one can achieve various goals. Note that each goal can be achieved in a variety of ways using SQL; we provide just one or a few of those.

5.1. Original Tables

We will use the following tables in our examples below.

Note that Table 2 may be referred to as table_1 within the macro SQL (and, likewise, Table 1 may be referred to as table_2). This is because the reference within the macro SQL depends only on the table's order within the container. That is, if you added Table 1 as the second table in the macro container, you need to refer to it as table_2. This also means that if you decide to re-order tables within the macro container, you will have to change the references in your SQL according to the new order.

5.1.1. Table 1

A | B | C
1 | 2 | 2
4 | 4 | 6
7 | 8 | 9
10 | 11 | 12

5.1.2. Table 2

A | D | E | Labels
6 | 8 | 6 | L1 L2
4 | 2 | 5 | L2 L4 L3
8 | 8 | 7 | L3
56 | 7 |  | L4 L1 L5
42 | 42 | 42 | L5

5.2. Simple Filtering

Let's apply the following query to Table 1:

SELECT * FROM table_1 WHERE A>1 AND C<=9

A | B | C
4 | 4 | 6
7 | 8 | 9

5.3. Adding a Column with a Row Number

Let's add a column to Table 1 with automatically computed row number:

5.3.1. Approach 1


Use a dedicated built-in function ROW_NUMBER():

SELECT ROW_NUMBER() OVER () AS "#", * FROM table_1

# | A | B | C
1 | 1 | 2 | 2
2 | 4 | 4 | 6
3 | 7 | 8 | 9
4 | 10 | 11 | 12


5.3.2. Approach 2

Define a variable:

SET @row_num = 0

Now transform the table using @row_num:

SELECT @row_num := @row_num + 1 AS "#", * FROM table_1

# | A | B | C
1 | 1 | 2 | 2
2 | 4 | 4 | 6
3 | 7 | 8 | 9
4 | 10 | 11 | 12

5.4. Column Hiding

5.4.1. Approach 1

Below, we use a query to hide (not to select) column B of Table 1

SELECT A,C FROM table_1

A | C
1 | 2
4 | 6
7 | 9
10 | 12

5.4.2. Approach 2

After all, we are always dealing with temp in-memory tables that are dropped as soon as the macro processing ends. Therefore you can just alter the table by removing the unnecessary columns, then just select the whole table:

ALTER TABLE table_1 DROP COLUMN "B"
SELECT * FROM table_1

A | C
1 | 2
4 | 6
7 | 9
10 | 12

This approach might be useful if you plan to use the altered table in subsequent (sequential) queries. If you still need the original table in a subsequent query, use Approach 1 or Approach 3.

5.4.3. Approach 3

If you prefer the Approach 2, but want to keep the original table_1 unaltered - use a temporary table as follows:

CREATE TABLE aux_table AS SELECT * FROM table_1
ALTER TABLE aux_table DROP COLUMN "B"
SELECT * FROM aux_table

A | C
1 | 2
4 | 6
7 | 9
10 | 12

5.5. Column Renaming

Columns of Table 1 can be renamed as follows:

SELECT A AS "New A", C AS "New C" FROM table_1
New A
New C
12
46
79
1012

Well, as you could see in the previous example, you can achieve the same by altering the table_1 itself or using a temp table, then selecting from the altered or the temp table.

5.6. Columns Reordering

Let's reorder the columns of Table 1 as we wish:

SELECT A, C, B FROM table_1

A | C | B
1 | 2 | 2
4 | 6 | 4
7 | 9 | 8
10 | 12 | 11

5.7. Columns Reordering and Sorting

Now, let's reorder the columns of Table 2 and additionally apply some sorting:

SELECT D, E, A FROM table_1 ORDER BY CAST(D AS INTEGER)

D | E | A
2 | 5 | 4
7 |  | 56
8 | 6 | 6
8 | 7 | 8
42 | 42 | 42

5.8. Find Values from a Column that are Absent in the other Column

For Table 1, get the values of column A that are absent in column B:

SELECT A FROM table_1 MINUS SELECT B FROM table_1

A
1
10
7

Of course, you can do the same across tables.

5.9. Math

Add a column to Table 2 with some calculated values:

SELECT D, SQRT(D) FROM table_1

D | SQRT(D)
8 | 2.8284271247461903
2 | 1.4142135623730951
8 | 2.8284271247461903
7 | 2.6457513110645907
42 | 6.48074069840786

5.10. Multiple Queries

We can execute more than 1 query against each of the tables in the macro container. Each of the queries that implies a result set in the output will result in at least 1 separate table. Below we apply several queries to Table 1:

SELECT *, '<span style="color:red">Here A!=B</span>' AS "Description"
FROM table_1
WHERE A!=B
A  | B  | C  | Description
1  | 2  | 2  | Here A!=B
7  | 8  | 9  | Here A!=B
10 | 11 | 12 | Here A!=B
SELECT *, '<span style="color:green">Here A=B</span>' AS "Description"
FROM table_1
WHERE A=B
A | B | C | Description
4 | 4 | 6 | Here A=B
SELECT
    *,
    CASEWHEN(A=B, '<span style="color:green">true</span>',
    '<span style="color:red">false</span>') AS "A=B?",
    CASEWHEN(B=C, '<span style="color:green">true</span>',
    '<span style="color:red">false</span>') AS "B=C?"
FROM table_1
A  | B  | C  | A=B?  | B=C?
1  | 2  | 2  | false | true
4  | 4  | 6  | true  | false
7  | 8  | 9  | false | false
10 | 11 | 12 | false | false

5.11. Cross Joining

We can execute one query against several tables. To demonstrate it, let's Cross Join Table 1 and Table 2:

SELECT * FROM table_1 CROSS JOIN table_2
A  | B  | C  | A  | D  | E  | LABELS
1  | 2  | 2  | 1  | 8  | 6  | L1 L2
1  | 2  | 2  | 1  | 2  | 5  | L2 L4 L3
1  | 2  | 2  | 1  | 8  | 7  | L3
1  | 2  | 2  | 1  | 7  |    | L4 L1 L5
1  | 2  | 2  | 1  | 42 | 42 | L5
4  | 4  | 6  | 4  | 8  | 6  | L1 L2
4  | 4  | 6  | 4  | 2  | 5  | L2 L4 L3
4  | 4  | 6  | 4  | 8  | 7  | L3
4  | 4  | 6  | 4  | 7  |    | L4 L1 L5
4  | 4  | 6  | 4  | 42 | 42 | L5
7  | 8  | 9  | 7  | 8  | 6  | L1 L2
7  | 8  | 9  | 7  | 2  | 5  | L2 L4 L3
7  | 8  | 9  | 7  | 8  | 7  | L3
7  | 8  | 9  | 7  | 7  |    | L4 L1 L5
7  | 8  | 9  | 7  | 42 | 42 | L5
10 | 11 | 12 | 10 | 8  | 6  | L1 L2
10 | 11 | 12 | 10 | 2  | 5  | L2 L4 L3
10 | 11 | 12 | 10 | 8  | 7  | L3
10 | 11 | 12 | 10 | 7  |    | L4 L1 L5
10 | 11 | 12 | 10 | 42 | 42 | L5

5.12.  Right Joining

OK, let's execute RIGHT JOIN and do some cell auto-merging:

SELECT * FROM table_1 RIGHT JOIN table_2
A  | B  | C  | A  | D  | E  | LABELS
1  | 2  | 2  | 1  | 8  | 6  | L1 L2
4  | 4  | 6  | 4  |    |    |
7  | 8  | 9  | 7  |    |    |
10 | 11 | 12 | 10 |    |    |
1  | 2  | 2  | 1  | 2  | 5  | L2 L4 L3
4  | 4  | 6  | 4  |    |    |
7  | 8  | 9  | 7  |    |    |
10 | 11 | 12 | 10 |    |    |
1  | 2  | 2  | 1  | 8  | 7  | L3
4  | 4  | 6  | 4  |    |    |
7  | 8  | 9  | 7  |    |    |
10 | 11 | 12 | 10 |    |    |
1  | 2  | 2  | 1  | 7  |    | L4 L1 L5
4  | 4  | 6  | 4  |    |    |
7  | 8  | 9  | 7  |    |    |
10 | 11 | 12 | 10 |    |    |
1  | 2  | 2  | 1  | 42 | 42 | L5
4  | 4  | 6  | 4  |    |    |
7  | 8  | 9  | 7  |    |    |
10 | 11 | 12 | 10 |    |    |
(the empty D, E and LABELS cells are auto-merged with the value above them in the rendered table)

5.13. INSERT into the Table 1 then SELECT

We can modify our tables before selecting from them. Here we insert a row into Table 1 and then show the whole resulting table:

INSERT INTO table_1 VALUES('My 1', 'My 2', 'My 3')
SELECT * FROM table_1
A    | B    | C
1    | 2    | 2
4    | 4    | 6
7    | 8    | 9
10   | 11   | 12
My 1 | My 2 | My 3

5.14. DELETE from the Table 1 then SELECT

As above, we can delete some rows and then show the result:

DELETE FROM table_1 WHERE C=6
SELECT * FROM table_1
A  | B  | C
1  | 2  | 2
7  | 8  | 9
10 | 11 | 12

5.15. ALTER and UPDATE the Table 1 then SELECT

We can even alter the table schema and then show the result:

ALTER TABLE table_1 ADD Sum INT
UPDATE table_1 SET Sum=CAST(A AS INT)+CAST(B AS INT)+CAST(C AS INT)
SELECT * FROM table_1
A  | B  | C  | SUM
1  | 2  | 2  | 5
4  | 4  | 6  | 14
7  | 8  | 9  | 24
10 | 11 | 12 | 33

5.16. Filter-out Rows from Table 2 by Label

The example below shows a convenient way to label rows of the table, and then select the rows by label values:

Result shows rows labeled L1 AND L4
SELECT * FROM table_1 WHERE Labels LIKE '%L1%' AND Labels LIKE '%L4%'
A  | D | E | LABELS
56 | 7 |   | L4 L1 L5
Result shows rows labeled with L1 OR L4
SELECT * FROM table_1 WHERE Labels LIKE '%L1%' OR Labels LIKE '%L4%'
A  | D | E | LABELS
6  | 8 | 6 | L1 L2
4  | 2 | 5 | L2 L4 L3
56 | 7 |   | L4 L1 L5

5.16.1. A Real Life Example for a Business Analyst

Imagine, you have a table of various requirements for a product or feature and each label means that the requirement affects some component (or several components). So, each row (requirement) would be labeled as: Component 1, Component 2, ... , Component N.

Now, you want to create component-specific documents with a table where only the requirements that affect this specific component are filtered from the global table. This would be an easy task if you use the below approach:

  • wrap the global table with the Excerpt macro and save to some document, then
  • create component-specific document (say, for Component 1)
  • add the global table using the Excerpt Include macro to the document, then
  • wrap the Excerpt Include macro with the SQL-Powered Table Transformer macro and
  • configure the SQL-Powered Table Transformer macro to filter the global table with an SQL query that uses "... WHERE Labels LIKE '%Component 1%'", as sketched below.
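
For instance, the filtering query in the outer macro could look like the following sketch (the column name Labels and the label value Component 1 follow the example above; adjust them to your own table):

SELECT * FROM table_1 WHERE Labels LIKE '%Component 1%'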

5.17. Documents Statuses Stats Tracking

Assume you have numerous documents that have various statuses. The statuses set can be absolutely arbitrary, however, for this example let those be "NOT STARTED", "IN PROGRESS", "DRAFT" and "READY". The status of each document may be updated any time by the author.

Now, you want a document that would show a table with current summary of all those documents' statuses.

Here is the recipe:

  • add the status into each of your documents, using, for example, Status macro, and wrapping it with the Excerpt macro as follows:

use an appropriate status value (from the statuses set) within the Status macro for each of the documents

  • now, in the document where you want to see the statistics, add a one-column table with the column named STATUS.
  • add rows according to the number of documents to be tracked.
  • into each of the rows, add the status of one document using the Excerpt Include macro
  • now wrap the table with SQL-Powered Table Transformer macro
  • configure the macro by adding the following SQL Query input parameter (if you want your own statuses set, you would need to modify the status values):
SELECT * FROM
    (SELECT count(*) AS "Not Started" FROM table_1 WHERE STATUS LIKE '%NOT STARTED%'),
    (SELECT count(*) AS "In Progress" FROM table_1 WHERE STATUS LIKE '%IN PROGRESS%'),
    (SELECT count(*) AS "Draft" FROM table_1 WHERE STATUS LIKE '%DRAFT%'),
    (SELECT count(*) AS "Ready" FROM table_1 WHERE STATUS LIKE '%READY%'),
    (SELECT count(*) AS "Total" FROM table_1)
  • save the macro
  • save the document
  • here is an example of the dynamic result you should see in your document:

If you want, you may use the resulting table for charting (see the next example).

5.18. Dynamic Data Preparation and Charting

We can apply the needed transformation and then use the transformed table for charting. Below we process Table 1 and then draw a chart on the resulting data using the Chart from Table macro.

[Chart built by the Chart from Table macro over the prepared data, plotting SQRT(A), LN(A) and LOG10(A) against A]
Data prepared for Chart
set @row_num = 0
select @row_num := @row_num + 1 as "#", A, SQRT(A), LN(A), LOG10(A) from table_1
# | A  | SQRT(A)            | LN(A)              | LOG10(A)
1 | 1  | 1.0                | 0.0                | 0.0
2 | 4  | 2.0                | 1.3862943611198906 | 0.6020599913279624
3 | 7  | 2.6457513110645907 | 1.9459101490553132 | 0.8450980400142568
4 | 10 | 3.1622776601683795 | 2.302585092994046  | 1.0

5.19. Transform Tables, obtained by JIRA Issues/Filter Macro

Starting from v1.0.19, you can use tables created by the JIRA Issues/Filter macro within the SQL-Powered Table Transformer macro container. This allows you to:

  • create multiple views, based on the same dataset, obtained from JIRA.
  • compare/analyze datasets from the same or different connected instances of JIRA.
  • compare/analyze JIRA datasets against other, non-JIRA tables included in the container or created by the macro during execution (such as tables auto-created from a remote or attached CSV).
  • etc.

6. Advanced Examples

6.1. Nested Transformations

Here is a dummy example. Let's assume you have one table, let it be Table 1. If you need, you may apply sequential transformations to Table 1 using macro nesting.

You can put the Table 1 into a SQL-Powered Table Transformer Macro (1), which in turn you will put into another SQL-Powered Table Transformer (2).

Let's say, in the inner macro (1) you want to split Table 1 into 2 tables:

Macro 1 (inner) splits Table 1 into 2 tables
SELECT A FROM table_1
A
1
4
7
10
SELECT B FROM table_1
B
2
4
8
11

So, since the inner macro (1) produces 2 tables, the outer macro (2) will have to deal with the two tables as its input. In SQL of the outer macro (2) you will be referencing those as table_1 and table_2 (as usual, in the order they appear in the result of macro (1)):

Macro 2 (outer) has 2 tables as input
SELECT * FROM table_1
A
1
4
7
10
SELECT * FROM table_2
B
2
4
8
11

6.2. Ad hoc Java Code Execution

Even if SQL and the built-in H2 functions are not enough for your needs, you can include your own custom Java functions in the SQL Query and use them immediately.

NOTE: see "Known Problems and Workarounds" section, #7.

6.2.1. Split String

Create custom Java function
SQL Query
CREATE ALIAS SPLIT_STRING AS $$
@CODE
    String[] splitString(String str, String delimiter) {
        return str.split(delimiter);
    }$$
Define a dummy string
SQL Query
SET @comma_separated = 'comma, separated'
Use the function to split the string
SQL Query
SELECT
    @comma_separated AS ORIGINAL,
    SPLIT_STRING(@comma_separated, ',\s{0,}') AS SPLIT
ORIGINAL         | SPLIT
comma, separated | [comma, separated]

6.2.2. Strip HTML Codes from String

Here is how you can strip the unseen HTML codes from a cell value. This is useful when, in an SQL WHERE clause, you want to compare against what you actually see (and hidden HTML codes might interfere).

Original Table
SELECT * FROM table_1
A | B | C
1 | 2 | 3
4 | 5 | 6
You can see that the query below brings nothing, even though you can see a row with A=4 in the table. This is because of HTML code that may get into the cell value:
SELECT * FROM table_1 WHERE A='4'
A | B | C
(no rows returned)
So, let's sort it out using custom Java function
CREATE ALIAS STRIP_HTML AS $$
    @CODE
    String stripHtml(String str) {
        String res = null;
        if(str != null) {
            res = str.replaceAll("\\<.*?\\>", "");
        }
        return res;
    }$$;
Now use the function in the query. See everything is alright now:
SELECT * FROM table_1 WHERE STRIP_HTML("A")='4'
A | B | C
4 | 5 | 6


6.2.3. Converting Integer to Binary String

In the example below we create an auxiliary Java function, named BIN, that will help us convert integer values to binary strings (usually there is no such direct function in SQL).

Certainly, this is possible using SQL alone; however, in our case it is probably much easier to use Java ad hoc:

First we create an ad hoc Java function named BIN:
CREATE ALIAS BIN AS $$
@CODE
String toBinary(int dec) throws Exception {
    return Integer.toBinaryString(dec);
}$$;
Now we use the function in the query:
SELECT
    CONCAT(A, ' (', BIN(A), ')') AS A,
    CONCAT(B, ' (', BIN(B), ')') AS B,
    CONCAT(BITAND(A,B), ' (', BIN(BITAND(A,B)), ')') AS "A and B",
    CONCAT(BITOR(A,B), ' (', BIN(BITOR(A,B)), ')') AS "A or B",
    CONCAT(BITXOR(A,B), ' (', BIN(BITXOR(A,B)), ')') AS "A xor B"
FROM table_1
A         | B         | A and B   | A or B    | A xor B
1 (1)     | 2 (10)    | 0 (0)     | 3 (11)    | 3 (11)
4 (100)   | 4 (100)   | 4 (100)   | 4 (100)   | 0 (0)
7 (111)   | 8 (1000)  | 0 (0)     | 15 (1111) | 15 (1111)
10 (1010) | 11 (1011) | 10 (1010) | 11 (1011) | 1 (1)
Now, execute the same query, but let's find where "A and B" is 1010
SELECT * FROM (
    SELECT CONCAT(A, ' (', BIN(A), ')') AS A,
        CONCAT(B, ' (', BIN(B), ')') AS B,
        CONCAT(BITAND(A,B), ' (', BIN(BITAND(A,B)), ')') AS "A and B",
        CONCAT(BITOR(A,B), ' (', BIN(BITOR(A,B)), ')') AS "A or B",
        CONCAT(BITXOR(A,B), ' (', BIN(BITXOR(A,B)), ')') AS "A xor B"
    FROM table_1
)
WHERE "A and B" LIKE '%(1010)%'
A         | B         | A and B   | A or B    | A xor B
10 (1010) | 11 (1011) | 10 (1010) | 11 (1011) | 1 (1)

Note: since Java code uses ";" as statement terminators, we have to use a different Query Splitter; for example, "::" will do.
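
For instance, with the Query Splitter set to "::", the alias definition and a query that uses it could be submitted together as a single SQL Query value, along the lines of the following sketch:

CREATE ALIAS BIN AS $$
@CODE
String toBinary(int dec) throws Exception {
    return Integer.toBinaryString(dec);
}$$;
::
SELECT BIN(10) AS "10 in binary"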

6.2.4. Download a File By URL

There is the H2 built-in function FILE_READ; however, it does not seem to work well with remote files. Let's create our own function that will bring in external files. Here is an example:

Customization Info
{
  "showCodeBlockTitle": "true",
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "\tCREATE ALIAS GETFILE AS $$\n\timport java.io.*;\n\timport java.net.URL;\n\t@CODE\n\tString getFile(String requestURL) {\n\t\tString res \u003d \"\";\n        try {\n            URL url \u003d new URL(requestURL);\n            InputStream is \u003d url.openStream();\n            BufferedReader br \u003d new BufferedReader(new InputStreamReader(is));\n             \n            String line;\n            while ( (line \u003d br.readLine()) !\u003d null)\n                res +\u003d line + \"\\n\";\n             \n            br.close();\n            is.close();\n             \n        } catch (Exception e) {\n            e.printStackTrace();\n        }\n\t\treturn res;\n    }$$;\n::\nSELECT GETFILE(\u0027http://www.robotstxt.org/robots.txt\u0027) AS FILE_DATA",
  "wrapQueryWithCodeBlockMacro": "true",
  "resultFormat": "ResultSet JSON",
  "pathToJsonElement": "|resultSet[1][1]",
  "querySplitter": "::"
}
SQL Query
CREATE ALIAS GETFILE AS $$
    import java.io.*;
    import java.net.URL;
    @CODE
    String getFile(String requestURL) {
        String res = "";
        try {
            URL url = new URL(requestURL);
            InputStream is = url.openStream();
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
              
            String line;
            while ( (line = br.readLine()) != null)
                res += line + "\n";
              
            br.close();
            is.close();
              
        } catch (Exception e) {
            e.printStackTrace();
        }
        return res;
    }$$;
SQL Query
SELECT GETFILE('http://www.robotstxt.org/robots.txt') AS FILE_DATA
ResultSet JSON, element: resultSet[1][1]
User-agent: *
Disallow:

6.3. Fetching and Working with Remote Tables

Analyzing Tables in Files, Downloaded by URL

6.3.1. Approach 1

You can dynamically fetch one or more tables from a URL or a list of URLs. To do so just put one or more URLs into the macro container (one per line) and set the "Input Body Type" to "List of URLs". Each of the URLs can have content in one of the supported formats: CSV, HTML or JIRA Table. In the example below we have put the following 2 URLs into the container:

  1. http://insight.dev.schoolwires.com/HelpAssets/C2Assets/C2Files/C2ImportGroupsSample.csv (this is pure CSV)
  2. https://www.w3schools.com/html/html_tables.asp (this page contains HTML tables, so we additionally indicated which table to select, using the Tables Selector parameter set to #customers)

and immediately output the 2 remote tables as Confluence tables on this page:

Customization Info
{
  "tablesSelectorCssQuery": "#customers",
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "SHOW TABLES;\nSELECT * FROM table_1;\nSELECT * FROM table_2",
  "wrapQueryWithCodeBlockMacro": "true",
  "inputBodyType": "List of URLs",
  "querySplitter": ";"
}
SHOW TABLES
TABLE_NAME | TABLE_SCHEMA
TABLE_1    | PUBLIC
TABLE_2    | PUBLIC
SELECT * FROM table_1
GROUPNAME                     | GROUPCODE     | GROUPOWNER | GROUPCATEGORYID
System Administrators         | sysadmin      | 13456      | 100
Independence High Teachers    | HS Teachers   |            | 101
John Glenn Middle Teachers    | MS Teachers   | 13458      | 102
Liberty Elementary Teachers   | Elem Teachers | 13559      | 103
1st Grade Teachers            | 1stgrade      |            | 104
2nd Grade Teachers            | 2nsgrade      | 13561      | 105
3rd Grade Teachers            | 3rdgrade      | 13562      | 106
Guidance Department           | guidance      |            | 107
Independence Math Teachers    | HS Math       | 13660      | 108
Independence English Teachers | HS English    | 13661      | 109
John Glenn 8th Grade Teachers | 8thgrade      |            | 110
John Glenn 7th Grade Teachers | 7thgrade      | 13452      | 111
Elementary Parents            | Elem Parents  |            | 112
Middle School Parents         | MS Parents    | 18001      | 113
High School Parents           | HS Parents    | 18002      | 114
SELECT * FROM table_2
COMPANY                      | CONTACT          | COUNTRY
Alfreds Futterkiste          | Maria Anders     | Germany
Centro comercial Moctezuma   | Francisco Chang  | Mexico
Ernst Handel                 | Roland Mendel    | Austria
Island Trading               | Helen Bennett    | UK
Laughing Bacchus Winecellars | Yoshi Tannamuri  | Canada
Magazzini Alimentari Riuniti | Giovanni Rovelli | Italy

6.3.2. Approach 2

Let's assume you want to download a remote HTML with HTML tables and analyze those with the macro.

Or, you would like to get 2 or more versions of a Confluence document and analyze table differences.

Or, you might want to get 2 different documents from Confluence.

Or, analyze outer document[s] against Confluence document[s].

(note, that in case of Confluence documents you will need to provide user and password credentials in the URL)

If you need only the latest versions of the Confluence document, you may use the Page Include macro as the inner macro instead of the inner SQL-Powered Table Transformer macro.

For all the above tasks, you will need to use SQL-Powered Table Transformer macro as described in the Approach 1 above, OR in a nested way, described below:

  • the inner macro[s] will be responsible for bringing the remote HTML data with tables (by URL) to the outer macro.
  • the outer macro will select the needed tables and apply the required SQL analysis to those.

Here is a recipe:

  1. Create the outer SQL-Powered Table Transformer macro container.
  2. Put a nested SQL-Powered Table Transformer macro container into the outer one (you may put several of those).
  3. Configure the nested macro container[s] to bring the data as in example "Download a File by URL" (use the download URL you need, "Bring Results As" set to "ResultSet JSON", "Path(s) to Element" set to "|resultSet[1][1]"). Verify that the inner macro brings the required data.
  4. Configure the outer macro: set Tables Selector input parameter so that the outer macro could find and select the needed table[s] in the code, brought by the inner macro[s]. If everything is OK, then the default SQL query of the outer macro will bring you the table as table_1.
  5. Finally, specify the desired SQL Query in the outer macro.

In the example below, we downloaded the content of https://www.w3schools.com/html/html_tables.asp using an inner SQL-Powered Table Transformer macro, then the outer SQL-Powered Table Transformer macro selects a table in the content (using Tables Selector set to "#customers") and presents the selected table. The below shows only the outer macro Customization Info and results (settings and the results of the inner macro are hidden, however, those are similar to those in previous example).

Customization Info
{
  "showCommentAsTitle": "true",
  "tablesSelectorCssQuery": "#customers",
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "/*Shows the tables names that got processed from the inner macro content:*/\nSHOW TABLES;\n\n/*So, 1 table has been selected and processed and it\u0027s name is TABLE_1. Let\u0027s show it:*/\nSELECT * FROM TABLE_1",
  "wrapQueryWithCodeBlockMacro": "true",
  "showCodeBlockLineNumbers": "true",
  "stripCommentsFromQuery": "true",
  "querySplitter": ";"
}
Shows the table names that got processed from the inner macro content:
SHOW TABLES
TABLE_NAME | TABLE_SCHEMA
TABLE_1    | PUBLIC
So, 1 table has been selected and processed and its name is TABLE_1. Let's show it:
SELECT * FROM TABLE_1
COMPANY                      | CONTACT          | COUNTRY
Alfreds Futterkiste          | Maria Anders     | Germany
Centro comercial Moctezuma   | Francisco Chang  | Mexico
Ernst Handel                 | Roland Mendel    | Austria
Island Trading               | Helen Bennett    | UK
Laughing Bacchus Winecellars | Yoshi Tannamuri  | Canada
Magazzini Alimentari Riuniti | Giovanni Rovelli | Italy

6.4. Creating a Temporary Table from your Request

You can create a temporary table from an arbitrary SQL Query or with your own data. The table can be immediately used for further manipulations:

Creating my_temp_table from an arbitrary query
CREATE TABLE my_temp_table AS SELECT A,C FROM table_1 WHERE A=1
Now my_temp_table is available for further queries, let's show the whole table
SELECT * FROM my_temp_table
A | C
1 | 2
Let's insert something into the temp table
INSERT INTO my_temp_table VALUES ('my_1', 'my_2')
Let's select only C from the temp table
SELECT C FROM my_temp_table
C
2
my_2

6.5. Creating a Table from CSV

6.5.1. From Direct CSV Placed inside the Macro Container

Set Input Body Type parameter to "Direct CSV" and put some CSV data inside the macro container, for example:

GroupName,Groupcode,GroupOwner,GroupCategoryID
System Administrators,sysadmin,13456,100
Independence High Teachers,HS Teachers,,101

Your DB table will look as follows:

Customization Info
{
  "showQuery": "true",
  "showCustomization": "true",
  "wrapQueryWithCodeBlockMacro": "true",
  "inputBodyType": "Direct CSV"
}
SELECT * FROM table_1
GROUPNAME                  | GROUPCODE   | GROUPOWNER | GROUPCATEGORYID
System Administrators      | sysadmin    | 13456      | 100
Independence High Teachers | HS Teachers |            | 101

6.5.2. From a Remote or Local CSV

You can use a remote CSV file to build up a table for you.

Creates a temp table using a remote CSV file
CREATE TABLE temp AS
    SELECT * FROM CSVREAD('<URL or path to your CSV file>')
Now show the temp table (with some sorting applied)
SELECT * FROM temp ORDER BY "Menu Name"
Section Name   | Menu Name      | Channel ID | Editor (User Name)
Jason Jones    | Jason Jones    | 3711       | vcraig
Jessica Rogers | Jessica Rogers | 3711       | vcraig
Joe Smith      | Joe Smith      | 3711       | kcraig
Mathematics    | Mathematics    | 3710       |
Music          | Music          | 3710       |
Rick Jones     | Rick Jones     | 3711       | scraig
Social Studies | Social Studies | 3710       |
Visual Arts    | Visual Arts    | 3710       |
Zachery Smith  | Zachery Smith  | 3711       | scraig

6.5.3. From a Confluence or Jira Attachment CSV

6.5.3.1. Confluence (using 'space:page^attachment#version' notation)

Note: enable "Resolve Wiki Paths in Query" (from v1.0.25 "Resolve Wiki Paths in Input URLs") input parameter

6.5.3.1.1. Using H2 CSVREAD Function
Customization Info
{
  "showCommentAsTitle": "true",
  "resolveWikiPathsInQuery": "true",
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "/*Get the latest version of example.csv attached to current page*/\nCREATE TABLE temp_1 AS\n    SELECT * FROM CSVREAD(\u0027:^example.csv#0\u0027);\n\nSELECT * FROM temp_1;\n\n/*Get the 1st version of example.csv attached to current page*/\nCREATE TABLE temp_2 AS\n    SELECT * FROM CSVREAD(\u0027:^example.csv#1\u0027);\n\nSELECT * FROM temp_2;\n\n/*Get the difference between the latest version and the version 1*/\nSELECT * FROM temp_1 MINUS SELECT * FROM temp_2",
  "wrapQueryWithCodeBlockMacro": "true",
  "resolveWikiPathsInURLs": "true",
  "stripCommentsFromQuery": "true",
  "querySplitter": ";"
}
Get the latest version of example.csv attached to current page
CREATE TABLE temp_1 AS
    SELECT * FROM CSVREAD(':^example.csv#0')
SELECT * FROM temp_1
YEAR | MAKE  | MODEL                      | NOTES                             | PRICE
1997 | Ford  | E350                       | ac, abs, moon                     | 3000.00
2000 | Ford  | E150                       | maroon                            | 2000.00
1999 | Chevy | Venture "Extended Edition" |                                   | 4900.00
1996 | Jeep  | Grand Cherokee             | MUST SELL! air, moon roof, loaded | 4799.00
Get the 1st version of example.csv attached to current page
CREATE TABLE temp_2 AS
    SELECT * FROM CSVREAD(':^example.csv#1')
SELECT * FROM temp_2
YEAR | MAKE  | MODEL                      | NOTES                             | PRICE
1997 | Ford  | E350                       | ac, abs, moon                     | 3000.00
1999 | Chevy | Venture "Extended Edition" |                                   | 4900.00
1996 | Jeep  | Grand Cherokee             | MUST SELL! air, moon roof, loaded | 4799.00
Get the difference between the latest version and the version 1
SELECT * FROM temp_1 MINUS SELECT * FROM temp_2
YEAR | MAKE | MODEL | NOTES  | PRICE
2000 | Ford | E150  | maroon | 2000.00

6.5.3.1.2. Using "List of URLs" Option of "Input Body Type" Input Parameter

Starting from v1.0.25 you can also achieve the same result using the "List of URLs" option and adding the list of the needed CSV sources directly into the macro container. In the example below we added the following URLs into the container:

':^example.csv#0' - to get the latest version of the example.csv, attached to the current page

':^example.csv#1' - to get version 1 of the example.csv, attached to the current page

Then we compare those 2 tables as in the example above.

Customization Info
{
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "SELECT * FROM table_1;\nSELECT * FROM table_2;\nSELECT * FROM table_1 MINUS SELECT * FROM table_2",
  "wrapQueryWithCodeBlockMacro": "true",
  "resolveWikiPathsInURLs": "true",
  "inputBodyType": "List of URLs",
  "querySplitter": ";"
}
SELECT * FROM table_1
YEAR | MAKE  | MODEL                      | NOTES                             | PRICE
1997 | Ford  | E350                       | ac, abs, moon                     | 3000.00
2000 | Ford  | E150                       | maroon                            | 2000.00
1999 | Chevy | Venture "Extended Edition" |                                   | 4900.00
1996 | Jeep  | Grand Cherokee             | MUST SELL! air, moon roof, loaded | 4799.00
SELECT * FROM table_2
YEAR | MAKE  | MODEL                      | NOTES                             | PRICE
1997 | Ford  | E350                       | ac, abs, moon                     | 3000.00
1999 | Chevy | Venture "Extended Edition" |                                   | 4900.00
1996 | Jeep  | Grand Cherokee             | MUST SELL! air, moon roof, loaded | 4799.00
SELECT * FROM table_1 MINUS SELECT * FROM table_2
YEAR | MAKE | MODEL | NOTES  | PRICE
2000 | Ford | E150  | maroon | 2000.00

6.5.3.2. Workarounds

Other ways that may work better for you:

6.5.3.2.1. Workaround 1

Use the same approach but provide the URL to the CSV in the following format:

Note that you will have to put credentials into the URL. If the macro and the attachment you want to use are on different instances of the Confluence (or JIRA) application, this may be a security risk, since the credentials will be passed in plain text within the URL.

6.5.3.2.1.1. Confluence
https://<confl.domain>/download/attachments/<attachment_id>/<file_name.csv>?os_username=<confl_user>&os_password=<confl_password>
6.5.3.2.1.2. JIRA

Use similar approach for a JIRA attachment:

https://<jira.domain>/secure/attachment/<attachment_id>/<file_name.csv>?os_username=<jira_user>&os_password=<jira_password>
6.5.3.2.2. Workaround 2

Use the same approach, but provide the file system path to the needed attachment.

6.5.3.2.2.1. Confluence

For example, for Confluence, if the configured "Attachment storage" is "Filesystem", the path should be something like:

/spool/confluence-data/attachments/ver003/103/182/88932353/151/218/132218901/133268016/1

where:

  • /spool/confluence-data/attachments - Confluence Attachments storage path

  • /ver003/103/182/88932353/151/218/132218901/133268016/1 - the path, constructed as follows:

  • Level 1 (top): always 'ver003', indicating the Confluence version 3 storage format. Value from the example: ver003
  • Level 2: the least significant 3 digits of the space id, modulo 250. Value from the example: 103
  • Level 3: the next 3 least significant digits of the space id, modulo 250. Value from the example: 182
  • Level 4: the full space id. Value from the example: 88932353
  • Level 5: the least significant 3 digits of the content id of the page the file is attached to, modulo 250. Value from the example: 151
  • Level 6: the next 3 least significant digits of the content id of the page the file is attached to, modulo 250. Value from the example: 218
  • Level 7: the full content id of the page the file is attached to. Value from the example: 132218901
  • Level 8: the full content id of the attached file. Value from the example: 133268016
  • Level 9: the files themselves, named with the version number of the file, e.g. 1, 2, 6. Value from the example: 1, which means version 1 of your target attachment
For more details on the storage path structure visit: https://confluence.atlassian.com/doc/hierarchical-file-system-attachment-storage-704578486.html

6.5.3.3. JIRA

If the attachment you need is a local JIRA attachment, then use something like:

/opt/atlassian/jira_home/data/attachments/PRG/10000/PRG-11/10100

that is: <JIRA_HOME>/data/attachments/<PROJECT>/<BUCKET>/<ISSUE_ID>/<ATTACHMENT_ID>

For more details see: https://confluence.atlassian.com/jirakb/finding-jira-attachments-on-file-system-859487788.html

6.6. Creating a Table from JSON-formatted Text

6.6.1. From Direct JSON-formatted Text Placed inside the Macro Container

Set Input Body Type parameter to "Direct JSON" and put some JSON-formatted text inside the macro container, for example:

[
{"type":"Office","number":"909-555-7307"},
{"type":"Mobile","number":"415-555-1234"}
]

Your table will look as follows:

Customization Info
{
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "SELECT * FROM table_1",
  "wrapQueryWithCodeBlockMacro": "true",
  "inputBodyType": "Direct JSON"
}
SELECT * FROM table_1
TYPE   | NUMBER
Office | 909-555-7307
Mobile | 415-555-1234

6.6.2. Using "List of URLs" Option of "Input Body Type" Input Parameter

You can load JSON-formatted tables from a URL using the "List of URLs" option and adding the list of the needed JSON-formatted sources directly into the macro container.

In the example below we added the following URL into the container:

':^example.json'

The file content is:

[
    {
      "name": "Molecule Man",
      "age": 29,
      "secretIdentity": "Dan Jukes",
      "powers": [
        "Radiation resistance",
        "Turning tiny",
        "Radiation blast"
      ]
    },
    {
      "name": "Madame Uppercut",
      "age": 39,
      "secretIdentity": "Jane Wilson",
      "powers": [
        "Million tonne punch",
        "Damage resistance",
        "Superhuman reflexes"
      ]
    },
    {
      "name": "Eternal Flame",
      "age": 1000000,
      "secretIdentity": "Unknown",
      "powers": [
        "Immortality",
        "Heat Immunity",
        "Inferno",
        "Teleportation",
        "Interdimensional travel"
      ]
    }
]


Here is the result you'll get

Customization Info
{
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "SELECT * FROM table_1",
  "wrapQueryWithCodeBlockMacro": "true",
  "resolveWikiPathsInURLs": "true",
  "inputBodyType": "List of URLs"
}
SELECT * FROM table_1
NAME            | AGE     | SECRETIDENTITY | POWERS
Molecule Man    | 29      | Dan Jukes      | ["Radiation resistance","Turning tiny","Radiation blast"]
Madame Uppercut | 39      | Jane Wilson    | ["Million tonne punch","Damage resistance","Superhuman reflexes"]
Eternal Flame   | 1000000 | Unknown        | ["Immortality","Heat Immunity","Inferno","Teleportation","Interdimensional travel"]

6.7. Creating a Table from JIRA-formatted Text

6.7.1. From Direct JIRA-formatted Text Placed inside the Macro Container

Set Input Body Type parameter to "Direct JIRA Table" and put some JIRA-table-formatted text inside the macro container, for example:

||A||B||C||

|1|2|3|

|4|5|6|

Your table will look as follows:

Customization Info
{
  "showQuery": "true",
  "showCustomization": "true",
  "wrapQueryWithCodeBlockMacro": "true",
  "inputBodyType": "Direct JIRA Table"
}
SELECT * FROM table_1
A | B | C
1 | 2 | 3
4 | 5 | 6

6.7.2. Using "List of URLs" Option of "Input Body Type" Input Parameter

You can load JIRA-formatted tables from a URL using the "List of URLs" option and adding the list of the needed JIRA-formatted sources directly into the macro container.

In the example below we added the following URLs into the container:

':^example_jira.txt#0' - to get the latest version of the example_jira.txt, attached to the current page

':^example_jira.txt#1' - to get version 1 of the example_jira.txt, attached to the current page

Then we compare those 2 tables.

Customization Info
{
  "showQuery": "true",
  "showCustomization": "true",
  "sqlQuery": "SELECT * FROM table_1;\nSELECT * FROM table_2;\nSELECT * FROM table_1 MINUS SELECT * FROM table_2",
  "wrapQueryWithCodeBlockMacro": "true",
  "resolveWikiPathsInURLs": "true",
  "inputBodyType": "List of URLs",
  "querySplitter": ";"
}
SELECT * FROM table_1
A | B | C
1 | 2 | 3
4 | 5 | 6
7 | 8 | 9
SELECT * FROM table_2
A | B | C
1 | 2 | 3
4 | 5 | 6
SELECT * FROM table_1 MINUS SELECT * FROM table_2
A | B | C
7 | 8 | 9

6.8. Converting a Table to CSV Format

Conversion of Confluence Table into CSV format is a special case of the common ability to convert the macro's SQL execution results into CSV. Here is the recipe:

  • Add the SQL-Powered Table Transformer macro container to your page
  • Put your table into the container (it can, of course, also be a table brought by the "JIRA Issues/Filter" macro or an excerpt table included from another page)
  • Leave the SQL Query default (SELECT * FROM table_1) (of course, you can also put several tables into the container and/or use a more complex SQL Query)
  • Set "Bring Results As" to "Table CSV"
  • Optionally, set other parameters as you wish (for example, you may want to enable/disable "Wrap Result With Code-Block", "Strip HTML Tags" or other checkboxes)
  • Save the macro and the page
  • Now, when your page loads, it will display your table as CSV text that can be copied, for example, to a file and used further with MS Excel, LibreOffice, OpenOffice, etc.

So, if your table is:

Col 1 | Col 2 | Col 3
val 1 | val 2 | val 3
val 4 | val 5 | val 6

Then the result you will get is:

Col 1,Col 2,Col 3
val 1,val 2,val 3
val 4,val 5,val 6

6.9. Getting a Specific Value from a Table

You may get just a specific cell value as a result of the macro execution.

Let's say you have a table inside the SQL-Powered Table Transformer macro container:

Col 1 | Col 2 | Col 3
val 1 | val 2 | val 3
val 4 | val 5 | val 6

To get the pure value from row 1 and column 3:

  • Leave the default SQL Query
  • Set the "Bring Results As" input parameter to "Resultset JSON"
  • Set the "Path(s) to Element" input parameter to "resultSet[1][3]"
  • Disable the "Preformat Result" checkbox (since we want the pure value)
  • Optionally set other parameters as you wish
  • Save the macro and the page
  • Now, when your page loads, the macro will display just the one value as follows:
val 3

To get the pure value of column 3 name:

  • follow the same steps as above, but instead, put "metaData[3]" into "Path(s) to Element" input parameter. You'll get:
Col 3

6.10. Returning Your Table or Results in SQL Format

In addition to the other supported formats (e.g. Table, CSV, JSON, etc.) you can always get your tables or results converted to SQL.

In the examples below:

  • set the "Bring Results As" parameter to "Table CSV" and also adjust the CSV parameters a little:
  • set "CSV Separator" to "&#9;" (tabulator)
  • set "CSV Quote Character" to "&#8203;" (zero-length space)
  • set "CSV Escape Character" to "&#8203;" (zero-length space)
  • optionally enable "Wrap Result with Code-Block", "Show Code-Block Title", "Show Code-Block Line Numbers"

6.10.1. Dumping Your Original Tables as SQL Script

Original Table
SQL Query
SELECT * FROM table_1
Table CSV
1   2   2
4   4   6
7   8   9
10  11  12
Get the SQL Script
SQL Query
SCRIPT TABLE table_1
Table CSV
CREATE USER IF NOT EXISTS "" SALT '' HASH '' ADMIN;
​CREATE MEMORY TABLE "PUBLIC"."TABLE_1"(
    "A" VARCHAR,
    "B" VARCHAR,
    "C" VARCHAR
);​
-- 4 +/- SELECT COUNT(*) FROM PUBLIC.TABLE_1;
​INSERT INTO "PUBLIC"."TABLE_1" VALUES
('1', '2', '2'),
('4', '4', '6'),
('7', '8', '9'),
('10', '11', '12');​

6.10.2. Dumping Your Results as SQL Script

Original Table
SQL Query
SELECT * FROM table_1
Table CSV
1   2   2
4   4   6
7   8   9
10  11  12
Create some result and put to a temp table
SQL Query
CREATE TABLE my_temp AS
  SELECT * FROM table_1 WHERE A!=B
Get the SQL Script
SQL Query
SCRIPT TABLE my_temp
Table CSV
CREATE USER IF NOT EXISTS "" SALT '' HASH '' ADMIN;
​CREATE MEMORY TABLE "PUBLIC"."MY_TEMP"(
    "A" VARCHAR,
    "B" VARCHAR,
    "C" VARCHAR
);​
-- 3 +/- SELECT COUNT(*) FROM PUBLIC.MY_TEMP;
​INSERT INTO "PUBLIC"."MY_TEMP" VALUES
('1', '2', '2'),
('7', '8', '9'),
('10', '11', '12');​


6.11. Show Content of an Attached File on the Page

  1. Add the macro container to your desired page. Then you will need to configure it a bit:
  2. While editing the page, click the macro container, then click Edit. In the "Input" tab of the macro container configuration enable the "Resolve Wiki Paths in Input URLs" checkbox.
  3. Click the "SQL" tab. Replace the default SQL with one of the following (the options will give slightly different results):

    Option A:
    SELECT FILE_READ(':^attached_file_name.ext#0', NULL) "Content";
    (Note: for further details, please refer to the H2 FILE_READ function)

    Option B:
    SELECT * FROM CSVREAD(':^attached_file_name.ext#0');
    (Note: for further details, please refer to the H2 CSVREAD function)

    where:

    • attached_file_name.ext - the name of your attached file (the assumption here is that it is attached to the same page, but it can also be brought from the attachments of any space/page)
    • #0 - instructs the macro to read the current (latest) version of the file (you can instruct it to read a specific version; for example, to read version 42 of the attachment, change #0 to #42)
    • NULL - specifies the character set to be used while reading the file; NULL means the default character set for this system
  4. Optionally: click the "Output" tab and set "Bring Results As" selector to "Table CSV". You can play with other available settings as well.

  5. Try it (click Refresh) in the Preview.

  6. If satisfied, save the macro configuration and the page.

  7. The macro should bring you the content of the attachment file on your page.

Example output:


Shows the content of the latest version (0) of the example.csv file, attached to the current page
example.csv
Year,Make,Model,Notes,Price
1997,Ford,E350,"ac, abs, moon",3000.00
2000,Ford,E150,"maroon",2000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1996,Jeep,Grand Cherokee,"MUST SELL! air, moon roof, loaded",4799.00


6.12. Linking One or More Tables from Databases

You can link one or more tables from other databases and process them with SQL within the macro container as if they were all in the same temporary database.

You can link tables from various databases together at the same time, for example tables from an Oracle DB together with tables from Postgres or another DB.

NOTE: While linking tables from a DB you can still configure the macro to include other tables as well, such as:

  • you can put one or more Confluence tables into the macro body. These tables will still be accessible (along with the linked ones) from your SQL Query via the names TABLE_1, TABLE_2, ... (see the sketch after these notes)
  • you can put CSV or JIRA-formatted tables into the macro body. The tables will be accessible under TABLE_1, TABLE_2, ...
  • you can configure the macro to bring additional tables from URL or attachments.
  • etc.

NOTE: some options, such as Pre-rotation, will not work for the linked tables.
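
For instance, once a table is linked (see the examples below), it can be combined with a Confluence table from the container in a single query. A sketch, assuming the linked table was created under the name LINK and joining on a hypothetical column SOME_COLUMN of the linked table:

SELECT t.*, l.*
FROM TABLE_1 t
JOIN LINK l ON t.A = l.SOME_COLUMN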

6.12.1. Linking a Table from a Postgres DB

NOTE: make sure your Confluence server has the PostgreSQL JDBC driver installed.

  • Add the macro container to the page.
  • Put ; into "Query Splitter" field.
  • Put the following into the "SQL Query" field:
CREATE LINKED TABLE
    LINK(
        'org.postgresql.Driver',
        'jdbc:postgresql://<db02.mydb-host.com:5432>/<mydb_name>',
        '<mydb_user_name>',
        '<mydb_password>',
        '<mydb_table_name>'
    ) READONLY;
 
SELECT * FROM LINK LIMIT 20;
  • Replace the parameters within <> with your desired connection properties.
  • Save the macro configuration.
  • Now the macro will bring you some data of the linked table (in our example we limited it to 20 rows).

6.12.2. Linking a Table from an Oracle DB

NOTE: make sure your Confluence server has the Oracle JDBC driver installed.

  • Add the macro container to the page.
  • Put ; into "Query Splitter" field.
  • Put the following into the "SQL Query" field:
CREATE LINKED TABLE
    LINK(
        'oracle.jdbc.OracleDriver',
        'jdbc:oracle:thin:@<db01.mydb-host.com:1522>/<mydb_name>',
        '<mydb_user_name>',
        '<mydb_password>',
        '<mydb_table_name>'
    ) READONLY;
 
SELECT * FROM LINK LIMIT 20;
  • Replace the parameters within <> with your desired connection properties.
  • Save the macro configuration.
  • Now the macro will bring you some data of the linked table (in our example we limited it to 20 rows).

6.12.3. Linking Tables from Other DBMS

Linking tables from other databases (such as MySQL, H2, etc.) can be achieved in a similar way (see the examples above and the sketch below).
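
For example, a MySQL table could presumably be linked following the same pattern (a sketch only; make sure the MySQL JDBC driver is installed on your Confluence server, and replace the placeholders in <> with your connection properties):

CREATE LINKED TABLE
    LINK(
        'com.mysql.cj.jdbc.Driver',
        'jdbc:mysql://<db03.mydb-host.com:3306>/<mydb_name>',
        '<mydb_user_name>',
        '<mydb_password>',
        '<mydb_table_name>'
    ) READONLY;
 
SELECT * FROM LINK LIMIT 20;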

7. Some Technological Examples

In case you want to know more about what is behind the scenes.

7.1. Schemas and Tables

Note: make sure 'Force All Tables Processing' Input Parameter is enabled

Shows the schemas available at runtime
SQL Query
SHOW SCHEMAS
SCHEMA_NAME
INFORMATION_SCHEMA
PUBLIC
Shows tables within your macro container
SQL Query
SHOW TABLES
TABLE_NAME | TABLE_SCHEMA
TABLE_1    | PUBLIC
TABLE_2    | PUBLIC
Shows some schema details on table_1
SQL Query
SELECT
  TABLE_SCHEMA,
  TABLE_NAME,
  COLUMN_NAME,
  IS_NULLABLE,
  TYPE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME='TABLE_1'
TABLE_SCHEMA | TABLE_NAME | COLUMN_NAME | IS_NULLABLE | TYPE_NAME
PUBLIC       | TABLE_1    | A           | YES         | VARCHAR
PUBLIC       | TABLE_1    | B           | YES         | VARCHAR
PUBLIC       | TABLE_1    | C           | YES         | VARCHAR

7.2. Query Plan

A query plan explained
SQL Query
EXPLAIN SELECT table_1.A, E FROM table_1, table_2
Table CSV
SELECT
    "TABLE_1"."A",
    "E"
FROM "PUBLIC"."TABLE_1"
    /* PUBLIC.TABLE_1.tableScan */
INNER JOIN "PUBLIC"."TABLE_2"
    /* PUBLIC.TABLE_2.tableScan */
    ON 1=1

8. Known Problems and Workarounds

Problem 1: Selection of rows using ... WHERE col='val' brings no rows.

Explanation: SELECT * FROM table_X WHERE col='SOME VALUE' returns no rows while a row with "SOME VALUE" does exist.

Reason: Confluence table cells that you added into the macro container may contain hidden HTML codes, spaces, etc., while WHERE col='SOME VALUE' compares against everything between the <td>...</td> tags of a cell. This may result in behavior that looks unexpected at a glance. You should account for those hidden codes in your SQL or make sure to remove the hidden elements from the source table.

Workaround: Check whether there are (hidden) HTML codes or spaces around the "SOME VALUE" within the cell of your source for table_X, such as <span>SOME VALUE</span> or <br>. If there are, use one of the workarounds below:

  1. remove those hidden HTML codes, spaces, etc. from the source table cell[s] (by editing the source), OR
  2. in your query, use WHERE col LIKE '%SOME VALUE%' instead of WHERE col='SOME VALUE', OR
  3. in your query, in the WHERE condition, use the value with the exact HTML codes as they appear in the source table_X cell (that is, everything between the <td>...</td> tags of the cell[s]).

Note that <span>...</span> codes may be unexpectedly added by Confluence if you use copy/paste. So, if you copy a value (without the codes) from one cell and then paste it into another cell, you might be surprised to find the codes added. Avoid using copy/paste in such cases.

Problem 2: Selection of rows using ... WHERE col='' (or LIKE '') brings no rows.

Explanation: SELECT * FROM table_X WHERE col='' (or col LIKE '') returns no rows while a row with an empty value does exist.

Reason: This is similar to #1. In this case Confluence has probably put &nbsp; into the cells that look empty.

Workaround:
  1. Use the "Treat Empty Cells as nulls" input parameter (see the Input Parameters section), OR
  2. Use Workaround 3 for Problem #1.
Problem 3: A SHOW TABLES or SHOW SCHEMAS query does not show results.

Explanation: This may happen because, by default, tables whose names are not directly present in the SQL Query (that is, there are no table_1, table_2, etc. within the query) are not processed and not added to the temporary DB.

Reason: Optimization.

Workaround: Enable the "Force All Tables Processing" Input Parameter.

Alternatively, keep "Force All Tables Processing" disabled, but list the table names you want processed explicitly as a comment (in a free form) within the SQL Query, such as:

SHOW TABLES /*table_1 table_2*/

so that the macro sees the needed names within the query and processes the corresponding tables.

Problem 4: The macro seems to work and displays result tables; however, instead of cell values it shows <demo mode>.

Explanation: The macro operates in demo mode.

Reason: Absence or expiration of a license.

Workaround: To enable full functionality, provide a valid license for the macro on the "Manage add-ons" Confluence configuration page.
Problem 5: The "Insert JIRA/Issue Filter" macro is used as a source of data for the "SQL-Powered Table Transformer" macro and brings a table cell (or cells) with embedded tables. It is not clear what table names should be used in the SQL.

Explanation: "SQL-Powered Table Transformer" selects the embedded tables from the cells brought by the JIRA/Issue Filter macro.

Reason: The embedded tables are Confluence tables that, by default, get selected for processing too.

Workaround: To exclude the undesired tables from the selection, modify the "Tables Selector" input parameter from:

div.table-wrap > table, div[id~=jira-issues-] > table, table.attachments

to: div[id~=jira-issues-] > table, table.attachments

And/or: to see the actual table names that were selected, use a SHOW TABLES query (along with "Force All Tables Processing" enabled).

Problem 6: When using JSON as input data, such as with "Input Body Type" set to "Direct JSON", or when JSON is downloaded remotely by URL, such as with "Input Body Type" set to "List of URLs", and converted to the in-memory DB, the data types of the original properties get lost and always become VARCHAR in the DB.

Explanation: "SQL-Powered Table Transformer" does not preserve the datatypes of the input data in the in-memory DB.

Reason: The macro always converts the input data to CSV before placing it into the in-memory DB (it then uses the H2 CSVREAD function to put it into the DB). Therefore, once the conversion is done, the datatype information is basically lost. You will not be able to distinguish, for example, "true" and true, or "123" and 123 in the DB, unless you explicitly add the required datatype information to the DB.

Workaround: As of now, you cannot analyze the datatypes of the original input data with SQL. However, you can explicitly set the datatype of each column using SQL. For example, if you know that column A is boolean and B is a number in your JSON, alter the table like this before executing your main SQL query:

ALTER TABLE table_1
ALTER COLUMN "A" boolean;

ALTER TABLE table_1
ALTER COLUMN "B" number;

Problem 7: Cannot execute custom Java functions, and the following error is displayed:

IO Exception: "java.io.IOException: Cannot run program ""javac"": error=2, No such file or directory";

Explanation: The H2 engine cannot find Java for the custom functions execution.

Reason: There is no JDK installed or properly configured on your Confluence instance.

Workaround: Install and/or properly configure a JDK on your Confluence instance.

9. Technical Details

9.1. How It Works

Here is an overview diagram that illustrates how the whole thing works:

Consider that all the macro input tables represent a temporary DB for the duration of the macro execution, so that, while it exists, you can execute queries against one or several of those tables. Once the macro execution completes, the DB gets dropped.

An Input Table can be:

  1. Conventional Table inserted into the macro container
  2. Since the macro allows tables autocreation from CSV (using H2 CSVREAD function) or from any SQL query Input Table can also be:
    1. An In-memory H2 DB Table that you would create using SQL
    2. An In-memory H2 DB Table that can be automatically created using a CSV file that is attached to a Confluence page (or Jira issue) (see details in the User's Guide)
    3. In general, an In-memory H2 DB Table that can be automatically created using remote or local CSV (see details in the User's Guide)
  3. Since the macro allows downloading external html content, Input Table can also be any HTML table in the downloaded document.

When the macro runs, it creates the DB and places all the needed Conventional Tables into it; then it executes all the SQL queries provided in the SQL Query input parameter (this is the phase when you can also create any temp tables, if needed). Once all the queries are executed, the macro drops the DB and presents the results by converting the result sets into the Conventional Tables format (or other output formats, as selected).
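
To make the phases above concrete, here is a small sketch of a multi-statement SQL Query (assuming the Query Splitter is set to ";" and at least one table sits in the container): the first statement creates a temp table in the in-memory DB, and each subsequent statement that returns a result set produces its own output table.

CREATE TABLE staged AS SELECT * FROM table_1 WHERE A > 1;
SELECT COUNT(*) AS "Rows kept" FROM staged;
SELECT * FROM staged ORDER BY CAST(A AS INTEGER)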