Add support for editing of resultsets in the Query Tool, if the data can be identified as updatable. Fixes #1760

When a query is run in the Query Tool, check if the source of the columns
can be identified as being from a single table, and that we have all
columns that make up the primary key. If so, consider the resultset to
be editable and allow the user to edit data and add/remove rows in the
grid. Changes to data are saved using SAVEPOINTs as part of any
transaction that's in progress, and rolled back if there are integrity
violations, without otherwise affecting the ongoing transaction.
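The SAVEPOINT mechanism described above can be sketched as follows. This is a hypothetical illustration using Python's built-in sqlite3 module (which accepts the same SAVEPOINT / ROLLBACK TO commands), not pgAdmin's actual psycopg2-based code; the table and savepoint names are invented:

```python
import sqlite3

# Manual transaction control; pgAdmin talks to PostgreSQL instead,
# but the SAVEPOINT pattern is identical.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (pk INTEGER PRIMARY KEY, val TEXT NOT NULL)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1, 'earlier work')")  # ongoing transaction

conn.execute("SAVEPOINT save_data")  # guard the grid-save operation
try:
    conn.execute("INSERT INTO t VALUES (2, NULL)")  # integrity violation
    conn.execute("RELEASE save_data")
except sqlite3.IntegrityError:
    # Undo only the failed save; earlier statements in the transaction survive
    conn.execute("ROLLBACK TO save_data")
    conn.execute("RELEASE save_data")

conn.execute("COMMIT")
rows = conn.execute("SELECT * FROM t").fetchall()  # [(1, 'earlier work')]
```

Only the statements issued after the SAVEPOINT are undone by the rollback, which is what lets a failed save leave the rest of the transaction untouched.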

Implemented by Yosry Muhammad as a Google Summer of Code project.
Yosry Muhammad 2019-07-17 11:45:20 +01:00 committed by Dave Page
parent beb06a4c76
commit 710d520631
38 changed files with 1868 additions and 605 deletions


@ -42,13 +42,15 @@ The top row of the data grid displays the name of each column, the data type,
and if applicable, the number of characters allowed. A column that is part of
the primary key will additionally be marked with [PK].
.. _modifying-data-grid:
To modify the displayed data:
* To change a numeric value within the grid, double-click the value to select
the field. Modify the content in the square in which it is displayed.
* To change a non-numeric value within the grid, double-click the content to
access the edit bubble. After modifying the content of the edit bubble, click
-the *Save* button to display your changes in the data grid, or *Cancel* to
+the *Ok* button to display your changes in the data grid, or *Cancel* to
exit the edit bubble without saving.
To enter a newline character, click Ctrl-Enter or Shift-Enter. Newline
@ -70,9 +72,7 @@ quotes to the table, you need to escape these quotes, by typing \'\'
To delete a row, press the *Delete* toolbar button. A popup will open, asking
you to confirm the deletion.
-To commit the changes to the server, select the *Save* toolbar button.
-Modifications to a row are written to the server automatically when you select
-a different row.
+To commit the changes to the server, select the *Save Data* toolbar button.
**Geometry Data Viewer**

BIN docs/en_US/images/query_output_data.png (Executable file → Normal file; binary not shown; 82 KiB → 49 KiB)

BIN docs/en_US/images/query_tool.png (Executable file → Normal file; binary not shown; 83 KiB → 49 KiB)

BIN (binary file not shown; 20 KiB → 7.6 KiB)


@ -154,6 +154,8 @@ When using the Query Tool, the following shortcuts are available:
+==========================+====================+===================================+
| F5 | F5 | Execute query |
+--------------------------+--------------------+-----------------------------------+
| F6 | F6 | Save data changes |
+--------------------------+--------------------+-----------------------------------+
| F7 | F7 | EXPLAIN query |
+--------------------------+--------------------+-----------------------------------+
| Shift + F7 | Shift + F7 | EXPLAIN ANALYZE query |


@ -320,6 +320,10 @@ Use the fields on the *Options* panel to manage editor preferences.
editor will prompt the user to save unsaved query modifications when exiting
the Query Tool.
* When the *Prompt to commit/rollback active transactions?* switch is set to
*True*, the editor will prompt the user to commit or rollback changes when
exiting the Query Tool while the current transaction is not committed.
* Use the *Tab size* field to specify the number of spaces per tab character in
the editor.


@ -12,11 +12,15 @@ allows you to:
* Issue ad-hoc SQL queries.
* Execute arbitrary SQL commands.
* Edit the result set of a SELECT query if it is
:ref:`updatable <updatable-result-set>`.
* Display current connection and transaction status as configured by the user.
* Save the data displayed in the output panel to a CSV file.
-* Review the execution plan of a SQL statement in either a text or a graphical format.
+* Review the execution plan of a SQL statement in either a text or a graphical
+  format.
* View analytical information about a SQL statement.
.. image:: images/query_tool.png
:alt: Query tool window
:align: center
@ -120,6 +124,28 @@ You can:
set query execution options.
* Use the *Download as CSV* icon to download the content of the *Data Output*
tab as a comma-delimited file.
* Edit the data in the result set of a SELECT query if it is updatable.
.. _updatable-result-set:
A result set is updatable if:
* All the columns belong to the same table.
* All the primary keys of the table are selected.
* No columns are duplicated.
An updatable result set can be modified just like in
:ref:`View/Edit Data <modifying-data-grid>` mode.
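The three updatability rules above reduce to a small predicate. The sketch below is hypothetical — pgAdmin's actual check lives in ``is_query_resultset_updatable`` and inspects column metadata returned by the server; the helper name and its inputs here are invented for illustration:

```python
def is_resultset_updatable(result_columns, table_pk_columns):
    """result_columns: (column_name, source_table_oid) pairs for the query.
    table_pk_columns: primary key column names of the candidate table."""
    if not result_columns:
        return False
    names = [name for name, _ in result_columns]
    tables = {oid for _, oid in result_columns}
    return (
        len(tables) == 1 and                     # all columns from one table
        set(table_pk_columns) <= set(names) and  # every PK column is selected
        len(names) == len(set(names))            # no duplicated columns
    )

# SELECT pk_column, normal_column FROM t  -> editable
print(is_resultset_updatable(
    [("pk_column", 1234), ("normal_column", 1234)], ["pk_column"]))  # True
# SELECT normal_column FROM t  -> not editable, primary key is missing
print(is_resultset_updatable([("normal_column", 1234)], ["pk_column"]))  # False
```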
If Auto-commit is off, the data changes are made as part of the ongoing
transaction; if no transaction is ongoing, a new one is initiated. The data
changes are not committed to the database unless the transaction is committed.
If any errors occur during saving (for example, trying to save NULL into a
column with a NOT NULL constraint), the data changes are rolled back to an
automatically created SAVEPOINT to ensure any previously executed queries in
the ongoing transaction are not rolled back.
All rowsets from previous queries or commands that are displayed in the *Data
Output* panel will be discarded when you invoke another query; open another


@ -31,7 +31,7 @@ File Options
+======================+===================================================================================================+================+
| *Open File* | Click the *Open File* icon to display a previously saved query in the SQL Editor. | Accesskey + O |
+----------------------+---------------------------------------------------------------------------------------------------+----------------+
-| *Save* | Click the *Save* icon to perform a quick-save of a previously saved query, or to access the | Accesskey + S |
+| *Save File* | Click the *Save* icon to perform a quick-save of a previously saved query, or to access the | Accesskey + S |
| | *Save* menu: | |
| | | |
| | * Select *Save* to save the selected content of the SQL Editor panel in a file. | |
@ -50,6 +50,8 @@ Editing Options
+----------------------+---------------------------------------------------------------------------------------------------+----------------+
| Icon | Behavior | Shortcut |
+======================+===================================================================================================+================+
| *Save Data* | Click the *Save Data* icon to save data changes in the Data Output Panel to the server. | F6 |
+----------------------+---------------------------------------------------------------------------------------------------+----------------+
| *Find* | Use the *Find* menu to search, replace, or navigate the code displayed in the SQL Editor: | |
| +---------------------------------------------------------------------------------------------------+----------------+
| | Select *Find* to provide a search target, and search the SQL Editor contents. | Cmd+F |
@ -67,11 +69,10 @@ Editing Options
| | Select *Jump* to navigate to the next occurrence of the search target. | Alt+G |
+----------------------+---------------------------------------------------------------------------------------------------+----------------+
| *Copy* | Click the *Copy* icon to copy the content that is currently highlighted in the Data Output panel. | Accesskey + C |
-| | when in View/Edit data mode. | |
+----------------------+---------------------------------------------------------------------------------------------------+----------------+
-| *Paste* | Click the *Paste* icon to paste a previously copied row into a new row when in View/Edit data mode. | Accesskey + P |
+| *Paste* | Click the *Paste* icon to paste a previously copied row into a new row. | Accesskey + P |
+----------------------+---------------------------------------------------------------------------------------------------+----------------+
-| *Delete* | Click the *Delete* icon to delete the selected rows when in View/Edit data mode. | Accesskey + D |
+| *Delete* | Click the *Delete* icon to delete the selected rows. | Accesskey + D |
+----------------------+---------------------------------------------------------------------------------------------------+----------------+
| *Edit* | Use options on the *Edit* menu to access text editing tools; the options operate on the text | |
| | displayed in the SQL Editor panel when in Query Tool mode: | |


@ -9,6 +9,7 @@ This release contains a number of bug fixes and new features since the release o
New features
************
| `Issue #1760 <https://redmine.postgresql.org/issues/1760>`_ - Add support for editing of resultsets in the Query Tool, if the data can be identified as updatable.
| `Issue #4335 <https://redmine.postgresql.org/issues/4335>`_ - Add EXPLAIN options for SETTINGS and SUMMARY.
| `Issue #4318 <https://redmine.postgresql.org/issues/4318>`_ - Set the mouse cursor appropriately based on the layout lock state.


@ -65,7 +65,8 @@ class CheckFileManagerFeatureTest(BaseFeatureTest):
self.page.open_query_tool()
def _create_new_file(self):
-self.page.find_by_css_selector(QueryToolLocatorsCss.btn_save).click()
+self.page.find_by_css_selector(QueryToolLocatorsCss.btn_save_file)\
+    .click()
# Set the XSS value in input
self.page.find_by_css_selector('.change_file_types')
self.page.fill_input_by_css_selector("input#file-input-path",


@ -1,5 +1,5 @@
class QueryToolLocatorsCss:
-btn_save = "#btn-save"
+btn_save_file = "#btn-save-file"
btn_execute_query = "#btn-flash"
btn_query_dropdown = "#btn-query-dropdown"
btn_auto_rollback = "#btn-auto-rollback"


@ -7,6 +7,7 @@
#
##########################################################################
import sys
import pyperclip
import random
@ -28,11 +29,24 @@ class QueryToolJourneyTest(BaseFeatureTest):
]
test_table_name = ""
test_editable_table_name = ""
def before(self):
self.test_table_name = "test_table" + str(random.randint(1000, 3000))
test_utils.create_table(
self.server, self.test_db, self.test_table_name)
self.test_editable_table_name = "test_editable_table" + \
str(random.randint(1000, 3000))
create_sql = '''
CREATE TABLE "%s" (
pk_column NUMERIC PRIMARY KEY,
normal_column NUMERIC
);
''' % self.test_editable_table_name
test_utils.create_table_with_query(
self.server, self.test_db, create_sql)
self.page.add_server(self.server)
def runTest(self):
@ -40,9 +54,21 @@ class QueryToolJourneyTest(BaseFeatureTest):
self._execute_query(
"SELECT * FROM %s ORDER BY value " % self.test_table_name)
print("Copy rows...", file=sys.stderr, end="")
self._test_copies_rows()
print(" OK.", file=sys.stderr)
print("Copy columns...", file=sys.stderr, end="")
self._test_copies_columns()
print(" OK.", file=sys.stderr)
print("History tab...", file=sys.stderr, end="")
self._test_history_tab()
print(" OK.", file=sys.stderr)
print("Updatable resultsets...", file=sys.stderr, end="")
self._test_updatable_resultset()
print(" OK.", file=sys.stderr)
def _test_copies_rows(self):
pyperclip.copy("old clipboard contents")
@ -162,6 +188,27 @@ class QueryToolJourneyTest(BaseFeatureTest):
.perform()
self._assert_clickable(query_we_need_to_scroll_to)
def _test_updatable_resultset(self):
self.page.click_tab("Query Editor")
# Insert data into test table
self.__clear_query_tool()
self._execute_query(
"INSERT INTO %s VALUES (1, 1), (2, 2);"
% self.test_editable_table_name
)
# Select all data (contains the primary key -> should be editable)
self.__clear_query_tool()
query = "SELECT pk_column, normal_column FROM %s" \
% self.test_editable_table_name
self._check_query_results_editable(query, True)
# Select data without primary keys -> should not be editable
self.__clear_query_tool()
query = "SELECT normal_column FROM %s" % self.test_editable_table_name
self._check_query_results_editable(query, False)
def __clear_query_tool(self):
self.page.click_element(
self.page.find_by_xpath("//*[@id='btn-clear-dropdown']")
@ -179,6 +226,7 @@ class QueryToolJourneyTest(BaseFeatureTest):
self.page.toggle_open_tree_item('Databases')
self.page.toggle_open_tree_item(self.test_db)
self.page.open_query_tool()
self.page.wait_for_spinner_to_disappear()
def _execute_query(self, query):
self.page.fill_codemirror_area_with(query)
@ -188,6 +236,33 @@ class QueryToolJourneyTest(BaseFeatureTest):
def _assert_clickable(self, element):
self.page.click_element(element)
def _check_query_results_editable(self, query, should_be_editable):
self._execute_query(query)
self.page.wait_for_spinner_to_disappear()
# Check if the first cell in the first row is editable
is_editable = self._check_cell_editable(1)
self.assertEqual(is_editable, should_be_editable)
# Check that new rows cannot be added
can_add_rows = self._check_can_add_row()
self.assertEqual(can_add_rows, should_be_editable)
def _check_cell_editable(self, cell_index):
xpath = '//div[contains(@class, "slick-cell") and ' \
'contains(@class, "r' + str(cell_index) + '")]'
cell_el = self.page.find_by_xpath(xpath)
cell_classes = cell_el.get_attribute('class')
cell_classes = cell_classes.split(" ")
self.assertFalse('editable' in cell_classes)
ActionChains(self.driver).double_click(cell_el).perform()
cell_classes = cell_el.get_attribute('class')
cell_classes = cell_classes.split(" ")
return 'editable' in cell_classes
def _check_can_add_row(self):
return self.page.check_if_element_exist_by_xpath(
'//div[contains(@class, "new-row")]')
def after(self):
self.page.close_query_tool()
self.page.remove_server(self.server)


@ -304,7 +304,7 @@ CREATE TABLE public.nonintpkey
)
time.sleep(0.2)
self._update_cell(cell_xpath, data[str(idx)])
-self.page.find_by_id("btn-save").click() # Save data
+self.page.find_by_id("btn-save-data").click() # Save data
# There should be some delay after save button is clicked, as it
# takes some time to complete save ajax call otherwise discard unsaved
# changes dialog will appear if we try to execute query before previous


@ -205,6 +205,7 @@ function keyboardShortcutsQueryTool(
let toggleCaseKeys = sqlEditorController.preferences.toggle_case;
let commitKeys = sqlEditorController.preferences.commit_transaction;
let rollbackKeys = sqlEditorController.preferences.rollback_transaction;
let saveDataKeys = sqlEditorController.preferences.save_data;
if (this.validateShortcutKeys(executeKeys, event)) {
this._stopEventPropagation(event);
@ -233,6 +234,9 @@ function keyboardShortcutsQueryTool(
this._stopEventPropagation(event);
queryToolActions.executeRollback(sqlEditorController);
}
} else if (this.validateShortcutKeys(saveDataKeys, event)) {
this._stopEventPropagation(event);
queryToolActions.saveDataChanges(sqlEditorController);
} else if ((
(this.isMac() && event.metaKey) ||
(!this.isMac() && event.ctrlKey)


@ -37,7 +37,8 @@ export function callRenderAfterPoll(sqlEditor, alertify, res) {
const msg = sprintf(
gettext('Query returned successfully in %s.'), sqlEditor.total_time);
res.result += '\n\n' + msg;
-sqlEditor.update_msg_history(true, res.result, false);
+sqlEditor.update_msg_history(true, res.result, true);
+sqlEditor.reset_data_store();
if (isNotificationEnabled(sqlEditor)) {
alertify.success(msg, sqlEditor.info_notifier_timeout);
}


@ -70,6 +70,8 @@ class ExecuteQuery {
let httpMessageData = result.data;
self.removeGridViewMarker();
self.updateSqlEditorLastTransactionStatus(httpMessageData.data.transaction_status);
if (ExecuteQuery.isSqlCorrect(httpMessageData)) {
self.loadingScreen.setMessage('Waiting for the query to complete...');
@ -118,6 +120,8 @@ class ExecuteQuery {
})
).then(
(httpMessage) => {
self.updateSqlEditorLastTransactionStatus(httpMessage.data.data.transaction_status);
// Enable/Disable commit and rollback button.
if (httpMessage.data.data.transaction_status == 2 || httpMessage.data.data.transaction_status == 3) {
self.enableTransactionButtons();
@ -126,6 +130,10 @@ class ExecuteQuery {
}
if (ExecuteQuery.isQueryFinished(httpMessage)) {
if (this.sqlServerObject.close_on_idle_transaction &&
httpMessage.data.data.transaction_status == 0)
this.sqlServerObject.check_needed_confirmations_before_closing_panel();
self.loadingScreen.setMessage('Loading data from the database server and rendering...');
self.sqlServerObject.call_render_after_poll(httpMessage.data.data);
@ -296,6 +304,10 @@ class ExecuteQuery {
this.sqlServerObject.info_notifier_timeout = messageData.info_notifier_timeout;
}
updateSqlEditorLastTransactionStatus(transactionStatus) {
this.sqlServerObject.last_transaction_status = transactionStatus;
}
static isSqlCorrect(httpMessageData) {
return httpMessageData.data.status;
}


@ -156,6 +156,11 @@ let queryToolActions = {
sqlEditorController.special_sql = 'ROLLBACK;';
self.executeQuery(sqlEditorController);
},
saveDataChanges: function (sqlEditorController) {
sqlEditorController.close_on_save = false;
sqlEditorController.save_data();
},
};
module.exports = queryToolActions;


@ -29,7 +29,7 @@ function updateUIPreferences(sqlEditor) {
.attr('title', shortcut_accesskey_title('Open File',preferences.btn_open_file))
.attr('accesskey', shortcut_key(preferences.btn_open_file));
-$el.find('#btn-save')
+$el.find('#btn-save-file')
.attr('title', shortcut_accesskey_title('Save File',preferences.btn_save_file))
.attr('accesskey', shortcut_key(preferences.btn_save_file));
@ -97,6 +97,10 @@ function updateUIPreferences(sqlEditor) {
.attr('title',
shortcut_title('Download as CSV',preferences.download_csv));
$el.find('#btn-save-data')
.attr('title',
shortcut_title('Save Data Changes',preferences.save_data));
$el.find('#btn-commit')
.attr('title',
shortcut_title('Commit',preferences.commit_transaction));


@ -56,7 +56,7 @@
bottom: $footer-height-calc !important;
}
.ajs-wrap-text {
-word-break: break-all;
+word-break: normal;
word-wrap: break-word;
}
/* Removes padding from alertify footer */


@ -21,7 +21,7 @@
tabindex="0">
<i class="fa fa-folder-open-o sql-icon-lg" aria-hidden="true"></i>
</button>
-<button id="btn-save" type="button" class="btn btn-sm btn-secondary"
+<button id="btn-save-file" type="button" class="btn btn-sm btn-secondary"
title=""
accesskey=""
disabled>
@ -44,6 +44,14 @@
</li>
</ul>
</div>
<div class="btn-group mr-1" role="group" aria-label="">
<button id="btn-save-data" type="button" class="btn btn-sm btn-secondary"
title=""
accesskey=""
tabindex="0" disabled>
<i class="icon-save-data-changes sql-icon-lg" aria-hidden="true"></i>
</button>
</div>
<div class="btn-group mr-1" role="group" aria-label="">
<button id="btn-find" type="button" class="btn btn-sm btn-secondary" title="{{ _('Find (Ctrl/Cmd+F)') }}">
<i class="fa fa-search sql-icon-lg" aria-hidden="true" tabindex="0"></i>


@ -352,6 +352,8 @@ def poll(trans_id):
rset = None
has_oids = False
oids = None
additional_messages = None
notifies = None
# Check the transaction and connection status
status, error_msg, conn, trans_obj, session_obj = \
@ -390,6 +392,22 @@ def poll(trans_id):
st, result = conn.async_fetchmany_2darray(ON_DEMAND_RECORD_COUNT)
# There may be additional messages even if result is present
# eg: Function can provide result as well as RAISE messages
messages = conn.messages()
if messages:
additional_messages = ''.join(messages)
notifies = conn.get_notifies()
# Procedure/Function output may come in the form of Notices
# from the database server, so we need to append those outputs
# with the original result.
if result is None:
result = conn.status_message()
if (result != 'SELECT 1' or result != 'SELECT 0') and \
result is not None and additional_messages:
result = additional_messages + result
if st:
if 'primary_keys' in session_obj:
primary_keys = session_obj['primary_keys']
@ -406,10 +424,22 @@ def poll(trans_id):
)
session_obj['client_primary_key'] = client_primary_key
if columns_info is not None:
# If trans_obj is a QueryToolCommand then check for updatable
# resultsets and primary keys
if isinstance(trans_obj, QueryToolCommand):
trans_obj.check_updatable_results_pkeys()
pk_names, primary_keys = trans_obj.get_primary_keys()
# If primary_keys exist, add them to the session_obj to
# allow for saving any changes to the data
if primary_keys is not None:
session_obj['primary_keys'] = primary_keys
command_obj = pickle.loads(session_obj['command_obj'])
if hasattr(command_obj, 'obj_id'):
if columns_info is not None:
# If it is a QueryToolCommand that has obj_id attribute
# then it should also be editable
if hasattr(trans_obj, 'obj_id') and \
(not isinstance(trans_obj, QueryToolCommand) or
trans_obj.can_edit()):
# Get the template path for the column
template_path = 'columns/sql/#{0}#'.format(
conn.manager.version
@ -417,7 +447,7 @@ def poll(trans_id):
SQL = render_template(
"/".join([template_path, 'nodes.sql']),
-tid=command_obj.obj_id,
+tid=trans_obj.obj_id,
has_oids=True
)
# rows with attribute not_null
@ -492,26 +522,8 @@ def poll(trans_id):
status = 'NotConnected'
result = error_msg
# There may be additional messages even if result is present
# eg: Function can provide result as well as RAISE messages
additional_messages = None
notifies = None
if status == 'Success':
messages = conn.messages()
if messages:
additional_messages = ''.join(messages)
notifies = conn.get_notifies()
# Procedure/Function output may come in the form of Notices from the
# database server, so we need to append those outputs with the
# original result.
if status == 'Success' and result is None:
result = conn.status_message()
if (result != 'SELECT 1' or result != 'SELECT 0') and \
result is not None and additional_messages:
result = additional_messages + result
transaction_status = conn.transaction_status()
return make_json_response(
data={
'status': status, 'result': result,
@ -700,7 +712,8 @@ def save(trans_id):
trans_obj is not None and session_obj is not None:
# If there is no primary key found then return from the function.
-if (len(session_obj['primary_keys']) <= 0 or
+if ('primary_keys' not in session_obj or
+        len(session_obj['primary_keys']) <= 0 or
len(changed_data) <= 0) and \
'has_oids' not in session_obj:
return make_json_response(
@ -713,32 +726,39 @@ def save(trans_id):
manager = get_driver(
PG_DEFAULT_DRIVER).connection_manager(trans_obj.sid)
-default_conn = manager.connection(did=trans_obj.did)
+if hasattr(trans_obj, 'conn_id'):
+    conn = manager.connection(did=trans_obj.did,
+                              conn_id=trans_obj.conn_id)
+else:
+    conn = manager.connection(did=trans_obj.did)  # default connection
 # Connect to the Server if not connected.
-if not default_conn.connected():
-    status, msg = default_conn.connect()
+if not conn.connected():
+    status, msg = conn.connect()
if not status:
return make_json_response(
data={'status': status, 'result': u"{}".format(msg)}
)
status, res, query_res, _rowid = trans_obj.save(
changed_data,
session_obj['columns_info'],
session_obj['client_primary_key'],
-default_conn)
+conn)
else:
status = False
res = error_msg
query_res = None
_rowid = None
transaction_status = conn.transaction_status()
return make_json_response(
data={
'status': status,
'result': res,
'query_result': query_res,
'_rowid': _rowid
'_rowid': _rowid,
'transaction_status': transaction_status
}
)


@ -19,6 +19,9 @@ from flask import render_template
from flask_babelex import gettext
from pgadmin.utils.ajax import forbidden
from pgadmin.utils.driver import get_driver
from pgadmin.tools.sqleditor.utils.is_query_resultset_updatable \
import is_query_resultset_updatable
from pgadmin.tools.sqleditor.utils.save_changed_data import save_changed_data
from config import PG_DEFAULT_DRIVER
@ -668,269 +671,11 @@ class TableCommand(GridCommand):
else:
conn = default_conn
status = False
res = None
query_res = dict()
count = 0
list_of_rowid = []
operations = ('added', 'updated', 'deleted')
list_of_sql = {}
_rowid = None
pgadmin_alias = {
col_name: col_info['pgadmin_alias']
for col_name, col_info in columns_info
.items()
}
if conn.connected():
# Start the transaction
conn.execute_void('BEGIN;')
# Iterate total number of records to be updated/inserted
for of_type in changed_data:
# No need to go further if its not add/update/delete operation
if of_type not in operations:
continue
# if no data to be save then continue
if len(changed_data[of_type]) < 1:
continue
column_type = {}
column_data = {}
for each_col in columns_info:
if (
columns_info[each_col]['not_null'] and
not columns_info[each_col]['has_default_val']
):
column_data[each_col] = None
column_type[each_col] =\
columns_info[each_col]['type_name']
else:
column_type[each_col] = \
columns_info[each_col]['type_name']
# For newly added rows
if of_type == 'added':
# Python dict does not honour the inserted item order
# So to insert data in the order, we need to make ordered
# list of added index We don't need this mechanism in
# updated/deleted rows as it does not matter in
# those operations
added_index = OrderedDict(
sorted(
changed_data['added_index'].items(),
key=lambda x: int(x[0])
)
)
list_of_sql[of_type] = []
# When new rows are added, only changed columns data is
# sent from client side. But if column is not_null and has
# no_default_value, set column to blank, instead
# of not null which is set by default.
column_data = {}
pk_names, primary_keys = self.get_primary_keys()
has_oids = 'oid' in column_type
for each_row in added_index:
# Get the row index to match with the added rows
# dict key
tmp_row_index = added_index[each_row]
data = changed_data[of_type][tmp_row_index]['data']
# Remove our unique tracking key
data.pop(client_primary_key, None)
data.pop('is_row_copied', None)
list_of_rowid.append(data.get(client_primary_key))
# Update columns value with columns having
# not_null=False and has no default value
column_data.update(data)
sql = render_template(
"/".join([self.sql_path, 'insert.sql']),
data_to_be_saved=column_data,
pgadmin_alias=pgadmin_alias,
primary_keys=None,
object_name=self.object_name,
nsp_name=self.nsp_name,
data_type=column_type,
pk_names=pk_names,
has_oids=has_oids
)
select_sql = render_template(
"/".join([self.sql_path, 'select.sql']),
object_name=self.object_name,
nsp_name=self.nsp_name,
primary_keys=primary_keys,
has_oids=has_oids
)
list_of_sql[of_type].append({
'sql': sql, 'data': data,
'client_row': tmp_row_index,
'select_sql': select_sql
})
# Reset column data
column_data = {}
# For updated rows
elif of_type == 'updated':
list_of_sql[of_type] = []
for each_row in changed_data[of_type]:
data = changed_data[of_type][each_row]['data']
pk_escaped = {
pk: pk_val.replace('%', '%%') if hasattr(
pk_val, 'replace') else pk_val
for pk, pk_val in
changed_data[of_type][each_row]['primary_keys']
.items()
}
sql = render_template(
"/".join([self.sql_path, 'update.sql']),
data_to_be_saved=data,
pgadmin_alias=pgadmin_alias,
primary_keys=pk_escaped,
object_name=self.object_name,
nsp_name=self.nsp_name,
data_type=column_type
)
list_of_sql[of_type].append({'sql': sql, 'data': data})
list_of_rowid.append(data.get(client_primary_key))
# For deleted rows
elif of_type == 'deleted':
list_of_sql[of_type] = []
is_first = True
rows_to_delete = []
keys = None
no_of_keys = None
for each_row in changed_data[of_type]:
rows_to_delete.append(changed_data[of_type][each_row])
# Fetch the keys for SQL generation
if is_first:
# We need to covert dict_keys to normal list in
# Python3
# In Python2, it's already a list & We will also
# fetch column names using index
keys = list(
changed_data[of_type][each_row].keys()
)
no_of_keys = len(keys)
is_first = False
# Map index with column name for each row
for row in rows_to_delete:
for k, v in row.items():
# Set primary key with label & delete index based
# mapped key
try:
row[changed_data['columns']
[int(k)]['name']] = v
except ValueError:
continue
del row[k]
sql = render_template(
"/".join([self.sql_path, 'delete.sql']),
data=rows_to_delete,
primary_key_labels=keys,
no_of_keys=no_of_keys,
object_name=self.object_name,
nsp_name=self.nsp_name
)
list_of_sql[of_type].append({'sql': sql, 'data': {}})
for opr, sqls in list_of_sql.items():
for item in sqls:
if item['sql']:
item['data'] = {
pgadmin_alias[k] if k in pgadmin_alias else k: v
for k, v in item['data'].items()
}
row_added = None
def failure_handle():
conn.execute_void('ROLLBACK;')
# If we roll backed every thing then update the
# message for each sql query.
for val in query_res:
if query_res[val]['status']:
query_res[val]['result'] = \
'Transaction ROLLBACK'
# If list is empty set rowid to 1
try:
if list_of_rowid:
_rowid = list_of_rowid[count]
else:
_rowid = 1
except Exception:
_rowid = 0
return status, res, query_res, _rowid
try:
# Fetch oids/primary keys
if 'select_sql' in item and item['select_sql']:
status, res = conn.execute_dict(
item['sql'], item['data'])
else:
status, res = conn.execute_void(
item['sql'], item['data'])
except Exception as _:
failure_handle()
raise
if not status:
return failure_handle()
# Select added row from the table
if 'select_sql' in item:
status, sel_res = conn.execute_dict(
item['select_sql'], res['rows'][0])
if not status:
conn.execute_void('ROLLBACK;')
# If we roll backed every thing then update
# the message for each sql query.
for val in query_res:
if query_res[val]['status']:
query_res[val]['result'] = \
'Transaction ROLLBACK'
# If list is empty set rowid to 1
try:
if list_of_rowid:
_rowid = list_of_rowid[count]
else:
_rowid = 1
except Exception:
_rowid = 0
return status, sel_res, query_res, _rowid
if 'rows' in sel_res and len(sel_res['rows']) > 0:
row_added = {
item['client_row']: sel_res['rows'][0]}
rows_affected = conn.rows_affected()
# store the result of each query in dictionary
query_res[count] = {
'status': status,
'result': None if row_added else res,
'sql': sql, 'rows_affected': rows_affected,
'row_added': row_added
}
count += 1
# Commit the transaction if there is no error found
conn.execute_void('COMMIT;')
return status, res, query_res, _rowid
return save_changed_data(changed_data=changed_data,
columns_info=columns_info,
command_obj=self,
client_primary_key=client_primary_key,
conn=conn)
class ViewCommand(GridCommand):
@ -1114,18 +859,84 @@ class QueryToolCommand(BaseCommand, FetchedRowTracker):
self.auto_rollback = False
self.auto_commit = True
# Attributes needed to be able to edit updatable resultsets
self.is_updatable_resultset = False
self.primary_keys = None
self.pk_names = None
def get_sql(self, default_conn=None):
return None
def get_all_columns_with_order(self, default_conn=None):
return None
def get_primary_keys(self):
return self.pk_names, self.primary_keys
def can_edit(self):
-return False
+return self.is_updatable_resultset
def can_filter(self):
return False
def check_updatable_results_pkeys(self):
"""
This function is used to check whether the last successful query
produced updatable results and sets the necessary flags and
attributes accordingly.
Should be called after polling for the results is successful
(results are ready)
"""
# Fetch the connection object
driver = get_driver(PG_DEFAULT_DRIVER)
manager = driver.connection_manager(self.sid)
conn = manager.connection(did=self.did, conn_id=self.conn_id)
# Get the path to the sql templates
sql_path = 'sqleditor/sql/#{0}#'.format(manager.version)
self.is_updatable_resultset, self.primary_keys, pk_names, table_oid = \
is_query_resultset_updatable(conn, sql_path)
# Create pk_names attribute in the required format
if pk_names is not None:
self.pk_names = ''
for pk_name in pk_names:
self.pk_names += driver.qtIdent(conn, pk_name) + ','
if self.pk_names != '':
# Remove last character from the string
self.pk_names = self.pk_names[:-1]
# Add attributes required to be able to update table data
if self.is_updatable_resultset:
self.__set_updatable_results_attrs(sql_path=sql_path,
table_oid=table_oid,
conn=conn)
def save(self,
changed_data,
columns_info,
client_primary_key='__temp_PK',
default_conn=None):
if not self.is_updatable_resultset:
return False, gettext('Resultset is not updatable.'), None, None
else:
driver = get_driver(PG_DEFAULT_DRIVER)
if default_conn is None:
manager = driver.connection_manager(self.sid)
conn = manager.connection(did=self.did, conn_id=self.conn_id)
else:
conn = default_conn
return save_changed_data(changed_data=changed_data,
columns_info=columns_info,
conn=conn,
command_obj=self,
client_primary_key=client_primary_key,
auto_commit=self.auto_commit)
def set_connection_id(self, conn_id):
self.conn_id = conn_id
@ -1134,3 +945,28 @@ class QueryToolCommand(BaseCommand, FetchedRowTracker):
def set_auto_commit(self, auto_commit):
self.auto_commit = auto_commit
def __set_updatable_results_attrs(self, sql_path,
table_oid, conn):
# Set template path for sql scripts and the table object id
self.sql_path = sql_path
self.obj_id = table_oid
if conn.connected():
# Fetch the Namespace Name and object Name
query = render_template(
"/".join([self.sql_path, 'objectname.sql']),
obj_id=self.obj_id
)
status, result = conn.execute_dict(query)
if not status:
raise Exception(result)
self.nsp_name = result['rows'][0]['nspname']
self.object_name = result['rows'][0]['relname']
else:
raise Exception(gettext(
'Not connected to server or connection with the server '
'has been closed.')
)


@ -291,7 +291,7 @@ input.editor-checkbox:focus {
background-image: url('../img/disconnect.svg');
}
.icon-commit, .icon-rollback, .icon-save-data-changes {
display: inline-block;
align-content: center;
vertical-align: middle;
@ -311,6 +311,10 @@ input.editor-checkbox:focus {
background-image: url('../img/rollback.svg') !important;
}
.icon-save-data-changes {
background-image: url('../img/save_data_changes.svg') !important;
}
.ajs-body .warn-header {
font-size: 13px;
font-weight: bold;

File diff suppressed because one or more lines are too long


File diff suppressed because it is too large


@ -209,6 +209,13 @@ li.CodeMirror-hint-active {
color: $text-muted;
}
/* Highlighted (modified or new) cell */
.grid-canvas .highlighted_grid_cells {
background: $color-gray-lighter;
font-weight: bold;
}
/* Override selected row color */
#datagrid .slick-row .slick-cell.selected {
background-color: $table-bg-selected;


@ -1,6 +1,6 @@
{# ============= Fetch the primary keys for given object id ============= #}
{% if obj_id %}
SELECT at.attname, at.attnum, ty.typname
FROM pg_attribute at LEFT JOIN pg_type ty ON (ty.oid = at.atttypid)
WHERE attrelid={{obj_id}}::oid AND attnum = ANY (
(SELECT con.conkey FROM pg_class rel LEFT OUTER JOIN pg_constraint con ON con.conrelid=rel.oid


@ -1,8 +1,8 @@
{# ============= Fetch the primary keys for given object id ============= #}
{% if obj_id %}
SELECT at.attname, at.attnum, ty.typname
FROM pg_attribute at LEFT JOIN pg_type ty ON (ty.oid = at.atttypid)
WHERE attrelid={{obj_id}}::oid AND attnum = ANY (
(SELECT con.conkey FROM pg_class rel LEFT OUTER JOIN pg_constraint con ON con.conrelid=rel.oid
AND con.contype='p' WHERE rel.relkind IN ('r','s','t') AND rel.oid = {{obj_id}}::oid)::oid[])
{% endif %}


@ -0,0 +1,41 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2019, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
import json
# Utility functions used by tests
# Executes a query and polls for the results, then returns them
def execute_query(tester, query, start_query_tool_url, poll_url):
# Start query tool and execute sql
response = tester.post(start_query_tool_url,
data=json.dumps({"sql": query}),
content_type='html/json')
if response.status_code != 200:
return False, None
# Poll for results
return poll_for_query_results(tester=tester, poll_url=poll_url)
# Polls for the result of an executed query
def poll_for_query_results(tester, poll_url):
# Poll for results until they are successful
while True:
response = tester.get(poll_url)
if response.status_code != 200:
return False, None
response_data = json.loads(response.data.decode('utf-8'))
status = response_data['data']['status']
if status == 'Success':
return True, response_data
elif status == 'NotConnected' or status == 'Cancel':
return False, None
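The polling contract implemented above (loop until 'Success' or a terminal failure status) can be exercised without a live server. The fake tester below is purely hypothetical test scaffolding, and `poll` mirrors `poll_for_query_results` so the sketch is self-contained:

```python
import json

class FakeResponse(object):
    # Minimal stand-in for a Flask test-client response.
    def __init__(self, status):
        self.status_code = 200
        self.data = json.dumps({'data': {'status': status}}).encode('utf-8')

class FakeTester(object):
    # Returns a queued sequence of poll statuses, one per get().
    def __init__(self, statuses):
        self._statuses = iter(statuses)

    def get(self, url):
        return FakeResponse(next(self._statuses))

def poll(tester, poll_url):
    # Mirrors poll_for_query_results: loop until a terminal status.
    while True:
        response = tester.get(poll_url)
        if response.status_code != 200:
            return False, None
        status = json.loads(response.data.decode('utf-8'))['data']['status']
        if status == 'Success':
            return True, status
        elif status in ('NotConnected', 'Cancel'):
            return False, None

# 'Busy' is not terminal, so the loop polls again and stops on 'Success'.
ok, _ = poll(FakeTester(['Busy', 'Success']), '/sqleditor/poll/1')
print(ok)  # True
```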


@ -0,0 +1,125 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2019, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
import json
from pgadmin.browser.server_groups.servers.databases.tests import utils as \
database_utils
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from regression.python_test_utils import test_utils as utils
from .execute_query_utils import execute_query
class TestQueryUpdatableResultset(BaseTestGenerator):
""" This class will test the detection of whether the query
result-set is updatable. """
scenarios = [
('When selecting all columns of the table', dict(
sql='SELECT * FROM test_for_updatable_resultset;',
primary_keys={
'pk_col1': 'int4',
'pk_col2': 'int4'
}
)),
('When selecting all primary keys of the table', dict(
sql='SELECT pk_col1, pk_col2 FROM test_for_updatable_resultset;',
primary_keys={
'pk_col1': 'int4',
'pk_col2': 'int4'
}
)),
('When selecting some of the primary keys of the table', dict(
sql='SELECT pk_col2 FROM test_for_updatable_resultset;',
primary_keys=None
)),
('When selecting none of the primary keys of the table', dict(
sql='SELECT normal_col1 FROM test_for_updatable_resultset;',
primary_keys=None
)),
('When renaming a primary key', dict(
sql='SELECT pk_col1 as some_col, '
'pk_col2 FROM test_for_updatable_resultset;',
primary_keys=None
)),
('When renaming a column to a primary key name', dict(
sql='SELECT pk_col1, pk_col2, normal_col1 as pk_col1 '
'FROM test_for_updatable_resultset;',
primary_keys=None
))
]
def setUp(self):
self._initialize_database_connection()
self._initialize_query_tool()
self._initialize_urls()
self._create_test_table()
def runTest(self):
is_success, response_data = \
execute_query(tester=self.tester,
query=self.sql,
poll_url=self.poll_url,
start_query_tool_url=self.start_query_tool_url)
self.assertEquals(is_success, True)
# Check primary keys
primary_keys = response_data['data']['primary_keys']
self.assertEquals(primary_keys, self.primary_keys)
def tearDown(self):
# Disconnect the database
database_utils.disconnect_database(self, self.server_id, self.db_id)
def _initialize_database_connection(self):
database_info = parent_node_dict["database"][-1]
self.server_id = database_info["server_id"]
self.db_id = database_info["db_id"]
db_con = database_utils.connect_database(self,
utils.SERVER_GROUP,
self.server_id,
self.db_id)
if not db_con["info"] == "Database connected.":
raise Exception("Could not connect to the database.")
def _initialize_query_tool(self):
url = '/datagrid/initialize/query_tool/{0}/{1}/{2}'.format(
utils.SERVER_GROUP, self.server_id, self.db_id)
response = self.tester.post(url)
self.assertEquals(response.status_code, 200)
response_data = json.loads(response.data.decode('utf-8'))
self.trans_id = response_data['data']['gridTransId']
def _initialize_urls(self):
self.start_query_tool_url = \
'/sqleditor/query_tool/start/{0}'.format(self.trans_id)
self.poll_url = '/sqleditor/poll/{0}'.format(self.trans_id)
def _create_test_table(self):
create_sql = """
DROP TABLE IF EXISTS test_for_updatable_resultset;
CREATE TABLE test_for_updatable_resultset(
pk_col1 SERIAL,
pk_col2 SERIAL,
normal_col1 VARCHAR,
normal_col2 VARCHAR,
PRIMARY KEY(pk_col1, pk_col2)
);
"""
is_success, _ = \
execute_query(tester=self.tester,
query=create_sql,
start_query_tool_url=self.start_query_tool_url,
poll_url=self.poll_url)
self.assertEquals(is_success, True)


@ -0,0 +1,347 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2019, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
import json
from pgadmin.browser.server_groups.servers.databases.tests import utils as \
database_utils
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from regression.python_test_utils import test_utils as utils
from .execute_query_utils import execute_query
class TestSaveChangedData(BaseTestGenerator):
""" This class tests saving data changes in the grid to the database """
scenarios = [
('When inserting new valid row', dict(
save_payload={
"updated": {},
"added": {
"2": {
"err": False,
"data": {
"pk_col": "3",
"__temp_PK": "2",
"normal_col": "three"
}
}
},
"staged_rows": {},
"deleted": {},
"updated_index": {},
"added_index": {"2": "2"},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False},
{"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False}
]
},
save_status=True,
check_sql='SELECT * FROM test_for_save_data WHERE pk_col = 3',
check_result=[[3, "three"]]
)),
('When inserting new invalid row', dict(
save_payload={
"updated": {},
"added": {
"2": {
"err": False,
"data": {
"pk_col": "1",
"__temp_PK": "2",
"normal_col": "four"
}
}
},
"staged_rows": {},
"deleted": {},
"updated_index": {},
"added_index": {"2": "2"},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False},
{"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False}
]
},
save_status=False,
check_sql=None,
check_result=None
)),
('When updating a row in a valid way', dict(
save_payload={
"updated": {
"1":
{"err": False,
"data": {"normal_col": "ONE"},
"primary_keys":
{"pk_col": 1}
}
},
"added": {},
"staged_rows": {},
"deleted": {},
"updated_index": {"1": "1"},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False},
{"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False}
]
},
save_status=True,
check_sql='SELECT * FROM test_for_save_data WHERE pk_col = 1',
check_result=[[1, "ONE"]]
)),
('When updating a row in an invalid way', dict(
save_payload={
"updated": {
"1":
{"err": False,
"data": {"pk_col": "2"},
"primary_keys":
{"pk_col": 1}
}
},
"added": {},
"staged_rows": {},
"deleted": {},
"updated_index": {"1": "1"},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False},
{"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False}
]
},
save_status=False,
check_sql=None,
check_result=None
)),
('When deleting a row', dict(
save_payload={
"updated": {},
"added": {},
"staged_rows": {"1": {"pk_col": 2}},
"deleted": {"1": {"pk_col": 2}},
"updated_index": {},
"added_index": {},
"columns": [
{
"name": "pk_col",
"display_name": "pk_col",
"column_type": "[PK] integer",
"column_type_internal": "integer",
"pos": 0,
"label": "pk_col<br>[PK] integer",
"cell": "number",
"can_edit": True,
"type": "integer",
"not_null": True,
"has_default_val": False,
"is_array": False},
{"name": "normal_col",
"display_name": "normal_col",
"column_type": "character varying",
"column_type_internal": "character varying",
"pos": 1,
"label": "normal_col<br>character varying",
"cell": "string",
"can_edit": True,
"type": "character varying",
"not_null": False,
"has_default_val": False,
"is_array": False}
]
},
save_status=True,
check_sql='SELECT * FROM test_for_save_data WHERE pk_col = 2',
check_result='SELECT 0'
)),
]
def setUp(self):
self._initialize_database_connection()
self._initialize_query_tool()
self._initialize_urls_and_select_sql()
self._create_test_table()
def runTest(self):
# Execute select sql
is_success, _ = \
execute_query(tester=self.tester,
query=self.select_sql,
start_query_tool_url=self.start_query_tool_url,
poll_url=self.poll_url)
self.assertEquals(is_success, True)
# Send a request to save changed data
response = self.tester.post(self.save_url,
data=json.dumps(self.save_payload),
content_type='html/json')
self.assertEquals(response.status_code, 200)
# Check that the save is successful
response_data = json.loads(response.data.decode('utf-8'))
save_status = response_data['data']['status']
self.assertEquals(save_status, self.save_status)
if self.check_sql:
# Execute check sql
is_success, response_data = \
execute_query(tester=self.tester,
query=self.check_sql,
start_query_tool_url=self.start_query_tool_url,
poll_url=self.poll_url)
self.assertEquals(is_success, True)
# Check table for updates
result = response_data['data']['result']
self.assertEquals(result, self.check_result)
def tearDown(self):
# Disconnect the database
database_utils.disconnect_database(self, self.server_id, self.db_id)
def _initialize_database_connection(self):
database_info = parent_node_dict["database"][-1]
self.server_id = database_info["server_id"]
self.db_id = database_info["db_id"]
db_con = database_utils.connect_database(self,
utils.SERVER_GROUP,
self.server_id,
self.db_id)
if not db_con["info"] == "Database connected.":
raise Exception("Could not connect to the database.")
def _initialize_query_tool(self):
url = '/datagrid/initialize/query_tool/{0}/{1}/{2}'.format(
utils.SERVER_GROUP, self.server_id, self.db_id)
response = self.tester.post(url)
self.assertEquals(response.status_code, 200)
response_data = json.loads(response.data.decode('utf-8'))
self.trans_id = response_data['data']['gridTransId']
def _initialize_urls_and_select_sql(self):
self.start_query_tool_url = \
'/sqleditor/query_tool/start/{0}'.format(self.trans_id)
self.save_url = '/sqleditor/save/{0}'.format(self.trans_id)
self.poll_url = '/sqleditor/poll/{0}'.format(self.trans_id)
self.select_sql = 'SELECT * FROM test_for_save_data;'
def _create_test_table(self):
create_sql = """
DROP TABLE IF EXISTS test_for_save_data;
CREATE TABLE test_for_save_data(
pk_col INT PRIMARY KEY,
normal_col VARCHAR);
INSERT INTO test_for_save_data VALUES
(1, 'one'),
(2, 'two');
"""
is_success, _ = \
execute_query(tester=self.tester,
query=create_sql,
start_query_tool_url=self.start_query_tool_url,
poll_url=self.poll_url)
self.assertEquals(is_success, True)


@ -0,0 +1,120 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2019, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
"""
Check if the result-set of a query is updatable, A resultset is
updatable (as of this version) if:
- All columns belong to the same table.
- All the primary key columns of the table are present in the resultset
- No duplicate columns
"""
from flask import render_template
try:
from collections import OrderedDict
except ImportError:
from ordereddict import OrderedDict
def is_query_resultset_updatable(conn, sql_path):
"""
This function is used to check whether the last successful query
produced updatable results.
Args:
conn: Connection object.
sql_path: the path to the sql templates.
"""
columns_info = conn.get_column_info()
if columns_info is None or len(columns_info) < 1:
return return_not_updatable()
table_oid = _check_single_table(columns_info)
if not table_oid:
return return_not_updatable()
if not _check_duplicate_columns(columns_info):
return return_not_updatable()
if conn.connected():
primary_keys, primary_keys_columns, pk_names = \
_get_primary_keys(conn=conn,
table_oid=table_oid,
sql_path=sql_path)
if not _check_primary_keys_uniquely_exist(primary_keys_columns,
columns_info):
return return_not_updatable()
return True, primary_keys, pk_names, table_oid
else:
return return_not_updatable()
def _check_single_table(columns_info):
table_oid = columns_info[0]['table_oid']
for column in columns_info:
if column['table_oid'] != table_oid:
return None
return table_oid
def _check_duplicate_columns(columns_info):
column_numbers = \
[col['table_column'] for col in columns_info]
is_duplicate_columns = len(column_numbers) != len(set(column_numbers))
if is_duplicate_columns:
return False
return True
def _check_primary_keys_uniquely_exist(primary_keys_columns, columns_info):
for pk in primary_keys_columns:
pk_exists = False
for col in columns_info:
if col['table_column'] == pk['column_number']:
pk_exists = True
# If the primary key column is renamed
if col['display_name'] != pk['name']:
return False
# If a normal column is renamed to a primary key column name
elif col['display_name'] == pk['name']:
return False
if not pk_exists:
return False
return True
def _get_primary_keys(sql_path, table_oid, conn):
query = render_template(
"/".join([sql_path, 'primary_keys.sql']),
obj_id=table_oid
)
status, result = conn.execute_dict(query)
if not status:
return return_not_updatable()
primary_keys_columns = []
primary_keys = OrderedDict()
pk_names = []
for row in result['rows']:
primary_keys[row['attname']] = row['typname']
primary_keys_columns.append({
'name': row['attname'],
'column_number': row['attnum']
})
pk_names.append(row['attname'])
return primary_keys, primary_keys_columns, pk_names
def return_not_updatable():
return False, None, None, None
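The trickiest of the checks above is `_check_primary_keys_uniquely_exist`, which rejects both a renamed primary key and a normal column aliased to a primary key's name. A self-contained sketch of that logic, using hypothetical column metadata shaped like `conn.get_column_info()` output:

```python
def pk_uniquely_exist(primary_keys_columns, columns_info):
    # Mirrors _check_primary_keys_uniquely_exist: every primary key must
    # appear under its own name, and no other column may reuse that name.
    for pk in primary_keys_columns:
        pk_exists = False
        for col in columns_info:
            if col['table_column'] == pk['column_number']:
                pk_exists = True
                if col['display_name'] != pk['name']:
                    return False  # the primary key column was renamed
            elif col['display_name'] == pk['name']:
                return False  # another column aliased to the pk name
        if not pk_exists:
            return False
    return True

pks = [{'name': 'pk_col1', 'column_number': 1}]
# `SELECT pk_col1 AS some_col FROM t` -- renamed pk, so not updatable.
cols = [{'table_column': 1, 'display_name': 'some_col'}]
print(pk_uniquely_exist(pks, cols))  # False
```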


@ -175,6 +175,17 @@ def RegisterQueryToolPreferences(self):
)
)
self.show_prompt_commit_transaction = self.preference.register(
'Options', 'prompt_commit_transaction',
gettext("Prompt to commit/rollback active transactions?"), 'boolean',
True,
category_label=gettext('Options'),
help_str=gettext(
'Specifies whether or not to prompt user to commit or rollback '
'an active transaction on Query Tool exit.'
)
)
self.csv_quoting = self.preference.register(
'CSV_output', 'csv_quoting',
gettext("CSV quoting"), 'options', 'strings',
@ -302,6 +313,24 @@ def RegisterQueryToolPreferences(self):
fields=shortcut_fields
)
self.preference.register(
'keyboard_shortcuts',
'save_data',
gettext('Save data changes'),
'keyboardshortcut',
{
'alt': False,
'shift': False,
'control': False,
'key': {
'key_code': 117,
'char': 'F6'
}
},
category_label=gettext('Keyboard shortcuts'),
fields=shortcut_fields
)
self.preference.register(
'keyboard_shortcuts',
'explain_query',


@ -0,0 +1,310 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2019, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
from flask import render_template
from pgadmin.tools.sqleditor.utils.constant_definition import TX_STATUS_IDLE
try:
from collections import OrderedDict
except ImportError:
from ordereddict import OrderedDict
def save_changed_data(changed_data, columns_info, conn, command_obj,
client_primary_key, auto_commit=True):
"""
This function is used to save the data into the database.
Depending on the operation, it will insert, update or delete
rows in the database.
Args:
changed_data: Contains data to be saved
command_obj: The transaction object (command_obj or trans_obj)
conn: The connection object
columns_info: session_obj['columns_info']
client_primary_key: session_obj['client_primary_key']
auto_commit: If the changes should be committed automatically.
"""
status = False
res = None
query_res = dict()
count = 0
list_of_rowid = []
operations = ('added', 'updated', 'deleted')
list_of_sql = {}
_rowid = None
pgadmin_alias = {
col_name: col_info['pgadmin_alias']
for col_name, col_info in columns_info.items()
}
if conn.connected():
is_savepoint = False
# Start the transaction if the session is idle
if conn.transaction_status() == TX_STATUS_IDLE:
conn.execute_void('BEGIN;')
else:
conn.execute_void('SAVEPOINT save_data;')
is_savepoint = True
# Iterate total number of records to be updated/inserted
for of_type in changed_data:
# No need to go further if it's not an add/update/delete operation
if of_type not in operations:
continue
# If there is no data to save, then continue
if len(changed_data[of_type]) < 1:
continue
column_type = {}
column_data = {}
for each_col in columns_info:
if (
columns_info[each_col]['not_null'] and
not columns_info[each_col]['has_default_val']
):
column_data[each_col] = None
column_type[each_col] = \
columns_info[each_col]['type_name']
else:
column_type[each_col] = \
columns_info[each_col]['type_name']
# For newly added rows
if of_type == 'added':
# A Python dict does not honour the inserted item order,
# so to insert data in order we need to build an ordered
# list of the added indexes. We don't need this mechanism
# for updated/deleted rows, as order does not matter in
# those operations.
added_index = OrderedDict(
sorted(
changed_data['added_index'].items(),
key=lambda x: int(x[0])
)
)
list_of_sql[of_type] = []
# When new rows are added, only the changed columns' data is
# sent from the client side. But if a column is not_null and
# has no default value, set the column to blank instead
# of NULL, which would be set by default.
column_data = {}
pk_names, primary_keys = command_obj.get_primary_keys()
has_oids = 'oid' in column_type
for each_row in added_index:
# Get the row index to match with the added rows
# dict key
tmp_row_index = added_index[each_row]
data = changed_data[of_type][tmp_row_index]['data']
# Remove our unique tracking key
data.pop(client_primary_key, None)
data.pop('is_row_copied', None)
list_of_rowid.append(data.get(client_primary_key))
# Update the column values for columns having
# not_null=False and no default value
column_data.update(data)
sql = render_template(
"/".join([command_obj.sql_path, 'insert.sql']),
data_to_be_saved=column_data,
pgadmin_alias=pgadmin_alias,
primary_keys=None,
object_name=command_obj.object_name,
nsp_name=command_obj.nsp_name,
data_type=column_type,
pk_names=pk_names,
has_oids=has_oids
)
select_sql = render_template(
"/".join([command_obj.sql_path, 'select.sql']),
object_name=command_obj.object_name,
nsp_name=command_obj.nsp_name,
primary_keys=primary_keys,
has_oids=has_oids
)
list_of_sql[of_type].append({
'sql': sql, 'data': data,
'client_row': tmp_row_index,
'select_sql': select_sql
})
# Reset column data
column_data = {}
# For updated rows
elif of_type == 'updated':
list_of_sql[of_type] = []
for each_row in changed_data[of_type]:
data = changed_data[of_type][each_row]['data']
pk_escaped = {
pk: pk_val.replace('%', '%%') if hasattr(
pk_val, 'replace') else pk_val
for pk, pk_val in
changed_data[of_type][each_row]['primary_keys'].items()
}
sql = render_template(
"/".join([command_obj.sql_path, 'update.sql']),
data_to_be_saved=data,
pgadmin_alias=pgadmin_alias,
primary_keys=pk_escaped,
object_name=command_obj.object_name,
nsp_name=command_obj.nsp_name,
data_type=column_type
)
list_of_sql[of_type].append({'sql': sql, 'data': data})
list_of_rowid.append(data.get(client_primary_key))
# For deleted rows
elif of_type == 'deleted':
list_of_sql[of_type] = []
is_first = True
rows_to_delete = []
keys = None
no_of_keys = None
for each_row in changed_data[of_type]:
rows_to_delete.append(changed_data[of_type][each_row])
# Fetch the keys for SQL generation
if is_first:
# We need to convert dict_keys to a normal list in
# Python3.
# In Python2, it's already a list & we will also
# fetch column names using the index.
keys = list(
changed_data[of_type][each_row].keys()
)
no_of_keys = len(keys)
is_first = False
# Map index with column name for each row
for row in rows_to_delete:
for k, v in row.items():
# Set the primary key with its label & delete the
# index-based mapped key
try:
row[changed_data['columns']
[int(k)]['name']] = v
except ValueError:
continue
del row[k]
sql = render_template(
"/".join([command_obj.sql_path, 'delete.sql']),
data=rows_to_delete,
primary_key_labels=keys,
no_of_keys=no_of_keys,
object_name=command_obj.object_name,
nsp_name=command_obj.nsp_name
)
list_of_sql[of_type].append({'sql': sql, 'data': {}})
for opr, sqls in list_of_sql.items():
for item in sqls:
if item['sql']:
item['data'] = {
pgadmin_alias[k] if k in pgadmin_alias else k: v
for k, v in item['data'].items()
}
row_added = None
def failure_handle(res):
if is_savepoint:
conn.execute_void('ROLLBACK TO SAVEPOINT '
'save_data;')
msg = 'Query ROLLBACK, but the current ' \
'transaction is still ongoing.'
else:
conn.execute_void('ROLLBACK;')
msg = 'Transaction ROLLBACK'
# If we rolled everything back, then update the
# message for each sql query.
for val in query_res:
if query_res[val]['status']:
query_res[val]['result'] = msg
# If list is empty set rowid to 1
try:
if list_of_rowid:
_rowid = list_of_rowid[count]
else:
_rowid = 1
except Exception:
_rowid = 0
return status, res, query_res, _rowid
try:
# Fetch oids/primary keys
if 'select_sql' in item and item['select_sql']:
status, res = conn.execute_dict(
item['sql'], item['data'])
else:
status, res = conn.execute_void(
item['sql'], item['data'])
except Exception as _:
failure_handle(res)
raise
if not status:
return failure_handle(res)
# Select added row from the table
if 'select_sql' in item:
status, sel_res = conn.execute_dict(
item['select_sql'], res['rows'][0])
if not status:
if is_savepoint:
conn.execute_void('ROLLBACK TO SAVEPOINT'
' save_data;')
msg = 'Query ROLLBACK, the current' \
' transaction is still ongoing.'
else:
conn.execute_void('ROLLBACK;')
msg = 'Transaction ROLLBACK'
# If we rolled everything back, then update
# the message for each sql query.
for val in query_res:
if query_res[val]['status']:
query_res[val]['result'] = msg
# If list is empty set rowid to 1
try:
if list_of_rowid:
_rowid = list_of_rowid[count]
else:
_rowid = 1
except Exception:
_rowid = 0
return status, sel_res, query_res, _rowid
if 'rows' in sel_res and len(sel_res['rows']) > 0:
row_added = {
item['client_row']: sel_res['rows'][0]}
rows_affected = conn.rows_affected()
# store the result of each query in dictionary
query_res[count] = {
'status': status,
'result': None if row_added else res,
'sql': item['sql'], 'rows_affected': rows_affected,
'row_added': row_added
}
count += 1
# Commit the transaction if no error is found & autocommit is activated
if auto_commit:
conn.execute_void('COMMIT;')
return status, res, query_res, _rowid
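The SAVEPOINT/ROLLBACK pattern above rolls back only the failed save, leaving any surrounding transaction ongoing. The same pattern can be demonstrated against SQLite (which also supports SAVEPOINT); this is a sketch of the pattern with a throwaway table, not pgAdmin's actual connection layer:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.isolation_level = None  # manage the transaction manually
conn.execute('CREATE TABLE t (pk INTEGER PRIMARY KEY, v TEXT)')

conn.execute('BEGIN')
conn.execute("INSERT INTO t VALUES (1, 'one')")
conn.execute('SAVEPOINT save_data')
try:
    conn.execute("INSERT INTO t VALUES (1, 'dup')")  # integrity violation
except sqlite3.IntegrityError:
    # The failed save is rolled back, but the surrounding
    # transaction is still ongoing.
    conn.execute('ROLLBACK TO SAVEPOINT save_data')
conn.execute('COMMIT')

print(conn.execute('SELECT * FROM t').fetchall())  # [(1, 'one')]
```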


@ -45,6 +45,9 @@ class StartRunningQuery:
if type(session_obj) is Response:
return session_obj
# Remove any existing primary keys in session_obj
session_obj.pop('primary_keys', None)
transaction_object = pickle.loads(session_obj['command_obj'])
can_edit = False
can_filter = False


@ -23,6 +23,7 @@ describe('#callRenderAfterPoll', () => {
update_msg_history: jasmine.createSpy('SQLEditor.update_msg_history'),
disable_tool_buttons: jasmine.createSpy('SQLEditor.disable_tool_buttons'),
disable_transaction_buttons: jasmine.createSpy('SQLEditor.disable_transaction_buttons'),
reset_data_store: jasmine.createSpy('SQLEditor.reset_data_store'),
query_start_time: new Date(),
};
alertify = jasmine.createSpyObj('alertify', ['success']);
@ -37,7 +38,7 @@ describe('#callRenderAfterPoll', () => {
sqlEditorSpy.is_query_tool = false;
});
describe('query was successful and have results', () => {
beforeEach(() => {
queryResult = {
rows_affected: 10,
@ -65,7 +66,7 @@ describe('#callRenderAfterPoll', () => {
});
});
describe('query was successful but had no result to display', () => {
beforeEach(() => {
queryResult = {
rows_affected: 10,
@ -81,10 +82,16 @@ describe('#callRenderAfterPoll', () => {
expect(sqlEditorSpy.update_msg_history).toHaveBeenCalledWith(
true,
'Some result\n\nQuery returned successfully in 0 msec.',
true
);
});
it('resets the changed data store', () => {
callRenderAfterPoll(sqlEditorSpy, alertify, queryResult);
expect(sqlEditorSpy.reset_data_store).toHaveBeenCalled();
});
it('inform sqleditor that the query stopped running', () => {
callRenderAfterPoll(sqlEditorSpy, alertify, queryResult);
@ -116,7 +123,7 @@ describe('#callRenderAfterPoll', () => {
sqlEditorSpy.is_query_tool = true;
});
describe('query was successful and have results', () => {
beforeEach(() => {
queryResult = {
rows_affected: 10,
@ -150,7 +157,7 @@ describe('#callRenderAfterPoll', () => {
});
});
describe('query was successful but had no result to display', () => {
beforeEach(() => {
queryResult = {
rows_affected: 10,
@ -166,10 +173,16 @@ describe('#callRenderAfterPoll', () => {
expect(sqlEditorSpy.update_msg_history).toHaveBeenCalledWith(
true,
'Some result\n\nQuery returned successfully in 0 msec.',
true
);
});
it('resets the changed data store', () => {
callRenderAfterPoll(sqlEditorSpy, alertify, queryResult);
expect(sqlEditorSpy.reset_data_store).toHaveBeenCalled();
});
it('inform sqleditor that the query stopped running', () => {
callRenderAfterPoll(sqlEditorSpy, alertify, queryResult);


@ -14,6 +14,7 @@ import gettext from 'sources/gettext';
describe('the keyboard shortcuts', () => {
const F1_KEY = 112,
F5_KEY = 116,
F6_KEY = 117,
F7_KEY = 118,
F8_KEY = 119,
PERIOD_KEY = 190,
@ -109,6 +110,14 @@ describe('the keyboard shortcuts', () => {
key_code: 'r',
},
},
save_data: {
alt : false,
shift: false,
control: false,
key: {
key_code: F6_KEY,
},
},
};
queryToolActionsSpy = jasmine.createSpyObj(queryToolActions, [
@ -121,6 +130,7 @@ describe('the keyboard shortcuts', () => {
'executeQuery',
'executeCommit',
'executeRollback',
'saveDataChanges',
]);
});
@ -176,6 +186,42 @@ describe('the keyboard shortcuts', () => {
});
});
describe('F6', () => {
describe('when there is not a query already running', () => {
beforeEach(() => {
event.which = F6_KEY;
event.altKey = false;
event.shiftKey = false;
event.ctrlKey = false;
keyboardShortcuts.processEventQueryTool(
sqlEditorControllerSpy, queryToolActionsSpy, event
);
});
it('should save the changed data', () => {
expect(queryToolActionsSpy.saveDataChanges).toHaveBeenCalledWith(sqlEditorControllerSpy);
});
expectEventPropagationToStop();
});
describe('when the query is already running', () => {
it('does nothing', () => {
event.keyCode = F6_KEY;
event.altKey = false;
event.shiftKey = false;
event.ctrlKey = false;
sqlEditorControllerSpy.isQueryRunning.and.returnValue(true);
keyboardShortcuts.processEventQueryTool(
sqlEditorControllerSpy, queryToolActionsSpy, event
);
expect(queryToolActionsSpy.saveDataChanges).not.toHaveBeenCalled();
});
});
});
describe('F7', () => {
describe('when there is not a query already running', () => {
beforeEach(() => {


@ -235,6 +235,7 @@ def get_test_modules(arguments):
if test_setup.config_data['headless_chrome']:
options.add_argument("--headless")
options.add_argument("--window-size=1280,1024")
options.add_argument("--disable-infobars")
options.add_experimental_option('w3c', False)
driver = webdriver.Chrome(chrome_options=options)