1) To handle non-ascii filenames, which we set from the table name. Fixes #2314
2) To handle non-ascii query data. Fixes #2253
3) To dump JSON type columns properly in CSV. Fixes #2360
In order to resolve non-ascii characters in paths (in the user directory,
storage path, etc.) on Windows, we have converted the path into the
short path, so that we don't need to deal with the encoding issues
(especially with Python 2).
We've resolved the majority of the issues with this patch.
We still have a couple of issues to resolve after this in the same area.
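The short-path conversion can be illustrated with a minimal ctypes sketch
(the helper name and fallback behaviour are assumptions, not the exact
pgAdmin code):

    import ctypes
    import sys

    def get_short_path(path):
        # 'path' must be a unicode string; only meaningful on Windows.
        if sys.platform != 'win32':
            return path
        length = ctypes.windll.kernel32.GetShortPathNameW(path, None, 0)
        if length == 0:
            return path  # conversion failed; fall back to the original path
        buf = ctypes.create_unicode_buffer(length)
        ctypes.windll.kernel32.GetShortPathNameW(path, buf, length)
        return buf.value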
TODO
* Add better support for non-ascii characters in the database name on
  Windows with Python 3
* Improve the messages generated after the background processes run by
  different modules (such as Backup, Restore, Import/Export, etc.), so
  that they do not show short paths, or XML-escaped representations of
  non-ascii characters, when found in database objects and the file
  path.
Fixes #2174, #1797, #2166, #1940
Initial patch by: Surinder Kumar
Reviewed by: Murtuza Zabuawala
Using 'psycopg2.extensions.UNICODE' (for Python < 3) in the psycopg2
driver for proper conversion of unicode characters. Also adjusted
the string typecaster to take care of the different character types
(char, character, text, name, character varying, and their array types).
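A hedged sketch of the registration (the OID list and the decode helper
below are illustrative assumptions, not the exact driver code):

    import sys
    from psycopg2 import extensions

    if sys.version_info[0] < 3:
        # Return unicode objects instead of byte strings on Python 2.
        extensions.register_type(extensions.UNICODE)
        extensions.register_type(extensions.UNICODEARRAY)

    def cast_as_text(value, cursor):
        # Decode byte strings using the connection encoding (Python 2 only).
        if isinstance(value, bytes):
            return value.decode(
                extensions.encodings[cursor.connection.encoding])
        return value

    # char, name, text, character, character varying
    STRING_TYPES = extensions.new_type(
        (18, 19, 25, 1042, 1043), 'STRING_TYPES', cast_as_text)
    extensions.register_type(STRING_TYPES)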
Reviewed by: Dave Page, Murtuza Zabuawala & Akshay Joshi
Made backend changes for:
* Taking care of the connection status in the psycopg2 driver. When the
  connection is lost, it throws an exception with a 503 HTTP status and
  the connection-lost information in it (see the sketch after this list).
* Allowing the Flask application to propagate exceptions even in
  release mode.
* Utilising the existing password (while reconnecting, if not
  disconnected explicitly).
* Introduced a new AJAX response message 'service_unavailable' (HTTP
  status code: 503), which indicates the service is temporarily
  unavailable.
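The connection-lost exception mentioned above could look roughly like this
(class name, payload and field names are assumptions, not the exact pgAdmin
implementation):

    from flask import jsonify
    from werkzeug.exceptions import HTTPException

    class ConnectionLost(HTTPException):
        code = 503  # temporary service unavailable

        def __init__(self, sid, did, conn_id):
            self.sid, self.did, self.conn_id = sid, did, conn_id
            super(ConnectionLost, self).__init__()

        def get_response(self, environ=None, **kwargs):
            # Return the 503 response carrying the connection-lost details.
            res = jsonify({
                'success': 0,
                'errormsg': 'Connection to the server has been lost.',
                'info': 'CONNECTION_LOST',
                'data': {
                    'sid': self.sid, 'did': self.did, 'conn_id': self.conn_id
                }
            })
            res.status_code = self.code
            return res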
Client (front-end) changes:
* To handle the lost connection of a database server during different
  operations by generating proper events, and handling them properly.
  Removed the connection status check code from different nodes, so
  that the proper exception is generated when accessing a non-alive
  connection.
Fixes #1387
1) No handling for the INTERVAL datetime type.
For example, executing the query
SELECT INTERVAL '15 minutes';
throws a JSON serialization error, because the value is returned as a timedelta, which is not handled.
Added support for handling the timedelta format in the DataTypeJSONEncoder class.
2) Fetching BC dates from the database raises 'ValueError: year is out of range'.
For example:
SELECT TIMESTAMP '0044-03-15 10:00:00 BC';
This is because psycopg2 doesn't handle the BC datetime format.
So we have defined our own method, which typecasts the datetime value to a string in psycopg2, overriding the default behaviour.
Reference:
http://initd.org/psycopg/docs/advanced.html#type-casting-from-sql-to-python
ccf3693be6/pgcli/pgexecute.py
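A hedged sketch combining both fixes (the OID list and the registration
mechanics below are illustrative assumptions, not the exact driver code):

    import json
    from datetime import timedelta
    from psycopg2 import extensions

    class DataTypeJSONEncoder(json.JSONEncoder):
        def default(self, obj):
            # INTERVAL values arrive as datetime.timedelta; serialise as text.
            if isinstance(obj, timedelta):
                return str(obj)
            return json.JSONEncoder.default(self, obj)

    # Cast timestamp/timestamptz (OIDs 1114, 1184) to plain strings, so BC
    # dates never have to fit into Python's datetime range.
    def cast_timestamp_as_string(value, cursor):
        return value

    PG_TIMESTAMP = extensions.new_type(
        (1114, 1184), 'PG_TIMESTAMP', cast_timestamp_as_string)
    extensions.register_type(PG_TIMESTAMP)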
As we convert the binary password to a string while storing the
connection information, we also need to convert it back to a byte array
while restoring the connections.
Hence, decode the password using 'utf-8' encoding while storing the
connection information, and encode it while restoring the connections.
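A minimal sketch of the round trip (the variable names are illustrative):

    password = b'secret-from-the-crypto-layer'  # illustrative byte string
    # While storing the connection information: bytes -> unicode
    stored = password.decode('utf-8')
    # While restoring the connection: unicode -> bytes
    restored = stored.encode('utf-8')
    assert restored == password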
operations like backup, restore, etc. within it.
Also:
* Improved the color combination of the background process logger.
* Removed an unnecessary print statement from the
  get_storage_directory(..) function; also return None if STORAGE_DIR
  is set to None.
server.
The current implementation keeps the cursor object in the 'g' (current
request context) object, identified by the connection-id.
But it fails to identify a request for a different connection object
when we have the same database name on different database servers; it
is not able to treat it as a separate request. Hence, we will now use a
server-id qualified identifier for the same object.
Thanks Neel Patel for pointing out the issue.
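An illustrative sketch of the change (the key format and names below are
assumptions, not the actual pgAdmin identifiers):

    def cursor_key(server_id, database_name, conn_id):
        # Qualify the per-request cursor identifier with the server id, so
        # identically named databases on different servers no longer collide.
        return 'CURSOR:{0}:{1}:{2}'.format(server_id, database_name, conn_id)

    # Inside a request: setattr(g, cursor_key(sid, dbname, conn_id), cursor)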
DriverRegistry is executed a second time.
Also, initialize the driver before registering the different blueprints
which use those drivers inside them.
Thanks Khushboo for reporting the issue.
At the moment, we only allow fetching status messages from the
asynchronous connection; later, we may implement fetching the status
message from the normal connection too.
This commit takes care of the following issues/improvements.
* Adding missing imports for unauthorised in database module
* Nodes under the Servers node (i.e. the Databases, Tablespaces, and
  Roles nodes) need to inherit from PGChildNodeView (and not from
  NodeView) in order to add server version checks for their children.
* Adding statistics for databases and tablespaces in their nodes (not
  yet in the UI).
* Renaming the camel case methods with proper names
(i.e. getSQL -> get_sql, getNewSQL -> get_new_sql, etc.)
* Fixed the functions going beyond the text limit (column: 80) in the
  Databases, Roles & Tablespaces modules.
* Fixed the node method of the Database module, which had never been
  tested.
* We do not need a separate SQL template for fetching the name (i.e.
  get_name.sql); using the 'nodes.sql' template for the same.
* Optimised the query for fetching ACLs for the database node; we do
  not need to join certain tables when fetching only the ACLs.
* Introduced the list of ACLs (regular and default ACLs for different
  types) supported by each version [Databases Module].
* Renamed the template 'get_nodes.sql' to 'nodes.sql' to make it
  consistent with the other modules.
* Removed the checks for use of the authentication table, as we don't
  need to expose the password to the users (even the encrypted MD5).
  Always using the pg_roles view for fetching role/user information
  now.
* Resolved some typos in unreachable code (especially in the exception
  handling areas).
* Logging the exception in the application.
* Using qtLiteral and qtIdent properly in the templates (do not make
  assumptions about the types of data).
* Using tsid as identifier instead of did for the tablespaces.
* Using the nodes method of the tablespace view for fetching
  individual node information.
* Removing the hardcoded node information from the 'parse_priv_to_db'
  function, and passing on the allowed ACLs from the caller nodes.
* Using 'nodes.sql' to fetch the name of the object instead of doing
  that in the delete.sql template, which is definitely the wrong place
  to fetch the name of the object. [Tablespace Module]
of a double quote within it.
The 'replace' function of the immutable string type creates a new
string instance, which needs to be reassigned to the variable
'value'.
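For illustration, the result of replace() must be assigned back, because
strings are immutable:

    value = 'a "quoted" word'
    # str.replace() returns a new string; 'value' keeps pointing at the old
    # string unless we reassign the result.
    value = value.replace('"', '\\"')  # illustrative replacement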
Reported By: Murtuza Zabuawala
It was due to a wrong index being used for COL_NAME_KEYWORD by the
'needsQuoting' function.
Also, replaced a tab with a space in the generate_keyword.py file.
Reported by: Sanket Mehta, Khushboo Vashi
with a schema specified object name. This also works with
EnterpriseDB's packages too.
Also, added a dependency on the current connection object; it will
allow us to use the keyword list specific to the database server in
future.
You can use the qtIdent template filter as:
conn|qtIdent(name)
or,
conn|qtIdent(schema_name, name)
inputs.
In order to do proper quoting around identifiers, types, and literals,
we introduced the respective functions qtIdent, qtTypeIdent, and
qtLiteral in the psycopg2 driver. Also introduced them as Jinja custom
filters, for use directly inside the templates.
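A hedged, self-contained sketch of the idea (the quoting bodies below are
simplified assumptions, not the driver's actual implementation):

    from flask import Flask

    def qtIdent(conn, *names):
        # Double-quote each identifier, escaping embedded double quotes.
        return '.'.join('"{0}"'.format(n.replace('"', '""')) for n in names)

    def qtLiteral(conn, value):
        # Single-quote a literal, escaping embedded single quotes.
        return "'{0}'".format(str(value).replace("'", "''"))

    app = Flask(__name__)
    app.jinja_env.filters['qtIdent'] = qtIdent
    app.jinja_env.filters['qtLiteral'] = qtLiteral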
Also, created a utility - generate_keywords.py - in order to generate
the keyword lists from the latest PostgreSQL installation.
- Allow releasing a connection using the database OID (did).
- Generate correct url for the collection nodes.
- Removed the server-type from the collection node module, from the
generate_browser_collection_node(...) function.
- Show version string, and not version integer in the server properties
(when connected).
connect to the database server. Also, modified the way we check
whether the node is supported by the server.
Instead of creating a separate blueprint for each server type, they
will now work independently. In order to add a different variant of
PostgreSQL, we need to extend the ServerType class, and override the
'instanceOf' function to identify it using the version string. Please
take a look at ppas.py for an example.
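A rough sketch of that extension point (the PPAS check below is an
assumption based on the description, not the actual ppas.py code):

    class ServerType(object):
        @classmethod
        def instanceOf(cls, version_string):
            # The base PostgreSQL type accepts any server.
            return True

    class PPASServerType(ServerType):
        @classmethod
        def instanceOf(cls, version_string):
            # Identify EnterpriseDB's Advanced Server from its version string.
            return 'EnterpriseDB' in version_string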
While checking the back-end support for a node, we will also check the
server type (variant), along with whether the version is within the
node's minimum and maximum supported versions. The same support has
been added for the schema attributes in the front-end (JavaScript).
Really thankful to Khushboo Vashi for her initial work on the
front-end. I took it further from there.
driver)
1. Update the correct variable for database information in the
   connection manager.
2. Raise an exception when the database is not found, instead of
   returning a (boolean, error message) tuple.
manager, once the connection to the maintenance database has been made,
because we will have the did (i.e. database OID) for any node most of
the time, and not its name, as the identifier.
This will allow us to work directly with the OID, and we will not need
to bother about the renaming of the database.
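A minimal sketch of the idea, assuming a manager-level dict keyed by the
database OID (names are illustrative):

    class ServerManager(object):
        def __init__(self):
            self.db_info = {}

        def register_database(self, did, datname):
            # Cache the OID -> name mapping once connected to the maintenance
            # database; the OID survives a rename, so callers can keep using
            # 'did' as the identifier.
            self.db_info[did] = {'did': did, 'datname': datname}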