Serverless Computing: Look Ma, no Guarantees

This blog explores failure behavior in the context of AWS Lambda functions being invoked by (transactional) MySQL triggers.

In Short: No Guarantees

There is no transactional coordination between an ongoing database transaction during trigger execution and an AWS Lambda function invoked by that trigger: a function failure does not necessarily cause a transaction rollback, and vice versa. No transactional guarantees, aka no consistency guarantees, are provided out of the box by the combination of MySQL triggers and AWS Lambda function execution.

It is a DIY moment.

Execution Styles

On a high level, two execution styles can be distinguished:

  • OLTP (Online Transaction Processing)
    • Client initiates the execution of functionality synchronously via an invocation
    • Client receives the success or failure response at the end of the invocation, in the same synchronous invocation thread
  • Event (asynchronous execution from client viewpoint)
    • Client initiates the execution of functionality asynchronously by submitting a request, and immediately receives the response that the request was accepted (or not).
    • The client does not get any result of the invocation itself in the initial request submission
    • Server continues the execution, and possibly uses events to coordinate various execution steps
    • Client submits an identifier with the request (later on implemented as ‘correlation_id’) that can be used by the client to check on the asynchronous execution state
    • Any success or failure will be known later in an asynchronous operation: the client polls for the invocation outcome using the ‘correlation_id’, or it will be notified of the outcome via a callback invocation

In the following I focus exclusively on event-style execution; all discussion as well as the implementation is in the context of this style. Retrieving the result or status of asynchronous invocations is done via polling, not callbacks.

Example Logic

In the context of this blog, the client initiates the invocation by inserting a row into a table. The insert fires a trigger that in turn calls an AWS Lambda function. The function execution result is placed into the row. If an error happens, error information is recorded in the row instead of a result. The client can retrieve the result, the status or any error that took place by querying the table. In order to know which row to look for, the client inserts a ‘correlation_id’ and uses it as the primary key to find the row it is looking for.

The table is defined as follows:

CREATE TABLE fib.fibonacci (
  fib INTEGER,
  value INTEGER,
  correlation_id CHAR(36),
  success BOOLEAN,
  error_sqlstate CHAR(5),
  error_msg TEXT,
  error_errno INTEGER,
  PRIMARY KEY (correlation_id)
);

The columns represent the following data:

  • fib. This column is set by the client and contains the number for which the Fibonacci value should be computed.
  • value. This column will contain the Fibonacci value corresponding to ‘fib’ as computed by the invoked function. It is NULL if the value has not been computed or an error occurred.
  • correlation_id. This column is set by the client in order to distinguish the different rows (aka, asynchronous invocations).
  • success. This column is a Boolean value indicating the success of the invocation (MySQL uses 1 for true, and 0 for false).
  • error_sqlstate. This column contains the MySQL SQLSTATE in case an error happened.
  • error_msg. This column contains the MySQL error message (MESSAGE_TEXT) in case an error happened.
  • error_errno. This column contains the MySQL error number (MYSQL_ERRNO) in case an error happened.

For this insert

INSERT INTO fib.fibonacci VALUES ( 2, null, '22', null, null, null, null);

the result row looks like

| fib | value | correlation_id | success | error_sqlstate | error_msg | error_errno |
+-----+-------+----------------+---------+----------------+-----------+-------------+
| 2   | 1     | '22'           | 1       | NULL           | NULL      | NULL        |

The following insert causes an error

INSERT INTO fib.fibonacci VALUES (-2, null, '-22', null, null, null, null);

and the result row looks like

| fib | value | correlation_id | success | error_sqlstate | error_msg                           | error_errno |
+-----+-------+----------------+---------+----------------+-------------------------------------+-------------+ 
| -2  | NULL  | '-22'          | 0       | '45000'        | 'Signal before function invocation' | 1091        |

Failure Points Analysis: Principal Points of Failure

There are different possible points of failure during the trigger execution. Given the combination of MySQL triggers and an AWS Lambda function invocation, the various failure points are discussed next.

The trigger has the following structure (some details omitted):

CREATE TRIGGER fibcomp BEFORE INSERT
ON fibonacci
FOR EACH ROW
BEGIN
<local variable declarations>
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
  BEGIN
        GET DIAGNOSTICS CONDITION 1
        error_sqlstate = RETURNED_SQLSTATE,
        error_msg = MESSAGE_TEXT,
        error_errno = MYSQL_ERRNO;
        SET NEW.error_sqlstate = error_sqlstate;
        SET NEW.error_msg = error_msg;
        SET NEW.error_errno = error_errno;
        SET NEW.success = FALSE;
  END;
SET function_result_message = lambda_sync('arn:aws:lambda:<region:id>:function:fibonacci',
                                          CONCAT('{"fib": ', NEW.fib, '}'));

CALL json_extract_property(function_result_message, 'output', output_value_as_string, property_type);

IF (output_value_as_string IS NULL)
   THEN
        SET NEW.error_errno = "-1";
        SET NEW.error_msg = CONCAT('No output value received: ', function_result_message);
   ELSE
        SET NEW.value = output_value_as_string;
        SET NEW.success = TRUE;
   END IF;
END

Basically, when an insert takes place, the AWS Lambda function ‘fibonacci’ is called. The result is given to the recursive descent parser that I implemented in order to extract the ‘output’ property. The output is analyzed and either inserted into the row or, if there is a failure, the error is inserted instead. The continue handler catches every exception and inserts detailed error information into the row.

In relation to an AWS Lambda function invocation, a trigger can fail:

  • Before the function execution
  • During the function execution
  • After the function execution

For any failure, the following semantics apply:

  • The failure is recorded in the row so that a client can retrieve the failure details (using the ‘correlation_id’)
  • The service implementation, not the client, would ideally perform any error handling and retry logic in order to get to a final resolution (of success or terminal failure). The client should not be involved in error recovery and resolution unless the server has exhausted all possible resolution options.
  • In terms of MySQL, the error handler is declared as a continue handler so that it can insert error details into the row. The trigger logic is built so that after an error the trigger execution completes (and does not continue past the failure point).
  • The error (or success) is recorded in the ‘success’ column for the client to determine the state: in case of an error 0 (false) is recorded, in case of success 1 (true)
  • To check the execution status, a client queries the row it is interested in by providing the ‘correlation_id’ as the selection criterion (a sketch follows below)
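
A minimal sketch of this submit-and-poll pattern from the client side (the UUID()-based correlation id is an assumption for illustration; the blog's own examples simply use literals such as '22'):

-- Client side, sketched: generate a correlation id, submit the request,
-- then query the same row for the outcome.
SET @cid = UUID();

INSERT INTO fib.fibonacci VALUES (10, null, @cid, null, null, null, null);

-- Poll for the outcome; a missing row indicates that a rollback took place.
SELECT success, value, error_sqlstate, error_msg, error_errno
FROM fib.fibonacci
WHERE correlation_id = @cid;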

Deterministic Chaos Monkey: Failures and Outcome

One way to actually force errors at the various failure points is to introduce a deterministic chaos monkey that creates a failure on command, as follows.

A ‘Chaos Monkey’ database function is implemented (in a very simple deterministic approach at this point) that causes different failures based on the input value. The chaos monkey is invoked at three locations: before, during, and after the AWS Lambda function invocation.
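
A minimal sketch of such a chaos monkey as a MySQL stored procedure (the procedure name, signature, and exact mapping of input values to failures are assumptions for illustration; the AWS Lambda-side failures for fib = -3, -4, -5 are induced inside the function implementation itself and are not shown here):

DELIMITER $$
CREATE PROCEDURE fib.chaos_monkey (IN location VARCHAR(10), IN fib INT)
BEGIN
  -- Fatal cases: sleep long enough so that the database OS process can be killed manually.
  IF location = 'before' AND fib = -1 THEN
    DO SLEEP(60);
  ELSEIF location = 'after' AND fib = -6 THEN
    DO SLEEP(60);
  -- Non-fatal cases: raise a signal at the requested failure point.
  ELSEIF location = 'before' AND fib = -2 THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Signal before function invocation', MYSQL_ERRNO = 1091;
  ELSEIF location = 'after' AND fib = -7 THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Signal after function invocation', MYSQL_ERRNO = 1092;
  END IF;
END$$
DELIMITER ;

The trigger would then call the procedure at the corresponding locations, for example CALL fib.chaos_monkey('before', NEW.fib); immediately before the lambda_sync invocation.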

The induced errors and their triggering input values are discussed next by failure point.

Before Function Invocation

  • fib = -1. Fatal: have the trigger sleep for a minute and crash the database by manually killing the OS process running the database
    • Rollback takes place after restart, aka, the insert has no effect
    • Client realizes that no row with the provided ‘correlation_id’ can be found, so it knows that a rollback took place
    • The failure test is done on a locally running MySQL database as it is not possible to crash the database intentionally on AWS
  • fib = -2. Non-fatal: signal
    • No rollback takes place and the insert is successful, but execution does not finish successfully
    • The error is recorded in the row
| fib | value | correlation_id | success | error_sqlstate | error_msg                           | error_errno |
+-----+-------+----------------+---------+----------------+-------------------------------------+-------------+ 
| -2  | NULL  | '-22'          | 0       | '45000'        | 'Signal before function invocation' | 1091        |

AWS Lambda Function Execution Failure

  • fib = -3. Implementation failure caused by a division by zero
    • The function invocation returns an incomplete stack trace with that information embedded (shown separately below as (1) for formatting reasons)
| fib | value | correlation_id | success | error_sqlstate | error_msg | error_errno |
+-----+-------+----------------+---------+----------------+-----------+-------------+ 
| -3  | NULL  | '-33'          | 0       | 'HY000'        | <see (1)> | 1873        |

(1)
'Lambda API returned error: Unhandled. {\n
   \"errorMessage\" : \"/ by zero\",\n
 \"errorType\" : \"java.lang.ArithmeticException\",\n
   \"stackTrace\" : [\n
 \"org.blog.fibonacci.Fibonacci.errorMonkey(Fibonacci.java:190)\",\n
      \"org.blog.fibonacci.Fibonacci.handleRequest(Fibonacci.java:137)\",\n
 \"sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\",\n
      \"sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\",\n
 \"sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMetho'
  • fib = -4. Environment failure by exceeding the AWS Lambda execution time limit (the function invocation sleeps for 10 seconds with an AWS Lambda function execution time limit of 9 seconds)
    • An error message comes back; however, it does not state that the limit was violated, only that the ‘task’ timed out
| fib | value | correlation_id | success | error_sqlstate | error_msg | error_errno |
+-----+-------+----------------+---------+----------------+-----------+-------------+ 
| -4  | NULL  | '-44'          | 0       | 'HY000'        | <see (2)> | 1873        |

(2)
'Lambda API returned error: Unhandled. {\n
   \"errorMessage\" : \"2018-09-25T13:55:22.381Z 9fe96566-c0ca-11e8-bdeb-17decae46851 Task timed out after 9.01 seconds\"\n
}\n'
  • fib = -5. Environment failure by exceeding the memory limit set (a string is created of size 128MB + 1 byte with an AWS Lambda function limit set to 128MB)
    • Again, the error message coming back only states that there is an out of memory error, not that the set limit was exceeded.
| fib | value | correlation_id | success | error_sqlstate | error_msg | error_errno |
+-----+-------+----------------+---------+----------------+-----------+-------------+ 
| -5  | NULL  | '-55'          | 0       | 'HY000'        | <see (3)> | 1873        |

(3)
'Lambda API returned error: Unhandled. {\n
   \"errorMessage\" : \"Java heap space\",\n
 \"errorType\" : \"java.lang.OutOfMemoryError\",\n
   \"stackTrace\" : [\n
 \"java.util.Arrays.copyOf(Arrays.java:3332)\",\n
 \"java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)\",\n
      \"java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)\",\n
 \"java.lang.StringBuilder.append(StringBuilder.java:202)\",\n
 \"java.util.Formatter$FormatSpecifier.justify(Formatter.java:2926)\",\n'

Some observations:

  • All AWS Lambda function execution errors have the same ‘sqlstate’ and ‘errno’
    • HY000
    • 1873
  • Distinguishing the errors and determining the reason requires parsing the error message (a sketch of such a classification follows below). This is a bummer for various reasons.
  • The errors do not necessarily state the complete context for the failure, especially when limits are violated
  • A network problem was also encountered intermittently, and I am not sure why. So errors happen, and this unexpected error makes the point of this blog (and its subsequent ones) quite nicely.
| fib | value | correlation_id | success | error_sqlstate | error_msg | error_errno |
+-----+-------+----------------+---------+----------------+-----------+-------------+ 
| -4  | NULL  | '-44'          | 0       | 'HY000'        | <see (4)> | 1873        |

(4)
'Lambda API returned error: Network Connection. Unable to connect to endpoint'

I tried to find a complete list of possible AWS Lambda errors, but was not able to.
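
Since ‘sqlstate’ and ‘errno’ do not discriminate between the different AWS Lambda failure causes, recovery or monitoring logic is left with parsing the message text. A minimal sketch of such a classification (the message fragments are taken from the errors shown above; any additional failure causes would need their own patterns):

-- Rough classification of AWS Lambda invocation errors by message content.
SELECT correlation_id,
       CASE
         WHEN error_msg LIKE '%Task timed out%'     THEN 'timeout'
         WHEN error_msg LIKE '%OutOfMemoryError%'   THEN 'memory limit'
         WHEN error_msg LIKE '%Network Connection%' THEN 'network'
         WHEN error_msg LIKE '%Unhandled%'          THEN 'function error'
         ELSE 'unknown'
       END AS failure_class
FROM fib.fibonacci
WHERE success = 0 AND error_errno = 1873;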

After Function Invocation

  • fib = -6. Fatal: have the trigger sleep for a minute and crash the database by manually killing the OS process running the database
    • Rollback takes place after restart, aka, the insert has no effect
    • The client realizes this by not finding a row with its known ‘correlation_id’
  • fib = -7. Non-fatal: signal
    • No rollback takes place and the insert is successful, but execution does not finish successfully
    • It is unknown if the function invocation succeeded or not. Any recovery requires checking if any function side effect took place
| fib | value | correlation_id | success | error_sqlstate | error_msg                          | error_errno |
+-----+-------+----------------+---------+----------------+------------------------------------+-------------+ 
| -7  | NULL  | '-77'          | 0       | '45000'        | 'Signal after function invocation' | 1092        |

The following is the complete table after triggering each error once:

| fib | value | correlation_id | success | error_sqlstate | error_msg                           | error_errno |
+-----+-------+----------------+---------+----------------+-------------------------------------+-------------+ 
| -2  | NULL  | '-22'          | 0       | '45000'        | 'Signal before function invocation' | 1091        |
| -3  | NULL  | '-33'          | 0       | 'HY000'        | <see (1)>                           | 1873        |
| -4  | NULL  | '-44'          | 0       | 'HY000'        | <see (2)>                           | 1873        |
| -5  | NULL  | '-55'          | 0       | 'HY000'        | <see (3)>                           | 1873        |
| -7  | NULL  | '-77'          | 0       | '45000'        | 'Signal after function invocation'  | 1092        |

Error Recovery

At this point in the discussion the errors, if they take place, are recorded. The upcoming blogs will focus on error recovery, and what has to be in place in order to actually be able to recover.

The function type and behavior play a big role. A function that is idempotent can simply be invoked again, and trigger execution can continue.

If a function is non-idempotent, it might be that the function was executed, but the result was never obtained by the trigger (aka, the return value was lost). Unless it is known whether the function was executed successfully, it is impossible to recover from the error correctly and completely.

Two options for probing a function’s execution success are:

  • An idempotent check function that checks whether the original function was executed successfully. Each regular function needs to be paired up with a check function for this approach. The check function must be able to return the result in order to recover the originally lost return value.
  • The function supports an idempotency token. This invocation style requires that the client generates a separate token for each invocation and uses the same token for a repeat invocation to indicate that this is an attempt to execute the function again. The function implementation, receiving the idempotency token, understands that this invocation may be a probing invocation and can return the result if it was already computed, or initiate the computation (see the sketch below).
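
A minimal sketch of the second option in context of the trigger shown earlier (reusing the ‘correlation_id’ as the idempotency token is an assumption, and the function implementation would have to understand and honor the additional property):

-- Inside the trigger: pass the correlation id as an idempotency token
-- with the initial invocation as well as with any repeat invocation.
SET function_result_message = lambda_sync('arn:aws:lambda:<region:id>:function:fibonacci',
                                          CONCAT('{"fib": ', NEW.fib,
                                                 ', "idempotency_token": "', NEW.correlation_id, '"}'));

On a repeat invocation with the same token the function can detect that it already ran, return the stored result (recovering the lost return value), and skip re-executing any side effects.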

Function Invocation Chain

As a teaser, the following requires discussion next: what types of function invocation chains are assumed?

There are two dimensions:

  • One trigger calls more than one function
  • A function calls another function as part of its implementation
  • A combination of both

In each case, every function can either be idempotent or modify state without being transactionally bound to, or coordinated with, the trigger.

Summary

Since AWS Lambda functions and MySQL triggers are not transactionally bound, errors during execution can leave an inconsistent state behind. If consistency is required even in the presence of failures, error detection, analysis, recovery and resolution have to be designed and implemented in the triggers as well as in the AWS Lambda functions.

Go Serverless!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

 


Serverless Computing: AWS Lambda Function Invocation from Aurora MySQL – Second Part of a Journey

Today the journey of invoking an AWS Lambda function and storing its result into a table is completed. Some more issues had to be overcome.

Recap

The previous blog left off with the realization that in MySQL 5.6 there is no native facility to parse JSON objects as returned from the AWS Lambda invocation.

Recursive Descent Parser

To keep things simple, I implemented a recursive descent parser in MySQL (not 100% complete, only the part needed for this blog – still missing functionality will be implemented later as needed). The parser also extracts the property being sought at the same time.

The signature is as follows:

CREATE PROCEDURE json_extract_property
  (IN  json_document TEXT,
   IN property_name VARCHAR(32),
   OUT property_value VARCHAR(32),
   OUT property_type VARCHAR(6))

Fundamentally, given a JSON document and a property name, the procedure returns the property value and its JSON type if the property exists. Otherwise the value is SQL NULL, which allows testing whether a value was found or not.
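
A usage sketch (the sample document is a simplified stand-in for an actual function result; the user variables serve as placeholders for the OUT parameters):

-- Extract the 'output' property from a (simplified) function result document.
CALL json_extract_property('{"output": "5"}', 'output', @property_value, @property_type);

SELECT @property_value, @property_type;
-- @property_value should hold the extracted value (here '5');
-- @property_type reports the JSON type of the value.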

Any error encountered is raised as a SQL exception according to the MySQL approach. For example, if an incorrect character is found in the JSON document, the following exception is raised:

SET message = CONCAT('Not a JSON document or JSON array: found incorrect "', 
                     current_char, '" at position: ', next_char_index - 1);
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = message,
MYSQL_ERRNO = 1001;

The variable current_char contains the character in question at this point of the parser execution, and next_char_index points at the next character to be examined.

MySQL Table for Fibonacci

The functionality is as follows. The client inserts a number (into a table) for which the Fibonacci number should be computed. The table is defined as follows:

CREATE TABLE fib.fibonacci (
    fib INT,
    value INT,
    code CHAR(5),
    msg TEXT
);

The table has four columns: one holds the number for which the Fibonacci number should be computed, one holds the Fibonacci number itself, and two columns are for error handling – one holding the error code and one the error message.

The client inserts a row as follows:

INSERT INTO fib.fibonacci VALUES (5, null, null, null);

This then causes a trigger to be executed that computes the Fibonacci number via AWS Lambda invocation and inserts the resulting value (or error message). The trigger is explained next.

MySQL Trigger Computing Fibonacci

The trigger is defined as follows:

DELIMITER $$
CREATE TRIGGER fibcomp BEFORE INSERT
ON fibonacci
FOR EACH ROW
BEGIN
DECLARE function_result_message TEXT;
DECLARE output_value_as_string VARCHAR(32);
DECLARE property_type VARCHAR(6);
DECLARE code CHAR(5) DEFAULT '00000';
DECLARE msg TEXT;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
        BEGIN
        GET DIAGNOSTICS CONDITION 1
        code = RETURNED_SQLSTATE, msg = MESSAGE_TEXT;
        SET NEW.code = code;
        SET NEW.msg = msg;
        END;

SET function_result_message = lambda_sync('arn:aws:lambda:us-west-2:<acct>:function:fibonacci',
                                          CONCAT('{"fib": ', NEW.fib, '}'));
CALL json_extract_property(function_result_message, 'output', output_value_as_string, property_type);
IF (output_value_as_string IS NULL)
THEN
    SET NEW.code = "-1";
    SET NEW.msg = 'No output value received';
ELSE
    SET NEW.value = output_value_as_string;
END IF;
END$$
DELIMITER ;

The trigger invokes the AWS Lambda function and passes the result to the recursive descent parser. If a result is found, it is added to the row the user is inserting. If no value is found, the code and error message columns are set. If any error is raised, it is recorded in the row as well.

After the above insert, the result is as follows:

| fib | value | code | msg  |
+-----+-------+------+------+
| 5   | 5     | null | null |

Issues Encountered

This second part of the journey was not without issues, either. Here is a short summary.

The AWS RDS web user interface sometimes does not work. When I made a change to my MySQL instance, the change was not recognized by the UI and I could not apply the change.

The resolution to this is to use the REST endpoints in order to modify the MySQL instance.

A bigger “issue” is that the function “lambda_sync” provided by AWS formats (pretty prints) the resulting JSON document before returning it, adding newline characters (‘0A’) in the process. It took me a while to realize that.

The resolution to this problem was to add the following to the parser as a first execution:

SET normalized_json_document = REPLACE(REPLACE(json_document, '\r', ''), '\n', '');

This ensures that any formatting characters added by lambda_sync are removed before parsing starts.

Note on Engineering with MySQL

In order to work efficiently while implementing the recursive descent parser, I installed MySQL locally instead of using the hosted version. This was very convenient and is a route to consider.

Summary

In summary, I was able to implement the AWS Lambda function in such a way that a single function can be invoked from MySQL as well as via the AWS API Gateway, providing the same result structure.

In principle, the trigger mechanism allows the invocation of functions as well as the processing of their results – even though there is no dedicated support infrastructure provided by AWS for this. I assume this will get addressed over time by appropriate upgrades of MySQL as well as by making the lambda_sync function more broadly available.

Go Serverless!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

 

Serverless Computing: AWS Lambda Function Invocation from Aurora MySQL – First Part of a Journey

Lambda Invocation from Aurora MySQL: Setup

After creating an instance of Aurora MySQL, it must get privileges assigned in order for it to be able to invoke AWS Lambda. Giving Aurora MySQL access to AWS Lambda is described on the following page, and I followed that process to the last detail: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Integrating.Lambda.html

Lambda: First Invocation from Aurora MySQL

A first invocation of my AWS Lambda function is implemented as a simple SQL select statement – mostly for testing. It looks like the following:

SELECT lambda_sync('arn:aws:lambda:us-west-2:<...>:
                    function:fibonacci',
                   '{"fib": 5}') 
       AS 'result';

First Trouble: Input JSON Document Inconsistency

The return value of the select statement is a BLOB and it contains an error message:

{
   "body" : "{\"output\":\"Event body is null\",
              \"input\":\"{\\\"fib\\\":5}\"}",
   "isBase64Encoded" : false,
   "statusCode" : "400"
}

Now that is curious as the invocation with the exact same payload works when going through the API Gateway:

POST /test/ HTTP/1.1
Host: eflqvnlyoj.execute-api.us-west-2.amazonaws.com
content-type: application/json
Cache-Control: no-cache
Postman-Token: c5102b2c-bd89-4d89-af37-2fd46124a55a
{
"fib": 5
}

Returns

{
    "output": "5",
    "input": <...>
}

Error Analysis

The error message “Event body is null” is coming from my own function implementation. This error message is returned when the function cannot find a property called “fib” in the input payload. It looks like it finds “fib” when invoked through the API Gateway, but it does not find “fib” when invoked through Aurora MySQL. Why is that?

Based on the CloudWatch logs, the value of the input parameter (InputStream) of the handleRequest method differs depending on whether the invocation comes from the API Gateway or from the Aurora MySQL trigger. The difference is as follows.

In case of the API Gateway the InputStream contains a JSON document consisting of the path, path parameters, headers, etc., providing the context of the REST invocation. The actual input payload is contained in a JSON top-level property called “body”.

In case of the invocation from Aurora MySQL the input is the JSON document as defined in the SQL select statement above (without further context like path, path parameters, etc. as it is in context of the API Gateway). Specifically, it is not wrapped within a JSON document with a property “body”.

Disappointing Inconsistency

First of all, this inconsistency is disappointing from a software engineering perspective, as a part of the function implementation now has to be conditional on the client invoking it. The payload from different clients is structured differently by AWS. Not good.

What should have happened is that the payload should be at the same location, no matter the invocation context or client. In addition, the context should have been qualified and separated out as it would have a different content depending on the client. For example, the InputStream value structure could have been like this:

{
  "body": <payload as sent>,
  "context": <a generic context object with a context type>
}

The value of the context type could denote the kind of invoking client (environment), like "API Gateway", "Aurora MySQL 5.6", etc., with details following that outline the specifics of the invoking client. This approach would have allowed a uniform function invocation as well as a uniform function implementation.

Error Resolution

There are different ways to deal with the inconsistent situation. One would be to refactor the function into three parts:

  • One function to be used when invoked from the API Gateway
  • One function to be used when invoked from the Aurora MySQL
  • One “core” function that implements the “raw” functionality and is reused by the previous two functions that are client specific

While this would be a possible approach, it would mean limited reuse of the client-specific functions and a proliferation of the number of functions to manage within the AWS Lambda implementation and environment (in my example it would be three functions instead of one).

My idea, however, is to use the same single function for both clients (in my case). This would be possible if there were a way to check from within the function implementation where the invocation originates from, or if it were possible to probe the InputStream for indications about the invoking client.

Note, there is a context being passed into the function invocation as well (com.amazonaws.services.lambda.runtime.Context); however, this context does not provide a reliable way to distinguish the type of client. It is not fully populated; in particular, it does not contain a client type designation.

Therefore my chosen solution is:

  • Probe for "body" (assuming that every API Gateway invocation adds this property to the InputStream)
  • If found, then it must be an API Gateway call
  • If not found, then it is an Aurora MySQL invocation

Clearly, this is not a general solution as functions can be invoked from many more types of clients, however, for the current implementation this approach is sufficient. In a complete implementation all possible clients of functions would have to be supported.

Modified Function Implementation

The function implementation now tests for “body”. If “body” is found (API Gateway), then this is extracted to be the “real” input to the function. If “body” is not found (Aurora MySQL), then the value of InputStream is taken as the “real” input.

So far, so good. Now the same function can be used for invocations from the Aurora MySQL trigger as well as an invocation from the API Gateway.

But wait!

More Trouble: Result Output Requirements

There is more trouble. For the API Gateway to work, the output has to contain specific JSON top-level properties, and that output is not the natural response for an invocation from Aurora MySQL as it is API Gateway specific. Search for "api gateway function internal server error" and there is a variety of discussions on what has to be included.

So not only the input, but also the output is client specific. Second bummer!

The error resolution for this case (my goal) is to make the output the same for both clients by creating it accordingly. In the context of the API Gateway and Aurora MySQL this is possible: the output is shaped as the API Gateway requires it, and the Aurora MySQL trigger has to process that shape as well (requiring logic to do so).

In my case a function output example is

{
    "isBase64Encoded": false,
    "body": "{\"output\":\"5\",
              \"input\":\"{<the whole input is contained here>}\",
              \"error\":null}",
    "statusCode": "200"
}

This results in the following when the function is invoked via the API Gateway:

{
    "output": "5",
    "input": "{<the whole input is contained here>}\"}",
    "error": null
}

However, at this point it is not clear to me if that would work for all possible AWS Lambda clients, or if AWS actually would force me to have different outputs for different clients.

Even More Trouble: Error Handling Behavior

The error handling behavior is different for the API Gateway and Aurora MySQL as well. If the payload has a JSON syntax error when invoking the function via the API Gateway, the API Gateway will not complain (neither will AWS Lambda) and the error handling is left to the function implementation itself.

When invoking the function via Aurora MySQL's lambda_sync() and the payload has a JSON syntax error, the following error message is returned:

Error Code: 1873. Lambda API returned error: Invalid Request Content. 
Could not parse request body into json: Unexpected character (';' (code 59)): 
was expecting comma to separate Object entries  at [Source: [B@413d13c7; line: 1, column: 11]

The payload with the JSON syntax error is not passed on to the function, but caught before the function is invoked. This means that AWS does some error handling itself before the function invocation, and the reaction to these possible errors has to be implemented in the client code in the context of Aurora MySQL, in addition to the function implementation.
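
In the context of Aurora MySQL such pre-invocation errors surface as regular MySQL errors (error code 1873 in the example above) and can therefore be caught by a handler in the calling trigger or stored procedure. A minimal sketch (the handler body is a placeholder; how to react is up to the surrounding logic):

-- Catch AWS Lambda invocation errors (error code 1873) explicitly.
DECLARE CONTINUE HANDLER FOR 1873
BEGIN
  GET DIAGNOSTICS CONDITION 1
    @lambda_error_msg = MESSAGE_TEXT;
  -- react here: record the message, retry the invocation, or give up
END;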

Trouble Keeps On Coming: MySQL JSON Function Support

Aurora MySQL version 5.6 has AWS Lambda invocation support (e.g. the lambda_sync() function), but no MySQL JSON function support. Aurora MySQL 5.7 has JSON function support, but not AWS Lambda invocation support. Go figure.

(As a side note, this is probably the reason why the AWS Lambda invocation examples from Aurora MySQL 5.6 do not show how a function result is processed in a synchronous function invocation).

Destination: Not Reached Yet

With all the trouble and the various inconsistencies in behavior to work through, this blog did not reach its final destination of having the same function be used from both the API Gateway and Aurora MySQL 5.6.

What is left to be done (as far as it is known to me so far) is functionality to interpret the returned JSON result in the context of Aurora MySQL. I probably have to implement a MySQL function that takes the AWS Lambda function result JSON document and picks out the return value.

Once the return value is extracted, it can be further processed. The Aurora MySQL function invocation logic (in my case via MySQL database triggers) will be described in the next blog as the second part of the journey.

Summary

In summary, reusing AWS Lambda functions by different AWS technologies that can act as function invocation clients should be straightforward and not require specific coding in the function implementation.

It is unclear why AWS did not ensure this, as it would probably have been easier (at least from a user's perspective) to add additional clients in the future without having to go through extensive testing and probably additional coding.

Another observation is that the various behaviors and constraints are not documented by AWS. The ecosystem is your friend in this context.

Go Serverless!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

Serverless Computing: MySQL as Function Trigger (Preparation)

In the previous blog an AWS Lambda function is invoked through the API Gateway by a client. How do I invoke (trigger) the same AWS Lambda function from a relational database? This blog sets up MySQL first as preparation, including a database trigger.

Create MySQL RDS Instance

I followed CHAP_GettingStarted.CreatingConnecting.MySQL.html to create an RDS (Relational Database Service) MySQL instance.

In order to connect to it from the MySQL Workbench you need to:

  • find the connectivity information on the details page. Look for the “Connect” tile to find the connection endpoint and port
  • use the master username and password and try to connect. However, the connection will fail as no security group is created yet that allows access (I am correctly getting MySQL error 10060)
  • create a second security group and create a rule with custom TCP, port 3306 (in my case) and the IP address from which you are accessing the database (the AWS UI determines that automatically). The rule has to be assigned to the database instance via modification for it to take effect
  • alternatively you could create a rule that allows access by all traffic from anywhere. However, I prefer a more restrictive rule, even though I might have to update it when the IP address changes in the system from where I am accessing the MySQL instance

Now logging in from the MySQL Workbench on your laptop using the master user name and master user password is possible – that is going to make development for MySQL easier.

Create Aurora MySQL Instance

Turns out, creating a MySQL RDS instance was a wasted effort for the goal that I set myself. It is not possible to call an AWS Lambda function from a MySQL RDS instance. Bummer. However, it is possible from Aurora MySQL.

So, I started over and created an Aurora MySQL instance. Once I had Aurora MySQL set up and running, I could continue with the MySQL user, table, trigger and function definitions inside Aurora MySQL. I'll refer to Aurora MySQL as MySQL for short in the following.

Setup MySQL User

For development I set up a user in the database instance as follows:

  • CREATE USER 'cbmysqluser'@'%' IDENTIFIED BY 'cbmysqluser';
  • GRANT CREATE ON fib.* TO 'cbmysqluser'@'%';

Additional commands were necessary during development to provide additional permissions. Here is the collection I ended up with over time; you might not have to use every single one of them. The list of grants:

  • GRANT DROP ON fib.* TO 'cbmysqluser'@'%';
  • GRANT SELECT ON fib.* TO 'cbmysqluser'@'%';
  • GRANT INSERT ON fib.* TO 'cbmysqluser'@'%';
  • GRANT TRIGGER ON fib.* TO 'cbmysqluser'@'%';
  • GRANT CREATE ROUTINE ON fib.* TO 'cbmysqluser'@'%';
  • GRANT ALTER ROUTINE ON fib.* TO 'cbmysqluser'@'%';
  • GRANT EXECUTE ON fib.* TO 'cbmysqluser'@'%';
  • GRANT UPDATE ON fib.* TO 'cbmysqluser'@'%';
  • GRANT DELETE ON fib.* TO 'cbmysqluser'@'%';

Here are some revocations:

  • REVOKE CREATE ON *.* FROM 'cbmysqluser'@'%';
  • REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'cbmysqluser'@'%';

And some management commands:

  • SHOW GRANTS FOR 'cbmysqluser'@'%';
  • FLUSH PRIVILEGES;

Create MySQL Table

Having set up the user, a table can now be created by that user.

The idea is to use the table as the invocation interface. A value is inserted into a column (representing a function parameter), and after the function is executed the result is stored in another column of the same row. Each row therefore has the function parameter value as well as the corresponding return value.

In my case the function is the Fibonacci function. For example, calling it with 0 will return 0. Calling it with 10 will return 55.

| fib | value |
+-----+-------+
|   0 |     0 |
|  10 |    55 |

Here is the table creation statement:

CREATE TABLE fibonacci (
  fib INT,
  value INT
);

Create MySQL Trigger

Now moving on to the trigger. I split the implementation into the trigger itself and a separate function. The trigger, when fired, passes the input parameter to the function and stores the value computed by the function into the table. The trigger is specified as follows:

DELIMITER $$
CREATE TRIGGER fibcomp 
  BEFORE INSERT ON fibonacci
  FOR EACH ROW
    BEGIN
      SET NEW.value = fibcomp(NEW.fib);
    END$$
  DELIMITER ;

The function computes the Fibonacci number as follows

DELIMITER $$
CREATE FUNCTION fibcomp (fib INT) 
  RETURNS INT 
BEGIN
  DECLARE f1 INT DEFAULT 0;
  DECLARE f2 INT DEFAULT 1;
  DECLARE sum INT;
  DECLARE i INT DEFAULT 2; 
  
  IF fib <= 0 THEN RETURN f1;
  ELSE
    WHILE i <= fib DO
      SET i = i + 1;
      SET sum = f1 + f2;
      SET f1 = f2;
      SET f2 = sum;
    END WHILE;
    RETURN f2;
  END IF;
END $$
DELIMITER ;
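
As a quick check of the function itself, independent of the trigger (assuming the fib schema is the current default database):

SELECT fibcomp(0), fibcomp(10);
-- returns 0 and 55, matching the table content shown above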

When issuing the following two insert statements, the above table content is the result

INSERT INTO fibonacci VALUES (0, null);
INSERT INTO fibonacci VALUES (10, null);

At this point I have the whole infrastructure available in the context of MySQL and the functionality in place. This is the basis for incorporating the AWS Lambda implementation of the Fibonacci computation next.

Summary

Once it was clear that Aurora MySQL is required, the setup of a database user, table, trigger and function was easy, as this is done within MySQL without using any additional AWS infrastructure. With the ability to connect to the database instance using MySQL Workbench, the development environment familiar to me was available, and that's great.

Go Serverless!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

Serverless Computing: My First Lambda Function

This blog describes in some detail how I created my first AWS Lambda function (https://aws.amazon.com/lambda/).

Overview

You are about to embark on a steep learning curve. Defining and executing an AWS Lambda function (in short: function) is not just typing in the function specification and its implementation and then invoking it. A lot more is involved – the blog below provides a general flow (not a command-by-command tutorial – for that several references are provided).

From a high level, the architecture looks as follows:

+--------+  +-------------+  +-----------------+  +----------------+
| Client |->| Api Gateway |->| Lambda Function |->| Implementation |
+--------+  +-------------+  +-----------------+  +----------------+
                  |                  |
            +--------------------------------+
            | Identity and Access Management |
            +--------------------------------+

A client invoking a function does so via the API Gateway. For the function to be executed, its implementation has to be provided. When using Java for the implementation, the implementation has to be uploaded in a specific packaging. Identity and Access Management (IAM) governs various access points from a security perspective.

Super briefly, as a summary, in order for a function invocation to be successful the following has to be in place (a lot of moving parts):

User

  • needs to know the API Gateway URL (it is shown when selecting a stage within the Stages link on the API Gateway page)
  • Must have an access key and secret key for key-based authentication (configured in IAM)

Api Gateway

  • API definition as well as resource specification

Lambda Function

  • Function specification
  • Function implementation uploaded
  • Function policy allowing API Gateway access

Identity and Access Management (IAM)

  • User, group and role definition
  • Access policy definition assigned to invoking user (directly or indirectly) for API gateway

Aside from the function implementation everything can be specified on the AWS web page. The function implementation is uploaded by means of a defined packaging.

AWS Account

All specification and definition activity takes place in the context of an AWS account. If you don't have one then you need to create one. Chances are you purchased an item on Amazon before; in this case you have an AWS account already.

Identity and Access Management (IAM)

Initially I set up two users: one called "apiDev", and a regular user.

Then I created two groups “apiDevelopers” and “apiUsers”. apiDevelopers has the policy AdministratorAccess assigned. This allows apiDev to create all artifacts necessary to implement and to invoke a function. I logged in as apiDev for creating the function and all necessary artifacts.

The group apiUsers has no policy assigned initially; however, it will get a (function execution) policy assigned that is going to be created specifically in order to access the function. This establishes fine-grained permissions allowing the users of the group to execute the function.

Function Definition

The function definition is separate from the function implementation. A function can be created without necessarily having an implementation at the same time. In my case I am using Java, and the implementation has to be uploaded in a specific packaging format; that upload is distinct from specifying the function in AWS Lambda.

A function definition consists of a name, the selection of which language (runtime) is going to be used, as well as an execution role. The latter is necessary for a function to write, e.g., into the Amazon CloudWatch logs. However, a function specification does not include the function parameters or return values: it does not contain its signature. Any input/output signature specification is absent and only the code contains the authoritative information.

The phrase “creating a function” can therefore refer to different activities, e.g., just the function specification in AWS Lambda, or including its implementation via e.g. an upload.

The instructions for creating the function and its implementation is here: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html. I chose the execution role lambda_basic_execution.

As a side note, AWS Lambda is different from anonymous lambda functions (https://en.wikipedia.org/wiki/Anonymous_function).

Function Implementation

Being an IntelliJ user I created a separate project for implementing the function. It turns out the easiest way to approach the function implementation was to create a Gradle project from scratch using the IntelliJ project creation option for Gradle, and then fill in the AWS function implementation (rather than starting with the function implementation and trying to turn it into a Gradle project afterwards).

Once the function is developed it has to be uploaded to AWS Lambda in the form of a specific packaging. The process of creating the corresponding zip file is described here: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html#api-gateway-proxy-integration-lambda-function-java and here: https://docs.aws.amazon.com/lambda/latest/dg/lambda-java-how-to-create-deployment-package.html.

The upload only happens when pressing the "save" button on the AWS Lambda page, and it will take a while as the package tends to be large. Once uploaded, one or more tests can be defined on the web page and executed. While this is not a practical unit test approach, it allows executing the function without an API Gateway integration in place.

After the function implementation (I chose to implement a function computing Fibonacci numbers) the AWS Lambda user interface looks like this:

Note: this screen dump was taken after I integrated the function with the API Gateway; therefore the API Gateway trigger is displayed in the UI.

API Gateway

One way of invoking ("triggering") a function is via the API Gateway. This requires the specification of an API and creating a reference to the function. The simplest option is using the proxy integration that forwards the invocation from the API Gateway to the function (and its implementation).

The instructions for creating the API in the API Gateway and its implementation is here: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html. I chose the Lambda Function Proxy integration.

An API specification must be deployed into a stage in order for it to be accessible. Stages are a mechanism to e.g. implement the phases of development, testing, or production.

Analogous to AWS Lambda, the API Gateway also allows direct testing within the web user interface and this can be used for some initial testing (but is not feasible for integration testing as it is manual).

Once the relationship between the API definition in the API Gateway and the function is established via resource specifications, the function can be invoked from external to Amazon using e.g. Postman.

After the implementation the API Gateway user interface looks like this:

Note on security: by default the API is not secured, meaning, everybody who knows the URL is free to call it and invoke the associated function.

Securing the Function

There are two different locations in the invocation chain that require security consideration and policy setup:

  • First, the API in the API Gateway needs to be protected
  • Second, the function in AWS Lambda needs to be protected

Securing an API is outlined here:

https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html. This page differentiates between accessing the API definition and invoking an API.

For invoking an API I created a policy and attached it to the group apiUsers. Any user within this group is allowed to invoke the API that I created. In addition, I set the authorization to AWS_IAM (see above figure) and that means that the invoking client has to provide the access and secret key when invoking the API.

The function in AWS Lambda is secured using a function policy that can be seen when clicking on the key symbol in the AWS Lambda user interface. In my case it states that the API Gateway can access the function when invoked through a specific API.

Note (repeat from earlier): when defining an API in the API Gateway, access is open, aka, anybody knowing the URL can execute the function behind the API. While the URL contains the API identifier (which is randomly generated and highly unlikely to be guessed), access is still open.

Once the access policy is defined and put in place, access will be limited according to the policy. However, the access restriction is not immediate; it takes some (short) time to become effective.

Function Invocation

In order to invoke the function, a client (in my case Postman) requires the URL. The URL can be found when clicking a stage in the Stages link in the API Gateway UI.

I opted for the IAM authorization using access key and secret key. That needs to be configured in the authorization setting of Postman (it also requires the AWS Region to be specified). No additional headers are required.

As I have defined a POST method, the payload has to be added as well. In my case this is a simple JSON document with one property.

POST /test/ HTTP/1.1
Host: <API URL>
content-type: application/json
Cache-Control: no-cache

{
"fib": 7
}

Once the invocation is set up, and once a few invocations have taken place, the API Gateway Dashboard will show the number of invocations, errors, etc., separated for the API Gateway as well as for the Lambda functions.

Summary

Defining the first function is an effort, as many pieces have to fall into place correctly for it to work out, and many mistakes will most likely happen along the way. However, the ecosystem is quite large and has many questions already answered; in addition, AWS has a lot of documentation, which is mostly accurate (but not quite 100%).

The first function, as defined above, now gives me a jump-off platform to investigate and to experience AWS Lambda functions further. Stay tuned for many more blogs that explore a huge variety of aspects and concepts of serverless distributed computing.

Go Serverless!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

 

Serverless Computing: What is it?

In a nutshell: “Just upload code and execute it”.

Serverless Computing

From my viewpoint as an engineer, serverless computing means that I can implement as well as use cloud functionality without having to establish and to manage server deployments.

A quote from Amazon states: “Serverless computing allows you to build and run applications and services without thinking about servers. Serverless applications don’t require you to provision, scale, and manage any servers. You can build them for nearly any type of application or back-end service, and everything required to run and scale your application with high availability is handled for you” (https://aws.amazon.com/serverless/).

For example, if business logic has to be executed, I develop one or more functions (procedures), deploy those “into a serverless cloud” and invoke them without having to worry, for example, about initialization, containers, web or application servers, deployment descriptions or scaling.

Or, when I need database access, I create a database service instance “in the cloud” and use it. Of course, I have to possibly (maybe – maybe not) reason about capacity, but I don’t have to find hardware, find, install and maintain database software images, scale the instances, and so on.

There are many explanations and discussions of serverless computing, for example, https://martinfowler.com/articles/serverless.html or https://martinfowler.com/bliki/Serverless.html, among many others. A clear-cut technical definition of “serverless computing” is still missing.

Serverless Computing: Why is it Interesting?

There are many reasons why serverless computing is interesting and appealing. Cost is one factor, hardware utilization is another. The above references outline many, many more.

However, from a software or service engineering perspective there are many important reasons why this relatively new concept is very much worth exploring and considering for future development and possibly migration from traditional approaches like, for example, Kubernetes.

For one, the focus and the effort on managing server deployments and hardware environments are greatly reduced or removed altogether, including aspects like scaling or failure recovery. This frees up engineering, QA, dev ops and ops time and resources to focus on the core business functionality development.

More importantly, a serverless development and execution environment restricts the software architecture, implementation, testing and deployment in major ways. This reduction in variance allows execution optimization and enables a significant increase in development quality and dependability.

Serverless Computing: What are the Choices?

As in the case of any new major development, several providers put forward their specific implementations, and this is the case for serverless computing as well. I don’t attempt to provide a comprehensive list here, but instead refer to the following page as one example that collected some providers: http://www.nkode.io/2017/09/12/serverless-frameworks.html.

One observation is that there are vendor specific as well as vendor unspecific implementations of serverless computing. It is in the eye of the beholder (based on use cases and requirements) to determine the most applicable environment.

Serverless Computing: Changing Planets

It is easy to label serverless computing as "The Next Big Thing". However, I believe that a real fundamental shift is taking place that in major ways breaks with the historical development and engineering of distributed computing. Remote procedure call (RPC – https://en.wikipedia.org/wiki/Remote_procedure_call), Distributed Computing Environment (DCE – https://en.wikipedia.org/wiki/Distributed_Computing_Environment), CORBA (https://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture), REST (https://en.wikipedia.org/wiki/Representational_state_transfer), just to name a few, all have in common that the design and engineering has to take hardware and resource topology and deployment into consideration, including scaling and recovery. Furthermore, design and engineering had to reason about "local" and "remote".

Serverless computing, as it is currently implemented, takes away the "distribution" and deployment aspect to a very large extent or even completely. This will significantly change how software engineering approaches system and service construction. Time will tell the real impact, of course, but I'd expect major shifts from a software engineering perspective.

What’s Next: Journey Ahead

From a very pragmatic viewpoint, serverless computing is an alternative software development and execution environment for (distributed) services. And this defines the journey ahead: figuring out how the various aspects of software engineering are realized, like for example

  • Function implementation
  • Procedure implementation
  • Error handling, exception handling and logging
  • Failure recovery
  • Scaling (out and in)

The blogs following this will explore these and other aspects over time.

Summary

Serverless computing, and serverless functions in particular, are very appealing developments, especially from the viewpoint of software development/engineering as well as scalable execution.

Let’s get hands-on and see what the upside potentials are as well as where the limits and issues lie.

Go Serverless!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

SQL for JSON and Schema Support (Part 7): Partial Schemas for JSON Documents

This blog concludes the series on JSON schema support in context of databases with a proposal: partial schemas.

Black and White: Rigorous Schema or No Schema At All

So far the discussion around JSON schema support in databases was a black-and-white (or all-or-nothing) discussion, aka, a JSON document can either change at any time or it must comply completely with a rigorous schema (structure, data types and possibly values) at any time.

The any-change-at-any-time situation is the "wild west" in terms of structure and data type variation, in the sense that code accessing JSON documents has to "protect" itself by means of, e.g., assertions. Code has to assume that any structure can be present and therefore has to introspect each document at every access in order to determine if it can process it. Complex error handling is necessary if the code cannot process a specific JSON document (as discussed in an earlier blog).

More than an All-Or-Nothing Approach: Rationale for Reconsideration

The ability to change JSON documents randomly is portrayed as a desirable and positive feature by many JSON-supporting databases; however, from an engineering perspective this "randomness" adds significant engineering and coding complexity (and significant effort to quality assurance as well).

Are random ("dynamic") document structures really the 80% case in real situations? Or is, in reality, a more nuanced situation at hand?

Many documents implement the content and state of business data (functional data). In most cases there are basic data sets to be supported. For example, purchase orders must have an overall order value field, health records must have a unique record identifier including some basic patient data, user accounts must have a unique account identifier and at least an account login name. There is usually a core set of attributes that represents the commonly agreed set of business data and those are mandatory as well in order for the business data to be meaningful.

Each (business) domain has its own commonly agreed core semantics and therefore commonly agreed data attributes. Code should be able to assume that the core semantics is complied with and that the core attributes are present in every JSON document representing the domain semantics.

So maybe it is not really all-or-nothing in terms of the dynamic structure of JSON documents all the time. Maybe it would be really useful to be able to specify a schema that applies to (only) a part of a JSON document at any time without enforcing complete compliance: partial schemas.

Partial Schema Rationalization

If a JSON document must be partially compliant, then one part of it must comply with a given partial schema, and it can have any form outside of the part governed by the partial schema.

For example, a partial schema for a purchase order could enforce that a purchase order must have a purchase order identifier, a buyer identifier, a set of line items, and a purchase order amount at any point in time. A purchase order could contain additional supplemental data that are not governed by the schema (the dynamic part).

With a rigorous partial schema in place, code is guaranteed that the attributes defined by the schema are present. This makes the execution of assertions unnecessary, and code can be sure that all documents are compliant with the partial schema.

Optionally, a partial schema could define that some attributes do not always have to be present, but in case they are present, they have to comply with a given schema.

For example, if the full specification of a buyer is provided, not just the buyer identifier, then it must contain the buyer’s legal name, the legally registered address as well as a phone number that reaches the buyer. Similarly for a shipping address: if one is provided, it has to be a full shipping address denoting a physical location.

With such a schema in place, code can check if specific optional items are present, and if they are, their schema is guaranteed, meaning, these items comply with their definitions in the partial schema.

Finally, a partial schema does not govern any structure outside the mandatory or optional items and it is possible for a JSON document to contain additional data that is not enforced in terms of presence or structure.

For example, items not enforced by a schema could be supporting documentation for line items like drawings, notes to shippers asking for specific handling, or a trustworthiness rating of a buyer in a purchase order. A user account could have a history of logins or a health record could contain the list of doctors that were involved in treatments over time.

For data items that are not enforced, code would have to use assertions in order to establish their presence (and their data types).
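To make the three cases concrete, here is a minimal sketch in plain JavaScript of code working against a purchase order governed along the lines of the example above. The attribute names used here (orderLineItems, shippingAddress, trustworthinessRating) are illustrative assumptions, not part of a defined schema:

function processOrder(order) {
  // Mandatory part (e.g. order identifier and line items): guaranteed by the
  // partial schema, so no assertions are needed before access.
  var itemCount = 0;
  order.orderLineItems.forEach(function (lineItem) {
    itemCount += lineItem.numberOrdered;
  });

  var result = { "orderId": order.orderId, "itemCount": itemCount };

  // Optional part: presence has to be checked; if present, its structure is
  // guaranteed by the partial schema.
  if ("shippingAddress" in order) {
    result.shipTo = order.shippingAddress;
  }

  // Ungoverned part: presence and data type both have to be asserted.
  if ("trustworthinessRating" in order &&
      typeof order.trustworthinessRating === "number") {
    result.trustworthinessRating = order.trustworthinessRating;
  }

  return result;
}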

Partial Schema Specification

What would a partial schema specification look like? In principle, it could be an extension of what has been discussed in earlier blogs of this series, with different semantics. First, any schema would be considered partial, meaning a JSON document can contain data items beyond those specified by the partial schema.

Second, the schema has two designations, “mandatory” and “optional”, that indicate which items the partial schema governs: mandatory items must always be present, optional items may be absent. The following schema shows the possible extensions.

{
  "type": "object",
  "mandatory": [
    "orderId",
    "orderDate",
    "orderLineItems"
  ],
  "optional": [
    "buyerId"
  ],
  "properties": {
    "orderId": {
      "type": "number",
      "description": "Order Identifier: must be of 
                      type number and is mandatory"
    },
    "orderDate": {
      "type": "string",
      "description": "Order Date: must be of 
                      type string and is mandatory"
    },
    "orderLineItems": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "itemId": {
            "type": "number"
          },
          "numberOrdered": {
            "type": "number"
          }
        }
      },
      "description": "Order Line Items: must be of 
                      type array and is mandatory"
    },
    "buyer": {
      "type": "number",
      "description": "Buyer Identifier: must be of 
                      type number and is optional"
    }
  }
}

The above schema enforces the structure of JSON documents in part and specifies mandatory as well as optional attributes. There are three mandatory attributes and one optional attribute. Such a schema could be added to an identifiable collection of JSON documents, thereby ensuring their partial compliance.
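For illustration, a JSON document like the following would be compliant with the above partial schema: the three mandatory attributes are present, the optional buyer identifier is omitted, and the attribute “specialInstructions” (a hypothetical, ungoverned attribute) is allowed without being enforced in presence or structure.

{
  "orderId": 10,
  "orderDate": "2018-02-01",
  "orderLineItems": [
    { "itemId": 55, "numberOrdered": 20 },
    { "itemId": 56, "numberOrdered": 21 }
  ],
  "specialInstructions": "Deliver to the loading dock"
}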

A change in a partial schema cannot take place without modifying the JSON documents as well, since the JSON documents must be compliant with the partial schema at any time. Again, code is assured of the compliance of JSON documents with the attributes defined by the partial schema, and therefore does not have to protect itself.

Summary

This concluding blog of the series on JSON schema support in databases provided a proposal for partial schemas. Partial schemas give code structure and data type guarantees while preserving the freedom of dynamic schema changes outside the part of a JSON document that has to be compliant with the partial schema.

Following this approach

  • Code is guaranteed that JSON documents comply with a partial schema
  • An application has the freedom to add additional data dynamically not governed by the partial schema
  • Over time, as dynamic structures in a JSON document stabilize, those can be moved into the stable partial schema part of a JSON document.

Partial schemas are a win-win for both databases supporting JSON documents and the code accessing these databases.

Go [ JSON | Relational ] SQL!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

SQL for JSON and Schema Support (Part 6): Dynamic Schema Change and Symbolic Computation

What does software engineering look like in the absence of schema support? What now?

Summary – Where Are We?

The structure of existing JSON objects can change at any time in a NoSQL database (in general). New structures of JSON objects, possibly not yet seen by the code, can appear at any time in existing as well as in new documents.

Schema support does not provide any guarantees (see the MongoDB discussion in previous blogs) – neither for current objects’ structure nor for future objects’ structure, unless a NoSQL database enforces the schema as rigorously as a relational database management system (RDBMS).

Most NoSQL database management systems provide zero schema supervision in the strict sense (no guarantee that stored JSON documents are schema compliant).

A developer cannot make any assumptions whatsoever about the schema of the stored JSON documents (maybe with the exception of the mandatory existence of an identifier property) unless the structure is rigorously enforced by every piece of code inserting or updating documents (in general a hard problem to solve).

Code makes assumptions when accessing JSON documents as it accesses properties at certain locations inside the JSON document and based on a specific data type. Those assumptions about location and type might or might not be true for every single JSON document that the code accesses.

So where are we? Basically, the code is on its own to deal with the structure and data type uncertainty. The following discussion is based on the situation that the underlying NoSQL database does not provide any strict schema guarantees.

Engineering Approaches

If there is no guarantee provided by a database system, the compliance check for each document has to be done by the code via assertions on the required structure and data types of values. Before a document is processed, it has to be asserted that the expected structure and data types exist as otherwise the processing would fail.

The following sections discuss one approach of run-time assertions and the phases of processing:

  • Run-time assertions
  • Assertion execution frequency
  • JSON document non-compliance
  • Non-compliance analysis and mitigation

In addition, alternative approaches to run-time assertions are discussed in order to improve efficiency.

As a side note, if code inserting or updating documents ensured a specific schema rigorously, then code accessing documents would not have to implement assertions. However, this requires a very strict engineering discipline around current and future code that inserts or updates (including 3rd party tools that support direct database access). In many cases this is impossible, hence the emphasis here on what the accessing code can or has to do. This approach of guarantee by inserting or updating code will not be discussed further here.

Run-Time Assertions

Code accessing a document requires specific properties to be present and to be of a specific data type in order for it to execute its logic.

For example, if code needs to access a property like the value of an order, then the code needs to assert the existence of that property and the acceptable data types. Those might be number or string (as a string value could be converted to a number in the code):

assert(exists_path(order.value))
assert(is_of_type(order.value, number) 
       || is_of_type(order.value, string))

Code must assert its expectations before access in order to ensure that the document matches those expectations and that the code does not fail because of document non-conformance. If there is no “value” property in an order, the code would not be able to access it.

The blog https://realprogrammer.wordpress.com/2016/11/29/sql-for-json-rationalization-part-11-json-specific-predicates/ introduced the two essential predicates used above (exists_path() and is_of_type()) in context of NoSQL Query.
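In application code that cannot rely on such query predicates, comparable assertions could be written directly. The following is a minimal JavaScript sketch; existsPath and isOfType are hypothetical helpers, not the predicates of the referenced blog:

// Hypothetical helper: checks a dot-separated path like "value" or "buyer.name".
function existsPath(doc, path) {
  return path.split(".").every(function (segment) {
    if (doc !== null && typeof doc === "object" && segment in doc) {
      doc = doc[segment];
      return true;
    }
    return false;
  });
}

// Hypothetical helper: JavaScript type check ("number", "string", ...).
function isOfType(value, typeName) {
  return typeof value === typeName;
}

function assertOrderValue(order) {
  if (!existsPath(order, "value")) {
    throw new Error("assertion failed: property 'value' is missing");
  }
  if (!isOfType(order.value, "number") && !isOfType(order.value, "string")) {
    throw new Error("assertion failed: 'value' is neither number nor string");
  }
}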

Assertion Execution Frequency

Checking assertions affects code efficiency due to additional execution, so how often do assertions have to be checked? In principle, before every document access.

There is a difference between NoSQL databases that provide single document transactions and those providing multi document transactions. In the single document transaction case, documents can change at any time in a NoSQL database, and so between two separate accesses the document could have been changed by other code. Assertions have to be checked every time before accessing a document.

some_function() {
  // retrieve document
  // assert(...)
  // execute logic
}

some_function_2() {
  // retrieve document
  // assert(...)
  // execute logic
}

In case multi document transactions are supported, all assertions can be checked at once when the transaction is started, and do not have to be checked again within the same transaction.

// start transaction

// assert(...)

some_function() {
  // retrieve document
  // execute logic
}

some_function_2() {
  // retrieve document
  // execute logic
}

// end transaction

In general, code can operate on a single document or on a set of documents. A single document scope means that the code reads and possibly changes a single document only. A multi document scope means that the code reads one or more documents and possibly modifies one or more documents. For example, summing up the value of all not yet shipped orders and storing the sum in an analysis document. In either case, assertions have to be checked before every document access.
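As a concrete illustration of the multi document scope, the following mongo shell sketch sums up the value of all not yet shipped orders with an assertion before each document is used. The collection and field names (“orders”, “analysis”, “shipped”, “value”) are assumptions for this sketch:

var sum = 0;
var skipped = 0;
db.orders.find({ "shipped": false }).forEach(function (order) {
  // assertion: 'value' must exist and be a number (or a convertible string)
  if (typeof order.value === "number") {
    sum += order.value;
  } else if (typeof order.value === "string" && !isNaN(Number(order.value))) {
    sum += Number(order.value);
  } else {
    skipped += 1;   // non-compliant document: count it for later analysis
  }
});
db.analysis.update(
  { "_id": "openOrderValue" },
  { "$set": { "sum": sum, "skippedDocuments": skipped } },
  { "upsert": true }
);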

Executing assertions before every access is quite resource intensive at high processing rates. Later on, variations of assertions are discussed that limit the effort and resource consumption in order to make the use of assertions more practical and efficient.

Note: MongoDB realized the enormous need for multi document transactions and is planning to implement those: https://www.mongodb.com/transactions.

JSON Document Non-Compliance (Violated Assertions)

If one or more assertions are violated, then the document is non-compliant from the viewpoint of the code accessing it. Assertion violation handling and analysis have to take place, and there are different options (a sketch follows the list):

  • The document could be ignored (which might or might not be a good option in context of the code’s functionality) if it is irrelevant or negligible for the code’s computation. The code could record the ignored documents for future off-line analysis.
  • An exception can be thrown and the code aborts its execution. This is a hard failure and ensures that no incorrect result is produced by the code.
  • Error handling can take place, for example, by having the code “park” the document for future manual analysis and processing. In this case the code produces a result with the disclaimer that there are unprocessed documents which might affect the accuracy of the result. Compared to the first case, the code returns an error or warning highlighting the potential inaccuracy (and, in the best case, the degree of inaccuracy).
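A small mongo shell sketch of these options could look as follows; the policy parameter, the compliance check, and the collection names (“ignored_orders”, “parked_orders”) are illustrative assumptions, not an existing API:

// Hypothetical compliance check standing in for the full set of assertions.
function isCompliant(order) {
  return typeof order.value === "number" || typeof order.value === "string";
}

function handleDocument(order, policy) {
  if (!isCompliant(order)) {
    if (policy === "ignore") {
      db.ignored_orders.insert({ "ignoredOrderId": order._id });  // record for off-line analysis
      return false;
    }
    if (policy === "abort") {
      throw new Error("non-compliant document: " + order._id);    // hard failure
    }
    db.parked_orders.insert(order);   // park for manual analysis and processing
    return false;
  }
  // ... process the compliant document ...
  return true;
}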

In case of single document processing the handling is easier than in multi document processing. In the latter case, the code might have updated several documents before reaching a non-compliant one. If the handling is to abort execution, then the already updated documents might have to be reverted (compensation logic). If multi document transactions are available, the database takes care of reverting the documents as part of rolling back the transaction.

If multi document transactions are not supported, alternative approaches have to be found. For example, the code could remember the previous state of each document it updates and would therefore be able to revert that state (code-based compensation). However, those documents could have been changed in the meantime, adding further complexity to the compensation logic.

Non-Compliance Analysis and Mitigation

When code encounters non-compliant documents, mitigation has to be determined so that this case does not happen again in the future. There are several possible approaches (the list is not necessarily complete):

  • Based on the non-compliant document analysis, the code is changed or extended to be able to process those cases as well. This would change the assertions so that in the future the documents would be considered compliant.
  • The code that inserts or updates the documents in the first place has to change in order to ensure that the assertions of subsequent accesses will be met. Code accessing the documents will have assurance that this non-compliance will not happen anymore.
  • It is acceptable to have manual post-processing. Instead of changing the code and its assertions, it is accepted that manual work is performed in order to produce an accurate result. This might require updating the result manually (like the total amount of all open orders).
  • It is acceptable to ignore non-compliant documents. This is a valid analysis result as well if the logic of the code allows the result to be based only on a subset of the documents.

Different approaches to mitigation have a different impact on the processing time as well as the availability of the improved code base. Of course, avoiding non-compliance in the first place would be best.

Thought Experiments on Alternative Assertion Implementation

Are there ways to reduce the amount of assertion processing? In the following, approaches are discussed as thought experiments.

The first approach is shadow schema-per-document management for documents. A schema is not declared upfront and enforced by the database, but derived from a document right after its creation. Each document has its own schema and every time the document is updated, its schema is updated as required in order to remain in sync with the document itself.

Code accessing a document can check the document’s schema first before retrieving the document in order to determine if the assertions are met. The assertions are in this case expressed as schema interrogation instead of document structure access. Every time the schema is adjusted to match a document update, its version is increased.

This by itself does not yield huge savings in terms of access, possibly even the opposite, as the schema has to be retrieved in addition to the document itself. However, in the case where many documents have the same schema, the code accessing documents could know which schemas satisfy its assertions. Just knowing the schema identifier and its version might then be sufficient to establish the compliance check. In addition, if a document contains a reference to its schema and version, then verifying assertions amounts to comparing a schema name and version. This is highly efficient, as the code can keep a list of compliant schemas and their versions (assuming many documents comply with the same schema).
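A sketch of this idea, with assumed field names schemaId and schemaVersion inside each document and an in-memory list of schema versions already known to satisfy the assertions:

// Schema versions already known to satisfy the code's assertions (keys assumed).
var compliantSchemas = { "orderSchema:3": true };

function satisfiesAssertions(doc) {
  var key = doc.schemaId + ":" + doc.schemaVersion;
  if (key in compliantSchemas) {
    return true;                      // cheap check: schema name and version only
  }
  // Unknown schema/version: fall back to inspecting the document itself.
  var ok = typeof doc.value === "number" || typeof doc.value === "string";
  if (ok) {
    compliantSchemas[key] = true;     // remember for subsequent documents
  }
  return ok;
}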

A second and very different approach is to add assertions to the document itself and an indicator for each assertion if it is satisfied or not within the given document. Code accessing a document the first time checks its assertions, and also adds the assertions to the document. Code updating a document re-evaluates the assertions after the update and sets the indicator. Subsequent accesses do not have to evaluate the assertions anymore every time, but lookup the indicator only. This reduces processing at the time of access. A possible optimization is that code updating the document only has to re-evaluate the assertions if the update changes the structure and/or the data types.
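A corresponding sketch for this second approach, with an assumed “_assertions” property inside the document that stores the indicator (collection name “orders” assumed as well):

function valueAssertionHolds(doc) {
  if (doc._assertions && "valueIsUsable" in doc._assertions) {
    return doc._assertions.valueIsUsable;   // subsequent accesses: lookup only
  }
  // First access: evaluate the assertion and persist the indicator.
  var ok = typeof doc.value === "number" || typeof doc.value === "string";
  db.orders.update(
    { "_id": doc._id },
    { "$set": { "_assertions": { "valueIsUsable": ok } } }
  );
  return ok;
}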

There might be further approaches to reduce and to optimize assertion execution. However, nothing beats a supervised and enforced schema when structure and/or data type compliance are paramount.

Summary

When a NoSQL database does not provide rigorous schema enforcement, the code is on its own to establish document structure and data type compliance when accessing documents. The code has to be very defensive in order to avoid execution failure. Due to the execution effort involved, schema support and strict supervision might be really important for the code accessing documents.

In addition, multi document transactions are essential when several documents are updated during processing, especially when the processing has to be aborted, in order to avoid complex compensation logic. Software engineers have to be very careful with compensation logic that cannot rely on multi document transactions provided by the database.

Go [ JSON | Relational ] SQL!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

SQL for JSON and Schema Support (Part 5): Intermezzo 3 – MongoDB’s $jsonschema

The previous blog discussed MongoDB’s $jsonschema behavior with a strict validation level. Let’s look at the moderate validation level in this blog.

Example

As usual, first, let’s create a collection and add a few JSON documents to it. Afterwards a schema validation is added with the moderate setting (the following is based on MongoDB version 3.6.1).

> mongo
> use moderate_exploration

Initially, before adding a schema, two JSON objects are inserted that are not compliant with the schema that is going to be added afterwards. The reason is that we need non-compliant JSON objects to discuss the moderate level later.

> db.orders.insert({
   "orderId": 1,
   "orderDate": ISODate("2017-09-30T00:00:00Z"),
   "orderLineItems": [{
    "itemId": 55,
    "numberOrdered": 20
    }, {
    "itemId": 56,
    "numberOrdered": 21
   }],
   "specialInstructions": "Drop of in front, 
                           not back of location"
  })
WriteResult({ "nInserted" : 1 })
> db.orders.insert({
   "orderId": 2,
   "orderDate": ISODate("2017-09-30T00:00:00Z"),
   "orderLineItems": [{
    "itemId": 55,
    "numberOrdered": 40
    }, {
    "itemId": 56,
    "numberOrdered": 41
   }],
   "preferredColor": "red"
  })
WriteResult({ "nInserted" : 1 })

Now the schema is added:

> db.runCommand({ 
   "collMod": "orders",
   "validator": {  
    "$jsonSchema": {   
      "bsonType": "object",
       "required": ["orderId", "orderDate", "orderLineItems"],
       "properties": {
        "orderId": { 
         "bsonType": "int",
         "description": "Order Identifier: must be of 
                         type int and is required"
        },
        "orderDate": { 
         "bsonType": "date",
         "description": "Order Date: must be of 
                         type date and is required"
        },
        "orderLineItems": { 
         "bsonType": "array",
         "items": {  
          "bsonType": "object",
          "properties": {   
           "itemId": {    
           "bsonType": "int"   
           },
           "numberOrdered": {    
           "bsonType": "int"   
           }  
          } 
         },
         "description": "Order Line Items: must be of 
                         type array and is required"
      }   
     }  
    } 
   },
   "validationLevel": "moderate",
   "validationAction": "error"
  })
{ "ok" : 1 }

After the schema is added, two more JSON objects are inserted, this time being schema compliant.

> db.orders.insert({
   "orderId": NumberInt(3),
   "orderDate": ISODate("2017-09-30T00:00:00Z"),
   "orderLineItems": [{
    "itemId": NumberInt(55),
    "numberOrdered": NumberInt(60)
    }, {
    "itemId": NumberInt(56),
    "numberOrdered": NumberInt(61)
   }]
  })
WriteResult({ "nInserted" : 1 })
> db.orders.insert({
   "orderId": NumberInt(4),
   "orderDate": ISODate("2017-09-30T00:00:00Z"),
   "orderLineItems": [{
    "itemId": NumberInt(55),
    "numberOrdered": NumberInt(80)
    }, {
    "itemId": NumberInt(56),
    "numberOrdered": NumberInt(81)
   }]
  })
WriteResult({ "nInserted" : 1 })

At this point the created collection is governed by a schema, and contains four JSON documents, two are compliant with the schema (orderId 3 and 4), and two are not compliant (orderId 1 and 2).

Analysis

The MongoDB documentation states for “moderate”: “Apply validation rules to inserts and to updates on existing valid documents. Do not apply rules to updates on existing invalid documents.” (https://docs.mongodb.com/manual/reference/command/collMod/#validationLevel).

Let’s explore now the behavior of the moderate validation level.

First, let’s try to insert a non-compliant JSON document. The insert will fail as expected:

> db.orders.insert({
   "orderId": 5,
   "orderDate": ISODate("2017-09-30T00:00:00Z"),
   "orderLineItems": [{
    "itemId": 55,
    "numberOrdered": 40
    }, {
    "itemId": 56,
    "numberOrdered": 41
   }],
   "preferredColor": "red"
  })
WriteResult({
 "nInserted": 0,
 "writeError": {
  "code": 121,
  "errmsg": "Document failed validation"
 }
})

Second, let’s try to update a compliant JSON document that already exists in the collection in a non-compliant way:

> db.orders.update({  
   "orderId": NumberInt(3) 
   }, {  
   "$set": {   
    "orderDate": "2018-01-09"  
   } 
  })

As expected the update fails:

WriteResult({
 "nMatched" : 0,
 "nUpserted" : 0,
 "nModified" : 0,
 "writeError" : {
  "code" : 121,
  "errmsg" : "Document failed validation"
 }
})

Third, let’s try to update a non-compliant JSON document:

> db.orders.update({  
   "orderId": NumberInt(1) 
   }, {  
   "$set": {   
    "orderDate": "2018-01-10"  
   } 
  })

As per the above explanation of moderate this should work and indeed it does:

WriteResult({
 "nMatched": 1,
 "nUpserted": 0,
 "nModified": 1
})

Bypassing Validation

With the correct permission (https://docs.mongodb.com/manual/reference/privilege-actions/#bypassDocumentValidation) it is possible to bypass document validation.

This allows for the situation where, for example, a collection is governed by a new schema, but existing application code has to continue to insert or update documents with a structure that violates the new schema because the logic cannot be adjusted to the new schema quickly enough (including transforming the non-compliant into compliant JSON documents).
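For illustration, the insert database command accepts a bypassDocumentValidation flag, so a non-compliant document (like the ones from the beginning of this blog) can still be written to the governed collection. The following is a sketch with illustrative values; without the flag, this insert would be expected to fail with the “Document failed validation” error shown earlier:

> db.runCommand({
   "insert": "orders",
   "documents": [{
    "orderId": 6,
    "orderDate": ISODate("2018-01-11T00:00:00Z"),
    "orderLineItems": [{
     "itemId": 55,
     "numberOrdered": 20
    }],
    "preferredColor": "green"
   }],
   "bypassDocumentValidation": true
  })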

Summary

The brief analysis of MongoDB’s document validation in the context of JSON schemas added to collections in the last three blogs showed that while schema supervision is possible, it is not as strict as in relational database management systems.

Basically, if a schema is present, a user cannot infer that all documents in that collection comply with that schema. A schema related to a collection can be changed, and existing documents that would violate the new schema on insert will not be discarded from the collection. Furthermore, properties that are not covered by the schema can be added and changed freely.
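Since the $jsonSchema operator can also be used as a query predicate (MongoDB 3.6), one way to find out rather than infer is to query for documents that do not match the collection’s validator. A sketch, assuming the validator shown above is still in place:

> var info = db.getCollectionInfos({ "name": "orders" })[0]
> var schema = info.options.validator.$jsonSchema
> db.orders.find({ "$nor": [{ "$jsonSchema": schema }] })

In the example above, this query should return the two non-compliant documents (orderId 1 and 2).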

Go [ JSON | Relational ] SQL!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

SQL for JSON and Schema Support (Part 4): Intermezzo 2 – MongoDB’s $jsonschema

After some initial exploration in the previous blog, more aspects of MongoDB’s $jsonschema are looked at in the following.

Example

First, let’s create a collection as follows. It is governed by a schema, and validation is in the strictest setting (the following is based on MongoDB version 3.6.0).

> mongo
> use more_exploration
> db.createCollection("orders", {
  "validator": {
   "$jsonSchema": {
    "bsonType": "object",
    "required": ["orderId", "orderDate", "orderLineItems"],
    "properties": {
     "orderId": {
      "bsonType": "int",
      "description": "Order Identifier: must be of 
                     type int and is required"
     },
     "orderDate": {
      "bsonType": "date",
      "description": "Order Date: must be of 
                     type date and is required"
     },
     "orderLineItems": {
      "bsonType": "array",
      "items": {
       "bsonType": "object",
       "properties": {
        "itemId": {
         "bsonType": "int"
        },
        "numberOrdered": {
         "bsonType": "int"
        }
       }
      },
      "description": "Order Line Items: must be of 
                     type array and is required"
     }
    }
   }
  },
  "validationLevel": "strict",
  "validationAction": "error"
 })
{ "ok" : 1 }

The two documents from the example outlined in the initial blog of the series are added next.

> db.orders.insert({
   "orderId": NumberInt(1),
   "orderDate": new Date("2017-09-30"),
   "orderLineItems": [{
     "itemId": NumberInt(55),
     "numberOrdered": NumberInt(20)
    },
    {
     "itemId": NumberInt(56),
     "numberOrdered": NumberInt(21)
    }
   ]
  })
WriteResult({ "nInserted" : 1 })
> db.orders.insert({
   "orderId": NumberInt(2),
   "orderDate": new Date("2017-09-30"),
   "orderLineItems": [{
     "itemId": NumberInt(55),
     "numberOrdered": NumberInt(30)
    },
    {
     "itemId": NumberInt(56),
     "numberOrdered": NumberInt(31)
    }
   ]
  })
WriteResult({ "nInserted" : 1 })

Insert Strictness and Partial Schema Coverage

The validator is in place on the collection “orders”. This can be verified with the command

> db.getCollectionInfos({name: "orders"})

Now let’s try to add a document that, in addition to schema-compliant properties, has additional properties not defined in the schema:

> db.orders.insert({
   "orderId": NumberInt(3),
   "orderDate": new Date("2017-09-30"),
   "orderLineItems": [{
     "itemId": NumberInt(55),
     "numberOrdered": NumberInt(40)
    },
    {
     "itemId": NumberInt(56),
     "numberOrdered": NumberInt(41)
    }
   ],
   "preferredColor": "red"
  })
WriteResult({ "nInserted" : 1 })

It appears that as long as the schema is satisfied, additional properties can be inserted. So the schema does not completely cover the object to be inserted, but only those properties that are defined in the schema (validator). It is partial schema coverage.

Here is the counter example: the value of the property “orderLineItems” is not in compliance, and so the insertion fails:

> db.orders.insert({
   "orderId": NumberInt(4),
   "orderDate": new Date("2017-09-30"),
   "orderLineItems": ["b", "g"],
   "preferredColor": "red"
  })
WriteResult({
 "nInserted": 0,
 "writeError": {
  "code": 121,
  "errmsg": "Document failed validation"
 }
})

Update Strictness and Partial Schema Coverage

The following updates an existing document:

> db.orders.update({
   "orderId": NumberInt(2)
  }, {
   "$set": {
    "orderDate": new Date("2017-10-01")
   }
  })
WriteResult({
 "nMatched": 1,
 "nUpserted": 0,
 "nModified": 1
})

In part 1 of this blog series the order with identifier 1 was updated to add a property “specialInstructions”. This property is not part of the schema; however, the update is possible as it does not violate the part of the document that is covered by the schema.

> db.orders.update({
   "orderId": NumberInt(1)
   }, {
   "$set": {
    "specialInstructions": "Drop of in front, 
                           not back of location"
   }
  })
WriteResult({
 "nMatched": 1,
 "nUpserted": 0,
 "nModified": 1
})

Partial schema coverage applies to updates as well, not just to inserts.

An example of a non-compliant update is the following:

> db.orders.update({
   "orderId": NumberInt(2)
  }, {
   "$set": {
    "orderDate": "2017-09-30"
   }
  })
WriteResult({
 "nMatched": 0,
 "nUpserted": 0,
 "nModified": 0,
 "writeError": {
  "code": 121,
  "errmsg": "Document failed validation"
 }
})

Summary

MongoDB supports partial schema coverage in strict mode, meaning that properties defined in the schema must match the schema, while properties not specified in the schema can be added or modified without rejection.

This means (again) that examining the JSON schema validator of a MongoDB collection only indicates properties common to all documents, but not the complete set of properties of all documents.

The next blog examines the non-strict validation setting of a JSON schema validator in MongoDB.

Go [ JSON | Relational ] SQL!

Disclaimer

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.