Wednesday, 5 April 2017

Debugging those JavaScripts

So, you have some code that is playing up, and you want to debug it. We have a few options depending on the situation we are in.

1. Debug APEX/web app in Chrome

I primarily use Google Chrome, so will be focusing on using that browser to debug. The next thing we want to do is set a break point so that execution will pause and you can inspect variables and step through the code line by line.

Typically, in the developer tools you will want to navigate to the Sources tab, find your script and add the break point. In the case of inline scripts, I found the following interesting behaviour (unsure if it's a buggy browser version or what).

When I run the page for the first time, and open the developer tools, trying to open the page source will result in an empty document.

However, if I re-run the page with the developer tools open, the full page source will be displayed.

Another neat trick that I discovered is that you can map inline code to a "dummy" source file, as documented here. Wherever you define your functions, add a comment line in the format: //# sourceURL=global.js, where "global.js" is the name you want your functions to appear under in the source tree. These sourceURL comments seem to apply on a per-script block (<script>..</script>) basis.
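To illustrate, an inline script block might look something like this (the function and names here are just made up for the example):

```javascript
// Content of an inline <script> block. The sourceURL comment below tells
// Chrome DevTools to list this block as "global.js" in the Sources tree,
// so you can find it and set breakpoints as if it were an external file.
//# sourceURL=global.js

function formatName(first, last) {
  return last + ', ' + first;
}

console.log(formatName('Trent', 'Schafer'));
```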

So, if I add that comment line above the global variable and function definitions in my page, the source tree in developer tools will now come up like so:

Neat, huh!

The other way of course is if you reference an external script.

Ok, now we know how to find our sources, what we want to do is add a break point to see what is going on. That is just a matter of clicking in the gutter where the line numbers are. You'll see a blue mark appear; then you're ready to run whatever action causes the code to execute.

This UI gives all sorts of goodies - watches, call stack, variable values, etc.

As you can see, the console also allows you to run expressions against the current scope where the debugger is paused.

(One thing that I haven't mentioned is that you can also add a line with the `debugger` statement to trigger a breakpoint.)
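For example, dropping a `debugger` statement into a function pauses execution at that point whenever the developer tools are open (it's a no-op otherwise):

```javascript
function sumOdds(values) {
  let total = 0;
  for (const value of values) {
    if (value % 2 === 1) {
      debugger; // pauses here each iteration when DevTools is open
      total += value;
    }
  }
  return total;
}

console.log(sumOdds([1, 2, 3, 4, 5])); // 9
```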

Another neat feature of this is that you can actually modify scripts in the developer tools when you are prototyping an enhancement, before you solidify the changes in your actual code base. All the changes you make are then visible through the history panel.

2. Debug node program with Chrome

The neat thing with node is that you are able to use this same UI to debug a node program. On the command line, you just need to run the command: node --inspect[-brk] script.js, and you will be given a URL that you can load in your browser. Note, there are two arguments you can use when starting the debug process:


Depending on the type of program, --inspect-brk is not a bad idea, as it will break on the first line of execution.
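As a quick test, a script like the following (saved as script.js, a name assumed just for this example) gives you something worth stepping through:

```javascript
// Run with:  node --inspect-brk script.js
// then open the printed URL (or chrome://inspect) in Chrome; with the
// -brk variant, execution is paused on the first line until you resume.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

console.log(fib(10));
```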

The console output looks like this:

So, you just need to paste that URL into Chrome, and get debugging!

3. Debug node program on the command line

Node also comes with the debug argument. This does seem to be deprecated, but it still works for now.

So, you run the command: node debug foo.js. This will pause execution on the first line, where you can set break points, continue, step over, etc. You can evaluate expressions by typing the command `repl` and then the expression you want to evaluate.
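A minimal session sketch, assuming a foo.js like the one below (the commands shown are typed at node's debug> prompt):

```javascript
// foo.js - start with:  node debug foo.js
// At the debug> prompt:
//   sb(10)  set a breakpoint on line 10
//   c       continue until the breakpoint is hit
//   n       step to the next line
//   repl    drop into a REPL to evaluate expressions, e.g. `joined`
//   watch('joined')  watch an expression between steps
const items = ['a', 'b', 'c'];
let joined = '';
for (const item of items) {
  joined += item;
}
console.log(joined);
```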

Refer to the documentation for the full list of available commands:

4. Debug node program with Atom

Atom is an open source text editor by GitHub, where certain languages have a package that supports debugging code in the editor - JavaScript being one of those languages. So, go ahead and install the node-debugger package.

After installing, you will want to check the configuration. At a minimum, you will probably want to make sure the node path is valid. For example, by default it expects: /bin/node, but I installed node through umake, so it is located at: /home/trent/.local/share/umake/nodejs/nodejs-lang/bin/node.

To toggle a break point, you navigate to the line and then run the command "node-debugger:toggle-breakpoint" - note the default key binding (f9) conflicts with another package I have (build), so I would need to re-map that binding in order to use it. In the meantime, I just run the command.

You will see the breakpoints highlighted in orange in the gutter. Now we can start the debugger by hitting f5. This will still pause execution on the first line, and bring up a panel on the right hand side where we can step over, continue, etc.

And below the editor there is another panel for standard out/error, where we can also evaluate expressions.

5. Debug node with VS Code

VS Code is another open source text editor in which it is possible to debug node programs. It comes with debug functionality for node on a fresh install. So, if you open up your script and hover in the line number gutter, you will see a little break point icon appear.

So, click in the gutter to enable the break point, and then open the debug panel and click Run.

I find this interface a bit more polished than what was provided in the Atom example. Add to that, you can hover over variables to see their values, as is demonstrated with the `i` variable above - which I personally find immensely useful.

One additional feature of VS code which I won't explore in this article is that you can use the debugger in VS code by having it connect to Chrome. Read more here:

Monday, 27 March 2017

(My current) Atom editor Oracle plugins

If you are an Oracle developer and use Atom text editor, here are some extensions that may make your development experience more pleasant.
  • language-oracle
  • build-oracle
  • autocomplete-oracle
  • (experimental/early) plsql-block


This extension aims to provide syntax highlighting in Atom. It was converted from a TextMate bundle back in October 2015, with a few little tweaks here and there since.

To install, open your settings dialog (ctrl+,), go to the Install node and search "language-oracle". The package should come up as the first result, so hit the install button.

(alternatively, from the command line, run: `apm install language-oracle`)

After it's installed, your favourite extensions for Oracle DDL should now be picked up as PL/SQL (Oracle) in the status bar and the keywords and package references should be appropriately styled.


You may recall I blogged about compiling Oracle code back in September 2015. This plugin essentially wraps all the scripts up nicely to get you going quickly.

First, you will want to have the scripting engine installed and available on your system, preferably on your path. For the purpose of this article, I'll just be looking at SQLcl, but SQL*Plus can just as easily be used (and is what I actually have my instance set up with). 

So, head over to the SQLcl downloads page and grab the latest version:

Then unzip and store it in a convenient location (I store mine in /opt).

unzip sqlcl-*.zip 
sudo mv sqlcl/ /opt/sqlcl/

The next part is to install the Atom plugin. To install, open your settings dialog (ctrl+,), go to the Install node and search "build-oracle". The package should come up as the first result, so hit the install button.

(alternatively, from the command line, run: `apm install build-oracle`)

By default, the plugin looks for `sqlplus` in the path. Since in this example we are going with SQLcl, we need to update the configuration. So go into the plugin's settings and specify the path to the SQL interpreter:

Then, in the root of your project folder, you need a file named `.atom-build-oracle.json`, which will be an array of build targets. Each target should contain the fields "connectString" and "targetName":

[
    {
        "targetName" : "hr: dev",
        "connectString" : "hr/hr@//"
    }
]

The first time you add this file to your project, you will need to reload Atom so it knows to read the file to find targets. Now, in the bottom left of the status bar, you will see the first target as defined in your build configuration.

So, I have created a query to test this out (sample.sql). I can hit the keyboard shortcut (ctrl+alt+b), or click on the target in the bottom left to trigger a build. This output then comes out like so:

Easy as!

If you want to use SQL*Plus as the engine, there are some notes on the project's README that you may want to review. See here:


Atom has an API to add support for code completions. In this package, I've essentially taken a dump of certain bits from the data dictionary into a JSON file.

To install, open your settings dialog (ctrl+,), go to the Install node and search "autocomplete-oracle". The package should come up as the first result, so hit the install button.

(alternatively, from the command line, run: `apm install autocomplete-oracle`)

With this installed, as you type, certain Oracle keywords and package references will appear.

(experimental/early) plsql-block

This is one I haven't really done a lot with, and it is not technically my project, but borrowed from another language. A while ago, I came across a couple of Ruby packages that give a visual indication of where a block starts and ends.

(FWIW, this is a fork of the project: There was another project that does a similar thing, except it only matched when on the ending block, so I figured this would be better to base off.)

I've done a slight tweak, and out of the box it works with begin..end blocks. I think the current implementation doesn't work with if and loop statements, so I've disabled the if-block matching implementation for now. If someone has the time to look more into this project, that would be great - as I can see this being supremely useful; or maybe stealing the ideas from this project and re-implementing for PL/SQL!

Because it's incomplete for PL/SQL code, I haven't published this - but you can get up and going with this by cloning the fork/repository I have in my GitHub account. Go into your package directory, and clone like so:

cd ~/.atom/packages/
git clone
cd plsql-block
apm i

With that installed, you should see it under community packages in Atom:

And, in your PL/SQL code, you will find the opening and closing blocks highlighted like so:

Even without the if and loop constructs matched, I can imagine this still being quite handy.



Wednesday, 12 October 2016

APEX URLs and HTML need not be in your SQL report queries

I've seen a few examples recently where people have been embedding URLs and HTML into their SQL queries. I tend to advocate keeping such things out of the query itself and using APEX functionality such as link target and html expressions (in SQL Query column attributes).

First example, a full page URL embedded into a column like so (conditionally based on some condition):

case when job = 'PRESIDENT'
then 'f?p='||:APP_ID||':2:'||:APP_SESSION||'::::P2_EMPNO:'||empno
else  'f?p='||:APP_ID||':3:'||:APP_SESSION||'::::P3_EMPNO:'||empno
end dest_url

Then on the column attributes, you make it a link by specifying the type as a link, and the target as a URL, specifying the target as #DEST_URL#.

Taking a look at the URL, we can see there is only one bit changing, and that is the page number. So, we can avoid embedding the whole URL in our query by simply adding a column for our destination page:

case when job = 'PRESIDENT'
then 2
else  3
end dest_page

Then, when we specify our target, we use the page in this application option, and pass in the row value #DEST_PAGE#, like so:

Another example that is not uncommon to come across in samples is generating much more HTML in the query, for example:

'<button onclick="void(0);" type="button"'
        || ' class="t-Button t-Button--hot t-Button--small actionButton"'
        || ' data-empno="' || empno || '"'
        || ' data-job="' || job || '">'
        || ' <span class="t-Button-label">Click me!</span></button>'

After adding this, you also need to set security options to not escape special characters, in order for this to be rendered properly.

But again, it really is not necessary to add all this noise to your query. In a report, every column has an HTML Expression property where you can modify the column/row value to use custom HTML. So, from the previous example, we change the query just to add a new column - if you have a field you need anyway, use that, or just add an empty column to put your button in:

Then, in the column attributes, for HTML expression, it's just a matter of specifying that same HTML as from the previous example (less any PL/SQL and SQL bits - and using substitution strings where necessary).

<button onclick="void(0);" type="button"
    class="t-Button t-Button--hot t-Button--small actionButton"
    data-empno="#EMPNO#" data-job="#JOB#">
    <span class="t-Button-label">Click me!(2)</span>
</button>


Obviously, this will come down to a personal preference on how you deal with such situations. I just prefer to remove unnecessary noise from my queries and use aspects of APEX that are available to me - rather than embedding as much as possible in the query source. 

Thursday, 28 July 2016

Password filling with KeePass2 - Ubuntu, Chrome and APEX


Whilst I was on holidays, I saw a tweet mentioning an APEX enhancement request for browsers to handle passwords in the password manager better. In particular, a feature request was logged which you can see here:

Basically, you can have a multitude of passwords under the same domain but with different workspaces, which is sometimes not handled greatly.

There was a bit of back and forth, with mention of 1Password not having the issue - though this is a non-free product with seemingly no (official) client for Linux. And finally, came the following tweet:

This key sequence is a shortcut that you can type so that if you're in the target window, the specified key stroke will be executed. In my limited testing, I only had success with this when specifying which window to map to (which is based off the window title). It's quite neat as well, but I went in a different direction.

So.... upon a bit more digging, I discovered there is a project called KeePassHttp, which exposes the database entries over a little HTTP server. And then another project, chromeIPass, which interfaces with KeePassHttp to retrieve/update entries in the KeePass database, direct from the browser.

Keep following along for steps on how to use this. Whilst the directions are based on Ubuntu, I'm sure you'll be able to pick up the differences on your preferred platform.


So, first, you need to install KeePass2. It is worth noting, there are two "keepass" packages in the repos, and you will want to get the one named "keepass2" - as that is the one KeePassHttp interacts with.

In addition, you will want to also have `mono-complete` installed. So at the terminal, run the command:

sudo apt-get install keepass2 mono-complete

KeePass2 has a plugin system, which is how the passwords are exposed. So, before starting up KeePass, go and grab the plugin file from the GitHub project. You only need the KeePassHttp.plgx file. Once downloaded, it needs to be placed in the directory /usr/lib/keepass2/. So, go ahead and do that, or run the commands:

sudo wget -q -O /usr/lib/keepass2/KeePassHttp.plgx
sudo chmod 644 /usr/lib/keepass2/KeePassHttp.plgx

Set up

With that all done, start up KeePass2 and create your password database. Set up a master password and/or key file, and you can start adding entries. I suggest also setting a preference so that by default the close button minimises the application to the tray - this is because it needs to be running for the browser to access the entries. This is done through Tools -> Options; go to the Interface tab and it's the first checkbox at the top.

Once that's all done, it's time to go to Chrome and install the extension. This is done through the Chrome web store. Once added, you'll find an icon added to your extension bar.

Initially, it will have a little red cross as it's not connected to KeePass. So click on the extension button then click Connect.

Then give the database a name and click Save.

Fetching entries in the browser

So, let's say I have manually added an entry for my Gumtree account as per:

Now, when I go to login on Gumtree, I'm prompted with the following:

note: This prompt can be disabled through the KeePassHttp options - under the advanced tab, select "Always allow access to entries" and "Always allow updating entries".

Now, after that, the passwords are auto-filled without the prompt. If you have multiple accounts for the same domain, after you grant access again, the username field becomes a drop down list where you can choose the relevant account.

Saving entries to the database, from the browser

Now, to save passwords into KeePass directly from the browser. When you login, the extension button at the top of the browser will start flashing. If you click this you can opt to save a new entry in the database.

So click New and it will be added to the database under the "KeePassHttp" group, where you can re-organise it into one of your other groups.

It's also a good idea to stop Chrome from saving your passwords so you aren't prompted twice. This is done in the settings under the section "Passwords and Forms".

APEX usage

Now, with all that out of the way, how do we work with the workspace login screen? Go to the KeePassHttp settings, and enable custom fields.

Now, to set up a couple of examples of workspace logins. In KeePass, I set up a new group for workspaces (APEX Workspaces), then created some entries as per:

Because we have 3 fields associated with a login (workspace, username and password), we need to record the additional field. So, in the Advanced tab, add a new String field. The name needs to be prefixed with "KPH: ", as in KPH: <name>. <name> is not important in this case as it's the only additional field (otherwise, the fields would be populated in alphabetical order). So, I just named my field "KPH: Workspace". It would end up looking something like this:

The next step, which only needs to be done once for each domain, is to specify the form fields, so KeePassHttp knows where to map the values to. If you click the extension icon, you will see a button at the top right, "Choose own credential fields for this page". Click that, and you will be able to specify the fields. All in all, you should see the following flow:

Click confirm, and we are ready to form fill. Either click the username field, where you can select the relevant account, or click the extension icon and you will see a list of all saved accounts for the domain, which can then be applied.

APEX app usage

One other problem I've found with the regular password manager is that because a bunch of apps are on the same path with a slightly different query string parameter (f?p=appid:*), they are often treated as the same site, and as such it's quite difficult to have multiple entries for different applications.

One way around this is to name your entries in such a fashion as to make it clear which app the credentials map to. So, in my previous examples I prefixed with Workspace so I know they're for workspace logins; with app logins, I include the app ID and name in the title, giving me a selection like so (which makes it quite clear which password entry I'm using).

Thursday, 28 April 2016

Preparing for CSS grids

I was just listening to "The Web Ahead" podcast where they were talking about the upcoming CSS grid system. If you can spare the time, go and take a listen (episode 114). Upcoming in the sense that there is a spec, but it's not yet mainstream - you can enable it through a flag in WebKit based browsers, or use Firefox nightly (e.g. I'm on Chrome 51 and it seems to be available behind the flag). So you can't use it in your production applications just yet - soon'ish.

So, to get started (if using Chrome), you will want to enable experimental web platform features:

You can see the current working draft here -

It's worth taking note of the motivation behind this. If you search for CSS layouts, you will see a bunch of solutions (probably involving floats) using techniques not really designed for full page layout.

Let's get started making a grid. The containing element should have a display property of either grid or inline-grid.

Without specifying any column widths, when placing elements on the same row, the columns will be of equal width.

As per:

<div id="gridContainer">
  <div id="col1">one</div>
  <div id="col2">two</div>
  <div id="col3">three</div>
</div>

<style type="text/css">
#gridContainer {
  display: grid;
}

#col1 {
  grid-column: 1;
}

#col2 {
  grid-column: 2;
}

#col3 {
  grid-column: 3;
}
</style>

You may want to span elements over multiple columns - in that case, imagine numbered lines at the edges of each column. Values are separated with a forward slash (/), and you specify the starting column line and the ending column line. Alternatively, we can specify the number of columns to span, prefixed with the span keyword.

If we update the CSS to:

#gridContainer {
  display: grid;
}

#col1 {
  grid-row: 1;
  grid-column: 1 / 2;
}

#col2 {
  grid-row: 2;
  grid-column: 1 / 3;
}

#col3 {
  grid-row: 3;
  grid-column: 1 / 4;
}


Or, using the span keyword:

#gridContainer {
  display: grid;
}

#col1 {
  grid-row: 1;
  grid-column: 1 / span 1;
}

#col2 {
  grid-row: 2;
  grid-column: 1 / span 2;
}

#col3 {
  grid-row: 3;
  grid-column: 1 / span 3;
}

We get:

We can make our styles more readable by specifying the layout spec in the containing element and giving each line a name. This is done in square brackets ([,]). In the container element, we define the layout spec with the properties grid-template-columns and grid-template-rows. Here, you specify the width/height of the columns/rows.

(note: If you don't declare the columns and rows, as per the previous example, the grid is implicitly created based on your grid data - an explicit layout makes the grid more concise)

So, with that applied, when specifying the column - we can use the name rather than the column index, as per:

#gridContainer {
  display: grid;
  grid-template-columns: [start] 100px [col2] 100px [col3] 100px [end];
  grid-template-rows: [top] auto [middle] auto [end];
}

#col1 {
  grid-row: top;
  grid-column: start / col2;
}

#col2 {
  grid-row: middle;
  grid-column: start / col3;
}

#col3 {
  grid-row: end;
  grid-column: start / end;
}

If your rows and columns follow the same spec for the whole page, you can use the repeat function, which accepts two parameters: 1. how many times to repeat; and 2. the column/row spec.

e.g.: grid-template-columns: repeat(4, [col] 100px)

In this example, when referring to a column we would use either the index, or the name followed by the index (e.g. "col 2" for the second column).

Well, I just wanted to give a brief overview of this new technology. There are countless examples out there that you can check out/try out.

Thursday, 21 April 2016

Setting up and consuming your data securely with ORDS

Setting up

So, you have some data you want to expose to third parties with a REST API, but you want to do so in a secure manner. This is all possible with ORDS using the OAuth2 mechanisms built in. To keep things simple, I will use the sample data (emp, dept) REST functions - and this has all been done in a workspace named "company_reports". The set up will be done from within APEX - there is an alternative approach of setting everything up using the ORDS API, a subject for another day.

So, go to SQL Workshop and RESTful Services. If you don't already have the module "", click on the "Reset Sample Data" link in the task list:

Now, what we want to do is secure this so that anybody with the URL can't just come in and access the data. In the same task list as above, go to the "RESTful Service Privileges" link and create a new privilege protecting the module "".

At this point, if you now go into your module and try to test access to the data, you will receive an error (401 Unauthorized): "Access to this resource is protected. Please sign in to access this resource.".

Before moving onto 3rd party clients, you may want to actually access this data in your APEX application from within the same workspace. We first need to make sure the designated users have been assigned the privilege "RESTful Services".

To access the module from the same workspace, in an authenticated APEX session, you need the header set: Apex-Session, with the value being the application id and session id, separated by a comma (e.g.: Apex-Session: 114,16530545902770). If this is not possible, you can pass a query string parameter _apex_session so the URL becomes like:,16530545902770.
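As a sketch, building that header value from page JavaScript might look like the following (the helper name and example ids are mine, not an APEX API):

```javascript
// Build the Apex-Session header value: application id and session id
// separated by a comma (the ids here are the example values above).
function apexSessionHeader(appId, sessionId) {
  return appId + ',' + sessionId;
}

// Hypothetical usage with fetch from within an authenticated session;
// the endpoint path is a placeholder, not a real module URL.
// fetch('/ords/company_reports/hr/employees/', {
//   headers: { 'Apex-Session': apexSessionHeader(114, '16530545902770') }
// });

console.log(apexSessionHeader(114, '16530545902770'));
```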

Now that we have secured our REST API, how do we set it up so that it can be consumed in a third party application? You need to set up a client, but to be able to do that, you need a user with the role of "OAuth 2.0 Client Developer" (note: this role alone isn't enough to consume the data)

To register a client, you need to go to the URL: /ords/workspace/ui/oauth2/clients/ - replacing workspace with your actual workspace name. So, in this example case: Here you need to log in with the user you assigned the OAuth2 Client Developer role.

If this is your first time registering a client, you will be presented with an empty page with a button to register a client:

So, let's go ahead and set up a client. Click the Register Client button filling out all the fields. For this example, I'm going to go with the Code response type. After accepting or denying access to your data, you are re-directed to the URL specified in the Redirect URI field. This URL has some query string parameters, so in your consuming application you need a URL that will be able to interpret the query string parameters to respond to the received code. More on that in the consuming section.

After registering the client, go back into it to get the URL you need to request a token. In this case, it becomes:


So, now this is all set up, it's time to use it in our third party application. For this, I've made a workspace named "employee_consumer". The first thing we need to do here, is set up our callback URI as set up when registering the client. It's at this point we will need to get our access_code.

The two handler cases are:
  1. Success - returns access_code and state
  2. Error - returns error and state
So, set up a handler for the two cases


The source will be PL/SQL code with the following logic (this example has the client id and secret hard coded - ideally, you will encapsulate these better):

declare
    l_access_token_resp clob;
    l_resource_resp clob;
    l_success_vals apex_json.t_values;
    l_access_token varchar2(50);
    l_error varchar2(50);
    l_resource_url varchar2(200);

    procedure print(p_in varchar2)
    is
    begin
        htp.p(p_in || '<br />');
    end print;
begin
    l_resource_url := '';

    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';

    l_access_token_resp := apex_web_service.make_rest_request(
        p_url => ''
        , p_http_method => 'POST'
        , p_username => 'cnnD6yQud4G11bM7XtUYog..'--client id
        , p_password => '6tfLKgRat7At3OM2BRhqCQ..'--client secret
        , p_parm_name => apex_util.string_to_table('grant_type:code')
        , p_parm_value => apex_util.string_to_table('authorization_code:' || :code)
    );

    apex_json.parse(
        p_values => l_success_vals,
        p_source => l_access_token_resp
    );

    l_access_token := apex_json.get_varchar2(
        p_values => l_success_vals,
        p_path   => 'access_token'
    );

    l_error := apex_json.get_varchar2(
        p_values => l_success_vals,
        p_path   => 'error'
    );

    print('Access token: ' || l_access_token);
    print('Error: ' || l_error);
    print('Requesting: ' || l_resource_url);
    print(' ');

    apex_web_service.g_request_headers(1).name := 'Authorization';
    apex_web_service.g_request_headers(1).value := 'Bearer ' || l_access_token;

    l_resource_resp := apex_web_service.make_rest_request(
        p_url => l_resource_url
        , p_http_method => 'GET'
    );

    htp.p(l_resource_resp);
end;



The error handler's source is simply:

htp.p('Error: ' || :error);
htp.p('State: ' || :state);


So, now we can test all this out. In our application, we add a button with the action to redirect to a URL. We specify the URL as the one that was given to us during the set up of the OAuth client.

When clicking on the button, I'll be taken to a login prompt

If I enter the credentials of someone without access to the data over REST (the user needs the role of RESTful Services), I get taken straight to the redirect URI.

On the other hand, if I enter the login of someone with valid credentials, I will (initially) be taken to a page to give access to the data.

So, when clicking Allow access, we get taken to the callback URI, with the code query string parameter.

This code string is then used to request an access token that will be used to access the secured resources. The access token is requested from /ords/company_reports/oauth2/token (e.g.

The GET handler I set up earlier is just set up to add some debug information, and output the resource to the page.

(note: In these examples, on my workstation I set up to point to - which is why most examples use, but the above output is using the actual server's IP address)

In your actual set up, you would probably want to store the access/refresh tokens somewhere and redirect back to APEX - hopefully this gives a good starting point.

Also, refer to the ORDS docs, which give an example: