Postgres array_to_string() with array in JSONB field

I want to get a TEXT representation of an array stored in a JSONB field. The string I expect is the following: ,,

The JSONB field contains this:

    {
        "id": 10,
        "domain_list": ["", "", ""]
    }

I tried to use the array_to_string() function with the following code:

    SELECT array_to_string(
        '{"id": 10,"domain_list": [  "",  "",  ""]}'::jsonb->'domain_list',
        ', '
    );

But it returns the error

ERROR:  function array_to_string(jsonb, text) does not exist
LINE 2:         array_to_string(
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

I also tried this:

SELECT string_agg(domain_list::text, ', ')
FROM json_array_elements(
    '{"id": 10,"domain_list": [  "",  "",  ""]}'::jsonb->>'domain_list'
) as domain_list;

But it returns the error

ERROR:  function json_array_elements(text) does not exist
LINE 2:     FROM json_array_elements(
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
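For what it's worth, a shape that avoids both errors (my suggestion, not one of the attempts above) is to unnest the jsonb array with jsonb_array_elements_text(), which does accept a jsonb value, and then aggregate the resulting text rows:

```sql
-- Suggested fix: expand the jsonb array into text rows, then join them.
SELECT string_agg(elem, ', ')
FROM jsonb_array_elements_text(
    '{"id": 10, "domain_list": ["", "", ""]}'::jsonb -> 'domain_list'
) AS t(elem);
```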

postgres/oracle: get last non-null value within group

I have data which I have to group by. In each resulting group there are rows with multiple columns, and each such column has to be treated like this: for each given column, return the most recent non-null value. So I have to group the whole table by 'gb', and find a 'max-like(NUM)' value for every such column (below represented as NUM). The max-like function orders by a temporal column, below represented as 'time'. In other words: group by 'gb', sort each group by 'time' desc nulls last, and take the first non-null item in the group.

Sorry for the convoluted description; I hope it's clear. Any idea how to write that SQL query (Oracle/Postgres)?


create table test (
  gb integer,
  num integer,
  time integer
);

--each VALUES line below is one group; the triplets are (gb, num, time),
--so for group 1 the num values ordered by time are 2,1,3,
--i.e. the most recent non-null value is 3.
insert into test VALUES (1,2,1),(1,1,2),(1,3,3);--3
insert into test VALUES (2,1,1),(2,2,2),(2,3,3);--3
insert into test VALUES (3,3,1),(3,2,2),(3,1,3);--1
insert into test VALUES (4,3,1),(4,2,2),(4,null,3);--2
insert into test VALUES (5,2,1),(5,3,2),(5,null,3);--3
insert into test VALUES (6,2,1),(6,null,2),(6,null,3);--2


select gb,
  '<magic goes here>'
from test t
group by gb

is expected to return

1 | 3
2 | 3
3 | 1
4 | 2
5 | 3
6 | 2
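For reference, one Postgres way to express that "max-like" function (a sketch of my own, assuming the test table above with its num column): aggregate the values ordered by time descending, strip the NULLs, and take the first remaining element.

```sql
-- Sketch, not from the question: array_agg keeps NULLs, so order by time
-- descending, remove the NULLs, and pick the first remaining value.
SELECT gb,
       (array_remove(array_agg(num ORDER BY time DESC), NULL))[1] AS last_num
FROM test
GROUP BY gb
ORDER BY gb;
```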

ASP.NET: when the number of users increases, my drop-down selection is very slow

I have used the stored procedure below in the drop-down's selected index changed event.

create function get_timesheetentrydetails(tasktypeid int, projectbatchlotid int)
 returns table (workpackageid integer, workpackagename varchar(500), taskid int, taskname varchar(500), taskdesc varchar(500), previoustaskid int,
                previoustaskname varchar(500), previoustaskdesc varchar(500), esthrs numeric, empid varchar(200)) as $$
begin
  return query
    select, wp.code, wp.taskid,, t.description, wp.previoustaskid, as previoustaskname, t1.description as previoustaskdesc,
           wp.estimatehours, u.empid
    from workpackage wp
    inner join task t on wp.taskid =
    inner join task t1 on wp.previoustaskid =
    inner join userprofile u on wp.lastupdatedby =
    where wp.projectbatchlotid = $2 and t.tasktypeid = $1 and wp.status in ('TBA','inprocess')
      and not exists (select id from timesheet where and endtime is null);
end;
$$ language plpgsql;

On the database server this procedure executes in 450 ms, but when I call it from the user interface, as the number of users and the amount of data increase, execution takes 60 seconds. Could you please help me solve this issue as early as possible?
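A hedged first diagnostic step (the parameter values 1 and 2 below are placeholders, not values from the question): run the same call the UI issues directly on the server with timing enabled, so the 450 ms vs 60 s gap can be attributed either to the plan under load or to whatever sits between the UI and the database.

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM get_timesheetentrydetails(1, 2);
```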

The code below is in the drop-down's selected index changed event.

ddlTask.Enabled = true;

DataTable dtbTasks = getTSTasks(rbnCurrentTask.SelectedItem.Value, ddlLotNo.SelectedItem.Value);
Master.bindDropDownList(ddlTask, dtbTasks, "taskdesc", "taskid");

// cascadeSelection(this, "ddlTask");

if (rbnCurrentTask.SelectedItem.Text == "Project Management")
{
    DataTable dtb = getPMTasks();
    Master.bindDropDownList(ddlTask, dtb, "taskcode", "taskid");
}
if (rbnCurrentTask.SelectedItem.Text == "Training")
{
    DataTable dtbTraining = getTrainingTasks();
    Master.bindDropDownList(ddlTask, dtbTraining, "taskcode", "taskid");
}

How to build aggregations on joined values in the most performant way?

I have a problem with building aggregations on joined values. Simplified, it looks like this:

I got the following tables:

  CREATE TABLE movies (
    id SERIAL,
    name VARCHAR,
    year INT,
    genre VARCHAR,
    country VARCHAR
  );

  CREATE TABLE tags (
    id SERIAL,
    name VARCHAR
  );

  CREATE TABLE movies_tags (
    id SERIAL,
    movie_id INT,
    tag_id INT
  );
Now I want to do the following statement:

  SELECT m.*, array_agg( AS tags
  FROM movies m
  LEFT JOIN movies_tags mt ON mt.movie_id =
  LEFT JOIN tags t ON = mt.tag_id
  LIMIT 10;

Because of the aggregation in the SELECT, all movies get joined with all tags before the top 10 are selected out of that big join. My goal is to run the aggregation only on the top 10 movies, for performance reasons. So what I did was this:

  WITH top_movies AS (
    SELECT m.*
    FROM movies m
    LIMIT 10
  )
  SELECT tm.*, array_agg( AS tags
  FROM top_movies tm
  LEFT JOIN movies_tags mt ON mt.movie_id =
  LEFT JOIN tags t ON = mt.tag_id
  GROUP BY;

The performance is much better. But I have another problem. The final goal is to create a reusable component, like a function in Postgres or a named query in an ORM like Rails's Active Record, which I can modify dynamically afterward according to my needs, for example like this:

  SELECT * FROM my_top_movies_with_tags() AS tm
  WHERE = 'USA' AND tm.year <= 1995
  LIMIT 10;

Therefore I have to restructure my SQL statement so that the movie selection is the outer query, while still limiting the joining of the tags to the top n movies I want as a result.

To accomplish this, I experimented with lateral joins and did this:

  SELECT m.*, lat.tags FROM movies m
  LEFT JOIN LATERAL (
    SELECT array_agg( AS tags
    FROM movies_tags mt
    JOIN tags t ON = mt.tag_id
    WHERE mt.movie_id =
  ) AS lat ON true
  LIMIT 10;

That gives me the flexibility to modify it dynamically afterward, but it performs much worse.

Is there any other way to accomplish my goals that I’m not aware of?

My goals summarized:

  1. Build aggregates (array_agg) only on the LIMITed movie set and not on the whole movies table.
  2. Stays modifiable through appending WHERE, ORDER BY, and LIMIT clauses.
  3. Performs well.
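One more shape that may be worth trying (my suggestion, not from the question): keep movies as the outer query and fetch the tags through a correlated scalar subquery in the select list, so the aggregate is only evaluated for rows that survive the WHERE and LIMIT, while the outer query stays freely modifiable:

```sql
SELECT m.*,
       (SELECT array_agg(
          FROM movies_tags mt
          JOIN tags t ON = mt.tag_id
         WHERE mt.movie_id = AS tags
FROM movies m
WHERE = 'USA' AND m.year <= 1995
LIMIT 10;
```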

Error: column “undefined” does not exist

Hi everyone,

I'm a new ReactJS developer. I tried to set up Firebase in my backend and I have a problem with my frontend. If I run my API using Postman, the API works and sends a message from my backend, but if I run my code from the frontend I get an error like the one below. The table is users and it does not have a column named undefined.

 { error: column "undefined" does not exist
    at Connection.parseE (C:\fordg\node_modules\pg\lib\connection.js:567:11)
    at Connection.parseMessage (C:\fordg\node_modules\pg\lib\connection.js:391:17)
    at Socket.<anonymous> (C:\fordg\node_modules\pg\lib\connection.js:129:22)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:547:20)
  name: 'error',
  length: 109,
  severity: 'ERROR',
  code: '42703',
  detail: undefined,
  hint: undefined,
  position: '55',
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: undefined,
  table: undefined,
  column: undefined,
  dataType: undefined,
  constraint: undefined,
  file: 'parse_relation.c',
  line: '3183',
  routine: 'errorMissingColumn' }

and this is my backend code

Orders.patch([ Object.assign({id:}, req.body) ])
    .then(orders => {
        DB.query("SELECT oauth FROM users WHERE id = " + req.body.driver + " OR id = " + req.body.customer).then(res => {
            res.forEach(function (row, i) {
                let tmp = {
                    token: row.oauth.token
                };
                res[i] = tmp;
            });
            // console.log(res);

            const arrayToken = (obj) {
                return obj.token;
            });
            // console.log(arrayToken);

            const message = {
                // to: res[0].token,
                registration_ids: arrayToken,
                notification: {
                    body: "Hey! you got update order notification."
                }
            };

            fcm.send(message, function(err, response){
            });
        }).catch(err => console.log(err));

        jsonData = util.jsonData(null, orders);
    })
    .catch(err => {
        jsonData = util.jsonData(400);
    });
So how do I fix it, and can you help me understand the error in my code?
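As a side note (my reading of the error, to be verified against your request payload): error code 42703 at position 55 usually means one of the interpolated values, e.g. req.body.driver, was undefined, so the SQL text literally contained the bare word undefined, which Postgres parses as a column name. Parameterized queries avoid that class of bug; here is a small self-contained sketch with hypothetical names:

```javascript
// Sketch with hypothetical names: build a pg-style parameterized query config
// instead of concatenating values into the SQL string.
function buildUserTokenQuery(driverId, customerId) {
  return {
    text: "SELECT oauth FROM users WHERE id = $1 OR id = $2",
    values: [driverId, customerId],
  };
}

const q = buildUserTokenQuery(7, 9);
console.log(q.text);   // the SQL keeps $1/$2 placeholders
console.log(q.values); // [ 7, 9 ]
```

With node-postgres, passing missing values this way surfaces them as null parameters instead of silently corrupting the SQL text.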

Procedure inside procedure in PostgreSQL

This procedure runs well in MySQL.

But NOT in PostgreSQL. It always shows:

ERROR:  syntax error at or near "some_name" Line n...

How do we do this in PostgreSQL?
For information: I'm working on a project that is migrating a database from MySQL to PostgreSQL.
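Without the original procedure body it's hard to be specific, but the usual shape of the rewrite (the names below are hypothetical) is: before PostgreSQL 11 there is no CREATE PROCEDURE, so MySQL stored procedures become plpgsql functions, and calling one procedure from inside another becomes PERFORM (or SELECT) on the inner function:

```sql
CREATE OR REPLACE FUNCTION inner_proc() RETURNS void AS $$
BEGIN
    -- work goes here
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION outer_proc() RETURNS void AS $$
BEGIN
    PERFORM inner_proc();  -- call the other function, discarding its result
END;
$$ LANGUAGE plpgsql;
```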

I don't know what the best title for this case is. Please, someone, edit in a title appropriate for the case. Thanks.

Query optimization - Postgres / pgAdmin 4

My query is taking hours to run; it's over 9M rows, in Postgres 9.3 via pgAdmin 4.

create table "t1" as
select
    count(gnma2."Disclosure_Sequence_Number") as "Total_Loan_Count",
    avg(cast(gnma2."Loan_Interest_Rate" as double precision))/1000 as "avg_int_rate",
    avg(cast(gnma2."Original_Principal_Balance" as real))/100 as "avg_OUPB",
    avg(cast(gnma2."Unpaid_Principal_Balance" as real))/100 as "avg_UPB",
    avg(cast(gnma2."Loan_Age" as real)) as "avg_loan_age",
    avg(cast(gnma2."Loan_To_Value" as real))/100 as "avg_LTV",
    avg(cast(gnma2."Total_Debt_Expense_Ratio_Percent" as real))/100 as "avg_DTI",
    avg(cast(gnma2."Credit_Score" as real)) as "avg_credit_score",
    left(gnma2."First_Payment_Date",4) as "Origination_Yr"
from gnma2
group by left(gnma2."First_Payment_Date",4);
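A hedged first step (general advice, not from the question): 9M rows aggregated into a handful of year buckets should take minutes at most, not hours, so it's worth timing the bare SELECT without the CREATE TABLE wrapper and inspecting its plan, for example:

```sql
EXPLAIN ANALYZE
SELECT left(gnma2."First_Payment_Date", 4) AS "Origination_Yr",
       count(gnma2."Disclosure_Sequence_Number") AS "Total_Loan_Count"
FROM gnma2
GROUP BY left(gnma2."First_Payment_Date", 4);
```

If even this reduced aggregate is slow, the problem is the table scan itself rather than the casts or the averages.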



Postgres 10 cluster does not start automatically after I changed the data directory

I changed the postgres data directory following these steps:

sudo systemctl stop postgresql

sudo rsync -av /var/lib/postgresql /mnt/volume-nyc1-01

sudo nano /etc/postgresql/9.5/main/postgresql.conf

then I edited postgresql.conf:

data_directory = '/mnt/volume-nyc1-01/postgresql/9.5/main'

sudo systemctl start postgresql

The new data directory is owned by the postgres user.
The problem is that after Ubuntu starts, the postgres cluster is down.
I ran the command pg_lsclusters and got this answer:

10  main    5432 down   <unknown> /mnt/volume-nyc1-01 /var/log/postgresql/postgresql-10-main.log

And after running sudo systemctl start postgresql@10-main the cluster starts and postgres works OK.
So what can I do to fix this, so that the cluster starts normally after Ubuntu boots?
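One thing I would check (a guess based on the symptoms, not something stated in the question): on Debian/Ubuntu the wrapper postgresql service starts the per-cluster postgresql@version-cluster units, so if the 10/main unit is not enabled, the cluster will not come up at boot even though starting it by hand works:

```shell
# Hypothesis to verify: the 10/main unit may not be enabled at boot.
systemctl is-enabled postgresql@10-main

# If it reports "disabled", enable it:
sudo systemctl enable postgresql@10-main
```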

Why are my add_foreign_key on_delete: :cascade designations not being transferred to the schema

In an effort to properly handle Postgres's ForeignKeyViolation errors, I've gone through each of my models and added on_delete: options as shown below.

The format I’m following is

add_foreign_key <:named_table>, <:related_table>, on_delete: :cascade

However, after making these changes and running rails db:reset, I notice these extra options are not being passed into the resulting schema.rb file, and I still receive the above-mentioned error when trying to delete an image.

Is there something I’m doing wrong either in my procedure or my syntax? Thank you in advance!


class CreateDocumentImages < ActiveRecord::Migration[5.1]
  def change
    create_table :document_images do |t|
      t.references :document, foreign_key: true
      t.references :image, foreign_key: true
    end

    add_foreign_key :document_images, :documents, on_delete: :cascade
    add_foreign_key :document_images, :images, on_delete: :cascade
  end
end

The resulting schema.rb contains:

 add_foreign_key "document_images", "images"
 add_foreign_key "document_images", "documents"
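For comparison, the variant I would try (an untested sketch of my suggestion, not the asker's code): t.references :document, foreign_key: true already creates a foreign key, so the later add_foreign_key lines define a second, duplicate constraint; Rails lets you attach the delete rule directly to the references line instead:

```ruby
create_table :document_images do |t|
  t.references :document, foreign_key: { on_delete: :cascade }
  t.references :image,    foreign_key: { on_delete: :cascade }
end
```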

SQLAlchemy, array_agg, and matching an input list

I am attempting to use SQLAlchemy more fully, rather than just falling back to pure SQL at the first sign of distress. In this case, I have a table in a Postgres database (9.5) which stores a set of integers as a group, by associating individual items (atom_id) with a group identifier (group_id).

Given a list of atom_ids, I’d like to be able to figure out which group_id, if any, that set of atom_ids belong to. Solving this with just the group_id and atom_id columns was straightforward.

Now I’m trying to generalize such that a ‘group’ is made up of not just a list of atom_ids, but other context as well. In the example below, the list is ordered by including a sequence column, but conceptually other columns could be used instead, such as a weight column which gives each atom_id a [0,1] floating point value representing that atom’s ‘share’ of the group.

Below is most of a unit test demonstrating my issue.

First, some setup:

def test_multi_column_grouping(self):
    class MultiColumnGroups(base.Base):
        __tablename__ = 'multi_groups'

        id = Column(Integer, primary_key=True)
        group_id = Column(Integer)
        atom_id = Column(Integer)
        sequence = Column(Integer)  # arbitrary 'other' column. In this case an integer, but it could be a float (e.g. a weighting factor)

    # Insert 6 rows representing 2 different 'groups' of values
    vals = [
        # Group 1
        {'group_id': 1, 'atom_id': 1, 'sequence': 1},
        {'group_id': 1, 'atom_id': 2, 'sequence': 2},
        {'group_id': 1, 'atom_id': 3, 'sequence': 3},
        # Group 2
        {'group_id': 2, 'atom_id': 1, 'sequence': 3},
        {'group_id': 2, 'atom_id': 2, 'sequence': 2},
        {'group_id': 2, 'atom_id': 3, 'sequence': 1},
    ]
    self.session.add_all(
        [MultiColumnGroups(**x) for x in vals])

    self.assertEqual(6, len(self.session.query(MultiColumnGroups).all()))

Now, I want to query the above table to find which group a specific set of inputs belongs to. I’m using a list of (named) tuples to represent the query parameters.

    from collections import namedtuple
    Entity = namedtuple('Entity', ['atom_id', 'sequence'])
    values_to_match = [
        # (atom_id, sequence)
        Entity(1, 3),
        Entity(2, 2),
        Entity(3, 1),
    ]
    # The above list _should_ match with `group_id == 2`

A raw SQL solution follows. I'd prefer not to fall back on this, as part of this exercise is to learn more SQLAlchemy.

    r = self.session.execute('''
        select group_id
        from multi_groups
        group by group_id
        having array_agg((atom_id, sequence)) = :query_tuples
        ''', {'query_tuples': values_to_match}).fetchone()
    print(r)  # > (2,)
    self.assertEqual(2, r[0])

Here is the above raw-SQL solution converted fairly directly into a broken SQLAlchemy query. Running it produces a psycopg2 error: (psycopg2.ProgrammingError) operator does not exist: record[] = integer[]. I believe I need to cast the array_agg to an int[]? That would work as long as the grouping columns are all integers (which, if need be, is an acceptable limitation), but ideally this would work with mixed-type input tuples / table columns.

    from sqlalchemy import tuple_
    from sqlalchemy.dialects.postgresql import array_agg

    existing_group = self.session.query(MultiColumnGroups.group_id).\
        group_by(MultiColumnGroups.group_id).\
        having(array_agg(tuple_(MultiColumnGroups.atom_id, MultiColumnGroups.sequence)) == values_to_match).\
        one()

Is the above session.query() close? Have I blinded myself here, and am I missing something super obvious that would solve this problem in some other way?
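Not a direct answer to the record[] cast, but a workaround sketch I would consider (my idea, reusing the names from the test above; note the caveat in the comment): filter to the requested (atom_id, sequence) pairs with a tuple IN, then require the per-group match count to equal the input length, which sidesteps the aggregate comparison entirely:

```python
from sqlalchemy import func, tuple_

# Caveat: a group containing all requested pairs *plus* extra rows would also
# match; add a total-row-count comparison per group if that matters.
pairs = [(e.atom_id, e.sequence) for e in values_to_match]
q = (
    self.session.query(MultiColumnGroups.group_id)
    .filter(tuple_(MultiColumnGroups.atom_id,
                   MultiColumnGroups.sequence).in_(pairs))
    .group_by(MultiColumnGroups.group_id)
    .having(func.count() == len(pairs))
)
```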