What is the most efficient way to store a variable number of columns in SQL Server?

I have a requirement to store a large number of records (several million) in Microsoft SQL Server (via C#). Most columns are standard, but certain groups of users will need to add their own custom columns and record data in them.
The data in each custom column will not be large, but the number of records carrying a given set of custom columns will be in the millions.

I do not know ahead of time what these columns might be (in terms of name or datatype), but I’ll need to pull reports based on these columns as efficiently as possible.

What is the most efficient way of storing the new varying columns and data?

  • Entity-Attribute-Value model?
    Cons: efficiency when there is a large number of custom columns (= a large number of rows)?

  • An extra “CustomColumns” table?
    Storing columnName, data, and datatype for each custom column of each entry.
    Cons: a table with a very large number of records; perhaps not the most efficient storage.

  • Serialise the extra columns for each record into a single field.
    Cons: lookup efficiency, and stored procedures become complicated when running reports based on a custom field.

  • Any other?

Edit: I think I may be confusing options (1) and (2). What I actually meant is: is the following the best approach?

Entity (User Groups) 
    id   | name           | description
    --   | ----           | ------------ 
    1    | user group 1   | user group 1
    2    | user group 2   | user group 2


Attribute 
    id   | name      | type     | entityids
    --   | ----      | ----     | ---------
    1    | att1      | string   | 1,2           
    2    | att2      | int      | 2
    3    | att3      | string   | 1
    4    | att4      | numeric  | 2
    5    | att5      | string   | 1

    (What is the best way to model entityids when two user groups use the same attribute?)

Value
    id  | entityId| attributeId  | value     
    --  | --------| -----------  | -----
    1   | 1       | 1            | a    
    2   | 1       | 2            | 1
    3   | 1       | 3            | b  
    4   | 1       | 3            | c    
    5   | 1       | 3            | d
    6   | 1       | 3            | 75   
    7   | 1       | 5            | Inches
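
If it helps, the usual way to handle the question noted under the Attribute table (two user groups sharing the same attribute) is a junction table rather than a comma-separated entityids column, so each (entity, attribute) pair is one row. This is only a sketch; table and column names mirror the example above, but the exact types are assumptions:

```sql
CREATE TABLE Entity (
    id          INT PRIMARY KEY,
    name        VARCHAR(100),
    description VARCHAR(255)
);

CREATE TABLE Attribute (
    id   INT PRIMARY KEY,
    name VARCHAR(100),
    type VARCHAR(20)          -- 'string', 'int', 'numeric', ...
);

-- One row per (entity, attribute) pair replaces the '1,2' list:
CREATE TABLE EntityAttribute (
    entityId    INT NOT NULL REFERENCES Entity(id),
    attributeId INT NOT NULL REFERENCES Attribute(id),
    PRIMARY KEY (entityId, attributeId)
);

CREATE TABLE [Value] (
    id          INT PRIMARY KEY,
    entityId    INT NOT NULL,
    attributeId INT NOT NULL,
    value       VARCHAR(255),
    FOREIGN KEY (entityId, attributeId)
        REFERENCES EntityAttribute (entityId, attributeId)
);
```

The foreign key from [Value] to EntityAttribute also guarantees a value can only be stored for an attribute the entity actually has.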

Duplicate results in MySQL query when doing AJAX call

SQL Fiddle, if you want to mess around with it.

I’m unable to reproduce the issue using the fiddle.

Here is the query you can use in the fiddle

SELECT n.*, ns.notification_id AS is_read FROM notifications n
LEFT OUTER JOIN notification_status ns
ON n.id = ns.notification_id
LEFT JOIN notification_user_role nur
ON n.id = nur.notification_id
WHERE
(
  n.esb_consultant_id = 19291
  OR
  n.esb_consultant_id = 'role'
)
AND nur.user_role_id = 'pl_sso_regional_vice_president'
AND n.creation_date <= NOW()
AND n.expiration_date >= NOW()
ORDER BY n.creation_date DESC, (is_read IS NULL) DESC, n.priority ASC
LIMIT 0, 10

I’ve also placed it lower in this post, but here it catches the eye more easily.


I’ll try to keep this as brief as possible.

I’m working on a notification system. I have 3 tables described below.

I’m trying to get notifications with a LIMIT of 10, paginated at 10 per page (so an OFFSET in multiples of 10). I’m using AJAX to load the next 10.

They are to be ordered by priority (from 1 to 6, with 1 displayed first and 6 displayed last).

All unread notifications must be displayed first(priority still applies), while read notifications must be displayed last(priority still applies).

Notifications are per role. A user can have several roles (hence the need for another table).

The notification_status table described below is used to keep track of which notifications are read.

Whichever notification is NOT in the notification_status table is NOT read. This is very important. I didn’t make this decision; I just have to live with it.


To put it into the big picture, let’s have an example:

Assume we have 14 notifications:

5 of them will be priority 1, unread.

4 of them will be priority > 1, unread.

3 of them will be priority 1, read.

2 of them will be priority > 1, read.


The expected display order is the following.

5 unread priority 1

4 unread priority > 1

1 read priority 1

ajax starts here since we have 10 per page

2 read priority 1

2 read priority > 1


The table structure is as follows.

notifications

+-------------------+------------------+------+-----+---------+----------------+
| Field             | Type             | Null | Key | Default | Extra          |
+-------------------+------------------+------+-----+---------+----------------+
| id                | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| type_id           | int(10) unsigned | NO   |     | NULL    |                |
| sticky            | int(10) unsigned | NO   |     | NULL    |                |
| priority          | int(10) unsigned | NO   |     | NULL    |                |
| esb_consultant_id | varchar(40)      | NO   |     |         |                |
| message_id        | varchar(100)     | NO   |     |         |                |
| esb_params        | varchar(255)     | YES  |     |         |                |
| creation_date     | datetime         | YES  |     | NULL    |                |
| expiration_date   | datetime         | YES  |     | NULL    |                |
+-------------------+------------------+------+-----+---------+----------------+

notification_user_role

+-----------------+------------------+------+-----+---------+-------+
| Field           | Type             | Null | Key | Default | Extra |
+-----------------+------------------+------+-----+---------+-------+
| user_role_id    | varchar(150)     | NO   |     |         |       |
| notification_id | int(10) unsigned | NO   | MUL | NULL    |       |
+-----------------+------------------+------+-----+---------+-------+

notification_status

+-------------------+------------------+------+-----+---------+-------+
| Field             | Type             | Null | Key | Default | Extra |
+-------------------+------------------+------+-----+---------+-------+
| esb_consultant_id | varchar(20)      | NO   |     |         |       |
| notification_id   | int(10) unsigned | NO   | MUL | NULL    |       |
+-------------------+------------------+------+-----+---------+-------+

The query I’m using to retrieve the results:

SELECT n.*, ns.notification_id AS is_read FROM notifications n
LEFT OUTER JOIN notification_status ns
ON n.id = ns.notification_id
LEFT JOIN notification_user_role nur
ON n.id = nur.notification_id
WHERE 
(
  n.esb_consultant_id = :consultant_id 
  OR 
  n.esb_consultant_id = :role_all
)
AND nur.user_role_id = :consultant_role
AND n.creation_date <= NOW()
AND n.expiration_date >= NOW()
ORDER BY n.creation_date DESC, (is_read IS NULL) DESC, n.priority ASC
LIMIT $offset, $limit

$offset is the page times 10, so if the page is 0 (the first page) the offset is 0; if the page is 1 (the first AJAX call) the offset is 10, and so on.

$limit is the limit, it’s always 10.

:consultant_id is the user id – unique

:role_all is the simple string all. It is used when certain notifications apply to all roles (such as a birthday notification). All users get this notification regardless of role, since all of them have a birthday.


The problem:

Whenever I do an AJAX call, I get certain notifications that are duplicated. I’ll just post a screenshot of it, since that’s easier than drawing it.

Do note that the AJAX itself is just part of how I retrieve the results; it is not responsible for the duplicates themselves, I’m absolutely sure. It’s not a display issue either; I’ve double- and triple-checked that.

Before Ajax

After ajax

What I’ve noticed is that if I remove this part

ORDER BY n.creation_date DESC, (is_read IS NULL) DESC, n.priority ASC

from the query, it works fine. No duplicates.

The dump of the query above, with limit removed and order by removed:

Dump

Sorry for the image but it’s easier.


I’m using PHP to query the database.

  public function all($consultant_id, $consultant_role, $offset = 0) {
    $limit = 10;
    $offset = $offset * 10;

    $query = <<<SQL
SELECT n.*, ns.notification_id AS is_read FROM notifications n
LEFT OUTER JOIN notification_status ns
ON n.id = ns.notification_id
LEFT JOIN notification_user_role nur
ON n.id = nur.notification_id
WHERE 
(
  n.esb_consultant_id = :consultant_id 
  OR 
  n.esb_consultant_id = :role_all
)
AND nur.user_role_id = :consultant_role
AND n.creation_date <= NOW()
AND n.expiration_date >= NOW()
ORDER BY n.creation_date DESC, (is_read IS NULL) DESC, n.priority ASC
LIMIT $offset, $limit
SQL;
    $return = $this->connection
      ->query($query
        , [
          ':consultant_role' => $consultant_role,
          ':consultant_id'   => $consultant_id,
          ':role_all'        => NotificationStatus::PL_N_ALL,
        ]
      )->fetchAll(\PDO::FETCH_ASSOC);

    foreach($return as $item) { // this is added simply for display purposes
      echo $item['id'] . '<br>';
    }

    return $return;
  }

The above is a copy-paste of the code used to retrieve the results. The function simply returns the results for display; no other magic is going on.

The foreach is added simply to display the results in the browser.

Here is the image of the output. Notification 10 is duplicated.

Here is the exact same code only with
ORDER BY n.creation_date DESC, (is_read IS NULL) DESC, n.priority ASC removed. Limit and offset still apply here.


I’m not that good at MySQL, or SQL in general, and I’m not sure where the problem lies.

Any pointer in the right direction is greatly appreciated. I’m fine even with workarounds or “hacks”.
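
For what it’s worth: MySQL does not guarantee a stable order for rows that tie on every ORDER BY expression, so with creation_date ties the two LIMIT pages can each pick the tied rows in a different order, and a row can then appear on both pages. A sketch of a common fix, assuming n.id is unique, is to add it as a final tiebreaker:

```sql
-- Same query as above, with a unique column closing the ORDER BY
-- so paging across LIMIT/OFFSET becomes deterministic:
ORDER BY n.creation_date DESC, (is_read IS NULL) DESC, n.priority ASC, n.id DESC
LIMIT 0, 10
```

With the tiebreaker in place, every row has a unique position in the sort, so consecutive pages can no longer overlap.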

Setting a variable in a SQL WHERE clause to be used in SELECT

I’m using Transact-SQL with Microsoft SQL Server, and we have a query that looks like this:

SELECT Cast( Cast ( Cast(XMLBlob as XML).query(N'//continent/forest/tree/age/Text()') as nvarchar) as bigint), 
AnotherField
FROM [MyDB].[dbo].[mytable]
WHERE Cast( Cast ( Cast(XMLBlob as XML).query(N'//continent/forest/tree/age/Text()') as nvarchar) as bigint) 
between 10 and 100 

The XML cast is an expensive operation, and since it’s used in both the WHERE and the SELECT, it seems I should be able to save it away as a variable in the WHERE (which, by order of operations, is evaluated before the SELECT) and use it in the SELECT instead of having to cast again. Is this possible?
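
T-SQL has no per-row variables in a WHERE clause, but a similar effect comes from naming the expression once in a CROSS APPLY (or a derived table) and referencing the alias in both places. A sketch against the table above; TreeAge and ca are aliases I introduced, and whether the optimizer actually computes the cast only once is not guaranteed:

```sql
SELECT ca.TreeAge,
       AnotherField
FROM [MyDB].[dbo].[mytable]
CROSS APPLY (
    -- the expensive cast is written exactly once here
    SELECT CAST(CAST(CAST(XMLBlob AS XML).query(N'//continent/forest/tree/age/Text()') AS nvarchar) AS bigint) AS TreeAge
) AS ca
WHERE ca.TreeAge BETWEEN 10 AND 100;
```

The main gain is that the expression exists in one place, so the SELECT and WHERE can never drift apart.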

Unknown command beginning “administer…”

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE 'C:\app\oracle\product\11.2.0\dbhome_1\NETWORK\ADMIN\WALLET' IDENTIFIED BY Welcome@123;
SP2-0734: unknown command beginning "ADMINISTER..." - rest of line ignored.

I have created a wallet, and while creating a key I get the error shown above. How can I resolve the issue? I have tried set sqlblanklines on;, but it didn’t work.
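
A hedged note on the error itself: ADMINISTER KEY MANAGEMENT is Oracle 12c syntax, so the 11.2 SQL*Plus in the path shown rejects the keyword client-side with SP2-0734 before anything reaches the server. On 11g the wallet-based TDE master key is created with the older syntax, roughly:

```sql
-- 11g equivalent of creating/activating the TDE master key
-- (assumes sqlnet.ora already points ENCRYPTION_WALLET_LOCATION at the wallet directory):
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "Welcome@123";
```

The wallet directory itself is configured through ENCRYPTION_WALLET_LOCATION in sqlnet.ora on 11g, not through a CREATE KEYSTORE statement.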

How to move data from an SQL dump file into an SQLite database for use with the Room Persistence Library on Android?

I am building an Android application that will perform statistical analysis on some sample data. The sample data is in the form of an SQL dump file (.sql).

I have followed Google’s guide to set up Room, but I don’t know how to transfer the data from the dump file into this database.

Get all record from one table plus the relative last record of a second table

Situation (simplified):

tableA

id  | id_device
----+----------
... | 10
... | 11
... | 12

tableB

id  | id_device | val
----+-----------+----
... | 10        | 200
... | 10        | 105
... | 10        | 120
... | 11        | 80

Expected result: all the records from tableA and, for each record of tableA, the last related record in tableB.

id_device | val 
----------+-----
10        | 120
11        | 80
12        | null

I tried something like this, but I can’t get the last record correctly:

SELECT tableA.*,
       tableB.* 
FROM tableA
LEFT JOIN (
    SELECT id_device,
           val,
           MAX(id) 
    FROM tableB
    GROUP BY id_device
) AS tableB
     ON tableA.id_device = tableB.id_device
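
For reference, the non-aggregated val in that subquery is the likely problem: it is not tied to the row that holds MAX(id). A common rewrite, assuming a higher id means a later record, joins the per-device MAX(id) back to tableB to pick up the matching val:

```sql
SELECT a.id_device,
       b.val
FROM tableA AS a
LEFT JOIN (
    -- one row per device: the id of its latest record
    SELECT id_device, MAX(id) AS last_id
    FROM tableB
    GROUP BY id_device
) AS latest ON latest.id_device = a.id_device
LEFT JOIN tableB AS b
       ON b.id = latest.last_id;
```

Devices with no rows in tableB (like id_device 12) keep a NULL val through the two LEFT JOINs, matching the expected output.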

Retrieve a row based on another row’s value in a table?

I have a table (table_1) with the structure below:

id  id2 Status
1    1   10
2    1   20
3    2   10

If an entry with status = 20 exists in this table for a particular id2, then I should query another table (TABLE_2) with the id2 value of the row having status = 10. And if no entry with status = 20 is present for that id2, then I shouldn’t query the other table at all.
Is it possible to do this in a single SQL statement instead of using a PL/SQL block?


Select * from table_1 where id2 = 1;

This will give the result as

1 1 10 
2 1 20

Since a status of 20 is present in this query’s result, we should use id2 = 1 (the value from the status 10 row) when querying the table TABLE_2.

select * from table_1 where id2=2; 

As this query does not return a row with status 20, no row should be fetched, i.e. we should not query TABLE_2 at all.
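
A single-statement sketch of the two examples above, assuming TABLE_2 also has an id2 column and :id2 is a bind variable: the EXISTS gate returns TABLE_2 rows only when a status = 20 marker exists for that id2, and nothing otherwise.

```sql
SELECT t2.*
FROM table_2 t2
WHERE t2.id2 = :id2
  AND EXISTS (
        SELECT 1
        FROM table_1 t1
        WHERE t1.id2 = :id2
          AND t1.status = 20
      );
```

For id2 = 1 the EXISTS is true and TABLE_2 rows come back; for id2 = 2 it is false and the statement returns no rows, with no PL/SQL branching needed.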

Error when deleting foreign key

As part of a script that copies tables to another database, I want to drop a table so I can recreate it with new elements.
To do that, after going through the Microsoft help, I use this command to disable the constraint keys on my table and then drop it:

USE MDE_DOS_DEMO_GH;
GO
ALTER TABLE MDE_DOS_DEMO_GH.dbo.PMEELEMENT NOCHECK CONSTRAINT ALL
GO

DROP TABLE MDE_DOS_DEMO_GH.dbo.PMEELEMENT;

I get this error when I drop my table:

Impossible to delete PMEELEMENT object because ... FOREIGN KEY CONSTRAINT

Do you have an idea?

Thank you
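
A possibly useful sketch of why the DROP still fails: NOCHECK CONSTRAINT ALL only disables the constraints defined on PMEELEMENT itself, while DROP TABLE is blocked by foreign keys in other tables that reference it; those must be dropped first. The referencing table and constraint names below are placeholders:

```sql
-- Find the foreign keys in other tables that reference PMEELEMENT:
SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.PMEELEMENT');

-- Drop each one (placeholder names), then the table itself:
ALTER TABLE dbo.SomeReferencingTable
    DROP CONSTRAINT FK_SomeReferencingTable_PMEELEMENT;
DROP TABLE dbo.PMEELEMENT;
```

The copy script would then recreate those foreign keys after rebuilding the table.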

Understanding SQL Certificates

I am having a hard time understanding how certificates in SQL Server work. I think I understand their use conceptually, but when I run this tutorial from Microsoft, I realize that I may not fully understand it. If I run the script below except steps 4-5, I cannot query the Vendor table as the new user, but I can run the proc, which queries the Vendor table. This doesn’t make sense to me, as I thought the user running the proc would still be unable to query that table, because I haven’t granted permission on it yet. Please help me understand. The full script is below:

/* Step 1 - Open the AdventureWorks2012 database */  
USE AdventureWorks2012;  
GO  
-- Set up a login for the test user  
CREATE LOGIN TestCreditRatingUser  
   WITH PASSWORD = 'ASDECd2439587y'  
GO  
CREATE USER TestCreditRatingUser  
FOR LOGIN TestCreditRatingUser;  
GO  

/* Step 2 - Create a certificate in the AdventureWorks2012 database */  
CREATE CERTIFICATE TestCreditRatingCer  
   ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y'  
      WITH SUBJECT = 'Credit Rating Records Access',   
      EXPIRY_DATE = '12/05/2014';  
GO  

/* Step 3 - Create a stored procedure and  
sign it using the certificate */  
CREATE PROCEDURE TestCreditRatingSP  
AS  
BEGIN  
   -- Shows who is running the stored procedure  
   SELECT SYSTEM_USER 'system Login'  
   , USER AS 'Database Login'  
   , NAME AS 'Context'  
   , TYPE  
   , USAGE   
   FROM sys.user_token;     

   -- Now get the data  
   SELECT AccountNumber, Name, CreditRating   
   FROM Purchasing.Vendor  
   WHERE CreditRating = 1;  
END  
GO  

ADD SIGNATURE TO TestCreditRatingSP   
   BY CERTIFICATE TestCreditRatingCer  
    WITH PASSWORD = 'pGFD4bb925DGvbd2439587y';  
GO  

/* Step 4 - Create a database user for the certificate.   
This user has the ownership chain associated with it. */  
USE AdventureWorks2012;  
GO  
CREATE USER TestCreditRatingcertificateAccount  
   FROM CERTIFICATE TestCreditRatingCer;  
GO  

/* Step 5 - Grant the user database rights */  
GRANT SELECT   
   ON Purchasing.Vendor   
   TO TestCreditRatingcertificateAccount;  
GO  

GRANT EXECUTE  
   ON TestCreditRatingSP   
   TO TestCreditRatingcertificateAccount;  
GO  

/* Step 6 - Test, using the EXECUTE AS statement */  
GRANT EXECUTE   
   ON TestCreditRatingSP   
   TO TestCreditRatingUser;  
GO  

-- Run the procedure as the dbo user, notice the output for the type  
EXEC TestCreditRatingSP;  
GO  

EXECUTE AS LOGIN = 'TestCreditRatingUser';  
GO  
EXEC TestCreditRatingSP;  
GO  

/* Step 7 - Clean up the example */  
REVERT;  
GO  
DROP PROCEDURE TestCreditRatingSP;  
GO  
DROP USER TestCreditRatingcertificateAccount;  
GO  
DROP USER TestCreditRatingUser;  
GO  
DROP LOGIN TestCreditRatingUser;  
GO  
DROP CERTIFICATE TestCreditRatingCer;  
GO

Question 1: Why can a new user, with no privileges except EXECUTE on one proc, query a table inside that proc? I thought their lack of SELECT permission would still prevent them from querying that table.

Question 2: If they can run anything in the proc anyway, what is the point of the certificate?
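
For context on Question 1, the behaviour is usually attributed to SQL Server’s ownership chaining rather than to the certificate: because the procedure and Purchasing.Vendor share an owner (dbo), SELECT permission on the table is not re-checked for the caller of the proc. A minimal sketch, separate from the tutorial; SomeUser and the object names are made up:

```sql
CREATE TABLE dbo.Secret (x int);
GO
CREATE PROCEDURE dbo.ReadSecret
AS
    -- same owner (dbo) as the procedure, so the ownership chain is unbroken
    SELECT x FROM dbo.Secret;
GO
GRANT EXECUTE ON dbo.ReadSecret TO SomeUser;
-- SomeUser can EXEC dbo.ReadSecret even without SELECT permission on dbo.Secret.
```

The certificate signature matters for the cases ownership chaining does not cover, such as cross-database access or server-level permissions.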

Not a valid month – SQL ORA-01843 – in Unix

I’m using a

TO_CHAR(TO_DATE(tdj_tran_dt,'DD-MON-RRRR'),'DD-MON-RRRR')    

in one of my views.

The underlying Data is in the form DD-MON-YY and I have to display it in the form DD-MON-YYYY in the screen.

Initially I was using to_char(tdj_Tran_dt,'DD-MON-YYYY') alone, but it wasn’t working out. For example, 20-OCT-17 would become 20-OCT-0017. My system has migrated data, so changing the form of the data while inserting into the table will not help. So I’ve used TO_CHAR(TO_DATE(tdj_tran_dt,'DD-MON-RRRR'),'DD-MON-RRRR'), which seems to work everywhere except on unix.
I have a proc file (run through Linux) which calls this view and writes the data in xls format, but when the proc opens the cursor it gives Oracle error ORA-01843. Changing it back to to_char(tdj_Tran_dt,'DD-MON-YYYY') seems to work, but brings back the original problem.

I just want to know whether there is some setting in the database that I can change to fix this issue. I have the same thing running in two different environments, and the “not a valid month” error only occurs in one of them.
I have checked the nls_parameters, which seem to be the same in both environments.

NLS_LANGUAGE    AMERICAN
NLS_TERRITORY   AMERICA
NLS_CURRENCY    $
NLS_ISO_CURRENCY    AMERICA
NLS_NUMERIC_CHARACTERS  .,
NLS_CALENDAR    GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE   AMERICAN
NLS_CHARACTERSET    AL32UTF8
NLS_SORT    BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT    DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT  HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY   $
NLS_NCHAR_CHARACTERSET  AL16UTF16
NLS_COMP    BINARY
NLS_LENGTH_SEMANTICS    BYTE
NLS_NCHAR_CONV_EXCP FALSE

I’ve checked in a lot of places, but found no solution. If anybody has any idea how this could be solved, please help.
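
One hedged observation that may explain the environment difference: TO_DATE applied to a DATE column first converts the date to text implicitly, and that step uses the session’s NLS_DATE_FORMAT and NLS_DATE_LANGUAGE, which on unix are often taken from the NLS_LANG environment variable of the calling process rather than the database defaults listed above. Making both conversions explicit removes that dependency; a sketch, with the FROM clause as a placeholder:

```sql
SELECT TO_CHAR(
         TO_DATE(TO_CHAR(tdj_tran_dt, 'DD-MON-RR', 'NLS_DATE_LANGUAGE=AMERICAN'),
                 'DD-MON-RR', 'NLS_DATE_LANGUAGE=AMERICAN'),
         'DD-MON-YYYY') AS tran_dt_display
FROM your_view;   -- placeholder for the view's base table
```

Here the inner TO_CHAR pins the text form of the stored date, TO_DATE with the RR model re-interprets the two-digit year 17 as 2017, and the outer TO_CHAR displays it as DD-MON-YYYY, with the month language fixed regardless of the unix environment.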