Quick Tip – How to fix SQL Server 2017 install error on Ubuntu 18.04: E: Could not get lock /var/lib/dpkg/lock-frontend – open (11: Resource temporarily unavailable)


Recently, when I tried to install SQL Server 2017 (the mssql-server package) on Ubuntu 18.04, I encountered the error below:

E: Could not get lock /var/lib/dpkg/lock-frontend – open (11: Resource temporarily unavailable)

The command that I used to install the mssql-server package is the following:

sudo apt-get install -y mssql-server

Reason for error

This error indicates that apt could not acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend) because another process was already holding it (typically another apt or dpkg instance, an unattended-upgrades run, or a GUI package manager).
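The situation is easy to reproduce without touching dpkg at all. The sketch below (hypothetical file name, no root required) uses flock to take an advisory lock, a mechanism similar to the fcntl lock that apt and dpkg place on the lock file, and shows a second attempt failing while the lock is held:

```shell
#!/bin/sh
# Sketch: reproduce a "lock already held" situation with an advisory lock.
# The file name is arbitrary; no root access is needed.
LOCK="/tmp/demo-lock.$$"
touch "$LOCK"

# A background process holds the lock, playing the role of a running apt/dpkg
flock "$LOCK" sleep 3 &
HOLDER=$!
sleep 1   # give the background process time to acquire the lock

# A second, non-blocking attempt fails while the lock is held,
# just as apt-get fails with "Resource temporarily unavailable"
if flock -n "$LOCK" true; then
    RESULT="free"
else
    RESULT="held"
fi
echo "lock is $RESULT (holder PID $HOLDER)"

wait "$HOLDER"
rm -f "$LOCK"
```

In practice, waiting for the lock holder to finish is the safest option; killing it is a last resort.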


To solve the problem, I first ran the following command to find the ID of the process (PID) that had acquired the lock on the dpkg frontend (/var/lib/dpkg/lock-frontend):

lsof /var/lib/dpkg/lock-frontend

*Note: lsof lists information about files opened by processes on its standard output.

Next, I ran the following sudo kill -9 command to kill that process (note that killing a package manager mid-operation can leave dpkg in an inconsistent state, which is why dpkg is reconfigured in the next step):

sudo kill -9 122

*Note: Here, 122 is the process identifier (PID) of the process that held the lock on the dpkg frontend.

Then, I ran the following two commands on Ubuntu terminal to remove the lock and reconfigure dpkg:

sudo rm /var/lib/dpkg/lock-frontend
sudo dpkg --configure -a

Finally, I ran the install command again to install SQL Server:

sudo apt-get install -y mssql-server

This time, the mssql-server package installation completed successfully. Here is the output of the package installation:

basitfarooq@ubuntu:~$ sudo apt-get install -y mssql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-4.18.0-15 linux-headers-4.18.0-15-generic
  linux-image-4.18.0-15-generic linux-modules-4.18.0-15-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  gawk libc++1 libc++abi1 libjemalloc1 libpython-stdlib
  libsasl2-modules-gssapi-mit libsigsegv2 libsss-nss-idmap0 python
  python-minimal python2.7 python2.7-minimal
Suggested packages:
  gawk-doc clang python-doc python-tk python2.7-doc binfmt-support
The following NEW packages will be installed:
  gawk libc++1 libc++abi1 libjemalloc1 libpython-stdlib
  libsasl2-modules-gssapi-mit libsigsegv2 libsss-nss-idmap0 mssql-server
  python python-minimal python2.7 python2.7-minimal
0 upgraded, 13 newly installed, 0 to remove and 124 not upgraded.
Need to get 180 MB of archives.
After this operation, 935 MB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libsigsegv2 amd64 2.12-1 [14.7 kB]
Get:2 https://packages.microsoft.com/ubuntu/16.04/mssql-server-2017 xenial/main amd64 mssql-server amd64 14.0.3076.1-2 [178 MB]
Get:3 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 gawk amd64 1:4.1.4+dfsg-1build1 [401 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 python2.7-minimal amd64 2.7.15~rc1-1ubuntu0.1 [1,304 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 python-minimal amd64 2.7.15~rc1-1 [28.1 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 python2.7 amd64 2.7.15~rc1-1ubuntu0.1 [238 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libpython-stdlib amd64 2.7.15~rc1-1 [7,620 B]
Get:8 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 python amd64 2.7.15~rc1-1 [140 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libsasl2-modules-gssapi-mit amd64 2.1.27~101-g0780600+dfsg-3ubuntu2 [35.5 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 libc++abi1 amd64 6.0-2 [56.7 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 libc++1 amd64 6.0-2 [183 kB]
Get:12 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 libjemalloc1 amd64 3.6.0-11 [82.4 kB]
Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libsss-nss-idmap0 amd64 1.16.1-1ubuntu1.2 [19.8 kB]
Fetched 180 MB in 7min 20s (409 kB/s)
Preconfiguring packages ...
Selecting previously unselected package libsigsegv2:amd64.
(Reading database ... 194963 files and directories currently installed.)
Preparing to unpack .../libsigsegv2_2.12-1_amd64.deb ...
Unpacking libsigsegv2:amd64 (2.12-1) ...
Setting up libsigsegv2:amd64 (2.12-1) ...
Selecting previously unselected package gawk.
(Reading database ... 194970 files and directories currently installed.)
Preparing to unpack .../gawk_1%3a4.1.4+dfsg-1build1_amd64.deb ...
Unpacking gawk (1:4.1.4+dfsg-1build1) ...
Selecting previously unselected package python2.7-minimal.
Preparing to unpack .../python2.7-minimal_2.7.15~rc1-1ubuntu0.1_amd64.deb ...
Unpacking python2.7-minimal (2.7.15~rc1-1ubuntu0.1) ...
Selecting previously unselected package python-minimal.
Preparing to unpack .../python-minimal_2.7.15~rc1-1_amd64.deb ...
Unpacking python-minimal (2.7.15~rc1-1) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../python2.7_2.7.15~rc1-1ubuntu0.1_amd64.deb ...
Unpacking python2.7 (2.7.15~rc1-1ubuntu0.1) ...
Selecting previously unselected package libpython-stdlib:amd64.
Preparing to unpack .../libpython-stdlib_2.7.15~rc1-1_amd64.deb ...
Unpacking libpython-stdlib:amd64 (2.7.15~rc1-1) ...
Setting up python2.7-minimal (2.7.15~rc1-1ubuntu0.1) ...
Linking and byte-compiling packages for runtime python2.7...
Setting up python-minimal (2.7.15~rc1-1) ...
Selecting previously unselected package python.
(Reading database ... 195173 files and directories currently installed.)
Preparing to unpack .../0-python_2.7.15~rc1-1_amd64.deb ...
Unpacking python (2.7.15~rc1-1) ...
Selecting previously unselected package libsasl2-modules-gssapi-mit:amd64.
Preparing to unpack .../1-libsasl2-modules-gssapi-mit_2.1.27~101-g0780600+dfsg-3ubuntu2_amd64.deb ...
Unpacking libsasl2-modules-gssapi-mit:amd64 (2.1.27~101-g0780600+dfsg-3ubuntu2) ...
Selecting previously unselected package libc++abi1:amd64.
Preparing to unpack .../2-libc++abi1_6.0-2_amd64.deb ...
Unpacking libc++abi1:amd64 (6.0-2) ...
Selecting previously unselected package libc++1:amd64.
Preparing to unpack .../3-libc++1_6.0-2_amd64.deb ...
Unpacking libc++1:amd64 (6.0-2) ...
Selecting previously unselected package libjemalloc1.
Preparing to unpack .../4-libjemalloc1_3.6.0-11_amd64.deb ...
Unpacking libjemalloc1 (3.6.0-11) ...
Selecting previously unselected package libsss-nss-idmap0.
Preparing to unpack .../5-libsss-nss-idmap0_1.16.1-1ubuntu1.2_amd64.deb ...
Unpacking libsss-nss-idmap0 (1.16.1-1ubuntu1.2) ...
Selecting previously unselected package mssql-server.
Preparing to unpack .../6-mssql-server_14.0.3076.1-2_amd64.deb ...
Unpacking mssql-server (14.0.3076.1-2) ...
Setting up libc++abi1:amd64 (6.0-2) ...
Setting up libsss-nss-idmap0 (1.16.1-1ubuntu1.2) ...
Processing triggers for mime-support (3.60ubuntu1) ...
Processing triggers for desktop-file-utils (0.23-1ubuntu3.18.04.2) ...
Setting up libjemalloc1 (3.6.0-11) ...
Setting up python2.7 (2.7.15~rc1-1ubuntu0.1) ...
Setting up gawk (1:4.1.4+dfsg-1build1) ...
Setting up libpython-stdlib:amd64 (2.7.15~rc1-1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up libsasl2-modules-gssapi-mit:amd64 (2.1.27~101-g0780600+dfsg-3ubuntu2) ...
Processing triggers for gnome-menus (3.13.3-11ubuntu1.1) ...
Setting up python (2.7.15~rc1-1) ...
Setting up libc++1:amd64 (6.0-2) ...
Setting up mssql-server (14.0.3076.1-2) ...

Please run 'sudo /opt/mssql/bin/mssql-conf setup'
to complete the setup of Microsoft SQL Server

SQL Server needs to be restarted in order to apply this setting. Please run
'systemctl restart mssql-server.service'.
Processing triggers for libc-bin (2.27-3ubuntu1) ...

I then ran sudo /opt/mssql/bin/mssql-conf setup to choose the SQL Server edition and set the SA password, which completed the installation successfully.

I hope you find this post useful. Please drop your comments below. Basit 🙂

CREATE DATABASE failed on SQL Server FCI cluster: Only formatted files on which the cluster resource of the server has a dependency can be used


Today, I received a request to create a new database on a two-node SQL Server 2016 failover cluster instance in an active/passive configuration. I connected to the clustered SQL Server instance and ran a CREATE DATABASE command similar to the one below:

USE [master];
GO

CREATE DATABASE [MyDatabase]
ON PRIMARY
( NAME = N'MyDatabase_001_Data', FILENAME = N'E:\DATA_MyServer\MyDatabase.mdf' , SIZE = 2097152KB , MAXSIZE = 2097152KB , FILEGROWTH = 262144KB )
LOG ON
( NAME = N'MyDatabase_001_Log', FILENAME = N'E:\LOG_MyServer\MyDatabase_log.ldf' , SIZE = 4390912KB , MAXSIZE = 2048GB , FILEGROWTH = 262144KB );
GO

However, when I executed the command, I received the following error:

Msg 5184, Level 16, State 2, Line 1
Cannot use file 'E:\DATA_MyServer\MyDatabase.mdf' for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it.
Msg 1802, Level 16, State 1, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

Reason for error

This error is self-explanatory: it occurs when the disk resources that host the SQL Server database data and log files have not been added as dependencies of the SQL Server cluster resource.

Issue resolution

To resolve the error, add the disk to the dependencies of the SQL Server cluster resource:

  • Open the Failover Cluster Manager snap-in.
  • Select the SQL Server resource, open the Properties dialog box, and use the Dependencies tab to add the disk to the set of SQL Server dependencies.
  • Next, click OK to save the setting and to close the Properties dialog box.
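To confirm from the SQL Server side that the disk is now usable for database files, the following query can help (a sketch assuming SQL Server 2012 or later, where this DMV is available):

```sql
-- Paths on disks that are dependencies of the SQL Server cluster resource;
-- once the fix is applied, the E: drive should be listed here
SELECT  [path_name] ,
        [cluster_owner_node] ,
        [is_cluster_shared_volume]
FROM    [sys].[dm_io_cluster_valid_path_names];
```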

How to change Integration Services Catalog (SSISDB) database Master Key encryption password?

To change the Integration Services Catalog (SSISDB) database Master Key encryption password, run the following Transact-SQL statements:

USE [SSISDB];
GO

OPEN MASTER KEY DECRYPTION BY PASSWORD = N'[old_password]'; -- Password used when creating SSISDB
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = N'[new_password]';
GO

In the above Transact-SQL fragment, we first open the Integration Services Catalog (SSISDB) database Master Key with the existing password, and then regenerate it with a new one.
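Before regenerating the key, it is prudent to back it up first. A minimal sketch follows; the file path and passwords are placeholders:

```sql
USE [SSISDB];
GO

-- Back up the database master key before regenerating it
OPEN MASTER KEY DECRYPTION BY PASSWORD = N'[old_password]';
BACKUP MASTER KEY TO FILE = N'C:\Backup\SSISDB_MasterKey.key'
    ENCRYPTION BY PASSWORD = N'[backup_password]';
GO
```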

Database Issue: The transaction log for database ‘db_name’ is full due to ‘ACTIVE_BACKUP_OR_RESTORE’.

Today, I was called in the early hours because the transaction log of one of our SQL Server 2016 databases was full. I logged on to the SQL Server instance via SQL Server Management Studio and saw the following error in the SQL Server error log:

The transaction log for database 'OLTPDB' is full due to 'ACTIVE_BACKUP_OR_RESTORE'.

Reason for failure

This error occurred because a full backup of the 'OLTPDB' database was still running, and as a result the transaction log of the database could not be freed up for reuse. This caused the transactions running against the database to fail.

Issue resolution

As we know, the portion of the transaction log generated during a full backup operation must be retained while the backup is running, and it is backed up at the end of the operation. That's because a full database backup contains all the database data as of the start of the backup plus the transaction log generated until the end of the backup, which is needed to bring the restored database to a consistent state. Therefore, to fix the issue, I had to manually increase the size of the transaction log file to accommodate the heavy load against the database.
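In Transact-SQL, the diagnosis and the fix look roughly like the following; the logical log file name and the target size are examples and must match the actual database:

```sql
-- Confirm what is holding up log truncation; while the full backup
-- runs, this returns ACTIVE_BACKUP_OR_RESTORE
SELECT  [name] ,
        [log_reuse_wait_desc]
FROM    [sys].[databases]
WHERE   [name] = N'OLTPDB';

-- Grow the transaction log file (logical file name and size are examples)
ALTER DATABASE [OLTPDB]
MODIFY FILE ( NAME = N'OLTPDB_log', SIZE = 32768MB );
```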

An Overview on SQL Server Checkpoints


Commercial database systems like SQL Server have several recovery mechanisms to restore data after a hardware or software failure, and checkpoints are part of one such mechanism. The SQL Server Database Engine performs changes to database pages in the buffer cache (memory) first, and only after some processing are the modifications written to the disk files. This "processing" involves the checkpoint, which writes all the dirty pages in the buffer cache to physical disk; it also writes log records from the log buffer to the physical log file. Checkpoints were introduced in SQL Server to reduce the time required for recovery after an unexpected shutdown or system failure, and the Database Engine issues a checkpoint for each database at regular intervals.

Various events and conditions (depending on the recovery interval setting) cause a checkpoint to be issued: execution of certain ALTER DATABASE commands, a manual CHECKPOINT statement, a clean server shutdown, or the log becoming 70% full while the database uses the SIMPLE recovery model.

A common misconception is that a checkpoint writes a page to disk only once the transaction that changed it commits. This is not true: a checkpoint writes all pages that have been changed (marked dirty) since the last checkpoint, regardless of whether the transactions that changed them have committed. tempdb is an exception, as its data pages are not written to disk as part of a checkpoint. The section below elaborates on what exactly happens when a checkpoint is triggered.

Working Operation of CHECKPOINT

The pages changed in memory since the last checkpoint (the dirty pages) of the respective database are written to disk, independent of the state of the transactions that made the changes. Before each page is written, all log records up to and including the most recent record describing a change to that page are written to disk first; this write-ahead logging is what guarantees that recovery is possible.

Log records from all transactions are written to the log sequentially, and selected records cannot be written to disk on their own. Therefore, when a dirty page is written to disk, all log records up to the one describing the latest change to that page must be flushed first, even if only a single log record relates to that page. Log records describing the checkpoint itself are then generated, and the LSN (Log Sequence Number) of the checkpoint is recorded in the boot page of the database, in the dbi_checkptLSN field, along with other critical information. If the database uses the SIMPLE recovery model, the VLFs (virtual log files) in the log are then checked to see whether they can be marked inactive. These tasks are performed regardless of how the checkpoint was issued (manually or automatically).

Categories of SQL Server Checkpoints


Automatic checkpoints are the most common kind. They are issued automatically in the background according to the recovery interval server configuration option, which is defined at the server level. By default, this option is 0 (zero), which yields a target recovery interval of one minute. Automatic checkpoints are throttled based on the number of outstanding writes and on whether the Database Engine detects write latency rising above 20 milliseconds.

The following query sets the recovery interval (note that the option is configured in minutes and is an advanced option):

USE [master];
GO

EXEC [sp_configure] 'show advanced options', 1;
RECONFIGURE;
EXEC [sp_configure] 'recovery interval (min)', 5; -- example: 5 minutes
RECONFIGURE;
GO


Indirect checkpoints were added in SQL Server 2012. They also run in the background, but are driven by a user-specified target recovery time for the respective database. If TARGET_RECOVERY_TIME has been set to a value greater than 0 with ALTER DATABASE, it completely overrides the recovery interval specified at the server level, and automatic checkpoints are no longer used for that database. Indirect checkpoints have been observed to be faster and to provide a more predictable recovery time than automatic checkpoints.

When indirect checkpoints are used, the time required for database recovery is reduced, because the cost of random I/O during the REDO phase is factored in. These checkpoints also reduce the associated I/O spikes by continuously writing dirty pages to disk in the background.

Here is the syntax to configure indirect checkpoints for a database:

USE [master];
GO

ALTER DATABASE [database_name]
SET TARGET_RECOVERY_TIME = 60 SECONDS;
GO

A manual checkpoint runs like any other T-SQL statement: the CHECKPOINT command is issued explicitly and, once issued, runs to completion. Note that a manual checkpoint applies to the current database only. An optional checkpoint_duration argument, specified in seconds, requests the time within which the checkpoint should complete.

Here is the syntax to issue a manual checkpoint (it runs in the current database):

USE [database_name];
GO

CHECKPOINT [ checkpoint_duration ];
GO


Internal checkpoints cannot be controlled by the user. They are issued by various server operations, such as backup and database-snapshot creation, to ensure that the disk images are synchronized with the current state of the log. The following events generate internal checkpoints:

  • Database files are added or removed with the ALTER DATABASE command.
  • A database backup is taken.
  • A database snapshot is created, whether explicitly or internally.
  • SQL Server is shut down cleanly (that is, without the SHUTDOWN WITH NOWAIT option, which skips checkpoints).
  • The recovery model is changed from FULL or BULK_LOGGED to SIMPLE.

How Does the Recovery Interval Affect Recovery Performance?

Normally, the default value provides optimal recovery performance. However, changing the recovery interval can improve performance in certain conditions: for example, when recovery routinely takes significantly longer than one minute even though no long-running transactions are being rolled back, or when frequent checkpoints are hurting performance, since a high checkpoint frequency can degrade SQL Server performance due to heavy I/O activity.

If the recovery interval setting has to be changed, it is recommended to increase it gradually, in small increments, and to examine the relative recovery performance after each increase. This approach will help you reach an effective value.


Checkpoints are a useful repository of recovery information and are central to the recovery of SQL Server databases. This article has covered the importance and relevance of checkpoints and their categories. Choosing an appropriate checkpoint strategy not only improves server performance but also yields a better recovery plan. The recovery interval should be chosen according to I/O activity and the significance of the databases: recovery must be kept up to date without hampering server performance. One thing to keep in mind is that checkpoint log records are never overwritten by successive checkpoints; they are overwritten only when the log wraps around and the VLFs (Virtual Log Files) are reused.

About the author:

This is a guest post by Andrew Jackson, a US-based SQL Server DBA who currently works for SysTools Group, a company whose services span data recovery, digital forensics, and cloud backup.

He can be contacted through his blog or via LinkedIn.

Windows policy Lock Pages in Memory option and SQL Server instance in VM


As we know, the Lock Pages in Memory Windows policy determines which accounts can use a process to keep data in physical memory, preventing the Windows operating system from paging a significant amount of data out of physical memory to virtual memory on disk. This policy is disabled by default. It should be enabled for the SQL Server service account, because doing so can improve the performance of a SQL Server instance running in a virtual machine (VM), where paging of memory to disk is expected. When it is not enabled, there is a risk that SQL Server buffer pool pages will be paged out from physical memory to virtual memory on disk.
*Note: Only applies to SQL Server instances running on the virtual machine (VM).

You can use the following Transact-SQL script to check whether or not this Windows policy is enabled for SQL Server Service Startup account:


DECLARE @CMDShellFlag [bit] = 0 ,
        @CheckCommand [nvarchar](256);

DECLARE @xp_cmdshell_output TABLE
      ( [output] [varchar](8000) );

-- Remember whether xp_cmdshell is already enabled
IF EXISTS ( SELECT  1
            FROM    [sys].[configurations]
            WHERE   [name] = N'xp_cmdshell'
                    AND [value_in_use] = 1 )
    SET @CMDShellFlag = 1;

-- Temporarily enable xp_cmdshell if it is currently disabled
IF @CMDShellFlag = 0
    BEGIN
        EXEC [sp_configure] 'show advanced options', 1;
        RECONFIGURE;
        EXEC [sp_configure] 'xp_cmdshell', 1;
        RECONFIGURE;
        EXEC [sp_configure] 'show advanced options', 0;
        RECONFIGURE;
    END;

-- Check the privileges of the SQL Server service account
SELECT  @CheckCommand = 'EXEC [master]..[xp_cmdshell]' + SPACE(1) + QUOTENAME('whoami /priv', '''');

INSERT INTO @xp_cmdshell_output
        ( [output] )
EXEC [sys].[sp_executesql] @CheckCommand;

IF EXISTS ( SELECT  1
            FROM    @xp_cmdshell_output
            WHERE   [output] LIKE '%SeLockMemoryPrivilege%enabled%' )
    SELECT  'Windows policy Lock Pages in Memory option is enabled' AS [Finding];
ELSE
    SELECT  'Windows policy Lock Pages in Memory option is disabled' AS [Finding];

-- Disable xp_cmdshell again if it was enabled above
IF @CMDShellFlag = 0
    BEGIN
        EXEC [sp_configure] 'show advanced options', 1;
        RECONFIGURE;
        EXEC [sp_configure] 'xp_cmdshell', 0;
        RECONFIGURE;
        EXEC [sp_configure] 'show advanced options', 0;
        RECONFIGURE;
    END;
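
On SQL Server 2016 SP1 and later, a simpler check is available that does not resort to xp_cmdshell at all:

```sql
-- LOCK_PAGES means the instance is using the Lock Pages in Memory
-- privilege; CONVENTIONAL means it is not
SELECT  [sql_memory_model_desc]
FROM    [sys].[dm_os_sys_info];
```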



Here are the instructions to enable the Lock Pages in Memory Windows policy:

  1. On the Start menu, click Run. In the Open box, type gpedit.msc.
  2. On the Local Group Policy Editor console, expand Computer Configuration, and then expand Windows Settings.
  3. Expand Security Settings, and then expand Local Policies.
  4. Select the User Rights Assignment folder. The policies will be displayed in the details pane.
  5. In the pane, double-click Lock pages in memory.
  6. In the Local Security Setting – Lock pages in memory dialog box, click Add User or Group.
  7. In the Select Users, Service Accounts, or Groups dialog box, add SQL Server Service Startup account.
  8. Restart SQL Server for this change to take effect.

For more information, refer to the article "Enable the Lock Pages in Memory Option (Windows)" on the Microsoft Developer Network website.

Very Important Note: If you enable the Lock Pages in Memory Windows policy, you must also configure the min server memory and max server memory options, because these options control the amount of memory that SQL Server can consume. Ignoring this recommendation can severely reduce SQL Server performance and even prevent SQL Server from starting. For more information about how to configure these two memory options, refer to the article "Server Memory Server Configuration Options" on the Microsoft Developer Network website.
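For illustration, both options can be set with sp_configure; the values below are examples only and must be sized for the VM's total RAM, leaving enough memory for the operating system and other processes:

```sql
EXEC [sp_configure] 'show advanced options', 1;
RECONFIGURE;

-- Example values only: size these for the VM's total RAM
EXEC [sp_configure] 'min server memory (MB)', 2048;
EXEC [sp_configure] 'max server memory (MB)', 8192;
RECONFIGURE;
```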

I hope you will find this post useful. 🙂

Have a nice week….