Saturday, 28 September 2013

How to: Setup PostgreSQL on PC

After a few years working on MSSQL, I almost forgot there is still a whole world of databases out there to be explored, many of which are free and very lightweight, such as SQLite and PostgreSQL.

Long story short, I cracked on with setting up PostgreSQL as it is work-related. I probably should have done it on Linux, but the web portal is .NET based, so here we are.

Setting up your own PostgreSQL server on a PC is pretty straightforward, but there are a few little tricks.

1. Install PostgreSQL

Go to this official download page to get a free copy of the EnterpriseDB installer, and install it.

Then you will find most of the interesting bits are installed in "C:\Program Files\PostgreSQL\<version>\bin".

2. Install pgAdmin

After that, you can also find a copy of the pgAdmin tool on the same download page; install it too.

Open pgAdmin, and the fun begins. Go to 'File -> Add Server' to create a new database server connection, and drop in the settings as below:



You should be able to see this:




3. Restore a Database

Now you can choose to create a new database or restore one from a backup. Since restoring is what buggered me, I am going to show you how I dealt with it.

The backup file I hold is a plain text .backup file (bad, I know, but there is no confidential information in it, so I let it slide). That stops a direct restore from running, and prompted me with this message: "input file appears to be a text format dump. Please use psql.".

I've mentioned most of the fun things live under the bin folder, so the psql executable is easy to find. And by running the command below, the database was restored! What the heck? Yes, even I was surprised.

cat <backupfile>.backup | <unc>psql <target dbname>

If you are interested, there is a lot more fun in the bin folder than just this.

4. ORM

Setting up PostgreSQL turned out to be a nice and painless experience, but how do I consume it?

One of the preferable ways is to use an ORM, and with help from a tool like Devart's dotConnect, it really is easier done than said. dotConnect for PostgreSQL is a .NET data access library supporting database-first and model-first development, similar to Entity Framework, but without code-first support as far as I have noticed. The only downside is that Devart is not free.
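
Before touching the designer, it never hurts to sanity-check that the provider can reach the database at all. Below is a minimal sketch assuming dotConnect's Devart.Data.PostgreSQL ADO.NET provider; the host, database and credentials are placeholders, so substitute your own:

//using System;
//using Devart.Data.PostgreSQL;

// Open a connection and ask the server for its version string.
// The connection string keywords follow dotConnect's documented format;
// the values here are placeholders.
var connectionString = "Host=localhost;Port=5432;Database=mydb;User Id=postgres;Password=secret;";

using (var connection = new PgSqlConnection(connectionString))
using (var command = new PgSqlCommand("SELECT version();", connection))
{
    connection.Open();
    Console.WriteLine(command.ExecuteScalar());
}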

By installing the dotConnect for PostgreSQL plugin, you will get more options when creating data access models in Visual Studio.



Choose database-first to start configuring.



The problem I came across was a network setting issue, which suggested I was not using proper SSL.



Since we are not communicating over any public network, I found the 'pg_hba.conf' file under 'C:\Program Files\PostgreSQL\<version>\data', and hacked through it by adding 'host all all 0.0.0.0/0 md5' to the config. (Bear in mind pg_hba.conf is only re-read on a configuration reload or service restart, so give the server a kick after editing.)

Then everyone's happy!

After this, you can pretty much keep clicking next until you finish, if you don't have any specific requirements. And you will find yourself in a place as if you were using LinqToSQL.
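
As a taste of that, here is a hedged sketch of querying the generated model; the context and entity names (MyDataContext, Departments) are hypothetical stand-ins for whatever the designer generates from your schema:

// LinqToSQL-style querying over the designer-generated context.
// MyDataContext and Departments are hypothetical names.
using (var context = new MyDataContext())
{
    var departments = from d in context.Departments
                      where d.Name.StartsWith("S")
                      orderby d.Name
                      select d;

    foreach (var department in departments)
        Console.WriteLine(department.Name);
}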

Conclusion

PostgreSQL is probably not as powerful as MSSQL, but you get most of the day-to-day functions if you are not a hardcore DBA. It can be hosted on different platforms, and most importantly, it is free.

Since I have only just started using PostgreSQL, I find its UI somewhat alien, and it can be a bit clunky to use at times. The syntax also runs in a vein of its own when it comes to writing functions.

An ORM is definitely the best way to access the data, yet Devart does not give you everything for free. Perhaps this will not be a problem once the community sorts out code-first; then we can truly enjoy the convenience and freedom of using PostgreSQL.

Monday, 26 August 2013

How to: Setup F# Development Environment in Mac

F# is an awesome programming language; it gives a simple and elegant way to code, review and iterate. Despite being an open source project, it still cannot get rid of the Microsoft gene underneath its skin.

Just like with C#, I am not against that at all; I really enjoy C# and have been working with it for a long time. It is the hassle of utilising and migrating to different platforms that irritates me.

This time, Mono has come to our rescue to help us hack on Mac.

1. Install Mono MRE and MDK

In order to be able to compile and run F# using Mono, the MRE (aka Mono Runtime Environment) and the MDK (aka Mono Development Kit) need to be installed. Visit the official site via the link to get the latest versions.

After installation, you should be able to navigate to

/Library/Frameworks/Mono.framework/Versions/

to check the version you installed. Most importantly, make sure 'Current' under that directory points to the correct version you want.

2. Install MonoDevelop (aka Xamarin Studio)

From the previous Mono download page, you should be able to find the link to MonoDevelop as well. In case you cannot find it, take the express bus to the site to rock.

3. Install F# Compiler

Head over to fsharp's GitHub page for instructions on how to build and make use of the compiler. For Mac users, I have also listed the steps from that page below, just to keep this introduction neat.

Before we proceed, you probably want to check that you have got brew and the Xcode command line tools installed.

Brew is an open source project to help you manage packages on Mac, an equivalent to apt-get or Chocolatey if you like. To install it, just execute the following line in the terminal.

ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
The Xcode command line tools are something that could easily be forgotten, but they can be found and installed very quickly from the following location in Xcode.

Preferences -> Download -> Command Line Tools
Now we need to make use of them to get our fsharp compiler up and running. In your terminal, run the following commands in sequence.

#Install Automake to help build the compiler
brew install automake

#Navigate to your desirable location, and get a copy of the fsharp project
git clone git://github.com/fsharp/fsharp.git

#Navigate into the fsharp project
cd fsharp

#Execute autogen script pointing at the current version of Mono
./autogen.sh --prefix=/Library/Frameworks/Mono.framework/Versions/Current/

#Compile fsharp compiler code
make

#Install fsharp compiler
sudo make install

To verify that the fsharp compiler has been installed successfully on your Mac, run the command below, and you should see output like the following.

fsharpc

F# Compiler for F# 3.0 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

error FS0207: No inputs specified

4. Install MonoDevelop F# Binding

To get Xamarin Studio to display F# along with the other standard languages, we need to sort out the language binding. In order to do that, execute the following commands.

#Navigate to your desirable location, and get a copy of the fsharp binding project
git clone git://github.com/fsharp/fsharpbinding.git

#Navigate into the fsharp binding project
cd fsharpbinding/monodevelop

#Execute configure script
./configure.sh

#Compile fsharp binding code
make

#Install fsharp binding to MonoDevelop
sudo make install

5. Enable F# Binding in MonoDevelop

We are almost there, but don't get too excited just yet. The last thing we need to do is make Xamarin Studio display the F# option, which you can enable under:

Xamarin Studio -> Add-in Manager -> Installed -> Language bindings -> F# Language Binding
Now if you go and create a new solution, you will see the magnificent screen!

Sunday, 18 August 2013

SVN vs TFS vs Git

Beyond real-world benefits like being able to review and roll back, I believe being able to grow and evolve with your code is one of the reasons why version control is such an essential part of software development.

What have we got across the board?

In the beginning, there was Subversion. I have been using a couple of SVN tools to manage my code at work and for fun for years, namely TortoiseSVN, SmartSVN and VisualSVN. (ref: How to Set Up a Source Control)

Subversion tools use a tree structure to represent the trunk and branches of your code, which is easy to understand and use when committing, branching, tagging and so on.

But there are problems with SVN tools too. In SVN, it is exceptionally expensive to branch and merge. Branching always involves a complete copy of the code base on the server, and local branching simply does not exist. And you will never feel bored when merging more than a dozen files, or porting bug fixes from a previous release to the current one, since tree conflicts and broken builds will be your best companions.

Later in my career, I moved onto TFS. A magnificent tool and the successor of SourceSafe, which does almost everything for you, including version control, CI, test automation, project management and a chat board; at some point I almost believed Microsoft would integrate Visual Studio into it. Therefore, I doubt it is a fair comparison, since what I want is a hiking pole, but what I am getting is an Iron Man suit.

Like all Microsoft products, you get occasional crashes. It does everything just about right, but it is slow, clunky and, worst of all, pricey. Despite all the flaws, I am sure some loaded corporations are still happy to live with it. In case you are curious, Team Foundation Service is a free cloud version for teams of up to five people.

At some point in my career, Git just spread like wildfire; you heard people talking about it in every online community. Moreover, cloud hosting like GitHub works in Git's favour to make it even more popular.

Git is fast, real fast, and it represents a completely different mindset in comparison to Subversion, called DVCS (distributed version control). It uses a graph, a DAG to be specific, to represent branches. It allows you to work on a local repository, which does not require you to be connected to your centralised server all the time. Consequently, branching in Git is significantly more convenient and cheap, and so is merging.

As every developer will have noticed, Git does lack a nice visual GUI to assist illiteracy. Of course, I don't mean that literally, but I am sure with a nice UI, Git would be more intuitive and appealing to less tech-heavy people. Apart from that, I honestly can't think of anything else to complain about.

So which tool should we use?

If you are loaded with money and a full-on Microsoft shop, TFS is your ultimate choice.

As for Subversion tools and Git, there really is no conclusion that one tool is definitely superior to the other, since both have their merits and they represent completely different version control approaches. I will still list out the reasoning, just for argument's sake:

* Git has a steeper learning curve compared to SVN.
* Git's workflow is generally faster compared to SVN.
* Git branching is easier, faster and cheaper than SVN's.
* Git lacks a good GUI compared to SVN.
* Git allows history manipulation, while SVN does not.
* Git is more suitable for offline working, while SVN is designed around a central repository.

But as a hipster programmer, I'd say, use Git!

Friday, 16 August 2013

How to: Add Watermark to PDFs Programmatically using iTextSharp

Being able to add a text or image watermark to PDF files is one of those handy things you can achieve programmatically.

The only library required for the little program is iTextSharp. For those who haven't used it yet, iTextSharp is a C# version of iText, which allows you to generate or manipulate PDFs.

So let's cut the chitchat and go straight to the code. By calling the 'AddTextWatermark' or 'AddImageWatermark' function with the required parameters supplied, you are ready to rock!


//using System.IO;
//using iTextSharp.text;
//using iTextSharp.text.pdf;

// Create a single-page PDF that will serve as the text watermark template
public void CreateTemplate(string watermarkText, string targetFileName){
    var document = new Document();
    var pdfWriter = PdfWriter.GetInstance(document, new FileStream(targetFileName, FileMode.Create));
    var font = new Font(Font.FontFamily.HELVETICA, 60, Font.NORMAL, BaseColor.LIGHT_GRAY);
    document.Open();
    // Specify font and alignment of text watermark
    ColumnText.ShowTextAligned(pdfWriter.DirectContent, Element.ALIGN_CENTER, new Phrase(watermarkText, font), 300, 400, 45);
    document.Close();
}

// Add a text watermark using the template PDF created above
public void AddTextWatermark(string sourceFilePath, string watermarkTemplatePath, string targetFilePath){
    var pdfReaderSource = new PdfReader(sourceFilePath);
    var pdfStamper = new PdfStamper(pdfReaderSource, new FileStream(targetFilePath, FileMode.Create));
    var pdfReaderTemplate = new PdfReader(watermarkTemplatePath);
    var page = pdfStamper.GetImportedPage(pdfReaderTemplate, 1);

    for (var i = 0; i < pdfReaderSource.NumberOfPages; i++)
    {
        var content = pdfStamper.GetUnderContent(i + 1);
        content.AddTemplate(page, 0, 0);
    }

    pdfStamper.Close();
    pdfReaderTemplate.Close();
    pdfReaderSource.Close();
}

// Add watermark to a file using an image
public void AddImageWatermark(string sourceFilePath, string watermarkImagePath, string targetFilePath){
    var pdfReader = new PdfReader(sourceFilePath);
    var pdfStamper = new PdfStamper(pdfReader, new FileStream(targetFilePath, FileMode.Create));
    var image = Image.GetInstance(watermarkImagePath);
    image.SetAbsolutePosition(200, 400);

    for (var i = 0; i < pdfReader.NumberOfPages; i++)
    {
        var content = pdfStamper.GetUnderContent(i + 1);
        content.AddImage(image);
    }

    pdfStamper.Close();
    pdfReader.Close();
}
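
To wire it all together, here is a hedged usage sketch; the file paths are placeholders, and it assumes the three methods above live in a class, called PdfWatermarker here purely for illustration:

// Hypothetical driver: build the text watermark template once, then stamp
// it (or an image) under every page of the source document.
var watermarker = new PdfWatermarker();
watermarker.CreateTemplate("DRAFT", @"C:\temp\watermark.pdf");
watermarker.AddTextWatermark(@"C:\temp\source.pdf", @"C:\temp\watermark.pdf", @"C:\temp\text-result.pdf");
watermarker.AddImageWatermark(@"C:\temp\source.pdf", @"C:\temp\logo.png", @"C:\temp\image-result.pdf");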

Tuesday, 13 August 2013

How to: Setup FTP Server on Windows Server 2008 R2 and Use FTP Programmatically

• Set up an FTP server on Windows Server 2008 R2

Open 'Server Manager', and add a new 'Role' to it.
In the 'Add Roles Wizard', tick 'Web Server (IIS)'.
In 'Role Services', tick everything under the 'FTP Server' section. Follow the wizard to the end to install all the features required.
Open IIS, right-click on 'Sites' and choose 'Add FTP Site' to add a new FTP site.
Fill in the desired site name and file location.
Choose 'All Unassigned' for 'IP Address', and do not use SSL if you just want a simple FTP site.
Choose 'Basic Authentication' and allow 'All Users' to 'Read' and 'Write'. Then carry on and finish the 'Add FTP Site' wizard.
Open 'Windows Firewall with Advanced Security' to add a new rule for the FTP site.
Specify a port for FTP communication, and leave the rest as default.

To verify the FTP is working, navigate to 'ftp://<yourftpsiteaddress>' in your browser.

• FTP client

//using System;
//using System.IO;
//using System.Net;
//using System.Text;

void Main()
{
    ListFtpServerDirectoryContents();
    UploadToFtpServer();
    ListFtpServerDirectoryContents();
    DownloadFromFtpServer();
}

// List the contents of the FTP root directory
public void ListFtpServerDirectoryContents()
{
    var request = (FtpWebRequest)WebRequest.Create("ftp://ftpserveraddresss/");
    request.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
    request.Credentials = new NetworkCredential("myusername", "mypassword");

    var response = (FtpWebResponse)request.GetResponse();
    var responseStream = response.GetResponseStream();
    var reader = new StreamReader(responseStream);
    Console.WriteLine(reader.ReadToEnd());

    Console.WriteLine("Directory List Complete, status {0}", response.StatusDescription);

    reader.Close();
    response.Close();
}

// Upload a local file to the FTP server
public void UploadToFtpServer()
{
    var request = (FtpWebRequest)WebRequest.Create("ftp://ftpserveraddresss/targetfilename");
    request.Method = WebRequestMethods.Ftp.UploadFile;
    request.Credentials = new NetworkCredential("myusername", "mypassword");

    var sourceStream = new StreamReader("filepathhere");
    var fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
    sourceStream.Close();
    request.ContentLength = fileContents.Length;

    var requestStream = request.GetRequestStream();
    requestStream.Write(fileContents, 0, fileContents.Length);
    requestStream.Close();

    var response = (FtpWebResponse)request.GetResponse();
    Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);
    response.Close();
}

// Download a file from the FTP server and print its contents
public void DownloadFromFtpServer()
{
    var request = (FtpWebRequest)WebRequest.Create("ftp://ftpserveraddresss/targetfilename");
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.Credentials = new NetworkCredential("myusername", "mypassword");

    var response = (FtpWebResponse)request.GetResponse();
    var responseStream = response.GetResponseStream();
    var reader = new StreamReader(responseStream);
    Console.WriteLine(reader.ReadToEnd());

    Console.WriteLine("Download Complete, status {0}", response.StatusDescription);

    reader.Close();
    response.Close();
}

Friday, 9 August 2013

SQL: Automation for Database Index Defragmentation

The benefit of using indexes in SQL Server is improved query performance: searches no longer have to scan every table referenced in a query. Yet as a table grows, its indexes expand. More devastatingly, the indexes become fragmented, which actually creates unnecessary overhead for queries.

In order to resolve the fragmentation problem on tables and indexes, I created the following T-SQL to loop through all tables and indexes and attempt to defragment them.



USE MASTER

---------------- CREATE TEMPORARY TABLES ----------------
DECLARE @DatabaseName NVARCHAR(100)
DECLARE @TableName NVARCHAR(100)
DECLARE @IndexName NVARCHAR(100)

IF OBJECT_ID('tempdb..#DatabaseList') IS NOT NULL
BEGIN
    DROP TABLE #DatabaseList
END

CREATE TABLE #DatabaseList (
    [DatabaseName] NVARCHAR(100) NOT NULL
)

IF OBJECT_ID('tempdb..#DatabaseTableIndexList') IS NOT NULL
BEGIN
    DROP TABLE #DatabaseTableIndexList
END

CREATE TABLE #DatabaseTableIndexList (
    [DatabaseName] NVARCHAR(100) NOT NULL,
    [TableName] NVARCHAR(100) NULL,
    [IndexName] NVARCHAR(200) NULL,
    [Rows] BIGINT
)

IF OBJECT_ID('tempdb..#DatabaseTableIndexFragmentation') IS NOT NULL
BEGIN
    DROP TABLE #DatabaseTableIndexFragmentation
END

CREATE TABLE #DatabaseTableIndexFragmentation (
    [ObjectName] NVARCHAR(200),
    [ObjectId] INT,
    [IndexName] NVARCHAR(200),
    [IndexId] INT,
    [Level] INT,
    [Pages] INT,
    [Rows] INT,
    [MinimumRecordSize] INT,
    [MaximumRecordSize] INT,
    [AverageRecordSize] INT,
    [ForwardedRecords] INT,
    [Extents] INT,
    [ExtentSwitches] INT,
    [AverageFreeBytes] NUMERIC(16, 10),
    [AveragePageDensity] NUMERIC(16, 10),
    [ScanDensity] NUMERIC(16, 10),
    [BestCount] INT,
    [ActualCount] INT,
    [LogicalFragmentation] NUMERIC(16, 10),
    [ExtentFragmentation] NUMERIC(16, 10)
)

IF OBJECT_ID('tempdb..#DatabaseTableIndexResult') IS NOT NULL
BEGIN
    DROP TABLE #DatabaseTableIndexResult
END

CREATE TABLE #DatabaseTableIndexResult (
    [DatabaseName] NVARCHAR(100) NOT NULL,
    [TableName] NVARCHAR(100) NULL,
    [IndexName] NVARCHAR(200) NULL,
    [Rows] BIGINT,
    [Pages] BIGINT,
    [Density] FLOAT,
    [Fragmentation] FLOAT
)

---------------- GET DATABASES ----------------
INSERT INTO #DatabaseList (DatabaseName)
SELECT LTRIM(RTRIM(Name))
FROM MASTER.dbo.sysdatabases
WHERE Name NOT IN ('master', 'model', 'msdb', 'tempdb')

---------------- GET TABLES AND INDEXES ----------------
WHILE EXISTS (SELECT TOP 1 1 FROM #DatabaseList)
BEGIN
    SELECT TOP 1 @DatabaseName = DatabaseName
    FROM #DatabaseList

    -- Collect every user-table index with more than 1000 rows
    EXEC('
    USE [' + @DatabaseName + ']
    INSERT INTO #DatabaseTableIndexList ([DatabaseName], [TableName], [IndexName], [Rows])
    SELECT ''' + @DatabaseName + ''', so.name, si.name, si.rows
    FROM sysobjects so
    INNER JOIN sysindexes si ON so.ID = si.ID
    WHERE so.xtype = ''U'' AND si.rows > 1000 AND si.indid > 0 AND si.indid < 255 AND (si.status & 64) = 0
    ')

    DELETE FROM #DatabaseList
    WHERE DatabaseName = @DatabaseName
END

DROP TABLE #DatabaseList

---------------- GET INDEX FRAGMENTATION ----------------
WHILE EXISTS (SELECT TOP 1 1 FROM #DatabaseTableIndexList)
BEGIN
    SELECT TOP 1
        @DatabaseName = DatabaseName,
        @TableName = TableName,
        @IndexName = IndexName
    FROM #DatabaseTableIndexList

    INSERT INTO #DatabaseTableIndexFragmentation
    EXEC('USE [' + @DatabaseName + '] DBCC SHOWCONTIG ([' + @TableName + '], [' + @IndexName + ']) WITH TABLERESULTS, NO_INFOMSGS, FAST')

    INSERT INTO #DatabaseTableIndexResult (DatabaseName, TableName, IndexName, [Rows], [Pages], [Density], [Fragmentation])
    SELECT dti.DatabaseName, dti.TableName, dti.IndexName, dti.Rows, frag.Pages, frag.ScanDensity, frag.LogicalFragmentation
    FROM #DatabaseTableIndexList dti
    INNER JOIN #DatabaseTableIndexFragmentation frag ON frag.ObjectName = dti.TableName AND frag.IndexName = dti.IndexName
    WHERE dti.DatabaseName = @DatabaseName AND dti.TableName = frag.ObjectName AND dti.IndexName = frag.IndexName

    DELETE FROM #DatabaseTableIndexList
    WHERE DatabaseName = @DatabaseName AND TableName = @TableName AND IndexName = @IndexName
END

DROP TABLE #DatabaseTableIndexList
DROP TABLE #DatabaseTableIndexFragmentation

---------------- FIX FRAGMENTATION ----------------
WHILE EXISTS (SELECT TOP 1 1 FROM #DatabaseTableIndexResult)
BEGIN
    SELECT TOP 1
        @DatabaseName = DatabaseName,
        @TableName = TableName,
        @IndexName = IndexName
    FROM #DatabaseTableIndexResult

    EXEC('DBCC INDEXDEFRAG ([' + @DatabaseName + '], [' + @TableName + '], [' + @IndexName + ']) WITH NO_INFOMSGS')

    DELETE FROM #DatabaseTableIndexResult
    WHERE DatabaseName = @DatabaseName AND TableName = @TableName AND IndexName = @IndexName
END

DROP TABLE #DatabaseTableIndexResult


NOTE:

* For defragmenting large tables, I would rather break them down in the 'GET DATABASES' section, to avoid long-running jobs and the risk of database blocking.

IMPROVEMENTS (try them as you see fit):

* Time stamps can be added around the 'FIX FRAGMENTATION' section to monitor time consumption.
* The '#DatabaseTableIndexResult' table can be persisted to a permanent table as a record for auditing or monitoring purposes.
* A density lookup can be created to filter out the tables and indexes with low density, which need defragmenting more urgently.
* Each section can be broken down into individual functions or stored procedures to increase reusability, i.e., separate out 'FIX FRAGMENTATION' so that it can be triggered whenever necessary.

TIP (A query to find all indexes on a database):

SELECT
    o.name,
    indexname = i.name,
    i.index_id,
    user_seeks,
    user_scans,
    user_lookups,
    reads = user_seeks + user_scans + user_lookups,
    writes = user_updates,
    rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.OBJECT_ID = p.OBJECT_ID),
    CASE WHEN s.user_updates < 1 THEN 100 ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates END AS reads_per_write,
    'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(OBJECT_NAME(s.OBJECT_ID)) AS 'drop statement',
    (
        SELECT (8 * SUM(aa.used_pages)) / 1024 AS 'Indexsize(MB)'
        FROM sys.indexes AS ii
        JOIN sys.partitions AS pp ON pp.OBJECT_ID = ii.OBJECT_ID AND pp.index_id = ii.index_id
        JOIN sys.allocation_units AS aa ON aa.container_id = pp.partition_id
        WHERE ii.name = i.name
        GROUP BY ii.OBJECT_ID, ii.index_id, ii.name
    ) AS size
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.OBJECT_ID = i.OBJECT_ID
INNER JOIN sys.objects o ON s.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.OBJECT_ID, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    --AND i.type_desc = 'nonclustered'
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    --AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
    --AND (user_seeks + user_scans + user_lookups) = 0
ORDER BY reads_per_write, name

Tuesday, 6 August 2013

How to: Create a Windows Service Kickstarter using Powershell

Since some legacy Windows services are getting more and more unstable and fall apart without any means of recovery, a kickstarter is needed to ensure those important(?) services don't just disappear quietly without a struggle.

The idea is to use a combination of a Powershell script and the Task Scheduler to trigger the job every now and then.

0. Precondition


Windows Server 2003 R2 does not come with Powershell installed. To allow Powershell scripts to run on the machine, follow the link to download and install it, and bear in mind, a system reboot will NOT be required.

1. Powershell Script


Powershell is really easy to pick up. It only requires basic knowledge of Command Prompt, and the rest is just C#.

function ServiceKickstarter($serverName, $serviceName)
{  
    $service = Get-Service -ComputerName $serverName -Name $serviceName
    if ($service.Status -eq "Stopped")
    {
        Write-Host $serviceName "is" $service.Status
        Write-Host "Starting" $serviceName
        $service.Start()
        while ($service.Status -ne "Running")
        {
            Write-Host $serviceName "is" $service.Status      
            $service.Refresh()
            Start-Sleep(1)
        }
    }
    Write-Host $serviceName "is" $service.Status
}

function AllServicesKickstarter ($serverName, $servicesName)
{
    $length = $servicesName.length
    for ($i=0; $i -lt $length; $i++)
    {
        "Start " + $servicesName[$i]
        ServiceKickstarter $serverName $servicesName[$i]
    }
}

$server = "<server alias>"
$service = "<service name>"
$services = "<service name>", "<another service name>", "<some other service name>"

ServiceKickstarter $server $service
#AllServicesKickstarter $server $services

NOTE: the server name is not required if the script is run directly on the target server.


2. Execution Policy


In order to run the script, you may also have to sort out the execution policy, because of the restrictions on some servers. Otherwise, you will see the following message:

"File … cannot be loaded.  The file is not digitally signed.  The script will not be executed on the system."
 

This has everything to do with the server execution policy. You can either choose to sign the script, following the link, or alternatively change your execution policy.

The execution policy details provided by MSDN are:

Restricted - No scripts can be run. Windows PowerShell can be used only in interactive mode.
AllSigned - Only scripts signed by a trusted publisher can be run.
RemoteSigned - Downloaded scripts must be signed by a trusted publisher before they can be run.
Unrestricted - No restrictions; all Windows PowerShell scripts can be run.

 

Instead of using the default (which allows you to run nothing), 'RemoteSigned' is the level that satisfies our requirement while managing not to piss the sysadmins off. Setting it is a one-liner from an elevated Powershell prompt: Set-ExecutionPolicy RemoteSigned.

3. Schedule Task


The last job is to create the scheduled task so that the script keeps checking the target service(s). Just follow the Task Scheduler wizard, and you will get there eventually.

Monday, 5 August 2013

Tips and Hints: Windows Server 2008 Scheduled Tasks Result Codes

0 or 0x0: The operation completed successfully.
1 or 0x1: Incorrect function called or unknown function called.
2 or 0x2: File not found.
10 or 0xa: The environment is incorrect.
0x41300: Task is ready to run at its next scheduled time.
0x41301: Task is currently running.
0x41302: Task is disabled.
0x41303: Task has not yet run.
0x41304: There are no more runs scheduled for this task.
0x41306: Task is terminated.
0x8004130F: Credentials became corrupted (*)
0x8004131F: An instance of this task is already running.
0x800704DD: The service is not available (is 'Run only when a user is logged on' checked?)
0xC000013A: The application terminated as a result of a CTRL+C.
0xC06D007E: Unknown software exception.


Ref: http://msdn.microsoft.com/en-us/library/aa383604

Thursday, 25 July 2013

How to: Remotely start/stop/monitor windows services

When working with a lot of different servers, monitoring and controlling the Windows services on those machines can be a real pain, because you either need to RDP into each machine or open a remote services session.

In order to automate this tedious hard labour and achieve it in bulk, I used the remote command 'SC' to access the NT Service Controller provided by MS.

The syntax is as simple as the following:

to monitor a service: sc <server-name> query <spooler or any service name>
to stop a service: sc <server-name> stop <spooler or any service name>
to start a service: sc <server-name> start <spooler or any service name>

List all the servers and put the commands in a .bat file; that will save you a great deal of time repeating yourself.
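
If you would rather drive this from code than from a .bat file, the same can be done in .NET. Here is a minimal sketch using the System.ServiceProcess ServiceController class (add a reference to System.ServiceProcess.dll); the service and server names are placeholders:

//using System;
//using System.ServiceProcess;

// Query a service on a remote machine, and start it if it has stopped.
// "Spooler" and "myserver" below are placeholders.
using (var service = new ServiceController("Spooler", "myserver"))
{
    Console.WriteLine("{0} is {1}", service.ServiceName, service.Status);

    if (service.Status == ServiceControllerStatus.Stopped)
    {
        service.Start();
        service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
        service.Refresh();
        Console.WriteLine("{0} is {1}", service.ServiceName, service.Status);
    }
}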

FYI, the SC command details are listed below:
    The option <server> has the form "\\ServerName"
    Further help on commands can be obtained by typing: "sc [command]"
    Commands:
      query-----------Queries the status for a service, or
                      enumerates the status for types of services.
      queryex---------Queries the extended status for a service, or
                      enumerates the status for types of services.
      start-----------Starts a service.
      pause-----------Sends a PAUSE control request to a service.
      interrogate-----Sends an INTERROGATE control request to a service.
      continue--------Sends a CONTINUE control request to a service.
      stop------------Sends a STOP request to a service.
      config----------Changes the configuration of a service (persistant).
      description-----Changes the description of a service.
      failure---------Changes the actions taken by a service upon failure.
      qc--------------Queries the configuration information for a service.
      qdescription----Queries the description for a service.
      qfailure--------Queries the actions taken by a service upon failure.
      delete----------Deletes a service (from the registry).
      create----------Creates a service. (adds it to the registry).
      control---------Sends a control to a service.
      sdshow----------Displays a service's security descriptor.
      sdset-----------Sets a service's security descriptor.
      GetDisplayName--Gets the DisplayName for a service.
      GetKeyName------Gets the ServiceKeyName for a service.
      EnumDepend------Enumerates Service Dependencies.

    The following commands don't require a service name:
    sc <server> <command> <option>
      boot------------(ok | bad) Indicates whether the last boot should
                      be saved as the last-known-good boot configuration
      Lock------------Locks the Service Database
      QueryLock-------Queries the LockStatus for the SCManager Database

Wednesday, 5 June 2013

SQL: List All Requests Executing within SQL Server

--LOOK FOR PROGRESS IN RESTORES AND BACKUPS, REQUIRES SQL SERVER 2005 OR BETTER
--THIS QUERY IS JUST FOR BACKUPS AND RECOVERY
USE MASTER
SELECT session_id,
    command,
    s.text,
    start_time,
    percent_complete,
    CAST(((DATEDIFF(s,start_time,GetDate()))/3600) as varchar) + ' hour(s), '
        + CAST((DATEDIFF(s,start_time,GetDate())%3600)/60 as varchar) + 'min, '
        + CAST((DATEDIFF(s,start_time,GetDate())%60) as varchar) + ' sec' as running_time,
    CAST((estimated_completion_time/3600000) as varchar) + ' hour(s), '
        + CAST((estimated_completion_time %3600000)/60000 as varchar) + 'min, '
        + CAST((estimated_completion_time %60000)/1000 as varchar) + ' sec' as est_time_to_go,
    DATEADD(second,estimated_completion_time/1000, getdate()) as est_completion_time
FROM
    sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) s
WHERE
    r.command IN (
        'RESTORE DATABASE',
        'RESTORE VERIFYONLY',
        'BACKUP DATABASE',
        'RESTORE LOG',
        'BACKUP LOG'
    )