T-SQL Programming Part 5 – Using the CASE Function



Have you ever wanted to replace a column value with a different value based on the original column value? Learn how, with the T-SQL CASE function.

The CASE function is a very useful T-SQL function. With this function you can replace a column value with a different value based on the original column value. An example of where this function might come in handy is where you have a table that contains a column named SexCode, where 0 stands for female, 1 for male, etc. and you want to return the value “female” when the column value is 0, or “male” when the column value is 1, etc. This article will discuss using the CASE function in a T-SQL SELECT statement.

The CASE function allows you to evaluate a column value on a row against multiple criteria, where each criterion might return a different value. The first criterion that evaluates to true determines the value returned by the CASE function. Microsoft SQL Server Books Online documents two different formats for the CASE function. The “Simple Format” looks like this:
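As documented in SQL Server Books Online, the “Simple Format” syntax is:

```sql
CASE input_expression
    WHEN when_expression THEN result_expression
        [ ...n ]
    [ ELSE else_result_expression ]
END
```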

And the “Searched Format” looks like this:
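From the same Books Online documentation, the “Searched Format” syntax is:

```sql
CASE
    WHEN Boolean_expression THEN result_expression
        [ ...n ]
    [ ELSE else_result_expression ]
END
```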

Where the “input_expression” is any valid Microsoft SQL Server expression; the “when_expression” is the value against which the input_expression is compared; the “result_expression” is the value returned by the CASE statement if the “when_expression” evaluates to true; “[ ...n ]” indicates that multiple WHEN conditions can exist; the “else_result_expression” is the value returned if no “when_expression” evaluates to true; and, in the “Searched Format”, the “Boolean_expression” is any Boolean expression that, when it evaluates to true, returns the “result_expression”. Let me go through a couple of examples of each format to help you better understand how to use the CASE function in a SELECT statement.

For the first example let me show you how you would use the CASE function to display a description, instead of a column value that contains a code. I am going to use my earlier example that I described at the top of this article where I discussed displaying “female” or “male” instead of the SexCode. Here is my example T-SQL Code:
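A sketch of the query being described follows; the Patient table variable and its sample rows are illustrative (the original listing is not reproduced here), but the CASE logic matches the behavior discussed below:

```sql
-- Illustrative patient data with a numeric sex code
DECLARE @Patient TABLE (PatientID int, PatientSexCode int);
INSERT INTO @Patient VALUES (1, 0), (2, 1), (3, 2), (4, 9);

SELECT PatientID,
       CASE PatientSexCode
           WHEN 0 THEN 'female'
           WHEN 1 THEN 'male'
           WHEN 2 THEN 'unknown'
           ELSE 'Invalid PatientSexCode'
       END AS 'Patient Sex'
FROM @Patient;
```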

Here is the output from this T-SQL code:

This example shows the syntax in action for a CASE function using the “Simple Format”. As you can see, the CASE function evaluates the PatientSexCode to determine whether it is a 0, 1, or 2. If it is a 0, then “female” is displayed in the output for the “Patient Sex” column. If the PatientSexCode is 1, then “male” is displayed, or if the PatientSexCode is 2, then “unknown” is displayed. If the PatientSexCode is anything other than a 0, 1 or 2, then the “ELSE” condition of the CASE function is used and “Invalid PatientSexCode” is displayed for the “Patient Sex” column.

Now the same logic could be written using a “Searched Format” for the CASE function. Here is what the SELECT statement would look like for the “Searched Format”:
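A sketch of the equivalent query, assuming the same illustrative Patient table as before:

```sql
SELECT PatientID,
       CASE
           WHEN PatientSexCode = 0 THEN 'female'
           WHEN PatientSexCode = 1 THEN 'male'
           WHEN PatientSexCode = 2 THEN 'unknown'
           ELSE 'Invalid PatientSexCode'
       END AS 'Patient Sex'
FROM Patient;
```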

Note the slight differences between the “Simple” and “Searched” formats. In the “Simple” format I specified the column name whose row values will be compared against the “when_expressions”, whereas in the “Searched” format each WHEN condition contains a Boolean expression that compares the PatientSexCode column against a code value.

Now the CASE function can be considerably more complex than the basic examples I have shown. Suppose you want to display a value that is based on two different columns values in a row. Here is an example that determines if a Product in the Northwind database is of type Tins or Bottles, and is not a discontinued item.
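A sketch of such a query against the Northwind Products table (the exact original listing is not shown; the conditions follow the description below):

```sql
SELECT ProductName,
       CASE
           WHEN QuantityPerUnit LIKE '%Tins%'    AND Discontinued = 0 THEN 'Tins'
           WHEN QuantityPerUnit LIKE '%Bottles%' AND Discontinued = 0 THEN 'Bottles'
           ELSE 'Not Tins, Not Bottles, or is Discontinued'
       END AS 'Type of Availability'
FROM Northwind.dbo.Products;
```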

The output for the above command on my server displays the following:

As you can see, I’m using a “Searched Format” for this CASE function call. Each WHEN clause contains two conditions: one to determine the type (tins or bottles) and another to determine whether the product has been discontinued. If the QuantityPerUnit contains the string “Tins” and the Discontinued column value is 0, then the “Type of Availability” is set to “Tins”. If the QuantityPerUnit contains the string “Bottles” and the Discontinued column value is 0, then the “Type of Availability” is set to “Bottles”. For all other conditions, the “Type of Availability” is set to “Not Tins, Not Bottles, or is Discontinued”.

The WHEN clauses in the CASE function are evaluated in order. The first WHEN clause that evaluates to “True” determines the value that is returned from the CASE function. Basically, if multiple WHEN clauses evaluate to “True”, only the THEN value for the first of them is used as the return value of the CASE function. Here is an example where multiple WHEN clauses are “True.”
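A sketch of such a query against the pubs titles table; the “price < 12.00” and “price < 3.00” conditions and their order come from the discussion below, while the other category names and boundaries are illustrative:

```sql
SELECT title,
       CASE
           WHEN price < 12.00 THEN 'Cheap'
           WHEN price <  3.00 THEN 'Really Cheap'
           WHEN price < 20.00 THEN 'Average'
           ELSE 'Expensive'
       END AS 'Price Category'
FROM pubs.dbo.titles;
```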

The output on my machine for this query looks like this:

If you look at the raw titles table data in the pubs database for the title “You Can Combat Computer Stress!” you will note that the price for this book is $2.99. This price makes both the “price &lt; 12.00” and “price &lt; 3.00” conditions “True”. Since the conditions are evaluated one at a time, and the “price &lt; 12.00” condition is evaluated prior to the “price &lt; 3.00” condition, the “Price Category” for the title “You Can Combat Computer Stress!” is set to “Cheap”.

The CASE function can appear in different places within the SELECT statement, it does not have to only be in the selection list within the SELECT statement. Here is an example where the CASE function is used in the WHERE clause.
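A sketch of such a query (the category names and price boundaries are illustrative, consistent with the earlier example):

```sql
SELECT title, price
FROM pubs.dbo.titles
WHERE 'Average' = CASE
                      WHEN price < 12.00 THEN 'Cheap'
                      WHEN price < 20.00 THEN 'Average'
                      ELSE 'Expensive'
                  END;
```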

The output for this query looks like this:

Here I only wanted to display books from the titles table in the pubs database if the price category is ‘Average’. Placing my CASE function in the WHERE clause accomplishes this.

Conclusion

As you can see the CASE function is an extremely valuable function. It allows you to take a data column and represent it differently depending on one or more conditions identified in the CASE function call. I hope that the next time you need to display or use different values for specific column data you will review the CASE function to see if it might meet your needs.

See All Articles by Columnist Gregory A. Larsen


Welcome! The Apache HTTP Server Project




The Number One HTTP Server On The Internet

The Apache HTTP Server Project is an effort to develop and maintain an open-source HTTP server for modern operating systems including UNIX and Windows. The goal of this project is to provide a secure, efficient and extensible server that provides HTTP services in sync with the current HTTP standards.

The Apache HTTP Server (“httpd”) was launched in 1995 and has been the most popular web server on the Internet since April 1996. It celebrated its 20th birthday as a project in February 2015.

Apache httpd 2.4.27 Released 2017-07-11

The Apache Software Foundation and the Apache HTTP Server Project are pleased to announce the release of version 2.4.27 of the Apache HTTP Server (“httpd”).

This latest release from the 2.4.x stable branch represents the best available version of Apache HTTP Server.

Apache httpd 2.2.34 Released (End-of-Life) 2017-07-11

The Apache HTTP Server Project announces the release of version 2.2.34, the final release of the Apache httpd 2.2 series. This version will be the last release of the 2.2 legacy branch. (Version number 2.2.33 was not released.)

The Apache HTTP Server Project has long committed to providing maintenance releases of the 2.2.x flavor through June of 2017, and may continue to publish some security source code patches beyond this date through December of 2017. No further maintenance patches or releases of 2.2.x are anticipated. Any final security patches will be published to www.apache.org/dist/httpd/patches/apply_to_2.2.34/

Want to try out the Apache HTTP Server?

Great! We have updated our download page in an effort to better utilize our mirrors. We hope that by making it easier to use our mirrors, we will be able to provide a better download experience.

Please ensure that you verify your downloads using PGP or MD5 signatures.

Want to contribute to the Apache HTTP Server?

Awesome! Have a look at our current ‘Help Wanted’ listings.

Copyright 1997-2017 The Apache Software Foundation.
Apache HTTP Server, Apache, and the Apache feather logo are trademarks of The Apache Software Foundation.


SQL Server Backup Best Practices



The absolute worst time to find out that your recovery plans don’t work is right in the middle of a critical system restore. Follow these SQL Server backup best practices to ensure that you really can restore your system when (not if) it goes down.

Perform Full Backups Daily

A full database backup is the foundation for every DBA’s data protection plan and in most cases should be performed daily. SQL Server supports online backups, allowing end users and SQL Server jobs to be active while the backup operation occurs. Even so, large databases can take a long time to back up. Strategies for reducing the backup window include backing up to disk and utilizing backup data compression.
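For example, a nightly full backup to disk with compression might look like this (the database name and path are placeholders):

```sql
-- Full backup to disk; COMPRESSION shortens the backup window,
-- INIT overwrites the previous backup set, CHECKSUM verifies pages
BACKUP DATABASE SalesDB
TO DISK = N'D:\SQLBackups\SalesDB_Full.bak'
WITH COMPRESSION, INIT, CHECKSUM;
```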

Perform Frequent Transaction Log Backups

Next most important is to back up the transaction log, which contains all of the recent activity in the database and can be used to restore a database to a given point in time. Backing up the transaction log also truncates it, keeping it from becoming full. Like database backups, transaction log backups can occur while the system is active. Organizations with active databases might back up the transaction log every 10 minutes while those with less active databases might need to back up the transaction log only every half hour or every hour.
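A log backup, scheduled (say) every 10 minutes via a SQL Server Agent job, is a one-statement sketch (again, names and paths are placeholders):

```sql
-- Transaction log backup; also truncates the inactive portion of the log
BACKUP LOG SalesDB
TO DISK = N'D:\SQLBackups\SalesDB_Log.trn'
WITH INIT;
```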

Regularly Back Up System Databases

Your backup strategy is incomplete without a plan to back up SQL Server system databases (master, model, msdb). These databases contain system configuration information as well as SQL Server job information that needs to be restored as part of a total system restore. Back up system databases daily for frequently changing instances, weekly for more stable installations.

Back Up the Host OS Daily

SQL Server runs on top of the OS and an event such as a hardware failure could require a complete system restore, beginning with the OS. Therefore, daily backups of the host OS are a good idea. At a minimum, back up the host system partition following any system updates or configuration changes.

Practice Recovery Operations

Changing business requirements can affect your plans, quickly making backup strategies obsolete. Test your strategies regularly in different scenarios, including both system and individual database restores, to ensure your backup plans really work when you need them.

YOUR SAVVY ASSISTANT
The Missing Link to IT Resource
Christan Humphries

For her birthday, I gave my sister a card embossed with golden print that reads “The economy stinks. Be happy you got this card.” However disappointing the birthday gift (and my attempt at a joke) most likely was, the shiny message on it is accurate. And in this economy, which has forced companies to nudge and sometimes shove employees out of positions, I’ve noticed a change in attitude toward job hunting. Here in the United States (and even in countries with better economies), it seems that changing jobs is not only accepted, it’s almost expected. And just like on Match.com, it’s OK to look. In fact, our network of IT products includes a free resource that even makes it easy to look: IT Job Hound.

IT Job Hound is an online job-search engine that concentrates on the IT industry. Job seekers can find recently posted positions from top IT companies on the site or via email job alerts, no registration required! Whether you want to evaluate your skills, secure a new position, or completely change your job title, check out IT Job Hound at www.itjobhound.com. (If you’re looking for gift ideas, give my sister a call.)

Mike, I am being pressured by the IS Director to use CA Backup Exec (or the like) in lieu of regular SQL Server backups. I tried to explain how Full, Differential and Tran log backups are used to restore to a point in time by identifying T-SQL transactions and log sequence numbers. He insists that there are third-party tools available that are easier to use (not a good reason in my book!) and continuously take pictures of changes. What is your take on said tools and which, if any, would you recommend? tnx, G. Douglas Clavell Seven Feathes Hotel and Casino Canyonville OR.

frontierteg (not verified)

First of all, I’m assuming you are always backing up your SQL server to disk and the server people have already figured in the space requirements and have allocated that space to you. Remember, they are part of your backup plan. With that assumption, like the article says, you should back up changes every 10-30 minutes in case of hardware failure. How often you create a full backup is really up to the individual business to decide. The question becomes, how long can you stand to be down? Restoring a full backup and a day’s worth of logs takes less time than a full and a week’s worth of logs. As to being able to restore to a certain point, as long as you are keeping the last 2 weeks of full and/or log backups on tape, you shouldn’t have a problem restoring to any point. So the questions become, how often do I want to transfer the SQL backups to tape? And how far in the past do I want to be able to restore to? Hopefully you are backing up to a separate server in case the SQL server goes down. And hopefully you are taking the backup media offsite at least daily in case the data center burns down. Or you have a backup SAN offsite that you just copy the backups to once they are created, again, in case the data center burns. All in all, a great article explaining the use of full backups and transaction logs. Use some common sense to implement the right backup scheme for your environment using separate servers or portable media.

Why a FULL backup every day? You realize you limit your ability to recover to a certain point of time with every FULL backup you perform. Suppose we need to go back 4 days because the corruption wasn’t noticed until then? Now how are you going to restore? You already toasted your full backup a couple of days ago. I prefer a FULL once a week. And differentials throughout the production hours. After a week, my data has been clean enough to go through another weekly full backup. If a user then calls me and says on Thursday, I need the backup from Tuesday, I can easily say sure, which one: morning, afternoon or evening? Doing fulls daily limits your ability to restore across multiple days. And if you attempt to SAVE your DAILY backups, and you have a large database, well, you’ll have the server dept calling you. Do a full once a week. Do diffs between that time, whatever the business needs decide. You’ll see your diff backups are much smaller. If you have an error, back up the tranlogs and go from there. If you need to restore you’ve got a lot of choices. Same thing with system dbs. And I’d be REALLY surprised if you’re MAKING changes DAILY on jobs, schedules, etc, creating new or altering your schema, or you’re one tired puppy! The article doesn’t even tell you why you back them up. And far be it from actually RESTORING the system databases. Me, I set mine to once a month, or heck once a quarter, knowing if I had to, I’d just re-install SQL and restore, because messing around with system dbs is no fun and causes extra headaches when I can do a clean re-install. Not much of a BEST practice article. I was expecting a little more from Otey. Disappointing. Andy

Mark (not verified)

The most important thing about a backup is that it works when you need it. To work it needs: 1) to be a complete backup set, 2) to be current, 3) to support a restore process that’s tested, works, and has contingency in it. When we set a backup schedule we look at what we’re trying to protect. We’ll reimage the OS in a heartbeat, so we only back up the OS weekly. If the DB has a warm standby, it only gets a full backup weekly and incrementals on a daily basis (plus the log backups during the day). More important than setting the schedule is confirming the backups are working. 1) Did the jobs complete when expected? If they are late, it’s a sign your equipment is failing, or your requirements are changing; either requires action. 2) Are the restores working? You need to measure both of those things. Additionally, I measure the actions taken on #1 and #2. If the rate of action is changing, why: are we managing better or worse, and then what action should that drive? There’s a lot more to best practices for DB backup than setting a schedule.

Seems to omit the use of differential backups.


Why use a dedicated SMTP server



Many marketers still think that to send a bulk email – a newsletter, for instance – to their recipients, they can rely on a common, free SMTP server. Well, it’s a big mistake. There are three main reasons why it’s imperative to use a dedicated SMTP server when it comes to email marketing:

  1. A normal SMTP server – like the one associated with a common email provider such as Gmail – has two big limitations: first of all, it doesn’t allow you to send more than a certain number of emails (or to manage more than a certain number of recipients) per day. Secondly, not being designed for bulk mailings, it cannot guarantee good deliverability – that is, your messages will always run the risk of being rejected by antispam filters and ISPs. On the contrary, a dedicated outgoing server like turboSMTP will let you send unlimited emails while ensuring the highest delivery rate.
  2. A professional SMTP service generally comes with advanced tools to analyze your email campaign, providing useful stats like the bounce rate, the open rate, the click-through rate, etc.
  3. If you run into a technical issue – either a typical SMTP error or anything more serious – a good, dedicated SMTP server will provide you with customer support to help you solve it quickly and make sure that all your emails correctly reach the inbox. (turboSMTP, in particular, offers 24/7 support.)
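To illustrate the switch, here is a minimal Python sketch of sending through a dedicated SMTP relay; the host name, port, addresses and credentials below are placeholders, not real turboSMTP settings:

```python
import smtplib
from email.message import EmailMessage

# Build the newsletter message (addresses are placeholders)
msg = EmailMessage()
msg["From"] = "newsletter@example.com"
msg["To"] = "subscriber@example.com"
msg["Subject"] = "Monthly newsletter"
msg.set_content("Hello from a dedicated SMTP server!")

def send(message, host="smtp.example-relay.com", port=587,
         user="myuser", password="secret"):
    """Relay a message through a dedicated SMTP server (credentials illustrative)."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()            # encrypt the session
        server.login(user, password)
        server.send_message(message)
```

Pointing a script like this at the provider's SMTP endpoint is typically all that changes when you move from a free server to a dedicated one.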

So, if you are managing an email campaign or simply need to send a mass message once, it’s definitely time to switch to a professional service.

You can try turboSMTP by subscribing now and getting 6,000 free emails per month, forever. For all clients with very large mailing needs (that is, more than 100,000 emails/month), turboSMTP offers the possibility of getting a dedicated IP to make your email campaign even safer.

Related Content

Crafting a neatly coded HTML layout is essential to boost your delivery rate. We suggest doing that with MailStyler, an excellent and easy-to-use application.


Best Server Management Software



Top Server Management Software Products

ManageEngine SQLDBManager Plus is a Microsoft SQL Server availability and performance monitoring software that helps DBAs ensure high availability and performance for their critical database servers. This tool offers a single console to monitor, manage and audit your SQL Server instances.

by Microsoft

Improve performance and scale capacity efficiently to run workloads while enabling recovery options to protect against outages.

by BMC Software

Manage your servers with an automated, policy-based solution that keeps your critical business services running smoothly all the time.

by CENTREL Solutions

A server documentation tool that inventories and audits the configuration of your servers and tracks changes to your IT environment.

by Percona

Percona Server for MySQL is a free, fully compatible, enhanced, open source drop-in replacement for MySQL.

by Server Density

Bulletproof monitoring out of the box + endless customisable options. Save time + effort with our simple, reliable server monitoring.

by Infrascale

Backup, recovery, and management of all servers and machines are included in one portal.

by Softerra

Active Directory management and automation: Provisioning, RBAC, AD Web Access, Self Password Reset, Exchange Office 365 automation.

by Corner Bowl Software

Log monitoring, consolidation, auditing and reporting tool that allows users to monitor networks and satisfy auditing requirements.


Top Colocated Server Hosting Winner of 2009



Colocation Hosting

Colocation server hosting gives its customers the twin benefits of security and a dedicated server. It also gives users complete control over scalability and over their own hardware. In colocation web hosting, the host lets users place their hardware in its data center, which is completely secure, equipped with the latest technologies, and offers dedicated internet connections, regulated power and full-time technical support. This ensures the user gets full uptime and high security. Security features may include cameras, fire extinguishers, backup power generators, filtered power and multiple connections.

It is very important to have a proper idea of your requirements and of the facilities you will need from a hosting server, so that you know whether you need a colocation host at all and, if you do, with what specifications you are in a position to make a deal. It also helps to consider the future needs of your web site, so you can review the prices and specifications of add-ons offered by your host: what kind of security you will need for data storage, and what types of security the hosting company offers. All of this will work in your favor and save you loads of time and money, so that you can manage your work while staying well within your budget.

As a user of colocated server hosting you must know where your server will be physically located. It is also of great importance to have a 24/7 support system, and to know that your server is in safe hands in a facility with all the needed safety measures, run by skilled people who know how to do their job. If you are allowed to visit your server's facility only within the office hours of the hosting company, you might run into trouble if your web site goes down after office hours. This could be a big loss for you, as you will not be able to bring your web site back up until people are back in the office.

Another important aspect of a colocation server is to find out how your server is connected. There are two main kinds of connections: the connection from your website to the server, which determines the speed of the systems, and the connection between your server and the internet. The first determines the accessibility speed, whereas the second determines the bandwidth and ultimately how much traffic your web site can handle. You must also make sure that your server's facility is well equipped with 24-hour armed security, uninterruptible power supply, etc.

Basically, a colocation host is very similar to an unmanaged dedicated hosting server, the difference being ownership of the server: here it is owned by the client rather than by the hosting company. That is the main reason colocation server hosting is much cheaper than dedicated hosting.

Also, with colocation servers you don't pay rent for the server itself. Most of the time the fee depends on the rack space taken by your system, plus a charge per GB of data transfer, billed much as in a normal web hosting plan. Make sure with your colocation host that your server is rack-mountable, since this is the only type accepted by most hosting providers. You must also have proper knowledge of hosting basics like colocation bandwidth and dedicated server colocation so that you can manage your server on your own, because colocation providers offer very minimal support. Another important issue to sort out with your colocation host is the timing of your visits. Try to get 24-hour permission to visit the data center, so that you can handle any problems with your server at any time.

All in all, colocation hosting is another form of dedicated web hosting, but in this case you are the owner of your own server, so along with all the benefits of dedicated server hosting you have more options for modifying the server, because it is your own property. You do, however, need much more server-hosting expertise than with dedicated hosting, where all activities are managed by the hosting firm itself.

When you take hosting for your web site, the biggest question is whether to go for colocation hosting or data center hosting. Both are basically the same: colocation facilities are data centers, but not all data centers provide colocation. Some data centers don't allow any equipment to be colocated on their premises, but they still let users purchase dedicated hosting from their servers. Basically, data centers are brick-and-mortar facilities where your equipment is placed, allowing remote user access and other computer-related services. Colocation is the process of placing or housing all of this data and equipment there, rather than in facilities of your own.

There are many benefits attached to a colocation host. One of the major benefits of a colocation server is the increased security of your equipment, as the data center is responsible for the security of your server and the sensitive data residing on it. These data centers have strong security measures and are monitored 24 hours a day, 7 days a week, so they are the most secure place for your server.

There is also cost efficiency involved in this process, as you don't need to have a building for your computers. The hosting companies benefit as well: they just provide you some rack space in an already existing building and collect rent for the service. They need the building for their own servers anyway, so this is additional income for their business. Another big advantage of colocation hosting is that you get your own bandwidth, RAM and many other important hardware resources entirely to yourself, with no sharing. Ultimately all your servers will work at their best capacity and speed.

As colocation involves dedicated servers that are regularly monitored and placed in the highest-quality facilities, the hosting client can be sure that the services they are purchasing are 100% trustworthy and reliable. The basic concept of colocation is to give the client an independent location for the placement of servers: your data and equipment are hosted by a specific data center, rather than wherever you happen to have room, and can run smoothly without restrictions. If you opt for colocation of your server, your physical business is in safe hands, where it can run with full efficiency and effectiveness.

Another big advantage of having a colocation server is that you can upgrade or change your system whenever your business requires it. You don't have to pay the hosting company for additional or changed packages, investing more money, to implement the required changes.

Comparing colocation hosting with the alternatives shows its long-term and short-term effectiveness. I don't recommend it for small business or personal websites, where there is not much data or traffic to handle. Such small web sites can be handled through regular hosting packages at cheap rates; there are many hosting companies that provide good hosting services at affordable rates. If you don't require larger space, there is no need to invest in purchasing heavy equipment.

However, if you are running a high-traffic website, or your web site has loads of data to handle, then options like dedicated hosting and colocation hosting are good for your business. If your site's traffic is very heavy, shared or regular kinds of hosting can create problems for you, as they can't handle the load of too much traffic. If you run a large business and have funds to invest in larger equipment, colocation hosting is your best choice, as you have full command and control of your system along with proper management of the equipment by the data center's people. Try to gain expertise in hosting servers before going for this option, as you have to manage your server by yourself.


The Web Robots Pages



About /robots.txt

In a nutshell

Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called The Robots Exclusion Protocol.

It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds:
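In this example the file contains the classic two-line rule that excludes all robots from the entire site:

```
User-agent: *
Disallow: /
```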

The “User-agent: *” means this section applies to all robots. The “Disallow: /” tells the robot that it should not visit any pages on the site.

There are two important considerations when using /robots.txt:

  • robots can ignore your /robots.txt. Especially malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers will pay no attention.
  • the /robots.txt file is a publicly available file. Anyone can see what sections of your server you don’t want robots to use.

So don’t try to use /robots.txt to hide information.

The details

The /robots.txt is a de-facto standard, and is not owned by any standards body. There are two historical descriptions:

In addition there are external resources:

The /robots.txt standard is not actively developed. See What about further development of /robots.txt? for more discussion.

The rest of this page gives an overview of how to use /robots.txt on your server, with some simple recipes. To learn more see also the FAQ.

How to create a /robots.txt file

Where to put it

The short answer: in the top-level directory of your web server.

The longer answer:

When a robot looks for the “/robots.txt” file for a URL, it strips the path component from the URL (everything from the first single slash), and puts “/robots.txt” in its place.

For example, for “http://www.example.com/shop/index.html”, it will remove the “/shop/index.html”, replace it with “/robots.txt”, and end up with “http://www.example.com/robots.txt”.
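The mapping can be sketched in Python using only the standard library (the function name is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_txt_url(page_url):
    """Derive the /robots.txt URL a robot would fetch for a given page URL."""
    parts = urlsplit(page_url)
    # Strip the path component (everything from the first single slash)
    # and substitute "/robots.txt" in its place.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_txt_url("http://www.example.com/shop/index.html"))
# http://www.example.com/robots.txt
```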

So, as a web site owner you need to put it in the right place on your web server for that resulting URL to work. Usually that is the same place where you put your web site’s main “index.html ” welcome page. Where exactly that is, and how to put the file there, depends on your web server software.

Remember to use all lower case for the filename: “robots.txt”, not “Robots.TXT”.

What to put in it

The “/robots.txt” file is a text file, with one or more records. It usually contains a single record looking like this:
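A typical single-record file looks like this (the third excluded path is illustrative; the first two are the ones referenced in the notes below):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~joe/
```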

In this example, three directories are excluded.

Note that you need a separate “Disallow” line for every URL prefix you want to exclude — you cannot say “Disallow: /cgi-bin/ /tmp/” on a single line. Also, you may not have blank lines in a record, as they are used to delimit multiple records.

Note also that globbing and regular expressions are not supported in either the User-agent or Disallow lines. The ‘*’ in the User-agent field is a special value meaning “any robot”. Specifically, you cannot have lines like “User-agent: *bot*”, “Disallow: /tmp/*” or “Disallow: *.gif”.

What you want to exclude depends on your server. Everything not explicitly disallowed is considered fair game to retrieve. Here follow some examples:

To exclude all robots from the entire server
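To exclude every robot from everything, disallow the root for all user agents:

```
User-agent: *
Disallow: /
```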
To allow all robots complete access
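To allow complete access, give an empty Disallow value:

```
User-agent: *
Disallow:
```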

(or just create an empty “/robots.txt” file, or don’t use one at all)

To exclude all robots from part of the server
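For example, to keep all robots out of a few directories (paths are illustrative):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
```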
To exclude a single robot
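For example, to exclude one robot by name (“BadBot” is an illustrative robot name):

```
User-agent: BadBot
Disallow: /
```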
To allow a single robot
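For example, two records: one allowing the named robot, one excluding everyone else (“Google” is an illustrative robot name; note the blank line separating the records):

```
User-agent: Google
Disallow:

User-agent: *
Disallow: /
```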
To exclude all files except one

This is currently a bit awkward, as there is no “Allow” field. The easy way is to put all files to be disallowed into a separate directory, say “stuff”, and leave the one file in the level above this directory. Alternatively, you can explicitly disallow all disallowed pages:
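Both variants can be sketched as follows (directory and file names are illustrative):

```
# Variant 1: keep the one public file above a disallowed directory
User-agent: *
Disallow: /docs/stuff/

# Variant 2: explicitly disallow every page except the one
User-agent: *
Disallow: /docs/junk.html
Disallow: /docs/foo.html
Disallow: /docs/bar.html
```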


T-SQL Programming Part 1 – Defining Variables, and IF...ELSE logic

T-SQL Programming Part 1 – Defining Variables, and IF...ELSE logic

Whether you are building a stored procedure or writing a small Query Analyzer script you will need to know the basics of T-SQL programming. This is the first of a series of articles; it discusses defining variables and using IF...ELSE logic.

This is the first of a series of articles discussing various aspects of T-SQL programming. Whether you are building a stored procedure or writing a small Query Analyzer script you will need to know the basics of T-SQL programming. This first article will discuss defining variables and using IF...ELSE logic.

Local Variables

As with any programming language, T-SQL allows you to define and set variables. A variable holds a single piece of information, similar to a number or a character string. Variables can be used for a number of things. Here is a list of a few common variable uses:

  • To pass parameters to stored procedures or functions
  • To control the processing of a loop
  • To test for a true or false condition in an IF statement
  • To programmatically control conditions in a WHERE statement

More than one variable can be defined with a single DECLARE statement. To define multiple variables, with a single DECLARE statement, you separate each variable definition with a comma, like so:
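For example, a single DECLARE defining three variables (names and types are illustrative):

```sql
DECLARE @i int,
        @name varchar(50),
        @today datetime
```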

Here is an example of how to use the SELECT statement to set the value of a local variable.
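A sketch of assigning a value with SELECT (using the pubs sample database referenced later in this article):

```sql
DECLARE @rowcount int
SELECT @rowcount = COUNT(*) FROM pubs.dbo.authors
PRINT @rowcount
```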

One of the uses of a variable is to programmatically control the records returned from a SELECT statement. You do this by using a variable in the WHERE clause. Here is an example that returns all the Customers records in the Northwind database where the Customers Country column is equal to ‘Germany’
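A sketch of that query, with the country held in a variable:

```sql
DECLARE @country nvarchar(15)
SET @country = 'Germany'

SELECT * FROM Northwind.dbo.Customers
WHERE Country = @country
```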

IF...ELSE

T-SQL has the “IF” statement to help with allowing different code to be executed based on the results of a condition. The “IF” statement allows a T-SQL programmer to selectively execute a single line or block of code based upon a Boolean condition. There are two formats for the “IF” statement, both are shown below:

Format one: IF condition then code to be executed when condition true

Format two: IF condition then code to be executed when condition true ELSE else code to be executed when condition is false

In both of these formats, the condition is a Boolean expression or series of Boolean expressions that evaluate to true or false. If the condition evaluates to true, then the “then code” is executed. For format two, if the condition is false, then the “else code” is executed. If there is a false condition when using format one, then the next line following the IF statement is executed, since no else condition exists. The code to be executed can be a single T-SQL statement or a block of code. If a block of code is used then it will need to be enclosed in BEGIN and END statements.

Let’s review how “Format one” works. This first example will show how the IF statement would look to execute a single statement, if the condition is true. Here I will test whether a variable is set to a specific value. If the variable is set to a specific value, then I print out the appropriate message.
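A sketch consistent with the output described next:

```sql
DECLARE @x int
SET @x = 29

IF @x = 29 PRINT 'The number is 29'
IF @x = 30 PRINT 'The number is 30'
```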

The above code prints out only the phrase “The number is 29”, because the first IF statement evaluates to true. Since the second IF is false the second print statement is not executed.

Now the condition can also contain a SELECT statement. The SELECT statement will need to return a value or set of values that can be tested. If a SELECT statement is used, it needs to be enclosed in parentheses.
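A sketch of such a test (au_lname is the last-name column of the pubs authors table):

```sql
IF (SELECT COUNT(*) FROM pubs.dbo.authors
    WHERE au_lname LIKE '[A-D]%') > 0
   PRINT 'Found A-D Authors'
```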

Here I printed the message “Found A-D Authors” if the SELECT statement found any authors in the pubs.dbo.authors table that had a last name that started with an A, B, C, or D.

So far my two examples only showed how to execute a single T-SQL statement if the condition is true. T-SQL allows you to execute a block of code as well. A code block is created by using a “BEGIN” statement before the first line of code in the block, and an “END” statement after the last line. Here is an example that executes a code block when the IF statement condition evaluates to true.
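A sketch of a code block guarded by an IF (the messages are illustrative):

```sql
IF DB_NAME() = 'master'
BEGIN
   PRINT 'This IF statement is running'
   PRINT 'in the context of'
   PRINT 'the master database'
END
```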

Above, a series of “PRINT” statements will be executed if this IF statement is run in the context of the master database. If the context is some other database, the print statements are not executed.

Sometimes you want to not only execute some code when you have a true condition, but also want to execute a different set of T-SQL statements when you have a false condition. If this is your requirement then you will need to use the IF...ELSE construct that I called format two above. With this format, if the condition is true then the statement or block of code following the IF clause is executed, but if the condition evaluates to false then the statement or block of code following the ELSE clause will be executed. Let’s go through a couple of examples.

For the first example let’s say you need to determine whether to update or add a record to the Customers table in the Northwind database. The decision is based on whether the customer exists in the Northwind.dbo.Customers table. Here is the T-SQL code to perform this existence test for two different CustomerIDs.
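A sketch of the existence tests (assuming the standard CustomerID column):

```sql
IF EXISTS (SELECT * FROM Northwind.dbo.Customers
           WHERE CustomerID = 'ALFKI')
   PRINT 'Need to update Customer Record'
ELSE
   PRINT 'Need to add Customer Record'

IF EXISTS (SELECT * FROM Northwind.dbo.Customers
           WHERE CustomerID = 'LARSE')
   PRINT 'Need to update Customer Record'
ELSE
   PRINT 'Need to add Customer Record'
```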

The first IF...ELSE logic checks to see if CustomerID ‘ALFKI’ exists. If it exists, it prints the message “Need to update Customer Record”; if it doesn’t exist, the message “Need to add Customer Record” is displayed. This logic is repeated for CustomerID = ‘LARSE’. When I run this code against my Northwind database I get the following output.

As you can see from the results, CustomerID ‘ALFKI’ existed, because the first print statement following the first IF statement was executed. Whereas in the second IF statement CustomerID ‘LARSE’ was not found, because the ELSE portion of the IF...ELSE statement was executed.

If you have complicated logic that needs to be performed prior to determining what T-SQL statements to execute you can either use multiple conditions on a single IF statement, or nest your IF statements. Here is a script that determines if the scope of the query is in the ‘Northwind’ database and if the “Customers” table exists. I have written this query two different ways, one with multiple conditions on a single IF statement, and the other by having nested IF statements.
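A sketch of both versions (the messages follow the discussion below):

```sql
-- Version 1: multiple conditions in a single IF
IF DB_NAME() = 'Northwind'
   AND EXISTS (SELECT * FROM sysobjects WHERE name = 'Customers')
   PRINT 'Table Customers Exists'
ELSE
   PRINT 'Not in Northwind database or Table Customer does not exist'

-- Version 2: nested IF statements pinpoint the failing condition
IF DB_NAME() = 'Northwind'
   IF EXISTS (SELECT * FROM sysobjects WHERE name = 'Customers')
      PRINT 'Table Customers Exists'
   ELSE
      PRINT 'Table Customer does not exist'
ELSE
   PRINT 'Not in Northwind database'
```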

As you can see, I tested whether the query was being run from the Northwind database and whether the “Customers” table could be found in sysobjects. If this was true, I printed the message “Table Customers Exists”. In the first example I had multiple conditions in a single IF statement. Since I was not able to determine which part of the condition in the IF statement was false, the ELSE portion printed the message “Not in Northwind database or Table Customer does not exist”. In the second example, where I had a nested IF statement, I was able to determine whether I was in the wrong database or the object “Customers” did not exist. This allowed me to have two separate print statements to reflect exactly which condition was false.

I hope that this article has helped you understand how to declare and use local variables, as well as IF...ELSE logic. Local variables are useful to hold the pieces of information related to your programming process, whereas the IF statement helps control the flow of your program so different sections of code can be executed depending on a particular set of conditions. As you can see, nesting IF statements and/or having multiple conditions on an IF statement allows you to further refine your logic flow to meet your programming requirements. My next article in this T-SQL programming series will discuss how to build a programming loop.

See All Articles by Columnist Gregory A. Larsen


Data protector red tapes – Server Fault

I am using HP Data Protector A.06.11 in my organization, with an HP EML E-Series library with 4 drives using LTO-4 tapes, and I am having some problems.

Yesterday I put 5 new tapes in the robot and formatted them. At that point, the robot held just those 5 empty tapes with free space (all the rest of the tapes are red, or protected).

This morning, after the nightly backup run, 2 of the new tapes are red (their properties are):

I formatted one of them and checked each drive to see whether it turns the tape red; none of the drives did.

In the main pool properties, the media condition shows:

asked Aug 8 ’12 at 10:41

migrated from stackoverflow.com Aug 9 ’12 at 7:53

This question came from our site for professional and enthusiast programmers.

Please: what, exactly, is your question? If it is just “why is this happening”, could you give any reasons why it might be anything other than “I bought a bad batch of tapes”? – MadHatter Aug 9 ’12 at 9:20

As someone who has to work with what HP thinks passes as backup software (for another couple of weeks at least), I can attest that you don’t need to worry about what Data Protector says about your media state. It doesn’t know what the hell it’s talking about. That particular tape got marked bad as a result of the errors registered on it, independent of whether or not those errors are due to bad media, corrupt source data, a drive that needs cleaning, or even a network hiccup or anything else that could possibly create a write or read error to or from the tape.

So, really, the media state flag is useless, and will only prevent a poor (red colored) tape from being used for backups. To reset the flag on any tape to good (green), including those that are fair (yellow) or unknown (grey), open a command line, navigate to \bin\ under the Data Protector install directory, and then issue omnimm -reset_poor_medium [medium_id], as in the screenshot below, which shows me setting a fair tape named AK0722L3 back to good.

answered Aug 29 ’12 at 0:25


Scotia Recovery Services

Who or what is Scotia Recovery Services?

We are a private sector business that specializes in:

  • Civil Enforcement
  • Repossessions
  • Process Serving
  • Asset Liquidation

Civil Document Services

Scotia Recovery Services offers expedient document service for all its clients. Serving everything from divorce documents, writs, and eviction notices to small claims documents: we are here to serve you!

In the field of collateral recovery, Scotia Recovery Services can and will recover your collateral. Big or small, we cover it all. From the physical recovery to storage and/or sale, we offer our clients the opportunity to have us involved from start to finish. We will recover, hold and liquidate at your request.

Sometimes referred to as a Bailiff or as the Repoman, we enforce civil contractual rights of a secured creditor such as a bank or leasing agency.

Scotia Recovery Services is the most professional bailiff and recovery service in the Maritimes, upholding the highest of standards, we offer our clients complete coverage within Nova Scotia, New Brunswick and Prince Edward Island.

Let our professional experience reward you with our prompt, courteous and effective service.

The definition of repossession is an act whereby one regains possession of .

By virtue of proper documentation and in accordance with the Personal Property Security Act , Scotia Recovery And Investigative Services will repossess leased or financed equipment and vehicles and if need be liquidate such assets to the general public.

Civil Document Service, better known as Process Serving, is whereby a licensed agent, Process Server or bailiff, formally effects service of civil documents such as: Small Claims Court Documents, Divorce Petitions, Eviction Notices, Writs, Originating Notices, etc.


SCIM: System for Cross-domain Identity Management

Overview

The System for Cross-domain Identity Management (SCIM) specification is designed to make managing user identities in cloud-based applications and services easier. The specification suite seeks to build upon experience with existing schemas and deployments, placing specific emphasis on simplicity of development and integration, while applying existing authentication, authorization, and privacy models. Its intent is to reduce the cost and complexity of user management operations by providing a common user schema and extension model, as well as binding documents to provide patterns for exchanging this schema using standard protocols. In essence: make it fast, cheap, and easy to move users in to, out of, and around the cloud.

Information on this overview page is not normative.

Model

SCIM 2.0 is built on an object model where a Resource is the common denominator and all SCIM objects are derived from it. It has id, externalId and meta as attributes, and RFC 7643 defines User, Group and EnterpriseUser resources that extend the common attributes.

Example User

This is an example of how user data can be encoded as a SCIM object in JSON.
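A minimal sketch in the spirit of the RFC 7643 examples (attribute values are illustrative):

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "id": "2819c223-7f76-453a-919d-413861904646",
  "externalId": "701984",
  "userName": "bjensen@example.com",
  "name": {
    "givenName": "Barbara",
    "familyName": "Jensen"
  },
  "emails": [
    { "value": "bjensen@example.com", "type": "work", "primary": true }
  ],
  "phoneNumbers": [
    { "value": "555-555-5555", "type": "work" }
  ],
  "addresses": [
    {
      "streetAddress": "100 Universal City Plaza",
      "locality": "Hollywood",
      "region": "CA",
      "country": "USA",
      "type": "work"
    }
  ],
  "meta": {
    "resourceType": "User",
    "created": "2010-01-23T04:56:22Z",
    "lastModified": "2011-05-13T04:42:34Z",
    "location": "https://example.com/v2/Users/2819c223-7f76-453a-919d-413861904646"
  }
}
```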

While this example does not contain the full set of attributes available, notice the different types of data that can be used to create SCIM objects: simple types, such as strings, for id, userName, etc.; complex types, i.e. attributes that have sub-attributes, for name and address; and multi-valued types for emails, phoneNumbers, addresses, etc.

Example Group

In addition to users, SCIM includes the definition of groups. Groups are used to model the organizational structure of provisioned resources. Groups can contain users or other groups.

Operations

For manipulation of resources, SCIM provides a REST API with a rich but simple set of operations, which support everything from patching a specific attribute on a specific user to doing massive bulk updates:
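The operations map onto HTTP methods roughly as follows (a sketch following RFC 7644, not an exhaustive list):

```
POST    /Users            create a resource
GET     /Users/{id}       retrieve a resource
PUT     /Users/{id}       replace a resource
PATCH   /Users/{id}       modify specific attributes
DELETE  /Users/{id}       delete a resource
GET     /Users?filter=... query with filtering, sorting and paging
POST    /Bulk             bulk updates across resources
```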

Discovery

To simplify interoperability, SCIM provides three end points to discover supported features and specific attribute details:

  • /ServiceProviderConfig – Specification compliance, authentication schemes, data models.
  • /ResourceTypes – An endpoint used to discover the types of resources available.
  • /Schemas – Introspect resources and attribute extensions.

Create Request

To create a resource, send an HTTP POST request to the resource’s respective end point. In the example below we see the creation of a User.
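A sketch of such a request (host illustrative; body trimmed to the required userName attribute):

```
POST /v2/Users HTTP/1.1
Host: example.com
Accept: application/scim+json
Content-Type: application/scim+json

{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "bjensen"
}
```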

As can be seen in this and later examples the URL contains a version number so that different versions of the SCIM API can co-exist. Available versions can be dynamically discovered via the ServiceProviderConfig end point.

Create Response

A response contains the created Resource and HTTP code 201 to indicate that the Resource has been created successfully. Note that the returned user contains more data than was posted; id and meta data have been added by the service provider to make a complete User resource. A Location header indicates where the resource can be fetched in subsequent requests.

Get Request

Fetching resources is done by sending HTTP GET requests to the desired Resource end point, as in this example.

Get Response

The response to a GET contains the Resource. The Etag header can, in subsequent requests, be used to prevent concurrent modifications of Resources.

Filter Request

In addition to getting single Resources it is possible to fetch sets of Resources by querying the Resource end point without the id of a specific Resource. Typically, a fetch request will include a filter to be applied to the Resources. SCIM has support for the filter operations equals, contains, starts with, and more.

In addition to filtering the response it is also possible to ask the service provider to sort the Resources in the response, return specific attributes of the resources, and return only a subset of the resources.


Apache Tomcat

Apache Tomcat

The Apache Tomcat software is an open source implementation of the Java Servlet, JavaServer Pages, Java Expression Language and Java WebSocket technologies. The Java Servlet, JavaServer Pages, Java Expression Language and Java WebSocket specifications are developed under the Java Community Process.

The Apache Tomcat software is developed in an open and participatory environment and released under the Apache License version 2. The Apache Tomcat project is intended to be a collaboration of the best-of-breed developers from around the world. We invite you to participate in this open development project. To learn more about getting involved, click here .

Apache Tomcat software powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations. Some of these users and their stories are listed on the PoweredBy wiki page.

Apache Tomcat, Tomcat, Apache, the Apache feather, and the Apache Tomcat project logo are trademarks of the Apache Software Foundation.

2017-07-01 Tomcat 8.0.45 Released

The Apache Tomcat Project is proud to announce the release of version 8.0.45 of Apache Tomcat. Apache Tomcat 8.0.45 includes fixes for issues identified in 8.0.44 as well as other enhancements and changes. The notable changes compared to 8.0.44 include:

  • Add a new JULI FileHandler configuration for specifying the maximum number of days to keep the log files. By default the log files will be kept indefinitely.
  • Improvements to enable the Manager and HostManager to work in the default configuration when working under a security manager.

Full details of these changes, and all the other changes, are available in the Tomcat 8 changelog.

Note: End of life date for Apache Tomcat 8.0.x is announced. Read more.

2017-07-01 Tomcat 7.0.79 Released

The Apache Tomcat Project is proud to announce the release of version 7.0.79 of Apache Tomcat. This release contains a number of bug fixes and improvements compared to version 7.0.78. The notable changes compared to 7.0.78 include:

  • Add a new JULI FileHandler configuration for specifying the maximum number of days to keep the log files. By default the log files will be kept indefinitely.
  • Improvements to enable the Manager and HostManager to work in the default configuration when working under a security manager.

Full details of these changes, and all the other changes, are available in the Tomcat 7 changelog.

2017-06-26 Tomcat 8.5.16 Released

The Apache Tomcat Project is proud to announce the release of version 8.5.16 of Apache Tomcat. Apache Tomcat 8.5.x is intended to replace 8.0.x and includes new features pulled forward from Tomcat 9.0.x. The minimum Java version and implemented specification versions remain unchanged. The notable changes compared to 8.5.15 include:

  • Add a new JULI FileHandler configuration for specifying the maximum number of days to keep the log files. By default the log files will be kept indefinitely.
  • Improvements to enable the Manager and HostManager to work in the default configuration when working under a security manager.
  • Introduce new API o.a.tomcat.websocket.WsSession#suspend / o.a.tomcat.websocket.WsSession#resume that can be used to suspend / resume reading of the incoming messages.

Full details of these changes, and all the other changes, are available in the Tomcat 8.5 changelog.

2017-06-26 Tomcat 9.0.0.M22 (alpha) Released

The Apache Tomcat Project is proud to announce the release of version 9.0.0.M22 (alpha) of Apache Tomcat. This is a milestone release of the 9.0.x branch and has been made to provide users with early access to the new features in Apache Tomcat 9.0.x so that they may provide feedback. The notable changes compared to 9.0.0.M21 include:

  • Add a new JULI FileHandler configuration for specifying the maximum number of days to keep the log files. By default the log files will be kept for 90 days.
  • Update the Servlet 4.0 implementation to add support for setting trailer fields for HTTP responses.
  • When pre-compiling with JspC, report all compilation errors rather than stopping after the first error.

Full details of these changes, and all the other changes, are available in the Tomcat 9 changelog.

2017-02-21 Tomcat Native 1.2.12 Released

The Apache Tomcat Project is proud to announce the release of version 1.2.12 of Tomcat Native. The notable changes since 1.2.10 include:

  • Windows binaries built with APR 1.5.2 and OpenSSL 1.0.2k.
  • Better documentation for building on Windows including with FIPS enabled OpenSSL.

Note that users should now be using 1.2.x in preference to 1.1.x.

2016-10-05 Tomcat Connectors 1.2.42 Released

The Apache Tomcat Project is proud to announce the release of version 1.2.42 of Apache Tomcat Connectors. This version fixes a number of bugs found in previous releases.

2015-12-15 Tomcat Native 1.1.34 Released

The Apache Tomcat Project is proud to announce the release of version 1.1.34 of Tomcat Native. The notable changes since 1.1.33 include:

  • Unconditionally disable export Ciphers
  • Improve ephemeral key handling for DH and ECDH
  • Various fixes to build with newer OpenSSL versions

Note that, unless a regression is discovered in 1.2.x, users should now be using 1.2.x in preference to 1.1.x.

2015-03-17 Apache Standard Taglib 1.2.5 Released

The Apache Tomcat Project is proud to announce the release of version 1.2.5 of the Standard Taglib. This tag library provides Apache’s implementation of the JSTL 1.2 specification.

Version 1.2.5 is a minor bug fix release reverting a change made in 1.2.1, where c:import modified the HTTP method during POST operations, and fixing an issue that resulted in an AccessControlException during startup unless permission was granted to read the accessExternalEntity property.

Please see the Taglibs section for more details.

2013-11-11 Tomcat Maven Plugin 2.2 Released

The Apache Tomcat team is pleased to announce the release of Tomcat Maven Plugin 2.2. Changelog available here.

The Apache Tomcat Maven Plugin provides goals to manipulate WAR projects within the Apache Tomcat servlet container.

The binaries are available from Maven repositories. You should specify the version in your project’s plugin configuration:
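For example (these are the coordinates of the Tomcat 7 flavor of the plugin; pick the artifact matching your Tomcat version):

```xml
<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
</plugin>
```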


How to Enable Ping on Windows Server 2012 R2 Firewall

How to Enable Ping on Windows Server 2012 R2 Firewall

If you wish your Windows 2012 R2 server to respond to ping commands without disabling the complete firewall service, here is a simple guide on how to enable ping through the Windows Server 2012 R2 firewall.

It is always a good idea to enable ping responses on Windows 2012 R2 servers, so it is easy to monitor and manage the network and IPs. We have already published a guide to ping all IP addresses on the same network to manage IPs. You can also find more articles about enabling ping in Windows 8, 8.1 and Windows 7.

Here is the running Windows 2012 R2 server used for following demonstration.

Simple Way to Enable Ping on Windows Server 2012 R2

The below method is applicable for Windows 2012 server also.

1) Go to control panel from Windows charm bar or search for ‘control’. Open ‘Windows Firewall’.

2) In the Windows Firewall window, click ‘Advanced settings’ to open Windows Firewall with Advanced Security.

3) We need to create a firewall rule to allow the ICMP echo packets used by the ping command. Luckily the rule already exists in Windows 2012 server; it just needs to be enabled.

To enable the inbound rule allowing ICMP packets, select ‘Inbound Rules’. Find and right-click ‘File and Printer Sharing (Echo Request - ICMPv4-In)’, then select Enable Rule.

That will allow incoming ping requests in Windows 2012 R2 server and respond to them without completely disabling firewall service.
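For reference, the same rule can be enabled from an elevated command prompt (the rule name must match exactly as it appears in the firewall console):

```
netsh advfirewall firewall set rule name="File and Printer Sharing (Echo Request - ICMPv4-In)" new enable=yes
```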

The screenshot below shows how the Windows 2012 R2 server started responding to ping requests once the above rule was enabled.

This would be a simple and easy guide to follow.


Installing Ubuntu inside Windows using VirtualBox

Installing Ubuntu inside Windows using VirtualBox

The screenshots in this tutorial use Ubuntu 12.04, but the same principles apply also to Ubuntu 12.10, 11.10, 10.04, and any future version of Ubuntu. Actually, you can install pretty much any Linux distribution this way.

Introduction
VirtualBox allows you to run an entire operating system inside another operating system. Please be aware that you should have a minimum of 512 MB of RAM. 1 GB of RAM or more is recommended.

Comparison to Dual-Boot
Many websites (including the one you’re reading) have tutorials on setting up dual-boots between Windows and Ubuntu. A dual-boot allows you, at boot time, to decide which operating system you want to use. Installing Ubuntu on a virtual machine inside of Windows has a lot of advantages over a dual-boot (but also a few disadvantages).

Advantages of virtual installation

  • The size of the installation doesn’t have to be predetermined. It can be a dynamically resized virtual hard drive.
  • You do not need to reboot in order to switch between Ubuntu and Windows.
  • The virtual machine will use your Windows internet connection, so you don’t have to worry about Ubuntu not detecting your wireless card, if you have one.
  • The virtual machine will set up its own video configuration, so you don’t have to worry about installing proprietary graphics drivers to get a reasonable screen resolution.
  • You always have Windows to fall back on in case there are any problems. All you have to do is press the right Control key instead of rebooting your entire computer.
  • For troubleshooting purposes, you can easily take screenshots of any part of Ubuntu (including the boot menu or the login screen).
  • It’s low commitment. If you later decide you don’t like Ubuntu, all you have to do is delete the virtual hard drive and uninstall VirtualBox.

Disadvantages of virtual installation

  • In order to get any kind of decent performance, you need at least 512 MB of RAM, because you are running an entire operating system (Ubuntu) inside another entire operating system (Windows). The more memory, the better. I would recommend at least 1 GB of RAM.
  • Even though the low commitment factor can seem like an advantage at first, if you later decide you want to switch to Ubuntu and ditch Windows completely, you cannot simply delete your Windows partition. You would have to find some way to migrate out your settings from the virtual machine and then install Ubuntu over Windows outside the virtual machine.
  • Every time you want to use Ubuntu, you have to wait for two boot times (the time it takes to boot Windows, and then the time it takes to boot Ubuntu within Windows).

Installation Process
The first thing you have to do is obtain VirtualBox. Visit the VirtualBox website’s download page. Install it the same way you would any normal Windows program.

Follow these instructions to get a Ubuntu disk image (.iso file).


After you launch VirtualBox from the Windows Start menu, click on New to create a new virtual machine. When the New Virtual Machine Wizard appears, click Next.


You can call the machine whatever you want. If you’re installing Ubuntu, it makes sense to call it Ubuntu. I guess. You should also specify that the operating system is Linux.


VirtualBox will try to guess how much of your memory (or RAM) to allocate for the virtual machine. If you have 1 GB or less of RAM, I would advise you stick with the recommendation. If, however, you have over 1 GB, about a quarter of your RAM or less should be fine. For example, if you have 2 GB of RAM, 512 MB is fine to allocate. If you have 4 GB of RAM, 1 GB is fine to allocate. If you have no idea what RAM is or how much of it you have, just go with the default.


If this is your first time using VirtualBox (which it probably is if you need a tutorial on how to use it), then you do want to Create new hard disk and then click Next.


Theoretically, a dynamically expanding virtual hard drive is best, because it’ll take up only what you actually use. I have come upon weird situations, though, when installing new software in a virtualized Ubuntu, in which the virtual hard drive just fills up instead of expanding. So I would actually recommend picking a Fixed-size storage.


Ubuntu’s default installation is less than 3 GB. If you plan on adding software or downloading large files in your virtualized Ubuntu, you should tack on some buffer.


Click Create and wait for the virtual hard drive to be created. This is actually just a very large file that lives inside of your Windows installation.
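For reference, an equivalent machine can be created with VirtualBox’s VBoxManage command-line tool (machine name and sizes are illustrative):

```
VBoxManage createvm --name "Ubuntu" --ostype Ubuntu --register
VBoxManage modifyvm "Ubuntu" --memory 1024
VBoxManage createhd --filename Ubuntu.vdi --size 10240
```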


The next thing to do to make the (currently blank) virtual hard drive useful is to add the downloaded Ubuntu disk image (the .iso) as a boot disc for your virtual machine. Click on Settings, then Storage. Under CD/DVD Device, next to Empty, you’ll see a little folder icon. Click that.

Then double-click your virtual machine to start it up.


You may get a bunch of random warnings/instructions about how to operate the guest operating system within VirtualBox. Read those, and then you may also want to mark not to see those again.


Once it’s started up, just follow the regular installation procedure as if you were installing Ubuntu on a real hard drive (instead of a virtual one).


Afterwards, in order to use your virtualized installation (instead of continually booting the live CD), double-check that the CD/DVD Device entry is Empty again.

If you have suggestions or corrections for these tutorials, please post in this Ubuntu Forums thread or leave a comment on my blog.

I will not give help to people posting in the above places. If you require technical support, start a support thread on the Ubuntu Forums. That is the appropriate place to ask for help.


Support – HandyCafe Internet Cafe Software



Internet Cafe Software, Cyber Cafe Software, Game Center Software. HandyCafe Internet Cafe Software by Ates Software Ltd. HandyCafe 2.1.36 SE is the latest free version.

HandyCafe 2.1.36 includes tons of new internet cafe software features. Thanks to the Firewall/Filter feature, you can easily filter any websites, content, or connections in your cybercafe. You can grant access to your cashiers. Bandwidth warnings will show a message if a customer exceeds his or her bandwidth. You will get full control of your cybercafe with the remote control option of HandyCafe Internet Cafe Software.

Easily track all incomes and expenses. Use multiple pricing schemes. Control your console systems such as PlayStation (PS), PlayStation 2 (PS2), PlayStation 3 (PS3), Xbox 360, pool tables, and so on. Connect yourself directly to the Ates Network. Do everything easily in your cybercafe with the best cyber cafe software: HandyCafe Internet Cafe Software.

Click here to check HandyCafe internet cafe software pricing for your cyber cafe software (free internet cafe software).

HandyCafe Internet Cafe Software, Cyber Cafe Software, is the BEST and the most popular cyber cafe software in the world. Why would you pay for your internet cafe software or cyber cafe software? Download HandyCafe Internet Cafe Software free!

More than 25,000 internet cafes are using HandyCafe Internet Cafe Software.

If you have earlier versions of HandyCafe Internet Cafe Software, please update. All version updates are free.

We offer our customers free online support. If you are having a problem, we recommend checking the Error.log (Server/Client) and Dataerror.log (Server) files. These files will help explain your problem. If you have them, please email us the files. We will investigate and reply as soon as possible.

Forgot your Product Key or Serial Number?

Supported Operating Systems

HandyCafe Software is compatible with Windows 98/Me/2000/XP and not compatible with Windows 95/NT. HandyCafe Software was developed using the latest Windows API (Application Programming Interface) to give you the best performance.

Working With Other Programs

HandyCafe can work with any third-party applications. If you have any conflicts, please contact us with detailed information. HandyCafe Software listens on both UDP and TCP ports so the Client and Server can communicate with each other. You can change the port numbers in the Settings panel. If you are using a firewall, you must grant access to both the Client and the Server. Please consult your firewall documentation for more information.


SQL Server Reporting Services 2016 – Part 1 – The Database Avenger



This first post of 3 takes a quick peek at SSRS 2016 using the Community Technology Preview (CTP) 3.2. I will be making a quick post-installation tweak and then guiding you through the steps to build your first report. If you are experienced with SSRS you can probably just scan this post to see the differences in 2016. See the past post Installing SQL Server 2016 for details on the install I did prior to working on this post.

Post 2 will cover some of the new features and changes to the old style SSRS reports (referred to as paginated reports).

Post 3 will cover the new Mobile Reports and KPI features.

SSRS is essentially a website that you can upload reports to, giving people in your organisation a central place to go to get their data.

To start configuring SSRS open Reporting Services Configuration Manager from the start menu. This utility lets you configure the web server that will serve up the SSRS portal.

When everything is installed on the same box, as I have done for this test server, the default settings should be fine. One thing I am changing is my TCP port. This test server already has a website on port 80, so I'll use 8080 for SSRS.

If you change the port number on the Web Service URL screen you will also need to configure that same port number on the Report Manager URL screen. The screenshot below shows that it still has the default port number.

To set the new port number click Advanced and then edit.

After confirming the port change you will hopefully see the output below with a column of nice green ticks.

Once set click the URL. This will open the old style portal on your newly installed SSRS.

Click Preview the new Reporting Services to see what all the fuss is about.

Straight away I noticed that the new portal doesn't have the long spin-up time on first load that the old portal had. Clicking the down arrow icon and selecting Report Builder takes you to this link to download the Report Builder installer. Report Builder is the application we will use to develop our test reports. Report developers will most likely use SQL Server Data Tools (SSDT) to manage multiple reports.


Download and click through the installer making sure to set the default URL as instructed. My install is in native mode so I needed /reportserver on the end of the URL.

Once installed launch Report Builder to start building your first report.

To anyone who has used SSRS and Report Builder in the past, the screenshot above will show that not much has changed here other than the style of the UI. Click Blank Report in the New Report section to open the Report Designer.

Before we can build a report we need some data. Right click Data Sources and select Add Data Source. Give the data source the name Master and select 'Use a connection embedded in my report'. Enter your SQL Server's connection details, or localhost as the server name if it is installed on the same machine as SSRS. Select master as the database and confirm.

Now that we have a connection to our SQL Server, we can build a data set to display in the report. Right click DataSets and select Add Data Set. Name the data set Databases and select 'Use a data set embedded in my report'. Select our data source named Master, then copy and paste the query below into the Query text box to populate the data set.
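A query along these lines does the job; the exact column choice is an assumption on my part (any informative columns from sys.databases will work), picked so the report ends up with four fields:

```sql
-- Basic information about each database on the server
SELECT name                AS DatabaseName,
       state_desc          AS Status,
       recovery_model_desc AS RecoveryModel,
       create_date         AS Created
FROM   sys.databases
ORDER  BY name;
```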

This query will return some basic information about each of the databases on the server.

Confirming the data set settings will return you to the designer. Let's tidy things up a bit before publishing this report. Click the title text box and enter Database Statuses. Click Insert, Table, and then Table Wizard. Select the Databases data set. On the next screen select all four fields and drag them into the Values box on the bottom right of the screen. Click through to the finish and confirm. Finally, stretch the columns in the new grid so that the headers fit.

Click Design to go back to the designer and save the report. Select a location on the SSRS Server to store the report. This will publish the report to the SSRS portal. Refresh your browser that you used earlier to view your new report on the portal.

This was a real high level first look at SSRS 2016. I will dive a little deeper in the next two posts.

Share this:


Recovering a database with a missing transaction log



Recovering a database with a missing transaction log

Question: We had a SAN problem over the weekend and the upshot is that one of our main databases was shut down with open transactions, but we've lost the log file. The last working backup is two weeks old.

Is there any way to recover the database without having to resort to the old backup?

Answer: Yes, but not without consequences.

Usually when a database has open transactions and the server crashes, crash recovery will run on the affected database and roll back the open transactions. This prevents the partial effects of transactions being present in the database. If the transaction log is not available when SQL Server starts, the database will be in the SUSPECT state.

In this case, the only way to bring the database online (note that I'm not saying a way to make the database usable) is to use the emergency mode repair functionality that was added in SQL Server 2005. This basically builds a new transaction log and then runs a DBCC CHECKDB using REPAIR_ALLOW_DATA_LOSS. Read more.
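As a rough sketch of that sequence (the database name here is a placeholder, and you should copy the data files elsewhere before attempting this, since repair is irreversible):

```sql
-- Last-resort repair: builds a new log and repairs the database,
-- accepting data loss. 'YourDb' is a placeholder name.
ALTER DATABASE YourDb SET EMERGENCY;
ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'YourDb', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;
ALTER DATABASE YourDb SET MULTI_USER;
```

If the repair succeeds, the database comes back online, but it may well not be in a usable state.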

The problem you will have if you decide to go this route is that emergency mode repair cannot roll back any of the active transactions, as it has no knowledge of what they were, because the transaction log was destroyed. This means that at best, the resulting database will be transactionally inconsistent, i.e. the state of the data in the database is unknown.

For instance, there may have been a transaction that was updating some sales records in a table, and only half of them were updated before the crash and subsequent transaction log loss occurred. You'll find it very difficult to figure out what state the data is in, and how to make the database properly usable again by the application.

Emergency mode repair is supposed to be a real last resort when all other methods of recovering data have failed.

In your case, you'll have to figure out which is the lesser of two evils: recovering the database into an inconsistent state, or restoring the two-week-old backup. You may end up deciding to do both, and trying to piece together the data, but that will be very time consuming and problematic, again because you don't know what was happening in the database at the time that the crash occurred.

To prevent this situation in future, beef up your backup strategy to use much more frequent backups, and add a high-availability technology that maintains a real-time copy of your database such as database mirroring or SQL Server 2012 Availability Groups.

[Unrelated: this is the last post on this blog, as we're going to focus on our own blogs moving forward. It's been great having this additional outlet to the community and we've enjoyed writing the weekly posts for the last two years. We hope you've enjoyed reading and learning from them too!]


Security Considerations for a SQL Server Installation



Security Considerations for a SQL Server Installation

Enhance Physical Security

Physical and logical isolation make up the foundation of SQL Server security. To enhance the physical security of the SQL Server installation, do the following tasks:

Place the server in a room accessible only to authorized persons.

Place computers that host a database in a physically protected location, ideally a locked computer room with monitored flood detection and fire detection or suppression systems.

Install databases in the secure zone of the corporate intranet and do not connect your SQL Servers directly to the Internet.

Back up all data regularly and secure the backups in an off-site location.

Use Firewalls

Firewalls are important to help secure the SQL Server installation. Firewalls will be most effective if you follow these guidelines:

Put a firewall between the server and the Internet. Enable your firewall. If your firewall is turned off, turn it on. If your firewall is turned on, do not turn it off.

Divide the network into security zones separated by firewalls. Block all traffic, and then selectively admit only what is required.

In a multi-tier environment, use multiple firewalls to create screened subnets.

When you are installing the server inside a Windows domain, configure interior firewalls to allow Windows Authentication.

If your application uses distributed transactions, you might have to configure the firewall to allow Microsoft Distributed Transaction Coordinator (MS DTC) traffic to flow between separate MS DTC instances. You will also have to configure the firewall to allow traffic to flow between the MS DTC and resource managers such as SQL Server.

For more information about the default Windows firewall settings, and a description of the TCP ports that affect the Database Engine, Analysis Services, Reporting Services, and Integration Services, see Configure the Windows Firewall to Allow SQL Server Access .

Isolate Services

Isolating services reduces the risk that one compromised service could be used to compromise others. To isolate services, consider the following guidelines:

Run separate SQL Server services under separate Windows accounts. Whenever possible, use separate, low-rights Windows or Local user accounts for each SQL Server service. For more information, see Configure Windows Service Accounts and Permissions .

Configure a Secure File System

Using the correct file system increases security. For SQL Server installations, you should do the following tasks:

Use the NTFS file system (NTFS). NTFS is the preferred file system for installations of SQL Server because it is more stable and recoverable than FAT file systems. NTFS also enables security options like file and directory access control lists (ACLs) and Encrypting File System (EFS) file encryption. During installation, SQL Server will set appropriate ACLs on registry keys and files if it detects NTFS. These permissions should not be changed. Future releases of SQL Server might not support installation on computers with FAT file systems.

If you use EFS, database files will be encrypted under the identity of the account running SQL Server. Only this account will be able to decrypt the files. If you must change the account that runs SQL Server, you should first decrypt the files under the old account and then re-encrypt them under the new account.

Use a redundant array of independent disks (RAID) for critical data files.

Disable NetBIOS and Server Message Block

Servers in the perimeter network should have all unnecessary protocols disabled, including NetBIOS and server message block (SMB).

NetBIOS uses the following ports:

UDP/137 (NetBIOS name service)

UDP/138 (NetBIOS datagram service)

TCP/139 (NetBIOS session service)

SMB uses the following ports:

TCP/139 (SMB over NetBIOS)

TCP/445 (SMB over TCP)

Web servers and Domain Name System (DNS) servers do not require NetBIOS or SMB. On these servers, disable both protocols to reduce the threat of user enumeration.

Installing SQL Server on a domain controller

For security reasons, we recommend that you do not install SQL Server 2012 on a domain controller. SQL Server Setup will not block installation on a computer that is a domain controller, but the following limitations apply:

You cannot run SQL Server services on a domain controller under a local service account.

After SQL Server is installed on a computer, you cannot change the computer from a domain member to a domain controller. You must uninstall SQL Server before you change the host computer to a domain controller.

After SQL Server is installed on a computer, you cannot change the computer from a domain controller to a domain member. You must uninstall SQL Server before you change the host computer to a domain member.

SQL Server failover cluster instances are not supported where cluster nodes are domain controllers.

SQL Server Setup cannot create security groups or provision SQL Server service accounts on a read-only domain controller. In this scenario, Setup will fail.


Windows IT Pro, Microsoft Windows Information, Solutions, Tools

Windows IT Pro

On Demand Online Training

  • Hyper-V Advanced Topics
  • Mastering Windows 10 in the Enterprise
  • Understanding the Basics of Exchange 2016
  • Getting to Know Your Network Attacker
  • Network Troubleshooting for IT Professionals
  • Protecting Your Company Against a Hack

News & Information

  • More Azure AD fun with Part 2 and a few extra goodies!
  • What to Watch at VMworld 2017
  • Windows Server (Version 1709) Will Be Released Next Month at Microsoft Ignite
  • Demand for Open Source Skills Continues to Grow
  • Google Cloud Offers Cost Break for Users Not Picky on Performance
  • IT/Dev Connections 2017 News Update: Flashback to ITDC 2016, Windows 10 Workshop, and Women In Tech
  • Microsoft Releases Build 16273 for PC Testers; First New Build in Three Weeks
  • IaaS Provider Skytap Eyes Enterprise Growth with $45M in New Funding
  • Security Sense: Terrible Security Practices Have Become Indistinguishable from Parody
  • Resource: Windows Server 2016 Security Guide Whitepaper
  • A dive into some Azure AD design and deployment considerations, Part 1!
  • Windows Server Insider Preview Build Release Tracker (2017)
  • HP Printer Runt Surprises, Outshining Whitman's Services Giant
  • Google Launches Chrome Enterprise License in Chromebooks Push
  • Apache Foundation and Facebook in Standoff Over React.js License

Penton Tech Research

  • Take our Infrastructure Plan Survey!
  • *NEW* 2016 IT Skills and Salary Survey

What do you see as the biggest security challenge your organization faces right now?

Stay on top of Security

  • ITPro's Security Coverage
  • Expert Analysis and Insight from Troy Hunt

Heard on Social

How to configure sendmail (or postfix) to send confirmation emails using webmin?



I have a CentOS 5.5 64-bit Xen VPS. I have a PHP script that automatically sends confirmation emails to people who sign up, but it's not sending them right now. I've been told to install Webmin, and then install sendmail or postfix and configure it to send emails.

I installed Webmin, installed sendmail, and now what? If you know how to configure postfix, then I'll uninstall sendmail and install postfix.

I do not want to have an inbox (I can use the Google Apps email service for that); I just want to send automated emails.

I can do it via SSH, without Webmin; I just want to know how. Any tutorial or explanation would be much appreciated.

If you know how to configure other software similar to postfix and sendmail, I have no problem using it instead of those two. Basically I don't care what mail server I use, as long as the job gets done.

You can go to Servers > Sendmail Mail Server. (If you don’t see it, click Refresh Modules toward the bottom.)

In most cases you shouldn’t need much configuration. PHP’s mail() function should work once sendmail is installed and running.

If it still doesn’t work, could you:

  1. describe how your application sends mail (and/or post the code)
  2. describe what you see on Webmin’s sendmail page esp. errors if there are any
  3. send a message to yourself from the command line using the mail command and describe the result

answered Sep 15 ’11 at 20:53

Nothing is easier than installing Postfix from the command line. Just install it with your favorite package manager: yum install postfix. After that, configure it as described in the basic configuration README.

If you think that this is too hard, you should not install a mail server. Not knowing what you are doing will probably expose the mail server to the public and harm other innocent people (by sending spam).

On the other hand, what I do not understand is why people don't use a search engine for these basics. The first search hit reveals this complete HOWTO: http://wiki.centos.org/HowTos/postfix (this is 10 seconds of work instead of 10 minutes writing this question).
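For a send-only box like the one in the question, only a handful of main.cf settings matter. A minimal sketch (the hostname and domain are placeholders; the basic configuration README covers the rest):

```
# /etc/postfix/main.cf -- send-only sketch; replace the placeholder names
myhostname = vps.example.com
mydomain = example.com
myorigin = $mydomain
# Listen only on localhost so the box cannot be abused as an open relay
inet_interfaces = loopback-only
mynetworks = 127.0.0.0/8
# This host only sends; accept local delivery for itself only
mydestination = $myhostname, localhost.$mydomain, localhost
```

After editing, run service postfix restart, then test with PHP's mail() or the command-line mail program.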

answered Sep 15 ’11 at 22:52

I can install it easily (I have Webmin, remember? And I can Google); that's not my question. My question is similar to this one bit.ly/r86vQT, but I didn't want you to read hundreds of lines of code, so I didn't mention it. I opened a ticket on ClipBucket; let's see what they say. user Sep 15 '11 at 23:13

thank you sir, I will remove sendmail, install postfix and follow the document 😀 user Sep 15 ’11 at 23:30



What Is Cloud Backup? A Beginner's Guide To Cloud Backup



This beginner's guide to Cloud Backup briefly explains, in a simple way, the benefits as well as the pitfalls of Cloud Backup solutions, notable cost-effective Cloud Backup service providers, and more.

What Is Cloud Storage?

Cloud Storage is nothing but a storage solution for the Internet. A Cloud Storage service provider will have a cluster of highly scalable servers with a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. In simple terms, Cloud Storage, from an end user perspective, is nothing but an unlimited amount of online disk space that is secure, inexpensive, and (mostly) highly available to store/backup/restore data from PCs, laptops and servers.

Image Source: Zmanda Backup Solution

Who Should Use Cloud Storage?

As mentioned earlier, Cloud Storage is nothing but online storage, so to access it one needs a high-bandwidth internet connection. This requirement may be a problem for those who want to store time-critical data in the cloud. However, data recovery solutions that use cloud storage are viable, as you would request a data restore only infrequently. For example, say you are a photographer and want to back up your entire photo collection. It makes perfect sense to have the photos backed up in the cloud: you would have a secondary backup of your photos, and in case your laptop crashes, you can get your data restored almost immediately. Depending on the amount of photographic data in the cloud, it would take a few minutes or a few hours. This is not really time critical. So, a data backup/restore solution on Cloud Storage is a Cloud Backup solution. Anyone who needs a secure backup on a remote server, without much worry about maintaining the infrastructure, should go for Cloud Storage Backup solutions.

Data Encryption and Disaster Recovery On Cloud Backup

In a Cloud Backup solution, your data resides on an external server outside your home or office, so the data needs to be protected. All of the Cloud Backup solutions offer state-of-the-art data encryption, so your data is compressed and encrypted and others are unable to use it. Since the backed-up data is in a remote location, a fair amount of disaster recovery is involved. Moreover, Cloud Storage servers have data redundancy implemented internally.

Advantages of Cloud Backup Service

  • Safe and secured way of backing up data on a remote data storage
  • Accessible from anywhere
  • Usually cheap with pay as you go model
  • Inbuilt disaster recovery facility
  • Highly scalable – no cap on storage limit
  • Data backup can be automated
  • No need to worry about hardware infrastructure to maintain

Disadvantages of Cloud Backup Service

  • High Internet bandwidth is needed
  • Your data resides on third-party servers; if they close up shop, then you need to worry about your data (if that ever happens, it can be handled easily, though)
  • Most Cloud Backup service providers charge based on the amount of data backed up. So, if you have more data, you will be charged more. However, more and more service providers are offering flat-rate plans, such as Mozy, which charges $5/month for unlimited usage.

Industry experts note that for a large enterprise with a large amount of frequently changing data and limited network connectivity, a traditional on-premise data backup solution works out better and cheaper.

Cloud Backup Vendors

There are many such vendors. But the most noted cost-effective ones are:

Most of the Cloud Backup service providers use Amazon's S3 cloud infrastructure. Amazon S3 is the most widely used, but it has a couple of alternatives, such as Nirvanix, Eucalyptus, Enomaly Cloud, etc.

Well, not really.
Cloud Backup should support block-level or delta backup; some of the above don't, so if I'm uploading a 1 GB file and next time 1 byte has changed, it will upload the whole thing again.
I prefer Cloud Disaster Recovery, and Cloud Disaster Recovery means true global deduplication technology on top of all the above, which they don't have!
I prefer pure cloud backup, not proprietary storage, like Amazon; we all remember Carbonite lost their data in 2009.
Complete mobile access: I'm paying for the cloud, so I need access anytime, anywhere. Why should I use software without mobile access?
If it's for business, it should have a true, complete cloud management console.
It should be easy to use with no real performance impact.

Hope this will help.

Forgot to say, I tested many software packages and the best two I found were Mozy and Timeline Cloud, just for your info, but I went with Timeline Cloud, because they use Amazon and they have cloud disaster recovery.


Great article explaining cloud backup services and Online Data Backup Services

This is very helpful. Thank you. I have an additional question. Is there a service that not only backs up your info, but also allows you to access files and work on them remotely? In other words, a service that would allow me to have a home computer and an office computer and access files from both sites.
Thank you.

Not much of a review without the market leaders in it.
Where is SOS Online Backup, for one? Where is SpiderOak?
This review is just an infomercial for kickbacks.


What Is Cloud Computing? Explained In Simple Words



I'm sure you've heard the following two words used extensively online in the last couple of years: Cloud Computing. The next time you read something about a service that mentions the Cloud, you will immediately know what they mean by that, what exactly cloud computing is, and how it works.

What is Cloud Computing and how does it work, explained in simple terms?

Actually, you've been using cloud computing all this time, unless you were living in a cave somewhere, and that cave somehow didn't have an internet connection.

So let me put this in as simple terms as possible:
Cloud Computing is the ability to use the power of other computers (located somewhere else) and their software, via the Internet (or sometimes other networks), without the need to own them. They are being provided to you as a service.

That means you don't have to go and buy some super powerful gigantic computer system and risk having it sit there, doing nothing. By utilizing the cloud, everything is ready for you whenever you might need it. Exactly where the hardware and software is located and how it all works doesn't matter to you, as a user. It's just somewhere up in the vast cloud that the Internet represents.
Now you know one of the reasons they call it Cloud Computing.

For example, many businesses use cloud computing as a means of remote backup solutions to store and recover their data offsite.

To illustrate the point even better, let's go over one typical example of cloud computing that you must have used before (the exceptions are those with the cave thing from above).

Google as an example of cloud computing

What happens when you type and search something on Google?
Have you ever thought about this? Does your PC go through all that information, sorts it out for you and display all the relevant results? No, it doesn t. Otherwise, you would wait much longer for a simple results page to display. A simple PC can t process all those billions of websites in a fraction of a second, like Google does. Your PC only serves as a messenger to tell Google what you are looking for. Everything else is done by some of Google s powerful computers located somewhere, Who-Knows-Where in the world.
Now, did you ever care about how or where that comes from? Why would you, right? Exactly. That s a great example of how cloud computing is used.

At this point, I'm sure you understand what cloud computing is and the basic idea behind it. If that's everything you wanted to know, you can stop here and go enjoy your life knowing what Cloud Computing is. If you are interested in just a tiny bit more about it, continue reading to the end (not long from here).

3 Types of Cloud Computing

1. Infrastructure as a Service (IaaS) is basically when you buy raw computing hardware to use over the net, usually servers or online storage. You buy what you need and pay as you go. The best and most basic example of this type of cloud computing is buying web hosting for your website. You pay a monthly fee to a hosting company for the storage on their servers and to have them serve up files for your website from those servers. Another good example of someone who provides these types of cloud services would be the RackSpace cloud company.

2. Software as a Service (SaaS) is a case where you use a complete software application that's running on someone else's servers. The best example of this is Google Docs, which you can use for creating and storing text documents, presentations, spreadsheets, and so on.

3. Platform as a Service (PaaS) is a case where you create applications using web-based tools so they run on system software and hardware provided by another company. As an example, consider a situation where you develop your own e-commerce website but have the whole thing, including the shopping cart, checkout, and payment mechanism, running on a merchant's server.

That should be all you need to know to get you started.

As you can see, the idea behind cloud computing is very powerful and useful beyond measure. Especially now when you actually know what it is.

Credit for the great comic image used above goes to CloudTweaks.


Using Resource Monitor to Troubleshoot Windows Performance Issues Part 1



Using Resource Monitor to Troubleshoot Windows Performance Issues Part 1

Hello AskPerf! Leonard with the Performance Team here to discuss the Resource Monitor tool and how we can use it to troubleshoot Windows performance issues. In this blog (the first of two on the subject of underutilized tools), I will discuss Resource Monitor, which is available on both client and server versions of Windows starting with Windows Vista. Resource Monitor can be launched from the advanced tools tab in “Performance Information and Tools”, which is located in Control Panel. It can also be launched directly by running Resmon.exe.

Resource Monitor is a method of viewing Perfmon data. In fact, Resource Monitor is composed of Perfmon data combined with Windows Event Tracing data. You can view this tracing session by launching Perfmon, expanding Data Collector Sets, then select Event Tracing Sessions. There you will see a session called WDC.GUID (the GUID will vary). You can confirm this provides the data for Resource Monitor by observing that this trace is only running when Resource Monitor is running. Also, when it is running, you can view the channels that provide the data. Launching Resource Monitor will also launch a background process of Perfmon to act as a data source.

Resmon will show the window below on first launch. Each new launch will show the view as configured when Resmon was closed.

There are five tabs to choose from. The Overview tab gives a summary of the other ones. The main tabs are CPU, Memory, Disk and Networking. In each of the tabs, the windows on the left can be collapsed, expanded and resized. It is also possible to filter each view by process; for example, if you are only interested in seeing the activity for Explorer, check the box for that process and the bottom window will only show the activity for that process. With no processes selected, the bottom windows show activity for all active processes. The graphs on the right can be resized between small, medium and large, but I would recommend keeping them at the default large setting. The numeric scale for the graphs will change as activity changes.

My Favorite features

The memory tab has one unique graph that provides a quick view of what physical memory is being used for.

It is easy to see the total physical memory, how much of it is being actively used, and how much is hardware reserved. Hardware Reserved represents physical memory addresses that have been reserved by hardware (generally buses like PCI, or video cards) and are not available for Windows to use. It is usually small on x64 systems (except servers that do memory mirroring) but can be several hundred MB, up to 1GB, on 32-bit systems. This means a 4GB x86 system may have only 3GB of accessible memory.

The network tab is useful in that it not only shows the process that is generating activity, but the IP address it is connected to.

I recently had an issue where the system process was showing high CPU activity on a Windows 2008 Server. Two of the things that run in system are the SMB and SMB2 processes. I suspected that the high CPU was due to network activity and was a load based problem and not a problem with a process. To confirm that was the cause, I used Process Explorer to determine the threads that were running in the system process. I confirmed that there were 15+ SMB and SMB2 threads that were always the highest consumer of CPU. I then ran Resmon and looked at all of the IP addresses that were associated with system. We identified a management server that was receiving a lot of data. Based on that information, we were able to narrow down the problem to the request coming from that server. While the problem could have been identified using different tools, Resmon provided the most efficient way to identify the problem.

I hope this overview of Resource Monitor will make it one of the tools you use the next time you need to look at performance data or activity on the system.

1. I opened Resmon.exe from a command window with elevated access.
2. Clicked on Monitor in the menu, then clicked Start monitoring.
And I get the error below:

[Window Title]
Performance Monitor
[Main Instruction]
When attempting to start Resource Monitor, the following system error occurred:
Access is denied.
[OK]



Using Amazon Glacier or S3 as an Online Backup Server

Cloud storage solutions have become a simple, cost-effective solution for backing up your important data. Check out these feature-rich tools for helping efficiently manage your cloud storage data.

Amazon Glacier is an extremely low-priced cloud storage solution, as low as one cent per gigabyte (GB) in fact, that is designed for long-term file and data backups with infrequent retrievals or removals. It's great for manually archiving data, or maybe even serving as online backup for your PCs, especially if you also back up to a local drive.

Just keep in mind, there are additional fees for retrievals. If you're looking for a storage solution for frequent utilization, you'll likely want to consider another Amazon Web Service: the Simple Storage Service (S3), pricing for which starts at three cents per GB.

Amazon, however, may not always be the best storage option. For instance, if you want a quick and easy online PC backup solution, consider a service that's designed for PC backup. When backing up large amounts of data, such as half a terabyte (TB) or more, unlimited backup offerings from other companies like CrashPlan or Backblaze may be cheaper to utilize than even Amazon Glacier.

If you do choose one of the Amazon storage offerings, you can use a client program similar to an FTP client to upload, download, synchronize and back up your data. Here are several different clients to consider that support Amazon and other cloud services:

FastGlacier. A multipurpose Windows client for Amazon Glacier that is free for personal use; commercial licensing starts at $39.95. FastGlacier supports manual uploads/downloads, syncing, and automated backup functionality. It features pause/resume for uploads, pre-upload compression and encryption, and download limiting to control data retrieval costs.

CrossFTP. An FTP-style client for Windows, Mac OS X and Linux that supports FTP, Amazon S3, Amazon Glacier and Google storage. In addition to the free basic edition, there is a Pro edition for $24.99 and an Enterprise edition for $39.99 that adds support for SFTP, WebDav, pre-upload encryption, syncing, scheduling and other features.

Duplicati. A free and open source backup client for Windows, Mac OS X and Linux that supports many different cloud and remote file services, including Amazon Glacier and S3, Windows Live SkyDrive, Google Drive (Google Docs), Rackspace Cloud Files, WebDAV, SSH and FTP. Duplicati supports incremental updates, encryption and signing using GNU Privacy Guard, and scheduling capabilities.

Simple Amazon Glacier Uploader. A simple Java-based cross-platform client that offers a GUI for the Glacier command-line tools. This likely isn't a client you'll want to use all the time, but it is good to keep on a flash drive for quick and simple access from any computer or OS.

Syncovery. A commercial file synchronization and backup tool, priced at $59.90 for the fully functioning edition, which runs on Windows, Mac OS X and Linux. Syncovery supports traditional FTP-like protocols and many cloud-based services, including Amazon S3, Microsoft, Rackspace, Google, Dropbox and Box.com. It's a feature-rich solution supporting syncing, automated scheduled backups, compression and encryption, and many other functionalities.

CloudBerry. CloudBerry offers a large line of commercial backup product solutions designed for different types of machines or applications, such as desktop PCs ($29.99), servers ($79.99) and bare metal backups ($119.99). CloudBerry sells products for specific servers and applications as well, such as MS SQL servers ($149.99) and MS Exchange servers ($229.99). They even offer solutions for IT or managed service providers (MSPs). Their products support backing up to Amazon, Microsoft, Google, and Openstack cloud storage services and are feature-rich solutions.




Windows File Server Monitoring and Auditing

Securely track your file servers for access, changes to the documents in their file and folder structure, shares and permissions. View exclusive file audit reports with 50+ search attributes, and filter based on user / file server / custom / share based reporting for crisp, detailed information. Also, get instant email alerts on file server activity upon unauthorized actions or access to critical files and folders. Find answers to the vital 4W's: who effected what change in the file server, when and from where.

  • Detailed forensics of all changes and failed attempts to create, delete or modify files and folder structure
  • Track file and folder access permissions and owners
  • Audit Windows Failover Clusters for a secure, downtime-free and compliant network environment
  • Monitor EMC servers and NetApp Filers: CIFS file / folder creations, modifications and deletions, permission changes, etc.


Failover Clusters

Audit and administer the crucial Windows File Server Failover Clusters. Monitor access and modification trails for business-critical files, track file modifications and file / folder permission changes, and monitor changes to network file / folder share permissions. Audit the complete Windows File Server Failover Cluster setup with simple, detailed, easy-to-understand compliance-specific audit reports along with alerts.

  • An assortment of Failover Cluster audit, alert and filter-based reporting capabilities
  • Detailed forensics of approved and unapproved changes in file and folder structure
  • Automated tracking of changes through scheduled reports
  • Meet SOX, HIPAA, PCI and GLBA compliance requirements


NetApp Filer

Centrally audit, monitor and report changes on NetApp Filer CIFS shares. View pre-configured audit reports and get instant alerts on every possible NetApp Filer CIFS file / folder creation, modification, deletion, permission change, etc. The reports and alerts contain a thorough event analysis which helps you quickly assess the situation for better control. Meet security and IT compliance needs with compliance-specific audit reports, and export the reports in XLS, CSV, PDF and HTML formats.

  • Monitor NetApp Filer changes to files, shares, accesses and permissions
  • Detailed audit trails of approved and unapproved changes in file and folder structure
  • Schedulable reports and e-mail alerts on critical changes
  • Meet SOX, HIPAA, PCI-DSS, FISMA and GLBA compliance requirements


EMC Server

Monitoring EMC (VNX/VNXe/Celerra) storage devices is critical because of the confidential organizational data stored on them. Keeping track of every successful and failed attempt to access, modify, move, copy and rename files / folders, along with auditing settings and permission changes, is crucial. With the pre-configured audit reports and instant alerts, instantly know WHO is making WHAT change, WHEN and from WHERE. Ensure security and meet various compliance regulations.

  • Track every file / folder access and modification by admins, users, helpdesk, HR, etc.
  • View pre-configured reports and set email alerting for changes to monitored folders / files
  • Meet PCI, SOX, GLBA, FISMA and HIPAA compliance with audit reports in XLS, CSV, PDF and HTML formats
  • Archive AD event data to save on disk space and view historical reports for security and forensics


Compliance

SMBs and large organizations have to comply with industry-specific compliance acts like SOX, HIPAA, GLBA, PCI-DSS and FISMA. With our compliance-specific pre-configured reports and alerts, we ensure your network is under 24/7 audit, with periodic security reports and email alerts as standard procedure.

A Few Compliance Specific Reports

SOX: Recent User Logon Activity, Logon Failures, Administrative User Actions, Domain Policy Changes, User Management, Logon History, Changes on Member Server

HIPAA: All File or Folder Changes, Computer Management, OU Management, Logon Duration, Group Management, Terminal Services Activity

GLBA: Local Logon Failures, Folder Permission Changes, Folder Audit Setting Changes (SACL), Successful File Read Access, Domain Policy Changes

PCI-DSS: Logon History, Logon Failures, Successful File Read Access, Changes on Member Server, Radius Logon History (NPS)

FISMA: Administrative User Actions, Failed Attempt to Delete File, Failed Attempt to Write File, All File or Folder Changes, Computer Management


File Server Audit Reports

Shown here are the File Server, Failover Cluster, NetApp Filer and EMC Server audit reports, to give you a view into the file, folder and permission changes and the various filters you can apply to further pin-point information.

  • All File or Folder Changes
  • Successful File Read Access
  • Failed Attempt to Read File
  • Failed Attempt to Write File
  • Failed Attempt to Delete File

200+ ready-to-use audit reports for security, forensics and compliance


System Monitoring Tools For Ubuntu – Ask Ubuntu

Indicator-SysMonitor does a little, but does it well. Once installed and run, it displays CPU and RAM usage on your top panel. Simple.

One of my personal favourites is Screenlets: you'll find a bunch of differently styled CPU and RAM monitors included in the screenlets-all package, available in the Ubuntu Software Center.

Displays information about CPU, memory, processes, etc.

This command line tool will display statistics about your CPU, I/O information for your hard disk partitions, Network File System (NFS), etc. To install iostat, run this command:

To start the report, run this command:

To check only CPU statistics, use this command:

For more parameters, use this command:
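The command snippets this passage refers to appear to have been lost in extraction. On Ubuntu, the typical sequence would be (iostat ships in the sysstat package):

```shell
# Install the sysstat package, which provides iostat
sudo apt-get install sysstat

# Start the full report (CPU plus per-partition I/O statistics)
iostat

# Check only CPU statistics
iostat -c

# See all available parameters
man iostat
```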

The mpstat command line utility will display average CPU usage per processor. To run it, simply use this command:

For CPU usage per processor, use this command:
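The commands themselves did not survive extraction; on Ubuntu they would typically be (mpstat is also part of the sysstat package):

```shell
# Install the sysstat package, which provides mpstat
sudo apt-get install sysstat

# Average CPU usage across processors
mpstat

# CPU usage broken down per processor
mpstat -P ALL
```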

Saidar also allows you to monitor system device activity via the command line.

You can install it with this command:

To start monitoring, run this command:
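The install and run commands referenced above were lost in extraction; on Ubuntu they would typically be:

```shell
# Install saidar from the Ubuntu repositories
sudo apt-get install saidar

# Start monitoring (press q to quit)
saidar
```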

Stats will be refreshed every second.

GKrellM is a customizable widget with various themes that displays system device information (CPU, temperature, memory, network, etc.) on your desktop.

To install GKrellM, run this command:
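The command itself is missing from the text; on Ubuntu it would typically be:

```shell
# Install GKrellM from the Ubuntu repositories, then launch it
sudo apt-get install gkrellm
gkrellm &
```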

Monitorix is another application with a web-based user interface for monitoring system devices.

Install it with these commands:

Start Monitorix via this URL:
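The commands and URL were lost in extraction. On a recent Ubuntu release the install would typically be the following; the web interface address shown in the comment is Monitorix's usual default and may differ on your setup:

```shell
# Install Monitorix (in the universe repository on recent Ubuntu releases)
sudo apt-get install monitorix

# Then open the web UI, which by default listens on port 8080:
#   http://localhost:8080/monitorix
```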

Glances is good. What it shows me sometimes is critical logs. Where do I find what the problem is? Where are those logs?

WARNING|CRITICAL logs (last 9 entries)
2016-03-23 19:09:48 2016-03-23 19:09:54 CPU user (72.7/76.6/80.6)
2016-03-23 19:09:28 2016-03-23 19:09:32 CPU IOwait (62.5/62.5/62.5)
2016-03-23 19:08:45 2016-03-23 19:08:48 CPU user (86.3/86.3/86.3)
2016-03-23 19:08:16 ___________________ LOAD 5-min (1.0/1.1/1.2) – Top process: php5-cgi
2016-03-23 19:08:09 2016-03-23 19:08:19 CPU IOwait (74.3/74.6/75.0)

Kangarooo Mar 23 ’16 at 17:09

Following are the tools for monitoring a Linux system:

  1. System commands like top, free -m, vmstat, iostat, iotop, sar, netstat, etc. Nothing comes near these Linux utilities when you are debugging a problem. These commands give you a clear picture of what is going on inside your server
  2. SeaLion. The agent executes all the commands mentioned in #1 (also user-defined ones), and the outputs of these commands can be accessed in a beautiful web interface. This tool comes in handy when you are debugging across hundreds of servers, as installation is simple. And it's free
  3. Nagios. It is the mother of all monitoring/alerting tools. It is highly customizable but difficult to set up for beginners. There is a set of tools called Nagios plugins that covers pretty much all important Linux metrics
  4. Munin
  5. Server Density. A cloud-based paid service that collects important Linux metrics and gives users the ability to write their own plugins.
  6. New Relic. Another well-known hosted monitoring service.
  7. Zabbix

answered Nov 20 ’13 at 13:30

top is a monitoring program listing all processes with their CPU/RAM usage, overall CPU/RAM usage and more. It is also usually installed by default.

htop is like an extended version of top. It has all the features from above, but you can see child processes and customize the display of everything. It also has colors.

iotop is specifically for monitoring hard drive I/O. It lists all processes and shows their hard drive usage for reads and writes.

answered May 10 ’13 at 10:43

where is heat monitoring? and in your answer you have already included 3 utilities. check the question **i am looking for a single tool that has some basic function** Qasim May 10 ’13 at 10:54

With the three tools I am just giving different options for the OP, but I am disappointed to say that none of them has heat monitoring BeryJu May 10 ’13 at 10:59

You might want to try sysmon. Although not as fancy as Glances, it is very straightforward and easy to use.

If you want to get dirty and do a little scripting in python, here are some basics of system monitoring with Python to get you started.

You’ll need an external module called psutil to monitor most things. It’s easiest to use an external module installer instead of building from source.
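For instance, with pip (one common installer option):

```shell
# Install psutil from PyPI (use the pip matching your Python version)
pip install psutil
```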

Note: These examples are written in Python 2.7

Now that we have the modules installed, we can start coding.

First, create a file called usage.py.

Start by importing psutil

Then, create a function to monitor the percentage your CPU cores are running at.
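The function body itself did not survive extraction; here is a minimal sketch of what it looks like, based on the psutil call described just below (the print format is my own, written to run under Python 2.7 or 3):

```python
import psutil

def cpu_perc():
    # Per-core CPU utilisation, sampled over a one-second interval
    cpu_perc = psutil.cpu_percent(interval=1, percpu=True)
    for core, perc in enumerate(cpu_perc):
        print("Core %d: %s%%" % (core, perc))

cpu_perc()
```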

Let’s break that down a bit, shall we?

The first line, cpu_perc = psutil.cpu_percent(interval=1, percpu=True), finds the percentage each of your CPU cores is running at and assigns the result to a list called cpu_perc.

The loop that follows is a for loop that prints out the current percentage of each of your CPU cores.

Let’s add the RAM usage.

Create a function called ram_perc .
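The original listing is missing here; a minimal sketch of such a function (psutil's field names are real; the print format is my own):

```python
import psutil

def ram_perc():
    # virtual_memory() returns totals in bytes plus a percent-used figure
    mem = psutil.virtual_memory()
    print("RAM used: %s%% of %d bytes total" % (mem.percent, mem.total))

ram_perc()
```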

psutil.virtual_memory() gives a data set containing different facts about the RAM in your computer.

Next, you can add some facts about your network.
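The network snippet was also lost; a sketch of the byte-to-megabyte conversion the next sentence describes:

```python
import psutil

def network():
    # net_io_counters() reports raw byte counts; convert to megabytes
    net = psutil.net_io_counters()
    print("MB sent: %.2f" % (net.bytes_sent / 1048576.0))
    print("MB received: %.2f" % (net.bytes_recv / 1048576.0))

network()
```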

Since psutil.net_io_counters() only gives us information about bytes sent and received, some converting was necessary.

To get some information about swap space, add this function.
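The swap function is missing from the text; a minimal sketch using psutil's real swap API:

```python
import psutil

def swap_perc():
    # swap_memory() mirrors virtual_memory(), but for swap space
    swap = psutil.swap_memory()
    print("Swap used: %s%%" % swap.percent)

swap_perc()
```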

This one is pretty straightforward.

Temperature is kind of hard to do, so you may need to do some research of your own to figure out what will work with your hardware. You will have to display the contents of a certain file.

Disk usage is a lot easier than temperature. All you need to do is pass the disk you want to monitor (e.g. /) to psutil.disk_usage(). The original output of psutil.disk_usage() is a named tuple containing total, used and free space in bytes plus a percentage, but you can also just read the total, used, free or percent field individually.
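A minimal sketch of such a function (the print format is my own):

```python
import psutil

def disk(path):
    # disk_usage() returns a named tuple: total, used, free (bytes) and percent
    usage = psutil.disk_usage(path)
    print("%s: %s%% used of %d bytes" % (path, usage.percent, usage.total))

disk("/")
```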

The completed program: (the aforementioned functions were combined)

The line temp = open("/sys/class/thermal/thermal_zone0/temp").read().strip().lstrip('temperature :').rstrip(' C') might not work with your hardware configuration.

Run this program from the command line. Pass the disks you want to monitor as arguments from the command line.
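The completed listing itself did not survive extraction; here is a reconstruction combining the functions described above, sketched to run under Python 2.7 or 3. The temperature path is hardware-specific, and the original article parsed a differently formatted string, so that part is an assumption:

```python
import sys
import psutil

def cpu_perc():
    for core, perc in enumerate(psutil.cpu_percent(interval=1, percpu=True)):
        print("Core %d: %s%%" % (core, perc))

def ram_perc():
    print("RAM used: %s%%" % psutil.virtual_memory().percent)

def network():
    net = psutil.net_io_counters()
    print("MB sent: %.2f" % (net.bytes_sent / 1048576.0))
    print("MB received: %.2f" % (net.bytes_recv / 1048576.0))

def swap_perc():
    print("Swap used: %s%%" % psutil.swap_memory().percent)

def temperature():
    # Hardware-specific: this sysfs file may not exist on your machine
    try:
        raw = open("/sys/class/thermal/thermal_zone0/temp").read().strip()
        print("Temperature: %.1f C" % (int(raw) / 1000.0))
    except (IOError, OSError, ValueError):
        print("Temperature: unavailable")

def disk(path):
    try:
        usage = psutil.disk_usage(path)
        print("%s: %s%% used" % (path, usage.percent))
    except OSError:
        print("%s: not a mounted disk" % path)

def main(paths):
    cpu_perc()
    ram_perc()
    network()
    swap_perc()
    temperature()
    for path in paths:
        disk(path)

if __name__ == "__main__":
    # Disks to monitor are passed as command-line arguments
    main(sys.argv[1:])
```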

Hope this helps! Comment if you have any questions.

Nagios seems to be the most popular and most customizable, but I would not choose it for its GUI.

Zabbix’s open source solution monitors everything you have mentioned as well as provides time-based graphs for performance monitoring.

If you are looking for an even cleaner GUI, check out Zenoss. Zenoss is an open-source, web-based tool that also offers service analytics and root-cause analysis with its proprietary tool.



Storage Options for Blade Servers

As technology trends like software defined storage (SDS) become more adopted in data centers, it will be interesting to see how they will impact the blade server market, especially with current research showing an expectation of growth over the next 5 years. To succeed, blade server vendors will have to find ways to adapt to changing technology trends, especially SDS. The problem is, there doesn't seem to be a lot of options when it comes to storage on blade servers. Even when you look at hyperconverged appliances from Dell EMC and HPE, you find the underlying servers are rack servers, so if the future is hyperconverged, why aren't we seeing offerings on blade servers? Unfortunately, I don't have an answer to that question. However, I think current blade environments offer a lot more than many people give them credit for, especially when it comes to storage options, so let's take a look.

Within the M-Series blade server portfolio, Dell EMC offers the PS-M4110 storage blade. This is an iSCSI storage blade and offers features consistent with the Dell EMC PS Series (formerly EqualLogic) 4100 series storage product line. The PS-M4110 is a double-wide, half-height blade form factor and supports up to 14 x 2.5" drives. It offers up to two hot-pluggable 10GbE controllers with 4GB of usable memory per controller. It also features a dedicated 1 x 100Mb management port available through the Dell PowerEdge M1000e Chassis Management Controller (CMC). The PS-M4110 is available in 4 different configurations:

  • PS-M4110E – 14 x 1TB 7.2K NL SAS drives (14TB raw capacity)
  • PS-M4110X – 14 x 600GB, 900GB, 1.2TB or 1.8TB 10K SAS drives (up to 25.2TB raw capacity)
  • PS-M4110XV – 14 x 600GB 15K SAS drives (8.4TB raw capacity)
  • PS-M4110XS – 4 x 800GB SSD + 9 x 1.2TB 10K SAS drives (14.8TB raw capacity)

The 10U PowerEdge M1000e chassis can support up to 4 x PS-M4110 storage blades for up to 100TB of capacity.

Although these are iSCSI storage blades and any server in the datacenter environment can access them, they contradict the tenets of software defined storage and probably wouldn't be the best solution for it. For traditional server-storage needs, however, the PS-M4110 could offer a unique solution, placing both your compute and your iSCSI storage in the same chassis. To learn more about the PS-M4110, visit http://www.dell.com/us/business/p/equallogic-ps-m4110/pd .

In the PowerEdge FX2 family, Dell EMC offers the FD332 storage node. I wrote about this product last year [A Closer Look at the Dell FD332 for FX2 Architecture ], but in quick summary, it provides up to 16 hot-pluggable 2.5" drives directly attached to one of the compute nodes within the FX2 chassis, and includes one or two RAID controllers for direct attachment to one or two compute nodes. Up to 3 x FD332 storage sleds can fit into a single 2U FX2 chassis, for up to 188TB of capacity.

The FD332 can be used in different configurations, and the available drive options are extensive (see the earlier post mentioned above), including SSD and SAS, making the FD332 an ideal platform for SDS. It's also a key part of the VMware Virtual SAN Ready Node I wrote about a few months ago [VMware Virtual SAN Ready Node on a Blade Server ] and would also be a great option for other software defined storage offerings like Dell EMC's ScaleIO. To learn more about the FD332 storage node, visit http://www.dell.com/us/business/p/poweredge-fx/pd .

Within the HPE BladeSystem product family, HPE offers the HPE D2220sb Storage Blade. This is a single-width, half-height blade providing up to twelve (12) 2.5" hot-pluggable drives to a single c-Class blade server. The D2220sb includes an integrated Smart Array P420i controller and supports all small form factor (SFF) SAS and SATA HDDs and SSDs currently certified in HPE Smart Carriers, although I haven't been able to find the detailed list of supported drives. Up to eight D2220sb storage blades can be supported in a single 10U BladeSystem c7000 enclosure, for up to 192TB of capacity. For more information on the HPE D2220sb Storage Blade, go to https://www.hpe.com/h20195/v2/GetDocument.aspx?docname=c04111399 .

Within the HPE Synergy family, the HPE Synergy D3940 Storage Module can hold up to forty SFF drives and occupies two half height bays in the HPE Synergy 12000 Frame, with each frame supporting up to five D3940 Storage Modules. It also can provide direct attached storage for up to 10 compute modules with the optional SAS connectivity module in the chassis.

As with the other storage blades, the D3940 supports all small form factor (SFF) SAS and SATA HDDs and SSDs currently certified in HPE Smart Carriers, but I haven't been able to locate specifics. For more details on the D3940, go to https://www.hpe.com/h20195/v2/GetDocument.aspx?docname=c04815141 .

The Lenovo Flex System offers a direct-attach storage expansion for connectivity to the Lenovo Flex System x220 or x240 Compute Node. The Lenovo Flex System Storage Expansion Node provides up to 12 x 2.5" hot-plug drives and one RAID controller.

The supported drives include:

  • 300 GB 10K 6 Gbps SAS 2.5-inch SFF G2HS HDD
  • 600 GB 10K 6 Gbps SAS 2.5-inch SFF G2HS HDD
  • 900 GB 10K 6 Gbps SAS 2.5-inch SFF HS HDD
  • 1.2 TB 10K 6 Gbps SAS 2.5-inch G2HS HDD
  • 600 GB 15K 6 Gbps SAS 2.5-inch G2HS HDD
  • 250 GB 7.2K 6 Gbps NL SATA 2.5-inch SFF HS HDD
  • 500 GB 7.2K 6 Gbps NL SATA 2.5-inch SFF HS HDD
  • 1 TB 7.2K 6 Gbps NL SATA 2.5-inch SFF HS HDD

10K and 15K self-encrypting drives (SED)

  • 1.2 TB 10K 6 Gbps SAS 2.5-inch G2HS SED
  • 600 GB 10K 6 Gbps SAS 2.5-inch G2HS Hybrid
  • 300GB 15K 6Gbps SAS 2.5-inch G2HS Hybrid
  • 600GB 15K 6Gbps SAS 2.5-inch G2HS Hybrid
  • 200 GB SATA 2.5-inch MLC HS Enterprise SSD
  • 400 GB SATA 2.5-inch MLC HS Enterprise SSD
  • 800 GB SATA 2.5-inch MLC HS Enterprise SSD
  • 200 GB SAS 2.5-inch MLC HS Enterprise SSD
  • 400 GB SAS 2.5-inch MLC HS Enterprise SSD
  • 800 GB SAS 2.5-inch MLC HS Enterprise SSD
  • 1.6 TB SAS 2.5-inch MLC HS Enterprise SSD

SSDs: Enterprise Value

  • 120 GB SATA 2.5-inch MLC HS Enterprise Value SSD
  • 240 GB SATA 2.5-inch MLC HS Enterprise Value SSD
  • 480 GB SATA 2.5-inch MLC HS Enterprise Value SSD
  • 800 GB SATA 2.5-inch MLC HS Enterprise Value SSD

Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.



Cheyenne Web Server

14-Sep-2013 – Cheyenne sources moved to git/Github [0032 ]

The Cheyenne codebase has been moved from svn/Googlecode to git/Github. The change was made to simplify contributions to Cheyenne, and to regroup my open source projects at the same place (like Red and CureCode ).

In the process of changing the backend build scripts to support git, we also extended the supported platforms for automated builds; we now support Windows and Mac OS X in addition to Linux. New builds are generated twice a day if new commits are found in the master branch.

Thanks very much to Tamas Herman for helping with the scripts coding and providing the Mac OS X machine, and thanks to the HackJam hackerspace group from Hong Kong for the hosting!

Please find the latest Cheyenne builds on the updated Download page.

12-Aug-2013 – CORS support added [0031 ]

Thanks to the kind sponsoring of Alan Macleod, Cheyenne now has a module for handling the Cross-Origin Resource Sharing protocol, basically allowing JavaScript to make requests to a domain other than the one the script originated from.

In order to use the new mod-cors module, you need to:

  1. Uncomment the cors module line in global section of the httpd.cfg config file.
  2. Add at least one allow-cors directive in a virtual host settings block.
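For illustration only, the two steps might look something like this in httpd.cfg. The section layout and comment style here are assumptions reconstructed from the description above (check the sample config shipped with Cheyenne for the real syntax), and example.com is a hypothetical origin:

```
globals [
    ; 1) uncomment the cors module line in the global section
    modules [... cors]
]

default [
    ; 2) allow cross-origin requests from a given origin
    allow-cors "http://example.com"
]
```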

28-Nov-2011 – Websocket support upgraded to hybi-10 [0030 ]

The websocket communication protocol is quite a moving target; it changes several times a year, sometimes breaking compatibility with previous revisions. Cheyenne is now up to date and supports the latest RFC revision (named “hybi-10”), starting from Cheyenne revision 155.

All protocol features have been implemented but some are not tested yet, as they require a custom-made client (the Javascript websocket API does not support those features yet AFAICT):

  • Ping/pong frames
  • Fragmented frames

The server-side user API has been left untouched; a ping command might be added later, though. The websocket chat demo has been upgraded too and now supports Chrome 14+, FF8+ and IE10.

27-Nov-2011 – Cheyenne at PHP Tour 2011 [0029 ]

I was invited a couple of days ago to the PHP Tour 2011 event in Lille (France) to present Cheyenne to the PHP community and demonstrate Cheyenne's ability to interface with a PHP FastCGI server and serve PHP-generated content. I had a great time with developers from the PHP community and enjoyed great presentations from Zend engineers about their famous framework and cryptography support in PHP.

Here are my presentation slides (in french):

If you are experiencing issues seeing the slides in Flash format, here's a PDF version .

If you are wondering what slide 4 is about, it is just a warning for people confusing this Cheyenne server with the one embedded in the ADSL box of a French ISP (which is just a renamed Apache instance, whose name used to appear when an internal error occurred in the box).

8-Jun-2011 – Cheyenne at Chti’RUG2011 [0028 ]

I was at the Chti'RUG 2011 French event in Lille a week ago and did a presentation of Cheyenne. It is the same presentation I did at ReBorCon 2011, just translated into French (available in PDF format ).

There is a french blog article by Olivier Auverlot talking about this meeting.



SQL Server cluster design: One big cluster vs. small clusters

A cluster lets you move the workload to balance utilization and to schedule planned maintenance without downtime.

This cluster design uses software heartbeats to detect failed applications or servers. In the event of a server failure, it automatically transfers ownership of resources (such as disk drives and IP addresses) from a failed server to a live server. Note that there are methods to maintain higher availability of the heartbeat connections, such as in case of a general failure at a site.

MSCS does not require any special software on client computers, so the user experience during failover depends on the nature of the client side of the client-server application. Client reconnection is often transparent, because MSCS has restarted the applications, file shares and so on, at exactly the same IP address. Furthermore, the nodes of a cluster can reside in separate, distant sites for disaster recovery.

SQL Server on a cluster server

SQL Server 2000 can be configured on a cluster with up to four nodes, while SQL Server 2005 can be clustered on up to eight nodes. When a SQL Server instance is clustered, the disk resources, IP address and services form a cluster group to allow for failover. For more technical information, refer to the article How to cluster SQL Server 2005.

SQL Server 2000 allows installation of 16 instances on a single cluster. According to Books Online (BOL), SQL Server 2005 supports up to 50 SQL Server instances on a single server, but only 25 hard drive letters can be used, so these have to be planned accordingly if you need many instances.

Note that the failover period of a SQL Server instance is the amount of time it takes for the SQL Server service to start, which can vary from a few seconds to a few minutes. If you need higher availability, consider using other methods, such as log shipping and database mirroring. For more information about disaster recovery and HA for SQL Server, go to Disaster recovery features in SQL Server 2005 and Microsoft’s description of disaster recovery options in SQL Server.

One big SQL Server cluster vs. small clusters

Here are the advantages of having one big cluster, consisting of more nodes:

  • Higher Availability (more nodes to failover to)
  • More load balancing options for performance (more nodes)
  • Cheaper maintenance costs
  • Growth agility. Up to four or eight nodes, depending on SQL version
  • Improved manageability and simplified environment (less to manage)
  • Maintenance with less downtime (more options for failover)
  • Failover performance unaffected by the number of nodes in the cluster

Here are the disadvantages of having one big cluster:

  • Limited number of clustered nodes (what if a ninth node is needed?)
  • Limited number of SQL instances on a cluster
  • No protection against failure — if disk array fails, no failover can take place
  • Can’t create failover clusters at the database level or database object level, such as a table, with failover clustering

    Virtualization and clustering

    Virtual machines can also participate in a cluster, and virtual and physical machines can be clustered together without problems. SQL Server instances can reside on a virtual machine, but performance may be affected, depending on how many resources the instance consumes. Before installing your SQL Server instance on a virtual machine, stress-test it to verify that it can handle the necessary load.

    In this flexible architecture, you can load balance SQL Server between a virtual machine and a physical box when the two are clustered together. For example, develop applications using a SQL Server instance on a virtual machine. Then fail over to a stronger physical box within the cluster when you need to stress test the development instance.

    Important links describing Windows and/or SQL Server clustering

  • SQL Server clustering resources (This article contains important links and information about clustering).

    A cluster server can be used for high availability, disaster recovery, scalability and load balancing in SQL Server. It’s often better to have one bigger cluster, consisting of more nodes, than to have smaller clusters with fewer nodes. A big cluster allows a more flexible environment where instances can move from one node to the other for load balancing and maintenance.

    ABOUT THE AUTHOR:

    Michelle Gutzait works as a senior database consultant for ITERGY International Inc., an IT consulting firm specializing in the design, implementation, security and support of Microsoft products in the enterprise. Gutzait has been involved in IT for 20 years as a developer, business analyst and database consultant. For the last 10 years, she has worked exclusively with SQL Server. Her skills include SQL Server infrastructure design, database design, performance tuning, security, high availability, VLDBs, replication, T-SQL/packages coding, and more.
    Copyright 2007 TechTarget


  • Cheap Dedicated Servers



    All dedicated server plans come standard with the following:

    • 2 IP addresses (up to 8 free IPs upon request).
    • Remote reboot capability.
    • Serial console access.
    • Full root access.
    • Dual quad-core 64bit Intel Xeon processors.
    • Full 100mbit burstable connection.

    We offer a variety of upgrades for our cheap dedicated server hosting plans. These add-ons may be selected during checkout. To place an order for an add-on, you must first select one of the dedicated servers above.

    30mbps Unmetered Connection

    Upgrade the connection speed on any of our unmetered plans to 30mbps! High data transfer speeds with no bandwidth overage fees.

    100mbps Unmetered Connection

    Upgrade the connection speed on any of our unmetered plans to a blistering 100mbps! High data transfer speeds with no bandwidth overage fees.

    Add a secondary 1TB drive to your dedicated server. Increase total storage space or configure in RAID-1 with your primary drive for fault tolerance.

    Add more memory to your server for an extra $3.50 per month for every additional 2GB.

    Our virtual private server plans come with two free dedicated IP addresses, but additional IPs are available. Websites running an SSL certificate (i.e. https://) must be assigned to a private/dedicated IP address.

    Class C Range (/24) 253 usable IPs

    Full IPv4 Class C range (/24) providing you with 253 usable dedicated IP addresses.

    About

    Reprise Hosting is a web host providing you with absolutely unbeatable value on cheap dedicated servers and lease-to-own cPanel dedicated servers .

    Twitter

    Contact Us

    For the fastest response, please use our client portal or the contact form at the top of this page. Mailed correspondence can be sent to:

    • PO Box 34755
    • Las Vegas, NV 89119
    • 1.877.HOST.839

    Data center dedicated server


    Launch of the new Data Center building

    CAT data center provides a full range of Data Center services, including server co-location and temporary office space (Temp Office) rental. You can rely on TSI Level 3 and ISO 27001:2013 certification, tight physical security, dual power feeds, and a direct connection to the country's largest Internet Gateway, along with premium-grade hardware and software. Service coverage spans the whole country, with eight data centers, the most in Thailand.

    A server co-location service for customers who want to administer their systems themselves, with a high-speed network connection to the Internet.

    Temporary office rental on the 14th floor of the CAT Tower building, next to the Server Room, equipped with computers and office equipment, for use when an emergency makes the main site unusable, in accordance with a Disaster Recovery Site (DR Site) plan.

    Launch of CAT data center Nonthaburi II, the first and only TSI Level 3 certified facility in ASEAN. Thu, 08/20/2015 – 14:58

    The CAT data center service launches its new Data Center building in Nonthaburi, fully ready for service. Tue, 08/04/2015 – 14:54



    GeSI home: thought leadership on social and environmental ICT sustainability



    Building a sustainable world through responsible, ICT-enabled transformation

    Developing key tools, resources and best practices to be part of the sustainability solution

    Providing a unified voice for communicating with ICT companies, policymakers and the greater sustainability community worldwide

    UNFCCC / Momentum for Change

    How digital solutions will drive progress towards the sustainable development goals

    SMARTer2030 Action Coalition

    Members

    Project Portfolio

    Thought Leadership

    News Events

    Interview with Carmen Hualda, CSR Manager at Atlinks Holding: Atlinks Holding is the winner of this year's Leadership Index in the Manufacture & Assembly of ICT Equipment sector (SMEs). We speak to their CSR-QHSE Manager, Carmen Hualda.

    Big Data for big impact: Let's accelerate sustainability progress: We now live in an era of exponential growth for data flows, driven by the proliferation of connected objects in the Internet of Things (IoT) ecosystem.

    Innovating our way to the SDGs, a forum summary report: The Global e-Sustainability Initiative (GeSI) and Verizon recently hosted a multi-stakeholder forum to identify the potential for information and communications technology (ICT) to catalyze progress towards the 17 UN Sustainable Development Goals (SDGs). Leaders from the ICT industry, other industry sectors, the technology startup sector, the financial community, sustainability NGOs, academia, multilateral organizations, government, and media convened at the Verizon Innovation Center in San Francisco to spend a day focusing on the potential for innovative technology to address four priority solutions core to advancing the SDGs: (1) food and agriculture; (2) energy and climate; (3) smart, sustainable communities; (4) public health.

    To practise what we preach the GeSI website is hosted on an environmentally-friendly data centre located in Toronto, Canada. Green methods were employed wherever possible in the construction and for the ongoing and future operation of the data centre.

    Become a Member

    Each of us has the opportunity to help change the world. Join GeSI to work directly with members of the ICT sector and the greater sustainability community worldwide to alter the direction of how technology influences sustainability.


    A multiplayer MMO game



    Europe

    North America

    So adventurer, you wish to become a gold member!?

    Upgrading your agma account to GOLD means you will receive numerous benefits in the game and in the forums. You will be marked as one of the elite, and people will look up to you in the game. Your membership and donator rank will be visible when you chat and play. Plus more benefits:

    Get instant access to exclusive Gold member skins, a gold nickname when you play, a gold crown when you chat, unlimited fast feed by holding W, and more!

    Gold Nickname ingame

    White Sparkly forum nickname color

    Unlimited Change forum username

    Double Starting Mass

    Unlimited Freeze your own cell (F key)

    A wearable VIP Hat(Coming Soon)

    Gold crown when you chat (coming soon)

    Donator Rank for Charity

    Unlimited Fast feed
    (Hold W)

    And more benefits

    Custom Skins – Coming Soon!

    Upload your own skins here!

    How it works:
    1. Buy an empty skin slot
    2. Upload an image of your choice
    3. New skins must be approved by staff before use
    4. Play with your skin after approval

    Image requirements:
    – png files only
    – maximum file size: 1MB
    – recommended dimension: 512 x 512 pixels (min = 128 x 128, max = 1024 x 1024)
    – no abusive or inappropriate content

    Upload a YouTube video of Agma.io!

    Record and upload a gameplay video of agma.io, and include your skin image and name in your video description. We will find your video on YouTube and add your skin to the game!

    Wearables

    Abilities

    Ice Barrage – Freeze Opponent

    Macro Split

    (1 day)

    Freeze yourself

    (F Key)

    Sleight of Hand

    (1 day – Fast Feed)

    2x Spawn Size

    (1 day)

    2x Exp

    2x Speed

    (1 day)

    Locked (Requires level 10)

    Locked (Requires level 20)

    Minions

    Minions/Bots are cells which will follow your mouse or your cell, and suicide into you giving you their mass. You can control them by splitting them or ejecting mass from them.
    Minions are highly sought after, and only the most precious experiences are gained when playing with these 100% smooth minions! What are you waiting for? Give them a try!
    Click here to turn on Minion Panel Interface (lets you start minions ingame – Top of screen).

    10 Bots

    1 hour


    SQL Server 2012's Hardware and Software Requirements



    SQL Server 2012 s Hardware and Software Requirements

    SQL Server 2012 is designed to run on a wide range of computer systems, from laptop and desktop systems to supercomputer-class systems, so its minimum hardware requirements are surprisingly low. The minimum processor requirement is a 1.0GHz CPU for a 32-bit x86 implementation and a 1.4GHz CPU for a 64-bit x64 implementation. Microsoft's recommended minimum processor speed is 2.0GHz.

    The minimum memory requirements for SQL Server 2012 are also quite low. The low-end SQL Server 2012 Express edition requires a minimum of 512MB of RAM, whereas the other editions require a minimum of 1GB. Microsoft's recommended minimum RAM for SQL Server is 4GB.

    These days, it's hard to buy even a desktop system with anything lower than a 1GHz processor and 1GB of RAM, so these hardware requirements shouldn't be a problem for most businesses. Of course, most production implementations will require more processing power and greater amounts of memory.
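The minimums quoted above are easy to capture in a small helper for sanity-checking candidate hardware. This is only a convenience sketch encoding the figures from this article; the function and edition keys are illustrative, not part of any Microsoft tooling:

```python
# Minimum figures quoted in the text for SQL Server 2012.
MIN_CPU_GHZ = {"x86": 1.0, "x64": 1.4}
RECOMMENDED_CPU_GHZ = 2.0
MIN_RAM_MB = {"Express": 512}   # all other editions: 1GB minimum
DEFAULT_MIN_RAM_MB = 1024
RECOMMENDED_RAM_MB = 4096

def meets_minimums(arch: str, cpu_ghz: float, ram_mb: int,
                   edition: str = "Standard") -> bool:
    """Check a machine spec against the documented minimums.
    Encodes only the figures quoted in this article, not
    Microsoft's full support matrix."""
    min_ram = MIN_RAM_MB.get(edition, DEFAULT_MIN_RAM_MB)
    return cpu_ghz >= MIN_CPU_GHZ[arch] and ram_mb >= min_ram
```

For example, a 1.0GHz x64 box fails the check (the x64 floor is 1.4GHz), while a 1.0GHz x86 box with 512MB passes only for the Express edition.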

    Each SQL Server 2012 edition has different OS requirements. In addition, the 32-bit x86 versions and the 64-bit x64 versions of the SQL Server 2012 editions have somewhat different OS requirements. The following table lists all the supported Windows OSs for the principal editions of SQL Server 2012.

    Supported Windows OSs for the Principal Editions of SQL Server 2012

    SQL Server Enterprise

    Windows OSs that support 32-bit SQL Server:

    • Windows Server 2012 64-bit Datacenter, Standard, Essentials, and Foundation editions
    • Windows Server 2008 R2 SP1 64-bit Datacenter, Enterprise, Standard, and Web editions
    • Windows Server 2008 SP2 64-bit Datacenter, Enterprise, Standard, and Web editions
    • Windows Server 2008 SP2 32-bit Datacenter, Enterprise, Standard, and Web editions

    Windows OSs that support 64-bit SQL Server:

    • Windows Server 2012 64-bit Datacenter, Standard, Essentials, and Foundation editions
    • Windows Server 2008 R2 SP1 64-bit Datacenter, Enterprise, Standard, and Web editions
    • Windows Server 2008 SP2 64-bit Datacenter, Enterprise, Standard, and Web editions

    SQL Server Business Intelligence

    Windows OSs that support 32-bit SQL Server:

    • Windows Server 2012 64-bit Datacenter, Standard, Essentials, and Foundation editions
    • Windows Server 2008 R2 SP1 64-bit Datacenter, Enterprise, Standard, and Web editions
    • Windows Server 2008 SP2 64-bit Datacenter, Enterprise, Standard, and Web editions
    • Windows Server 2008 SP2 32-bit Datacenter, Enterprise, Standard, and Web editions

    Windows OSs that support 64-bit SQL Server:

    • Windows Server 2012 64-bit Datacenter, Standard, Essentials, and Foundation editions
    • Windows Server 2008 R2 SP1 64-bit Datacenter, Enterprise, Standard, and Web editions
    • Windows Server 2008 SP2 64-bit Datacenter, Enterprise, Standard, and Web editions

    SQL Server Standard

    Windows OSs that support 32-bit SQL Server:

    • Windows Server 2012 64-bit Datacenter, Standard, Essentials, and Foundation editions
    • Windows Server 2008 R2 SP1 64-bit Datacenter, Enterprise, Standard, Foundation, and Web editions
    • Windows 8 32-bit and 64-bit
    • Windows 8 Pro 32-bit and 64-bit
    • Windows 7 SP1 64-bit Ultimate, Enterprise, and Professional editions
    • Windows 7 SP1 32-bit Ultimate, Enterprise, and Professional editions
    • Windows Server 2008 SP2 64-bit Datacenter, Enterprise, Standard, Foundation, and Web editions
    • Windows Server 2008 SP2 32-bit Datacenter, Enterprise, Standard, and Web editions
    • Windows Vista SP2 64-bit Ultimate, Enterprise, and Business editions
    • Windows Vista SP2 32-bit Ultimate, Enterprise, and Business editions

    Windows OSs that support 64-bit SQL Server:

    • Windows Server 2012 64-bit Datacenter, Standard, Essentials, and Foundation editions
    • Windows Server 2008 R2 SP1 64-bit Datacenter, Enterprise, Standard, Foundation, and Web editions
    • Windows 8 64-bit
    • Windows 8 Pro 64-bit
    • Windows 7 SP1 64-bit Ultimate, Enterprise, and Professional editions
    • Windows Server 2008 SP2 64-bit Datacenter, Enterprise, Standard, Foundation, and Web editions
    • Windows Vista SP2 64-bit Ultimate, Enterprise, and Business editions

    For a more complete discussion of SQL Server 2012, see Migrating to SQL Server 2012 .


    High availability for Bitbucket Server – Atlassian Documentation



    High availability for Bitbucket Server

    This page describes how to set up a single Bitbucket Server instance in a highly available configuration.

    For Active/Active high availability with Bitbucket Server, see Bitbucket Data Center instead.

    For guidance on using Bitbucket Data Center as part of your disaster recovery strategy, see the Disaster recovery guide for Bitbucket Data Center.

    If Bitbucket Server is a critical part of your development workflow, maximizing application availability becomes an important consideration. There are many possible configurations for setting up an HA environment for Bitbucket Server, depending on the infrastructure components and software (SAN, clustered databases, etc.) you have at your disposal. This guide provides a high-level overview and the background information you need to set up a single Bitbucket Server instance in a highly available configuration.

    Note that Atlassian's Bitbucket Data Center product uses a cluster of Bitbucket Server nodes to provide Active/Active failover. It is the deployment option of choice for larger enterprises that require high availability and performance at scale, and is fully supported by Atlassian. Read about Failover for Bitbucket Data Center.

    High availability

    High availability describes a set of practices aimed at delivering a specific level of availability by eliminating and/or mitigating failure through redundancy. Failure can result from unscheduled down-time due to network errors, hardware failures or application failures, but can also result from failed application upgrades. Setting up a highly available system involves:

    • Change management (including staging and production instances for change implementation)
    • Redundancy of network, application, storage and databases
    • Monitoring system(s) for both the network and applications
    • Technical failover mechanisms, either automatic or scripted semi-automatic with manual switchover
    • Standard Operating Procedure for guided actions during crisis situations

    This guide assumes that processes such as change management are already covered and will focus on redundancy / replication and failover procedures. When it comes to setting up your infrastructure to quickly recover from system or application failure, you have different options. These options vary in the level of uptime they can provide. In general, as the required uptime increases, the complexity of the infrastructure and the knowledge required to administer the environment increases as well (and by extension the cost goes up as well).

    Understanding the availability requirements for Bitbucket Server

    Central version control systems such as Subversion, CVS, ClearCase and many others require the central server to be available for any operation that involves the version control system. Committing code, fetching the latest changes from the repository, switching branches or retrieving a diff all require access to the central version control system. If that server goes down, developers are severely limited in what they can do. They can continue coding until they’re ready to commit, but then they’re blocked.

    Git is a distributed version control system and developers have a full clone of the repository on their machines. As a result, most operations that involve the version control system don’t require access to the central repository. When Bitbucket Server is unavailable developers are not blocked to the same extent as with a central version control system.

    As a result, the availability requirements for Bitbucket Server may be less strict than the requirements for say Subversion.

    Consequences of Bitbucket Server unavailability

    Single node (typical downtime: 2-10 minutes for an application failure; hours to days for a system failure)

    • Single node, no secondary server available
    • Application and server are monitored
    • Upon failure of the production system, automatic restarting is conducted via scripting
    • Disk or hardware failure may require reprovisioning of the server and restoring application data from a backup

    Cold standby

    • Secondary server is available
    • Bitbucket Server is NOT running on the secondary server
    • Filesystem and (optionally) database data is replicated between the 'active' server and the 'standby' server
    • All requests are routed to the 'active' server
    • On failure, Bitbucket Server is started on the 'standby' server and shut down on the 'active' server. All requests are now routed to the 'standby' server, which becomes 'active'.

    Warm standby

    • Secondary server is available
    • Bitbucket Server is running on both the 'active' server and the 'standby' server, but all requests are routed to the 'active' server
    • Filesystem and database data is replicated between the 'active' server and the 'standby' server
    • On failure, all requests are routed to the 'standby' server, which becomes 'active'
    • This configuration is currently not supported by Bitbucket Server, because Bitbucket Server uses in-memory caches and locking mechanisms. At this time, Bitbucket Server only supports a single application instance writing to the Bitbucket Server home directory at a time.

    Active/Active (Bitbucket Data Center)

    • Provided by Bitbucket Data Center, using a cluster of Bitbucket Server nodes and a load balancer
    • Bitbucket Server is running, and serving requests, on all cluster nodes
    • Filesystem and database data is shared by all cluster nodes. Clustered databases are not yet supported.
    • All requests are routed to the load balancer, which distributes requests to the available cluster nodes. If a cluster node goes down, the load balancer immediately detects the failure and automatically directs requests to the other nodes within seconds.
    • Bitbucket Data Center is the deployment option of choice for larger enterprises that require high availability and performance at scale.

    Automatic correction

    Before implementing failover solutions for your Bitbucket Server instance consider evaluating and leveraging automatic correction measures. These can be implemented through a monitoring service that watches your application and performs scripts to start, stop, kill or restart services.

    1. A Monitoring Service detects that the system has failed.
    2. A correction script attempts to gracefully shut down the failed system.
      1. If the system does not properly shut down after a defined period of time, the correction script kills the process.
    3. After it is confirmed that the process is not running anymore, it is started again.
    4. If this restart solved the failure, the mechanism ends.
      1. If the correction attempts are not or only partially successful a failover mechanism should be triggered, if one was implemented.
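The correction sequence above can be scripted generically. A minimal Python sketch, with the health check and service-control actions injected as callables; all names here are illustrative, not part of any Bitbucket tooling:

```python
import time

def auto_correct(is_healthy, stop, kill, start, is_running,
                 shutdown_grace: float = 30.0, poll: float = 1.0) -> bool:
    """One pass of the monitor-and-restart cycle described above.
    Returns True if the service is healthy afterwards; False means
    a failover mechanism should be triggered instead."""
    if is_healthy():
        return True                      # step 1: no failure detected
    stop()                               # step 2: attempt graceful shutdown
    deadline = time.monotonic() + shutdown_grace
    while is_running() and time.monotonic() < deadline:
        time.sleep(poll)
    if is_running():
        kill()                           # step 2.1: force-kill after grace period
    start()                              # step 3: restart the service
    return is_healthy()                  # step 4: healthy again, or escalate
```

In practice, is_healthy might poll an HTTP status endpoint and stop/start would wrap the service scripts; a False return is the signal to trigger a failover mechanism, if one was implemented.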

    Cold standby

    The cold standby (also called Active/Passive) configuration consists of two identical Bitbucket Server instances, where only one server is ever running at a time. The Bitbucket home directory on each of the servers is either a shared (and preferably highly available) network file system or is replicated from the active to the standby Bitbucket Server instance. When a system failure is detected, Bitbucket Server is restarted on the active server. If the system failure persists, a failover mechanism is started that shuts down Bitbucket Server on the active server and starts Bitbucket Server on the standby server, which is promoted to ‘active’. At this time, all requests should be routed to the newly active server.
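The failover step itself is a short, strictly ordered sequence: stop the failed active node first (so only one instance ever writes to the home directory), start the standby, then repoint routing. A hedged sketch with the node-control actions injected as callables; all names are illustrative, and a real setup would wrap your init scripts and load balancer or virtual-IP tooling:

```python
def cold_standby_failover(stop_active, start_standby, standby_healthy, reroute) -> None:
    """Promote the standby server as described above: shut down the
    failed 'active' node, start Bitbucket Server on the 'standby'
    node, verify it, then route all requests to the new active node."""
    stop_active()        # ensure only one instance writes to the home directory
    start_standby()
    if not standby_healthy():
        raise RuntimeError("standby did not come up; manual intervention needed")
    reroute()            # e.g. update the load balancer or move the virtual IP
```

The ordering matters: rerouting before the standby is confirmed healthy would send users to a dead node, and starting the standby before stopping the active node risks two writers on the shared home directory.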

    For each component in the chain of high availability measures, there are various implementation alternatives. Although Atlassian does not recommend any particular technology or product, this guide gives options for each step.

    System setup

    This section describes one possible configuration for how to set up a single instance of Bitbucket Server for high availability.


    Minecraft Server Hosting



    High Performance Minecraft Server Hosting

    Dedicated Team

    Our dedicated team understands the need for fast and reliable service when it comes to your minecraft server hosting. We’re here to provide top notch service and support, and we’re always happy to answer any questions you might have.

    Friendly Support

    If you ever have any questions or issues with any of our hosting solutions, just let us know! Our team is ready to answer and resolve any questions and issues you might have. If you’re not a customer and have any questions, we’ll be happy to answer them!

    Always Improving

    We believe there’s no such thing as a “perfect” minecraft hosting company. That’s why we’re constantly working on improving our services and control panel to ensure that every customer is receiving the best service for the money they pay.

    High Performance Servers

    Enterprise Grade Hardware, High Performance Server Hosting

    Powerful Hardware

    The servers we use to host Minecraft servers are configured with hand picked enterprise grade hardware to make them run as efficiently and reliably as possible.

    Solid State Drives

    Every server we build is equipped with high-speed solid state drives (SSDs). This not only makes servers perform fast, but also reduces lag for players.

    Gigabit Network

    Stop worrying about lag! Every server in our network is connected to a gigabit (1Gbps) network. Our high-speed network reduces lag and lets you enjoy the game.

    Great Location

    Located in secure, state-of-the-art datacenters, our servers are not only secured but also provide a great connection to you and other players.

    Our Mission

    NetherBox is run by the very finest in the Minecraft server hosting industry. We have a strong passion to develop the Minecraft community, by providing inexpensive servers with premium service. We are firm believers that good business stems from a good product, and we work tirelessly to ensure each and every level of our service leads the industry standard.

    Our Partners


    OneProvider – Dedicated servers



    What is OneProvider?

    Customer Panel billing system

    • OneProvider’s unique OnePanel
    • Unified billing system
    • 7 node monitoring system

    Enhanced Customer Support

    • 24 / 7 / 365 Technical Support
    • Server monitoring Remote Reboot
    • Ticket system and LiveChat


    Why a Dedicated server?

    Dedicated servers are more flexible than shared hosting, as your organization has full control over the server(s), including choice of operating system and hardware. Try the power of our worldwide hosting service today!


    Dedicated Server Hosting UK – Dedicated Hosting – Cheap Linux Hosting



    Choose Your Plans

    Monthly Priced by the Year

    • Buy Annual and Save NOW!
    • Set-Up Fee
    • Availability
    • CPU Model [Intel 64bit]
    • CPU Cores [Threads]
    • Memory [DDR3 RAM] [GB]
    • Storage HDD [Hard Disk = Volume] [GB]
    • Storage SSD [Solid State = Speed] [GB]
    • RAID Capability [Storage Redundancy]
    • IPv4 Addresses Availability [1 included]
    • HIGH SPEED Bandwidth [Mbps]
    • MODERN Latest Generation HARDWARE
    • HIGHEST Performance & Reliability
    • FULL Hardware & Network WARRANTY
    • ECC RAM [Error Check & Correct]
    • Storage Choices [Select Options]
    • Hybrid Storage Enabled [HDD+SSD]
    • GIGABIT Network Port
    • IPv6 Addresses Included
    • UNLIMITED Network Transfer
    • FULL Super-User ACCESS [SSH & WEB]
    • CONTROL PANEL Installed [Host Ready]
    • FLEXIBLE Control Panel Choice
    • Managed SUPPORT Included 24×7 [L1]
    • Flexible MANAGED SUPPORT [L2 L3 VIP]
    • FREE Server DOMAIN Name [Life] 1yr
    • SECURE Server CERTIFICATE [SSL] 1yr
    • VIRTUALISATION ENABLED
    • RECOMMENDED Usage [Best Fit]

    4UK-200

    (With annual price Excl. VAT)

    Powerful Features

    • Host Domains & Web Sites
    • Host E-Mail & Web-Mail Accounts
    • File & Server Management
    • Award Winning Multi-Tier Support
    • Cancel anytime
    • No hidden charges
    • Free Migration Services
    • Webmin Server Panel
    • Webmin + Virtualmin Hosting Panel
    • cPanel / WebHost Manager
    • Apps Easy Installer [Softaculous!]
    • Basic Support 24×7 [L2]
    • Advanced Support 24×7 [L3]
    • Full Support Managed [VIP+L3+L2]
    • Offsite Backup Set-Up [Monthly one-off]
    • Offsite Backup Small [SSD plans]
    • Offsite Backup Medium [HDD plans]
    • Offsite Backup Large [RAID plans]

    4UK-200

    4UK-300

    4UK-400

    4UK-500

    Real package specification for Dedicated Processor, Memory & Storage. Fully Allocated on boot

    Top-notch modern memory sizing, now twice as much (and even more) as market-wide offerings

    No setup fees, 24/7 support included, migration options and services, flexible extra options, professional expert consultancy, backup vault

    Unmetered & Unlimited Fair Share on Acceptable Usage for Web Hosting

    Operating System & Database Services benefit greatly from fast storage, targeted everywhere for provisioning

    Web serving, Email and User Data now get Twice the Space they need, special packages oriented towards content and caching

    RUN your favourite Linux & BSD OS with in-house support: CentOS (L1-L3), Fedora, Debian, Arch, FreeBSD (L3), OpenBSD (L3)

    Where supported by OS the complete suite of various control panels & management tools pre-installed for easy start up

    Super competitive pricing for value packed base packages, truly outstanding and industry lowest pricing for the offer

    Dedicated Hosting Server UK

    With cheap Linux dedicated hosting in the UK, an entire server system is dedicated to your website. You do not have to share the server system with any other website or client. You get complete authority over the independent server system to run any kind of application you want.

    Key benefits of UK Linux Dedicated Servers

    Linux dedicated servers in the UK offer you maximum flexibility to use the entire server's resources for the best performance of your hosted websites. With no sharing of the server, your websites and online resources are fully under your control. The range of servers comes with huge RAM and storage and can be fully managed by our round-the-clock Technical Support Team.

    Over 300 FREE Scripts with 1-click install using Softaculous!


    Cloud Computing – Dedicated Server – VPS – Managed Hosting – Hosted



    Public Cloud

    With Public Cloud you do not need to build your own hardware platforms. Instead, you can take full advantage of the great scalability and flexibility that underpin these services. You rent the resources and capacity that match your needs, and when those needs change, you can quickly and easily add or remove resources. Read more about Public Cloud

    • Virtual Private Server With a VPS you get the feel of your own server at a low cost. Within minutes you have your own server with the operating system of your choice.
    • Elastic Cloud Server An easily administered cloud server that can be upgraded/downgraded as needed, where you pay for the time used. Automatic scalability, enormous power.
    • Hosted Exchange A secure email solution, including shared folders, contacts and calendars, where all data and filters are stored in Sweden – Microsoft Exchange 2010.
    • Web hosting Affordable storage space for websites on Windows or Linux servers, with email accounts, free application installs, a free domain and support.
    • Hosted Hyper-V A powerful virtual server in the cloud with dedicated resources in a redundant environment with failover – Windows Server 2008 R2.
    • Hosted VMware A virtual data center where you quickly build, replicate or move your scalable cloud as you wish.
    • Hosted Azure Pack An easy-to-manage, portable and flexible virtual data center in Scandinavia as an IaaS/PaaS service within Microsoft's Cloud OS ecosystem.

    Private Cloud

    With Private Cloud you rent your own platform in our Swedish data center, fully dedicated to you. You do not even share computing power with anyone. You can design it yourself, or Ipeer's specialists can tailor a platform optimized for your requirements, goals and budget. Our experienced technicians can also handle operations, so you can focus on your core business. Read more about Private Cloud

    • Dedicated server Complete and cost-effective dedicated server solutions – Windows or Linux – with high capacity and availability.
    • Tailored hosting Here you adapt hardware, offsite backup and load balancing or clustering entirely to your needs. Our experienced specialists can handle operations.
    • Colocation Place your server solution in our data center and experience the highest possible availability and security, with specialists monitoring around the clock.
    • Hosted VMware Hybrid Get multisite redundancy and fast scalability without compromising security by extending your private cloud into ours, with Disaster Recovery.
    • Hybrid Hosted Azure Pack Great scalability with security maintained – extend your Hyper-V cloud to Scandinavian data centers with proven Microsoft Cloud OS technology.

    We are available 24/7/365


    T-SQL Programming Part 5 – Using the CASE Function




    Have you ever wanted to replace a column value with a different value based on the original column value? Learn how, with the T-SQL CASE function.

    The CASE function is a very useful T-SQL function. With this function you can replace a column value with a different value based on the original column value. An example of where this function might come in handy is where you have a table that contains a column named SexCode, where 0 stands for female, 1 for male, etc. and you want to return the value “female” when the column value is 0, or “male” when the column value is 1, etc. This article will discuss using the CASE function in a T-SQL SELECT statement.

    The CASE function allows you to evaluate a column value on a row against multiple criteria, where each criterion might return a different value. The first criterion that evaluates to true will be the value returned by the CASE function. Microsoft SQL Server Books Online documents two different formats for the CASE function. The “Simple Format” looks like this:
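    The syntax block did not survive in this copy; as documented in Books Online, the "Simple Format" is:

    ```sql
    CASE input_expression
        WHEN when_expression THEN result_expression
        [ ...n ]
        [ ELSE else_result_expression ]
    END
    ```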

    And the “Searched Format” looks like this:
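    Again as documented in Books Online, the "Searched Format" is:

    ```sql
    CASE
        WHEN Boolean_expression THEN result_expression
        [ ...n ]
        [ ELSE else_result_expression ]
    END
    ```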

    Here the “input_expression” is any valid Microsoft SQL Server expression; the “when_expression” is the value against which the input_expression is compared; the “result_expression” is the value the CASE statement returns when its “when_expression” evaluates to true; “...n” indicates that multiple WHEN conditions can exist; the “else_result_expression” is the value returned when no “when_expression” evaluates to true; and, in the “Searched Format”, the “Boolean_expression” is any Boolean expression that, when it evaluates to true, returns the “result_expression”. Let me go through a couple of examples of each format to help you better understand how to use the CASE function in a SELECT statement.

    For the first example let me show you how you would use the CASE function to display a description, instead of a column value that contains a code. I am going to use my earlier example that I described at the top of this article where I discussed displaying “female” or “male” instead of the SexCode. Here is my example T-SQL Code:
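    The original code listing was lost in this copy; here is a sketch consistent with the description that follows. The Patient table and PatientID column are assumptions for illustration:

    ```sql
    -- Simple Format: compare PatientSexCode against each WHEN value in turn
    SELECT PatientID,
           'Patient Sex' = CASE PatientSexCode
                               WHEN 0 THEN 'female'
                               WHEN 1 THEN 'male'
                               WHEN 2 THEN 'unknown'
                               ELSE 'Invalid PatientSexCode'
                           END
    FROM Patient
    ```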

    Here is the output from this T-SQL code:

    This example shows the syntax in action for a CASE function using the “Simple Format”. As you can see, the CASE function evaluates the PatientSexCode to determine if it is a 0, 1, or 2. If it is a 0, then “female” is displayed in the output for the “Patient Sex” column. If the PatientSexCode is 1, then “male” is displayed, or if PatientSexCode is 2 then “unknown” is displayed. Now if the PatientSexCode is anything other than a 0, 1 or 2, then the “ELSE” condition of the CASE function is used and “Invalid PatientSexCode” is displayed for the “Patient Sex” column.

    Now the same logic could be written using a “Searched Format” for the CASE function. Here is what the SELECT statement would look like for the “Searched Format”:
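    That listing is also missing here; the “Searched Format” equivalent would look like this (same assumed Patient table as above):

    ```sql
    -- Searched Format: each WHEN carries its own Boolean expression
    SELECT PatientID,
           'Patient Sex' = CASE
                               WHEN PatientSexCode = 0 THEN 'female'
                               WHEN PatientSexCode = 1 THEN 'male'
                               WHEN PatientSexCode = 2 THEN 'unknown'
                               ELSE 'Invalid PatientSexCode'
                           END
    FROM Patient
    ```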

    Note the slight differences between the “Simple” and “Searched” formats. In the “Simple” format I specified the column name whose row values will be compared against the “when_expressions”, whereas in the “Searched” format each WHEN condition contains a Boolean expression that compares the PatientSexCode column against a code value.

    Now the CASE function can be considerably more complex than the basic examples I have shown. Suppose you want to display a value that is based on two different columns values in a row. Here is an example that determines if a Product in the Northwind database is of type Tins or Bottles, and is not a discontinued item.
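    The query itself did not survive in this copy; here is a sketch matching the description below, using the Northwind Products table. The LIKE patterns are my assumption for how “contains” was expressed:

    ```sql
    -- Two conditions per WHEN clause: unit type AND not discontinued
    SELECT ProductName,
           'Type of Availability' = CASE
               WHEN QuantityPerUnit LIKE '%Tins%'    AND Discontinued = 0 THEN 'Tins'
               WHEN QuantityPerUnit LIKE '%Bottles%' AND Discontinued = 0 THEN 'Bottles'
               ELSE 'Not Tins, Not Bottles, or is Discontinued'
           END
    FROM Northwind.dbo.Products
    ```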

    The output for the above command on my server displays the following:

    As you can see, I’m using a “Searched Format” for this CASE function call. Also, each WHEN clause contains two different conditions: one to determine the type (tins or bottles) and another to determine whether the product has been discontinued. If the QuantityPerUnit contains the string “Tins” and the Discontinued column value is 0, then the “Type of Availability” is set to “Tins”. If the QuantityPerUnit contains the string “Bottles” and the Discontinued column value is 0, then the “Type of Availability” is set to “Bottles”. For all other conditions, the “Type of Availability” is set to “Not Tins, Not Bottles, or is Discontinued”.

    The WHEN clauses in the CASE function are evaluated in order. The first WHEN clause that evaluates to “True” determines the value that is returned from the CASE function. Basically, if multiple WHEN clauses evaluate to “True”, only the THEN value for the first WHEN clause that evaluates to “True” is used as the return value for the CASE function. Here is an example where multiple WHEN clauses are “True.”
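    That example is also missing from this copy; here is a sketch against the pubs titles table, with the price thresholds taken from the discussion of the result below (the second and third category labels are assumptions):

    ```sql
    -- A $2.99 price satisfies BOTH conditions; the first match ('Cheap') wins
    SELECT title, price,
           'Price Category' = CASE
               WHEN price < 12.00 THEN 'Cheap'
               WHEN price < 3.00  THEN 'Really Cheap'
               ELSE 'Expensive'
           END
    FROM pubs.dbo.titles
    ```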

    The output on my machine for this query looks like this:

    If you look at the raw titles table data in the pubs database for the title “You Can Combat Computer Stress!” you will note that the price for this book is $2.99. This price makes both the “price < 12.00” and “price < 3.00” conditions “True”. Since the conditions are evaluated one at a time, and the “price < 12.00” condition is evaluated prior to the “price < 3.00” condition, the “Price Category” for the title “You Can Combat Computer Stress!” is set to “Cheap”.

    The CASE function can appear in different places within the SELECT statement; it does not have to be only in the selection list. Here is an example where the CASE function is used in the WHERE clause.
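    The query is again missing from this copy; here is a sketch of a CASE function in the WHERE clause against the pubs titles table. The price thresholds and the other category labels are assumptions; only the 'Average' category is stated in the text:

    ```sql
    -- Only rows whose CASE result is 'Average' survive the WHERE clause
    SELECT title, price
    FROM pubs.dbo.titles
    WHERE CASE
              WHEN price < 3.00  THEN 'Cheap'
              WHEN price < 12.00 THEN 'Average'
              ELSE 'Expensive'
          END = 'Average'
    ```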

    The output for this query looks like this:

    Here I only wanted to display books from the titles table in pubs database if the price category is ‘Average’. By placing my CASE function in the WHERE clause I was able to accomplish this.

    Conclusion

    As you can see the CASE function is an extremely valuable function. It allows you to take a data column and represent it differently depending on one or more conditions identified in the CASE function call. I hope that the next time you need to display or use different values for specific column data you will review the CASE function to see if it might meet your needs.

    See All Articles by Columnist Gregory A. Larsen


    Steps to Rebuild Master Database In SQL Server



    Rebuild Master Database in SQL Server Without Backup

    It has been observed that many SQL Server administrators, in fact the majority of them, back up their user databases regularly but turn a blind eye to the system databases. While most of them do not understand the value of these databases, some remain in their comfort zone owing to years without a failure. Whatever the reason, we will help you become acquainted with the methods to restore the system databases, should you find yourself in a crucial situation.

    Reasons Behind Master Database Rebuilding

    There are a number of reasons why the master database may need to be rebuilt. Some of them are:

    • At times, users delete crucial information such as linked servers, logins, SQL Server configuration settings, and other user objects.
    • The master database is not coming online and the admin does not have a backup of the master database.
    • It can also happen that the master database has been corrupted due to a hardware or software failure and can no longer be used.
    • The administrator might want to make a clone of the server or might want to restore the master database to a new instance.

    How to Rebuild Master Database in SQL Server?

    The following procedure will help you to rebuild the master system database. The entire process, which we will discuss in this write up, will be divided into three sections for the convenience of the users.

    1. Pre-Rebuild Process

    Since only the master database is corrupt, in order to ensure the authenticity of the other system database files, it is necessary to take their backup. The files that need to be backed up are MSDB Data, MSDB Log, model, and model Log. In order to do so, follow the below mentioned steps:

    • Open SQL Server Configuration Manager.
    • From the left pane, select SQL Server Services option. This will list all the services that are currently running.
    • Right click on the service and select Stop option for stopping all the services one by one.
    • Exit SQL Server Configuration Manager.
    • Browse to the location where all the system files of a particular SQL Server instance are stored.

    C:\Program Files\Microsoft SQL Server\[INSTANCE NAME]\MSSQL\DATA

    Note: Keep in mind this location, as this is where the master database files will be rebuilt automatically.

    • Cut all the healthy system files, save them in a new folder, and delete the corrupted master, mastlog, tempdb, and templog files.

    2. Rebuilding Process

    The steps that are followed for rebuilding are:

    • Open Command Prompt as Administrator.
    • Change the directory to the location where the SQL Server 2016 installation media is stored. Since my installation media is stored in the F: drive, I change the directory location by entering f: and pressing Enter.
    • The next step is to run the following command:

      Note: In case your account name contains a blank space, keep in mind to enclose the account name within quotes. Moreover, if you are specifying multiple accounts, separate them with a space.
    • When all the system databases are rebuilt, the command returns no message in the command prompt. In order to verify whether the process has completed successfully, examine the summary.txt file. The location of this file is:
      C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Logs
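    The rebuild command referenced in the steps above is not shown in this copy; the documented syntax for rebuilding the system databases looks like the following, where the instance name, admin account, and password are placeholders you must replace with your own values:

    ```shell
    REM Run from the root of the SQL Server installation media
    setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER ^
        /SQLSYSADMINACCOUNTS=DOMAIN\YourAdminAccount /SAPWD=YourStrongPassword
    ```

    The /SAPWD argument is only required when the instance uses mixed-mode authentication.
    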

    3. Post-Rebuild Process

    • The first step after rebuilding is to restart all the services, which were stopped earlier.
    • If you have backups of model and MSDB databases, restore them.
    • In case the backup is not present, simply replace the rebuilt MSDB and model files with the files that were backed up in the pre-rebuild section. (This should be done after stopping the services.)

    In this blog, we have described the full procedure for rebuilding the master database in the scenario where a backup is not available. We believe that with its help you will be able to rebuild the master database successfully.


    Are there any open source mobile device management tools?




    A number of free and open source mobile device management tools are available, but you may want to consider your goals before pursuing one.

    If your goal is to find a free mobile device management (MDM) tool, consider a basic device management tool or service such as Apple Configurator 2, Google Android Device Manager or Miradore Online. Note, however, that mobile device management tools’ management capabilities vary widely, so start by deciding what you’re trying to accomplish. For example, Apple Configurator can configure iPhones and iPads running iOS 7 or later, while Google’s service can find, lock or erase any device running Android 2.3 or later. Miradore offers a free cloud service for individuals and small teams, including capabilities such as device enrollment, inventory and profile configuration; if you want more enterprise features, you’ll pay a fee.



    SQL Server Rounding Functions – Round, Ceiling and Floor




    By: Jeremy Kadlec
    Problem

    I saw your recent tip on Calculating Mathematical Values in SQL Server and have some related issues as I try to round values in my application. My users and I have a difference of opinion on some of the calculations in our reporting applications. All of the code is in T-SQL, but I think the reporting issues are related to data types and rounding-down or rounding-up rules. Do you have any insight into these issues? I would like to see some examples with a variety of coding options.

    Solution

    Rounding can produce unexpected results if the underlying data types and rounding functions are not understood. Depending on the data type (integer, float, decimal, etc.), the rounded value can be different. In addition, depending on which SQL Server rounding function (ROUND(), CEILING() or FLOOR()) is used in the calculation, the values can differ as well. As such, it is important to find out the user rounding requirements and then translate those requirements into the appropriate T-SQL command.

    From a definition perspective, let’s start here:

    • ROUND – Rounds a positive or negative value to a specific length and accepts three values:
      • Value to round
        • Positive or negative number
        • This data type can be an int (tiny, small, big), decimal, numeric, money or smallmoney
      • Precision when rounding
        • Positive number rounds on the right side of the decimal point
        • Negative number rounds on the left side of the decimal point
      • Truncation of the value to round occurs when this third value is supplied and is not 0; when it is 0 or not included, the value is rounded
    • CEILING – Evaluates the value on the right side of the decimal and returns the smallest integer greater than, or equal to, the specified numeric expression and accepts one value:
      • Value to round
    • FLOOR – Evaluates the value on the right side of the decimal and returns the largest integer less than or equal to the specified numeric expression and accepts one value:
      • Value to round

    Let’s walk through each function with a few different data types to see the results.

    SQL Server ROUND, CEILING and FLOOR Examples for Integer Data Types

    Example 1a – In this first example, let’s just look at how rounding a positive integer with a precision value of 1 yields the same value from all three rounding functions. In this example we are using a variable with the functions; check out the result commented out on the right of each function.
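    The listing for this example is missing from this copy; here is a sketch using 6 as the sample value (the value itself is an assumption):

    ```sql
    -- Rounding a positive integer: all three functions return the value unchanged
    DECLARE @value INT;
    SET @value = 6;
    SELECT ROUND(@value, 1);  -- 6
    SELECT CEILING(@value);   -- 6
    SELECT FLOOR(@value);     -- 6
    ```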

    Example 1b – Since the CEILING and FLOOR functions do not have any optional values, let’s test some options with the ROUND function. In this example, let’s see the impact of a negative number as the precision, as well as of specifying positions that exceed the digits of the value to round. Check out these results with the result commented out on the right of the function.
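    Again the listing is lost; here is a sketch of negative precision values, continuing with 6 (the value referenced in the reader comments below):

    ```sql
    -- Negative precision rounds on the left side of the decimal point
    DECLARE @value INT;
    SET @value = 6;
    SELECT ROUND(@value, -1);  -- 10  (6 rounds up to the nearest 10)
    SELECT ROUND(@value, -2);  -- 0   (6 is closer to 0 than to 100)
    SELECT ROUND(@value, -3);  -- 0
    ```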

    Example 1c – Let’s expand the digits in this example with the ROUND function and see the impact, with the result commented out on the right of the function.
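    This listing is also missing; a sketch with 444 (the value referenced in the reader comments below):

    ```sql
    -- More digits to the left of the decimal point
    DECLARE @value INT;
    SET @value = 444;
    SELECT ROUND(@value, -1);  -- 440
    SELECT ROUND(@value, -2);  -- 400
    SELECT ROUND(@value, -3);  -- 0   (444 is closer to 0 than to 1,000)
    SELECT ROUND(@value, -4);  -- 0
    ```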

    Example 1d – Let’s round a negative integer and see the impact, with the result commented out on the right of the function.
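    The listing is missing here too; a sketch with -6 as an assumed sample value:

    ```sql
    -- Negative integers: ROUND rounds away from zero, CEILING/FLOOR are unchanged
    DECLARE @value INT;
    SET @value = -6;
    SELECT ROUND(@value, -1);  -- -10
    SELECT CEILING(@value);    -- -6
    SELECT FLOOR(@value);      -- -6
    ```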

    SQL Server ROUND, CEILING and FLOOR Examples for Decimal, Numeric and Float Data Types

    Example 2a – Using a decimal data type, the ROUND function with various length parameters (i.e. 1, 2 or 3) yields different final values in our example. The 5 in the second digit to the right of the decimal point is significant when the length parameter is 1 when rounding the value. In addition, with the decimal data type the CEILING and FLOOR functions take the decimal places into consideration, yielding differing values as well.
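    The listing is lost in this copy; a sketch assuming a sample value of 123.45:

    ```sql
    -- With an exact numeric type, the trailing 5 rounds away from zero
    DECLARE @value DECIMAL(5, 2);
    SET @value = 123.45;
    SELECT ROUND(@value, 1);  -- 123.50
    SELECT ROUND(@value, 2);  -- 123.45
    SELECT ROUND(@value, 3);  -- 123.45
    SELECT CEILING(@value);   -- 124
    SELECT FLOOR(@value);     -- 123
    ```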

    Example 2b – Here is a quick example of using the numeric data type with the ROUND function. This follows much the same behavior as the decimal data type.
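    Again the listing is missing; a sketch with the same assumed sample value:

    ```sql
    -- NUMERIC behaves like DECIMAL for rounding purposes
    DECLARE @value NUMERIC(5, 2);
    SET @value = 123.45;
    SELECT ROUND(@value, 1);  -- 123.50
    SELECT CEILING(@value);   -- 124
    SELECT FLOOR(@value);     -- 123
    ```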

    Example 2c – In the final example, with a float data type you can see the same type of behavior as was the case with the decimal and numeric examples above with the ROUND, CEILING and FLOOR functions.
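    The final listing is also missing; a sketch with the same assumed sample value. Because FLOAT is an approximate type, results at boundary values can differ from the exact-numeric results above (as the reader comments below illustrate), so no results are asserted here:

    ```sql
    -- FLOAT stores an approximation, so rounding of tie values may surprise you
    DECLARE @value FLOAT;
    SET @value = 123.45;
    SELECT ROUND(@value, 1), CEILING(@value), FLOOR(@value);
    ```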

    Next Steps

    Last Update: 2017-01-31

    Technically, there isn’t an “insufficient number of digits” in example 1b. When rounding to the nearest 100 (or 1,000), 6 is just closer to zero. Same thing in 1c; 444 is closer to zero than to 1,000 or 10,000.

    Good tip and explanation. This is pretty logical overall, but sometimes you really need to stop and think it through. These examples are a great help with that.

    Monday, August 12, 2013 – 5:00:56 PM – Scott Coleman

    In answer to ClaudioRound’s question “why this rounding does not work” (for 172.765).

    Subtracting 128 from this value drops the two leftmost bits, so it gains two more fractional bits resulting in 101100.11000011110101110000101000111101011100001010010. (The mantissa is always 53 bits long in a float.) This is about 44.76500000000000057, so even though it has the same fractional digits the value of “ROUND(44.765, 2)” is 44.77.

    Another fun fact is that “ROUND(CAST(172.7650000000000160090000000000000099 AS FLOAT), 2)” returns 172.76, but if you add a trailing 0 then “ROUND(CAST(172.76500000000001600900000000000000990 AS FLOAT), 2)” returns 172.77. Don’t ask me why.

    The moral of the story is that if you really care about exact fractional values then don’t use FLOAT or REAL. Casting to DECIMAL before rounding may help.

    Wednesday, June 12, 2013 – 1:56:53 PM – Dave


    Cloud Server vs. Dedicated Server




    While comparing different server providers you have probably come across mentions of different kinds of hosting models, such as virtual private servers (VPS), dedicated servers and cloud servers. While any of these will undoubtedly get you started, it’s important to choose the service that would best suit your needs. The wisdom in your choice will come from knowing what each of these server types provides. In this post, we have collected information about different server options to help make the distinction among a broad range of available hosting services and service providers.

    Server models explained

    Before plunging into the deep end, let’s take a look at how each of the server types operates.

    Dedicated servers offer a close-to-metal implementation with little overhead, and they’ve traditionally been the go-to solution for high-performance, demanding tasks. As the name implies, each server is dedicated privately to one client. The customer receives access to a physical server with the agreed-upon hardware specifications, processing and storage, all in one unit.

    VPS clients get a share of a physical server for the hardware resources they’ve paid for, and multiple clients often share one physical host machine. From the client’s perspective a VPS barely differs from a dedicated server with a comparable low-to-mid-range configuration, but thanks to the virtualization layer, the server provider can maintain a uniform range of host hardware while offering multiple different virtual server configurations, which in turn translates to a wider range of server options and lower prices than with dedicated servers.

    Cloud servers are often confused with the VPS, as both are based on virtualization and come with many of the same advantages. Much of the definition, however, depends on the particular host provider. We at UpCloud have taken cloud computing to another level by creating a distinctly different virtualized environment. In dedicated servers and most virtual private servers the storage disks and the processing power are all on one physical host machine, but with UpCloud the storage backend and the compute nodes are run separately. This provides many advantages such as easy scalability and redundancy through automation over the traditional virtualization platforms while still guaranteeing a highly competitive performance and pricing. In short this is our definition of a cloud server regarding the technical backend.

    Server comparisons

    With the basic knowledge of how these server models are built, we have the basis to compare their performance. For our comparisons, we used an online benchmarking service at serverbear.com, which operated a benchmark site for many years hosting test results from a wide range of server providers but has since unfortunately been discontinued. ServerBear utilized an open source tool called UnixBench, which runs a thorough test of computing performance. ServerBear also used other methods to measure I/O performance and network speed, for example.

    The first comparison is between a mid-range dedicated server Dell R210 from LeaseWeb and UpCloud 4GB/4CPU preconfigured instance. These two were chosen due to their similar system specifications, both running a single CPU with 4 cores and 4 GB of RAM, both are also fairly closely priced.

    Dedicated server vs. Cloud server

    All three were quite similar in processor performance, but the advantage of the MaxIOPS storage shows up again in both read and write scores.

    While these benchmarks might make it seem like there are barely any differences between these server models when comparing pure performance, that only emphasizes how many of the advertised benefits of dedicated servers and VPSes alike are available in the modern cloud.

    The real benefit of cloud servers comes from offering great performance with superior reliability. As mentioned before, the storage and computing hardware running your server are physically separated to allow improved redundancy. Due to this, automation can assign new working compute nodes to your storage with minimal outages, which might not be the case with dedicated or traditional VPS environments.

    How the prices are calculated

    Just staring at performance numbers, however, will not portray the whole picture, and often the most important detail when comparing server providers is the price. But not all server models are priced the same way. Commonly, dedicated servers will require at minimum a monthly commitment. Some VPS providers have moved to hourly pricing following the example set by cloud hosting companies, but the rest are still more often paid on a monthly basis.

    The difference then comes from how these servers can be operated, as cloud servers offer simple configuration and fast deployment. Spinning up a server for a quick test would cost just a few cents. The same would not be possible with dedicated servers or the common VPSes with monthly charges.

    Cloud customers simply pay for what they need, when they need it. This also applies when a server is shut down. With most VPS hosts it makes no difference whether the server is running at full load or turned off, as all the resources for the server are kept reserved at all times, just as with dedicated servers.

    At UpCloud you only pay for what you use with our freely scalable servers. If you don’t need a particular server for a while, you can shut it down and save money on the CPU and RAM. Should you not want to keep the fixed IP address either, you can even delete the server while preserving the storage. With this approach, you can simply create a new instance to use the old storage later when you need it again.

    Making the selection

    Your needs will of course define which package would be the best fit for you, but as the results have shown, cloud servers are in almost all cases the better option. Below you can find a table with the main points of our comparison.


    Windows 2003 Performance Monitor




    The performance monitor, or system monitor, is a utility used to track a range of processes and give a real time graphical display of the results, on a Windows 2003 system. This tool can be used to assist you with the planning of upgrades, tracking of processes that need to be optimized, monitoring results of tuning and configuration scenarios, and the understanding of a workload and its effect on resource usage to identify bottlenecks.

    Bottlenecks can occur on practically any element of the network and may be caused by a malfunctioning resource, the system not having enough resources, or a program that dominates a particular resource. In fact, 40% network utilization is considered a bottleneck.

    Using perfmon will help to identify these bottlenecks and allow you to take action.

    It can be opened by navigating to the performance icon in the administrative tools folder in the control panel, from the start menu or by typing perfmon.msc in the run box.

    System Monitor


    Adding a counter

    Right click anywhere on the graph and choose Add Counter.

    The Add Counter box consists of the following options:


    • Computer: The source system of the object. You can choose to select the local computer or another computer on your network – type \\computer_name in the appropriate box.
    • Object: The subsystem of interest. This refers to the virtual part of the computer that you want to monitor. Memory, Processor or Network Interface, for example.
    • Counter: The aspect of performance of interest. This refers to what parts of the object you want to monitor – they differ depending on the object.
    • Instance: The specific object to be measured when multiple objects of the same type exist on a single system. For example, if you go to the Process performance object, the instances list will display all the active processes on the specified computer.

    System monitor properties

    Right click anywhere on the graph and choose Properties. This brings up the System Monitor Properties window that will allow you to customize the appearance and settings. You can change the view to graph, report or histogram style, the monitoring time interval and the colour of the counter lines, amongst others.

    Using the monitor for network-related performance

    The performance monitor can be a great tool to help with investigating the performance of your network. You are able to monitor things such as the Network Interface, TCP, UDP packet flow, terminal services sessions, and ICMP, amongst others. You can then compare the collected data and keep it as a record or use it for problem analysis.

    In my example I have chosen to use the Network Interface as the performance object.
    The following counters were added:
    Current Bandwidth – to display the amount of bandwidth the network interface has.
    Packets/Sec – to display the amount of packets transferred per second.
    Bytes Total/Sec – to display the total amount of bytes per second.

    The image below displays a graph of network activity that took place within the space of five minutes. The purple line represents the number of packets per second, the yellow line represents the total bytes per second and the light green line shows how much bandwidth is available.

    To simulate this activity I navigated to a share on another computer on the network and browsed through the folders.

    With the use of logs you are able to capture data that you can analyze later. Logged counter data information can be exported to spreadsheets and databases for future review and reporting. Alerts allow you to set an action that will be performed when specified counters reach a given value. These actions include sending a network message, executing a batch file, recording an item in the application log of the event viewer, and to start logging performance data.

    You can use Alerts to send out warnings when disk space is running low or when network or level of CPU utilization poses a risk.

    Logs

    There are two types of logging features:


    • Counter Logs: are used to record the measurements of specific counters
    • Trace Logs: are used to record memory and resource events.

    The above image displays the counter log window that allows you to specify which counters should be monitored. The schedule permits you to set the start and stop time of logging. Go to the Log Files tab if you want to customize the name, size and location of the log file.

    The above displays the trace log window, which allows you to change what events will be logged by the system provider. Click ‘Provider Status’ to bring up a window that shows which system trace log providers are available and their current status. If you wish to add non-system providers, select that option and press Add. You can run this process as a different user: type the username in the Run As box and press the Set Password button to enter the password of that user.

    Keep in mind that the more events you choose to log the more space will be required, especially if you choose page faults.

    Alerts

    Right-click anywhere on the white pane and choose “New Alert Settings” to bring up the properties window for a new alert. In my example I have set it to monitor packets received errors; if they exceed three, an alert will be triggered. The Schedule tab gives you the option to set the start and stop times of the scan.

    Bottlenecks not only slow down the entire system; they also prevent you from taking full advantage of your network infrastructure. Using the performance monitor on your Windows server will help you identify where the problem is coming from. With configuration and planning that suit your network environment, this tool lets the administrator tackle problems in less time and work more efficiently.




    Virtual Private Servers

    Introduction

    Welcome to VPSVille, one of the most advanced VPS service providers on the internet. Virtual Private Servers (VPS) are the future of web hosting. A VPS performs and executes exactly like a stand-alone server; your VPS can be rebooted independently and has root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files. This website is hosted on one of our VPSs.

    Focus

    We specialize in virtual private servers for online business and nothing else. We don’t sell domain names, website templates, SSL certificates or other services. Your VPS is our focus.

    Full Control

    You have the ability to modify any aspect of your hosting environment to suit your business requirements. You have full control of the server, and can install almost any software you wish on your VPS, such as custom software and your own firewall security policies.

    Stellar 24/7 Support

    VPSVille employs Linux geeks that are not afraid to get their hands dirty. Our administration staff is comfortable and experienced in troubleshooting complex system issues. We are always available to answer your questions and support your endeavors.

    Satisfaction Guaranteed

    VPSVille offers an unconditional 30 day money back guarantee on all services. We are confident that our VPS servers will meet your business requirements.

    Location

    Many companies claim to be in Canada, but their equipment is really elsewhere. With equipment genuinely in Canada, search engines such as google.ca will rank you higher and your site’s responsiveness will be better. VPSVille is located in downtown Toronto and is well connected to Canada’s fibre and exchange networks. We also own equipment in London, Dallas and Los Angeles to give you maximum flexibility as you expand.

    Instant Scalability

    We hope your business grows, and we are here to help make it happen. You can start with a small VPS and upgrade it when needed allowing you to keep your expenses low as you grow. Upgrades do not alter the data on your hard drive, so you can upgrade your VPS instantly when you need it.

    Smart Automation

    We streamline and automate in every way possible to save you time. You won’t need to contact customer support for common tasks involving your VPS. You will have access to installs, backups, DNS, Reverse DNS, account information, upgrades, downgrades and server reboots without speaking to anyone. We provide that information and control in our custom control panel.

    Web Control Panels

    As a web hosting company your main concern should not be managing your server, it should be building your business. Our affordable, web-based control panels (cPanel, DirectAdmin or LxAdmin) give you and your clients the ability to manage many aspects of the server via an intuitive and secure interface leaving you with time to spare.

    We are Green

    A VPS server is the green alternative to a dedicated server. Unused CPU cycles are given to other users to ensure power is not wasted. At VPSVille we feel it is our economic, social and environmental responsibility to do our part to reduce power consumption in all areas of our business.


    How can I make MS SQL Server available for connections? Stack Overflow



    I’m trying to connect to MS SQL Server (running on my machine) from a Java program. I’m getting the following long winded exception:

    Exception in thread “main” com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host localhost, port 1433 has failed. Error: “Connection refused: connect. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port.”.

    When I check “Properties” and click “View Connection Properties” in the Object Explorer of MS SQL, I find that the “Server is Unavailable.” This seems possibly related to the exception message.

    How can I make the server available?

    I am using SQL Server 2008, and I have now enabled TCP/IP, and restarted my instance. I am still told that “Server is unavailable.”

    Any other ideas?

    I ran into this problem as well. The MSKB article applies to SQL server 2005.

    As the “SQL Server Surface Area Configuration” tool has been replaced by “Facets”, this wasn’t obvious to me.

    I resolved this by setting the TCPAll port and enabling the relevant IP.

    Steps

    Open the Sql Server Configuration Manager (Start -> Programs -> Microsoft SQL Server 2008 -> Configuration Tools)

    Expand SQL Server Network Configuration -> [Your Server Instance]

    Double click TCP/IP

    Ensure Enabled is Yes

    Scroll to the bottom and set the TCP Port under IPAll (1433 by default)

    Find the IP address you want to connect to and set Enabled and Active to Yes
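    Once TCP/IP is enabled and the instance restarted, a quick way to confirm the port is actually reachable is a raw TCP probe. This is a minimal sketch assuming bash; the hostname localhost and the default port 1433 are assumptions, so adjust them for your instance:

```shell
# Probe a TCP host/port; prints "open" if a connection succeeds, "closed" otherwise.
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null \
    && echo "open" || echo "closed"
}

check_port localhost 1433
```

    If this prints "closed", re-check the Enabled/Active flags above and any firewall rules before blaming the JDBC driver.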

    Before messing about with connections, first check that the SQL Server Service is actually running.

    You can do this by either using the SQL Server Configuration Manager (located in the configuration tools folder) or in the standard services console in the Windows control panel.

    Once you have checked the service is up and running, you need to ensure that SQL Server has been configured to allow remote connections.

    See below for an explanation on how to do this:

    Open up Sql Server Configuration Manager. ( Start | Programs | Whatever version of sql server | Configuration Tools)

    Browse down to ‘SQL Server Services’ and restart your instance. Alternatively, you can do this in Management Studio by right-clicking the instance and selecting Restart.

    If the restart fails, check Computer Management | Event Viewer | Application and look for SQL Server events; both successful and error messages are recorded there.

    answered Jul 6 ’09 at 12:49

    Here are my screenshots. If you can’t read the words, then download the image or copy and paste the image into Paint.

    “Additional Properties” Tab

    Inside Microsoft SQL Server Management Studio:

    answered Jan 13 ’15 at 20:06

    First off, check that the SQL Server service is running. If you’re using SQL 2005 or 2008, check Configuration Manager (2008) or the Surface Area Configuration tool (2005) to make sure the TCP/IP protocol is enabled and TCP/IP connections are allowed. With SSE (Express) these are off by default, which would cause your problem. Also, in case you’re running multiple instances, you may need the SQL Browser service running. If this is the case, you should still be able to connect in Object Explorer by using (local) as the server address, since this will use a local/shared memory connection.

    answered Jul 6 ’09 at 12:52

    From 2005 and up, the SQL Server Browser service has to be running.
    That one fooled me many times.

    answered Sep 28 ’09 at 11:22



    How to get HTTPS: Setting up SSL on your website – Expert



    How to get HTTPS: Setting up SSL on your website

    If you are collecting ANY sensitive information on your website (including email and password), then you need to be secure. One of the best ways to do that is to enable HTTPS, also known as SSL (secure socket layers), so that any information going to and from your server is automatically encrypted. This prevents hackers from sniffing out your visitors’ sensitive information as it passes through the internet.

    Your visitors will feel safer on your site when they see the lock while accessing your website, knowing it’s protected by a security certificate.

    Overview

    The best thing about SSL is it’s simple to set up, and once it’s done all you have to do is route people to use HTTPS instead of HTTP. If you try to access your site by putting https:// in front of your URLs right now, you’ll get an error. That’s because you haven’t installed an SSL certificate. But don’t worry, we’ll walk you through setting one up right now!

    Setting up HTTPS on your website is very easy, just follow these 5 simple steps:

    1. Host with a dedicated IP address
    2. Buy a certificate
    3. Activate the certificate
    4. Install the certificate
    5. Update your site to use HTTPS

    Step 1: Host with a dedicated IP address

    In order to provide the best security, SSL certificates require your website to have its own dedicated IP address. Lots of smaller web hosting plans put you on a shared IP where multiple other websites are using the same location. With a dedicated IP, you ensure that the traffic going to that IP address is only going to your website and no one else’s.

    An affordable host I recommend for a dedicated IP is StableHost. At this time it’s under $6/month, but you can get it cheaper if you order for a full year. They’re my host and I’ve been blown away with their support and performance. Oh, and here’s a coupon for 40% off: expert40

    If you don’t have a plan with a dedicated IP, you can ask your current web host to upgrade your account to have a dedicated IP address. There will probably be a charge for it, either one-time or monthly fees.

    Step 2: Buy a Certificate

    Next you’ll need something that proves your website is your website, kind of like an ID card for your site. This is accomplished by creating an SSL certificate. A certificate is simply a paragraph of letters and numbers that only your site knows, like a really long password. When people visit your site via HTTPS, that password is checked, and if it matches, it automatically verifies that your website is who you say it is and it encrypts everything flowing to and from it.

    Technically this is something you can create yourself (called a self-signed cert), but all popular browsers check with Certificate Authorities (CAs), which also have a copy of that long password and can vouch for you. In order to be recognized by these authorities, you must purchase a certificate through them.

    NameCheap is where I buy my certificates. They have a few options, but the one that I find best is the GeoTrust QuickSSL. At this time it’s $46 per year, and it comes with a site seal that you can place on your pages to show you’re secure, which is good for getting your customers to trust you. You’ll simply buy it now, and then set it up by activating and installing it in the next steps.

    Step 3: Activate the certificate

    Note: Your web host may do this step for you, so check with them before proceeding. This can get complicated, and if you can wait 1-2 days it may be best to let them do it.

    If you’re activating the certificate yourself, the next step is to generate a CSR. It’s easiest to do this within your web hosting control panel, such as WHM or cPanel. Go to the SSL/TLS admin area and choose ‘Generate an SSL Certificate and Signing Request’. Fill out the fields in the screen below:

    Host to make cert for is your domain name, and the contact email can be blank. When you’ve filled it out, you’ll see a screen like this:
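    If your host doesn’t give you a control panel for this, a CSR can also be generated from a command line with OpenSSL; this is a sketch, with placeholder file names and subject fields you’d replace with your own:

```shell
# Create a new 2048-bit RSA private key and a CSR in one step.
# The file names and the -subj fields (country, state, org, CN/domain)
# are placeholders for illustration only.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.com.key \
  -out example.com.csr \
  -subj "/C=US/ST=State/L=City/O=Example Inc/CN=example.com"
```

    Keep the .key file private; the .csr is what you paste into the certificate order form.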

    Step 4: Install the certificate

    Note: Your web host may also do this step for you, so check with them before proceeding. This can get complicated, and if you can wait 1-2 days it may be best to let them do it.

    If you’re installing the certificate yourself, this is the easiest step you’ll ever do. You have the certificate in hand; all you need to do is paste it into your web host control panel. If you’re using WHM/cPanel, click Install an SSL Certificate under the SSL/TLS menu.

    Paste it into the first box and hit Submit. That’s it! Now try to access your site via https://www.domain.com and you should be secure!

    Step 5: Update your site to use HTTPS

    At this point if you go to https://yoursite.com you should see it load! Congrats, you’ve successfully installed SSL and enabled the HTTPS protocol! But your visitors aren’t protected just yet; you need to make sure they’re accessing your site through HTTPS!

    Keep in mind that you typically only need to protect a few pages, such as your login or cart checkout. If you enable HTTPS on pages where the user isn’t submitting sensitive data, it just wastes encryption processing and slows down the experience. Identify the target pages and perform one of the two methods below.

    You can update all links to the target pages to use the HTTPS links. In other words, if there’s a link to your cart on your home page, update that link to use the secure link. Do this for all links on all pages pointing to the sensitive URLs.

    However, if you want to ensure that people can only use specific pages securely no matter what links they come from, it’s best to use a server-side approach to redirect the user if it’s not HTTPS. You can do that with a code snippet inserted at the top of your secure page. Here’s one in PHP:
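    The PHP snippet referenced here did not survive the page extraction; a common version of this check, assuming the server populates the usual $_SERVER['HTTPS'] variable, looks like:

```php
<?php
// Redirect to the HTTPS version of the current page if the
// request did not arrive over SSL.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    exit;
}
?>
```

    Place it before any other output, since headers cannot be sent once the page body has started.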

    Another server-side approach is to use mod_rewrite. This won’t require you to change any of your website files, but it will need you to modify your Apache configuration. Here’s a nice mod_rewrite cheat sheet, or just use this example:
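    The example itself is missing from the extracted page; a standard mod_rewrite rule for this, assuming mod_rewrite is enabled and the lines go in your vhost or .htaccess, is:

```apache
RewriteEngine On
# If the request did not come in over HTTPS...
RewriteCond %{HTTPS} off
# ...redirect permanently to the same host and path over HTTPS.
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```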

    This will ensure that if anyone accesses a page via HTTP they will automatically be redirected to HTTPS.

    Tips

    • Understand that HTTPS doesn’t mean information on your server is secure; it only protects the TRANSFER of data from your visitor’s computer to yours, and the other way too. Once the sensitive data is on your server, it’s up to you to keep that data safe (encrypt in database, etc.).
    • Some people just look for a lock on the page, not on the browser. After you’ve installed SSL you might want to try adding a lock icon on your pages just to let them know it’s secure, in case they don’t look in the URL bar.

    Summary

    What makes a website secure? A properly installed security certificate.

    Congratulations! You’ve successfully protected your website by installing an SSL cert and made your visitors less prone to attacks. You can breathe easy knowing that any information they submit on your website will be encrypted and safer from packet-sniffing hackers.

    Resources Used

    God of The internet says:

    An SSL cert means nothing these days. It’s a false sense of security. Anything you do online is open to public attacks and eyes. This includes bank logins and transactions. The SSL cert is just a way for these companies to grab your money. As a security expert, I can tell you this first hand. I can sit anywhere in a public place where people use their wireless devices and steal any info they send across the airwaves, including Bluetooth.

    This appears to be the internet equivalent of saying “we are all going to die”. Yes, but in the meantime we all have to live, so comments like this are extremely unhelpful without giving a solution. Thanks for increasing the sense of vulnerability; maybe you can give your solution? If SSL is useless, then what do you suggest?

    Hadi Altaha says:

    No excuse any more for not having EVERYTHING SSL on the internet. It is too easy (thank you for this still-relevant article) AND now always FREE thanks to Let’s Encrypt (https://letsencrypt.org/). I use Dreamhost, and the combination is truly a fix-it-and-forget-it solution. Just apply for the certificate, follow the rules in this article and you are done. It automatically renews.

    NO MORE EXCUSES!

    Thanks for your information. Today I read about HTTPS. Google says it’s a ranking signal, so I am going to buy an SSL certificate. Can you please tell me which SSL provider is best?


    Enabling the Windows Server Backup feature in Windows Server 2008



    Enabling the Windows Server Backup feature in Windows Server 2008

    NTBackup is not a fancy backup mechanism, but it generally worked and could easily be scripted. Windows Server 2008 replaces NTBackup with Windows Server Backup.

    I bet at one time or another you had an NTBackup script for a specific part of a server that you used between versions of Windows. Unfortunately, these scripts will no longer work, so the migration to Windows Server Backup will need to be considered for the local backup mechanism.

    Windows Server 2008 offers many components that are not part of the base installation but need to be added as a feature. Figure A shows the Windows Server Backup being added through the Add Features Wizard. Figure A

    When adding Windows Server Backup, be sure to add the command-line tools option because it is not a default. Once you select the features, the server will not need to be rebooted in most configurations. At this point, you can enter the Windows Server Backup program to perform tasks.

    If you selected to install the command-line tools, you can also create backup tasks with the wbadmin command. The wbadmin command offers a more extensive command-line environment than NTBackup; all functionality can be performed via the command line. Windows Server Backup also has PowerShell support so that backups can integrate more closely with products like Exchange or SQL.
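    As a sketch of what wbadmin usage looks like (the target and source volumes here are examples; run `wbadmin /?` on your server to confirm the options available in your configuration):

```bat
REM One-time backup of volume C: to the disk mounted as E:
wbadmin start backup -backupTarget:E: -include:C: -quiet

REM List the backup versions recorded on this machine
wbadmin get versions
```

    Because these are plain commands, they can be dropped into a scheduled task much like the old NTBackup scripts were.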

    Both Windows Server Backup and wbadmin can be used to interact with remote computers for jobs to back up the remote system, which is entirely new functionality.

    For more information on the new backup functionality, read the TechNet article Windows Server Backup Step-by-Step Guide for Windows Server 2008.


    About Rick Vanover

    Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.



    2017 Carbonite Review



    Carbonite Review

    Store large quantities of files on any plan

    All of Carbonite’s personal plans include an unlimited amount of storage space, making them a perfect choice for individuals hoping to back up their entire hard drive. Only one computer may be backed up per license, but there’s no limit to the amount of data that can be backed up from that computer. There are also no restrictions on the sizes of individual files. Whether it’s a small document or a large video file, it can all be uploaded to the cloud, synced across devices, then restored later if you need it. These unlimited personal plans are also ideal for users who are looking for a cloud storage plan they can stick with for a number of years without having to worry about increasing storage limits or paying higher monthly fees.

    Plans that fit easily into your budget

    Considering that all of Carbonite’s personal plans come with an unlimited amount of storage space, their rates are surprisingly affordable. Their Basic plan only costs about $5 per month per computer, and you’re able to upload as many files as you want to the cloud. One thing to be aware of with Carbonite’s pricing structure is that you must pay for at least one year of service upfront. There are also two- and three-year terms available if one of these suits you better. The longer you subscribe, the more money you save. If you plan on sticking with the same cloud storage service for a considerable amount of time, you’re better off paying for a longer term when you sign up, if you can manage it.

    Quickly restore files to your computer

    In the event of a computer crash, restoring your files from Carbonite is as simple as logging in to your online account and downloading everything back onto your device. However, when time is of the essence, this may not be ideal. That’s why Plus and Prime members have additional restoration options designed to speed up the process. These users are able to create a local backup of their hard drive in addition to storing their files online. Restoring this way takes much less time because you aren’t limited by the download speeds of your Internet provider. In addition, Prime members are also eligible for courier recovery service, where your files are quickly shipped to you following a computer crash. These additional options are quite rare among cloud storage providers, but they could really help to eliminate some of the headache of restoring your files, so it’s worth considering when choosing a cloud service.

    Discuss files within the cloud

    Using Carbonite’s Sync Share center, you’re able to invite others to view certain files and add comments for the rest of your group to see and respond to. This resource is particularly valuable to teams that need a simple way to collaborate on projects. It allows you to easily access all the necessary documents and stay up to date on what your other team members are saying without having to bounce back and forth between your cloud storage account and your email inbox.

    Back up your servers

    Business owners with a lot of data to back up should take a closer look at Carbonite’s server plans. They provide high levels of security and the ability to customize your settings to suit your organization’s needs. Like Carbonite’s other backup plans, you’re able to try out their server backup service for 30 days before committing to a purchase, which is a good place to start if you’re not sure the service is right for you. Businesses who go with the Server Pro Bundle plan aren’t limited to just backing up servers either. This plan includes as many laptops, desktops, and servers as you want, as long as your files don’t exceed your total storage space.

    The Bad

    Must download a separate app for sharing

    The Carbonite desktop app provides you with the tools you need to back up and restore your files, but sharing them with others requires you to download their Sync Share app as well. Most other online storage services incorporate all of these tools into a single app, but Carbonite requires you to move back and forth between two different apps. This isn’t going to be a big deal if you’re only looking for an online backup service, however.

    Mac users have fewer options

    Though Carbonite works with both Windows and Macs, their Plus and Prime plans are only open to Windows users. This means Apple customers need to look elsewhere if they’re not content with the Basic plan. However, as all of Carbonite’s plans include unlimited cloud storage space and 24/7 customer support, the Basic plan works just fine for most people. The only reason you’d want to upgrade is if you were interested in backing up external hard drives or making a local backup of your files.

    Annual pricing only

    Carbonite requires you to pay for a year’s worth of service at a time, which could be a problem if you’re on a budget. Personal users or small businesses may not have the money to pay for an entire year of cloud storage upfront, so it’d be nice to see Carbonite give these individuals the option of paying monthly for their services.

    Missing a free storage plan

    Because Carbonite only offers unlimited plans to their personal users, you must pay if you want access to their services. For most individuals looking to do a full backup of their computer hard drive, this won’t be a problem. But if you only intend to back up a few important files, a company that offers free storage space is a smarter choice. The majority of the industry does have some sort of free cloud storage plan, usually with about 2GB of space, though some give you much more.

    Back up one computer only

    Carbonite’s plans only allow individuals to back up a single computer per license. If you have two or more computers that need backing up, you must buy additional licenses. This is true for the personal plans, though businesses can pay extra to upload files from as many devices as they want. Carbonite is not the only company to place restrictions on the number of computers that may be backed up, but there are also several companies without them, and these may be better options for some. Dropbox is a good one to look into because they offer similar services to Carbonite while allowing you to back up files from an unlimited number of devices.

    The Details