Example Relational Database Data Models


The following data model is designed to hold information relating to a Hotel Room Booking System.

For this scenario we need to define the facts the system must capture. These facts define the requirements which the Database must meet, and they should be agreed between the Database User and the Database Designer prior to physical creation.

A local hotel needs a system that keeps track of its bookings (future, current and archived), rooms and guests. A room can be of a particular type and a particular price band. Room prices vary from room to room (depending on type, available facilities, band, etc.) and from season to season (depending on the time of year).

A room booking can include more than one room and more than one customer.

From the draft facts above, the Entities required include Customer, Booking, Guest, Room and Room Facility.

The Entities are related as follows:

  • A Customer can make one or many Bookings.
  • A Booking can be for one or many Guests – the Guest is not necessarily the person who makes the Booking.
  • A Room can be on one or many Bookings.
  • A Room may have many different Room Facilities.

When asking questions of the database we may need to know:

  1. How many rooms are currently available for booking?
  2. What facilities are available in particular rooms?
  3. Which Guests are booked in this week?

The following data model allows these questions to be answered and allows the information contained above to be stored logically and in a structured manner.
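As an illustrative sketch only (the table and column names below are assumptions, not part of the published model), the entities and relationships above could be realised as:

```sql
-- Illustrative sketch of the core tables; names are assumptions
CREATE TABLE Customer (
    CustomerId INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);

CREATE TABLE Room (
    RoomId    INT PRIMARY KEY,
    RoomType  VARCHAR(50),
    PriceBand VARCHAR(20)
);

CREATE TABLE Booking (
    BookingId  INT PRIMARY KEY,
    CustomerId INT NOT NULL REFERENCES Customer(CustomerId),
    CheckIn    DATE NOT NULL,
    CheckOut   DATE NOT NULL
);

-- A Booking can cover many Rooms, and a Room appears on many Bookings
CREATE TABLE BookingRoom (
    BookingId INT REFERENCES Booking(BookingId),
    RoomId    INT REFERENCES Room(RoomId),
    PRIMARY KEY (BookingId, RoomId)
);

-- Guests on a Booking are not necessarily the Customer who made it
CREATE TABLE BookingGuest (
    BookingId INT REFERENCES Booking(BookingId),
    GuestId   INT,
    PRIMARY KEY (BookingId, GuestId)
);

-- A Room may have many different Room Facilities
CREATE TABLE RoomFacility (
    RoomId   INT REFERENCES Room(RoomId),
    Facility VARCHAR(50),
    PRIMARY KEY (RoomId, Facility)
);
```

The two junction tables (BookingRoom and BookingGuest) are what allow a booking to include more than one room and more than one guest.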

Please download the data model relating to Relationships for the Hotel Room Booking System, which is contained in the zipped Microsoft Word document.

Quantitative Data Analysis


The Process of Quantitative Data Analysis

Quantitative analysis is largely a matter of knowing what data to look for and then knowing what to do with it. Analysts must know a number of different statistical techniques to manipulate raw data and turn it into something useful for individuals and companies. That means not only must analysts know the basics of statistical analysis, but they must also employ cutting-edge technologies to model and calibrate their data.

Before an analysis can even begin, analysts must prepare a data plan that will guide them through the process of analyzing a given situation. Analysts must understand the context of the market or company they are evaluating, as well as the nuances of the given subject of analysis, both of which require research and creative thinking. If an analyst fails to understand the market environment or misses a crucial piece of data, they may produce flawed recommendations, cost a company or individual a great deal of money, and even lose their own job. As a result, preparing a plan requires a great deal of attention to detail and an assessment of all possibilities in a given situation.

The second step of analysis is finding the data. Analysts can collect this data through personal observation or by amassing reports compiled by others, whether the owner of a set of financial assets or disinterested third parties. If an analyst is preparing an investment plan, for instance, he or she must look at stock reports, risk assessments of available company stocks, costs of derivatives, individual portfolio data, and more. The data for each step of a single recommendation can be vast and overwhelming, and it is the analyst’s job to process, quantify, and prioritize all of the available information before it can be useful.

Once all of the data is secured, organized, and quantified, it is time for the actual analysis to begin. The individual steps of quantitative analysis depend upon the data plan. Sometimes the information sought can be found with a simple analysis of descriptive statistics, looking at means, medians, standard deviations and the like. Other times the analyst seeks more complex information such as correlations, probabilities, and skewness, looking respectively for associations between different variables, the frequency and likelihood of specific events, and asymmetry in the distribution of the data.
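As a small illustration, a first descriptive-statistics pass over a hypothetical table of portfolio returns might look like this in SQL (the table and column names are invented for the example):

```sql
-- Descriptive statistics in one pass over a hypothetical returns table
SELECT AVG(DailyReturn)   AS MeanReturn,
       STDEV(DailyReturn) AS ReturnStdDev,
       MIN(DailyReturn)   AS WorstDay,
       MAX(DailyReturn)   AS BestDay
FROM   PortfolioReturns;
```

More complex measures such as correlations and skewness typically require dedicated statistical tooling rather than plain aggregate queries.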

For even more complex data, an analyst must deploy statistical and mathematical models to make sense of the information they have collected. Statistical models are formal ways of describing the relationships between data variables. Analysts apply these models to their data in an attempt to understand how one bit of data relates to the next. If the data fits a certain model, the analyst can draw certain conclusions about that data. Using modeling, analysts can also simulate what will happen if they recommend one course of action over another.


How the analyst interprets the results of their analysis and modeling determines what recommendations they make. If, for instance, the analyst found that, based on their data, a certain type of stock that had been declining was likely to increase over the next seven months due to the cyclical nature of those stocks in the market, the analyst would recommend adding stocks of that type to a portfolio. Others may have been reluctant to invest in that same stock because of its current downward trajectory, but because the stock fit a model the analyst applied to it, the investor may enjoy the benefits of the analyst’s interpretation of the data and subsequent prediction.

The successful analyst can employ many different techniques for collecting, analyzing, and interpreting financial data. Some rely more heavily on mathematical models and stochastic calculus to determine the right price of an asset. Others prefer statistical modeling and educated guesses about the future of particular markets. Either way, quantitative data analysis requires an exact and efficient mind to turn raw data into successful financial action.


How to encrypt sensitive data? Put it in an encrypted container



Abrham Sorecha asked how to password-protect a folder.

You can’t effectively password-protect a folder without encrypting it. And strictly speaking, you can’t truly encrypt a folder, because a folder is not actually a container. It just looks like one to the user. The data comprising the files inside any given folder may be strewn all over the drive’s media.

But there are alternatives. You can encrypt every file in the folder. You can put the folder into an encrypted .zip archive, or into an encrypted vault.

You can effectively encrypt a folder with Windows’ own Encrypted File System (EFS)—at least if you have something more expensive than a Home edition of Windows. You need a Pro or Ultimate edition of Windows to get EFS.

Just to clear things up, these versions of Windows have two encryption tools. EFS encrypts files and—in a sense—folders. BitLocker encrypts partitions and drives.

To encrypt a folder with EFS, right-click it, select Properties, click the Advanced button, and check Encrypt contents to secure data. When you close out of the Properties dialog box, keep Apply changes to this folder, subfolders and files selected.

This will encrypt every file in this folder and its subfolders. New files created or dragged here will also get encrypted. Other people will be able to see the files and the file names, but they won’t be able to open the files. Only you—or at least, only someone logged on as you—can open these files.

EFS’s way of handling encryption makes a lot of sense in an office environment, where you can assume you’ve got an organized and knowledgeable IT department, but you can’t assume that employees understand the word encryption. When the user is logged on, the files appear to be unencrypted; otherwise, they can’t be read. But someone, probably IT, needs to know where the special, generated encryption key is kept—elsewhere in the office—in case Windows has to be reinstalled or the data transferred elsewhere.

No wonder Microsoft keeps this out of the hands of Home edition users.

Instead, I’m recommending VeraCrypt, a free, open-source fork of the gone and much-missed TrueCrypt. It was created and is maintained by French security consultant Mounir Idrassi.

If you’re familiar with TrueCrypt, you’ll be right at home. Its simple (if unattractive) user interface is almost identical to the earlier program’s. Like TrueCrypt, VeraCrypt lets you create an encrypted file container, or encrypt a partition or your entire drive. You can hide a container (VeraCrypt calls them volumes) inside another file if you like.

The collapse of TrueCrypt has left a lot of us feeling shaken. I can’t promise that if the NSA really wanted to get to your files, they couldn’t crack VeraCrypt—or EFS. But if you’re worried about a typical hacker, or about the NSA sucking up your data along with everyone else’s, I think you’d be safe with either of these.


Texas Colocation, Fiber, Data Center, and Network Services Company


Alpheus offers extensive Ethernet coverage leveraging our Texas fiber network. We are one of the few service providers with dense metro network coverage for Ethernet services, reaching over 129,000 Ethernet-qualified addresses in Texas.

Ethernet services are particularly appealing to enterprise businesses, offering scalable bandwidth, ease of LAN/WAN integration, connectivity for multiple locations, and support for data, voice, video, IP and VPN services. For wholesale carriers, Alpheus Ethernet service offers a straightforward technology upgrade from TDM with an NNI connection. Ordering and provisioning processes are simplified, creating more opportunity for carriers to sell Ethernet services to their end-users.

Alpheus offers a complete suite of Ethernet solutions:

  • Point-to-Point Ethernet Private Line (EPL)
  • Point-to-Multipoint Ethernet Virtual Private Line (EVPL)
  • Ethernet LAN (ELAN)

Learn more about Alpheus Ethernet services and download our Ethernet-qualified address list.

Alpheus fiber infrastructure is the preferred Texas network for delivering metro access, regional transport and sophisticated networking solutions. We are one of the largest fiber-optic network owner/operators in Texas. For over a decade, Alpheus has been providing wholesale services to our nation’s largest telecommunications providers and Texas businesses.

Alpheus’ network covers Dallas-Fort Worth, Houston, San Antonio, Austin, and the Rio Grande Valley (Corpus Christi, Laredo, McAllen and Harlingen). In each metro market, we have extensive fiber networks and broad reach, connecting over 300 COs / Carrier POPs and major data centers.

Serving our customers with over 6,000 route miles of fiber in Texas, Alpheus Network Services portfolio includes:

  • Metro Ethernet: point-to-point, point-to-multipoint, any-to-any scalable from 1Mbps to GigE
  • Private Line: T1, DS3, OC-N, Ethernet
  • Managed Wavelength: protected and unprotected 1G, 2.5G, 10G
  • Texas Regional Longhaul
  • Dedicated Internet Access: T1, DS3, OC-N, Ethernet
  • Type II Offnet solutions

Learn more about the Alpheus Network: download our network maps and service locations.

Alpheus’ resilient data centers are located in Texas’ largest markets and our personalized support ensures stress-free migration. All of our data centers are strategically located close to the Central Business Districts, offering diverse network facilities, redundant data center infrastructure, multiple carrier Internet backbone and fiber DWDM connections to the Alpheus core network. Our Austin and Houston data centers are SSAE-16 certified.

Alpheus offers flexible data center colocation options based on customer requirements. As both a network and data center service provider, Alpheus can offer customers streamlined management and one-stop customer care. The unmatched scalability of our fiber network and multi-data center infrastructure give businesses and carriers the ability to easily customize private, public and hybrid cloud solutions and implement mission-critical applications to meet any operating environment.

Alpheus is known for taking care of our customers and providing customized solutions to meet the changing demands of the marketplace. We are more nimble, agile and responsive than our larger competitors. We give our customers personalized service that is not often found in this age of multi-layer IVRs and outsourced support. When our customers call Alpheus, they speak to an Alpheus employee, an expert in our industry, located in Houston, Texas. Our customers have a specific Customer Account Manager whom they know by name. Our 24/7/365 Network Operations Center is staffed by level-2 technicians so we can help customers as quickly and directly as possible.

We have worked with companies across various industries to successfully lower their telecom spend while expanding networking capabilities with new technology. Learn more about Alpheus’ solutions for the following industries:

  • Banking and Finance
  • Energy
  • Government
  • Healthcare
  • Information Technology
  • Legal
  • Media

What Analytics, Data Mining, Data Science software


What Analytics, Data Mining, Data Science software/tools did you use in the past 12 months for a real project? (Poll)

The 15th annual KDnuggets Software Poll attracted huge attention from the analytics and data mining community and vendors, drawing over 3,000 voters.

Many vendors asked their users to vote in the poll, and RapidMiner was especially successful, receiving the most votes.
One vendor, alas, created a special page hardcoded to vote only for their software. In a fair campaign it is normal to advocate for your candidate, but it is not OK to give voters a ballot with only one option; voters should be able to consider all the choices. The invalid votes from this vendor were removed from the poll, leaving 3,285 valid votes used for this analysis.

The average number of tools used was 3.7, significantly higher than 3.0 in 2013.

The boundary between commercial and free software is blurring. (Note: since RapidMiner introduced a commercial version relatively recently, we counted RapidMiner as free software for the analysis below.)

This year, 71% of voters used commercial software and 78% used free software. About 25% used only commercial software, down from 29% in 2013. About 28.5% used free software only, slightly down from 30% in 2013. 49% used both free and commercial software, up from 41% in 2013.

About 17% of voters report using Hadoop or other Big data tools, compared to 14% in 2013 (and 3% in 2011).

This implies that Big Data usage is growing slowly and that it is still primarily the domain of a select group of analysts at web giants, government agencies, and very large enterprises. Most data analysis is still done on “medium” and small data.

The top 10 tools by share of users were:

  1. RapidMiner, 44.2% share (39.2% in 2013)
  2. R, 38.5% (37.4% in 2013)
  3. Excel, 25.8% (28.0% in 2013)
  4. SQL, 25.3% (n/a in 2013)
  5. Python, 19.5% (13.3% in 2013)
  6. Weka, 17.0% (14.3% in 2013)
  7. KNIME, 15.0% (5.9% in 2013)
  8. Hadoop, 12.7% (9.3% in 2013)
  9. SAS base, 10.9% (10.7% in 2013)
  10. Microsoft SQL Server, 10.5% (7.0% in 2013)

Among tools with at least 2% share, the highest increase in 2014 was for:

  • Alteryx, 1079% up, to 3.1% share in 2014, from 0.3% in 2013
  • SAP (including BusinessObjects/Sybase/Hana), 377% up, to 6.8% from 1.4%
  • BayesiaLab, 310% up, to 4.1% from 1.0%
  • KNIME, 156% up, to 15.0% from 5.9%
  • Oracle Data Miner, 117% up in 2014, to 2.2% from 1.0%
  • KXEN (now part of SAP), 104% up, to 3.8% from 1.9%
  • Revolution Analytics R, 102% up, to 9.1% from 4.5%
  • TIBCO Spotfire, up 100%, to 2.8%, from 1.4%
  • Salford SPM/CART/Random Forests/MARS/TreeNet, up 61%, to 3.6% from 2.2%
  • Microsoft SQL Server, up 50%, to 10.5% from 7.0%

Revolution Analytics, Salford Systems, and Microsoft SQL Server have shown strong increases for two years in a row.
The growing analytics market was also reflected in the number of tools covered (over 70).
New analytics tools (not counting languages like Perl or SQL) that received at least 1% share in 2014 were:

  • Pig 3.5%
  • Alpine Data Labs, 2.7%
  • Pentaho, 2.6%
  • Spark, 2.6%
  • Mahout, 2.5%
  • MLlib, 1.0%

Among tools with at least 2% share, the largest decline in 2014 was for:

  • StatSoft Statistica (now part of Dell), down 81%, to 1.7% share in 2014, from 9.0% in 2013 (partly due to lack of campaigning for Statistica, now that it is part of Dell)
  • Stata, down 32%, to 1.4% from 2.1%
  • IBM Cognos, down 24%, to 1.8% from 2.4%
  • MATLAB, down 15%, to 8.4% from 9.9%

Statistica’s share has now declined for two years in a row (it was 14% in 2012).

The following table shows the results of the poll.
“% alone” is the percentage of a tool’s users who used only that tool. For example, just 1% of Python users used only Python, while 35% of RapidMiner users indicated they used that tool alone.
For tools not included last year, there are no 2013 numbers.

What Analytics, Big Data, Data mining, Data Science software you used in the past 12 months for a real project? [3285 voters]


SAP Cloud Computing


WFTCloud offers SAP and ERP cloud computing solutions and systems, including Cloud ERP and CRM on-demand solutions, at an unmatched cost. Utilize WFT’s expertise in SAP cloud computing solutions for your business. Call now.

Pay per Use model for Cloud SAP ERP systems & ERP on the Cloud solutions.

We drastically reduce your SAP implementation cost by offering a pay-per-use model for online SAP access, cloud SAP ERP systems, on-demand SAP, and ERP-on-the-cloud solutions. To learn more about our pricing packages for cloud SAP ERP solutions, on-demand ERP and web-based ERP systems, contact us now!

SAP Certified provider of SAP, ERP cloud services.

WFTCloud is a certified provider of SAP cloud computing solutions, cloud SAP ERP systems, ERP on the cloud, on-demand ERP, web-based ERP systems and SAP cloud services. Get implementation of a cloud SAP ERP system, ERP on the cloud, on-demand ERP, a web-based ERP system and SAP cloud services at a fraction of the conventional cost.

© Copyright 2013. WFTCloud. All rights reserved.

1992 Honda NSX Type-R (since mid-year 1992 for Japan) specs review


1992 Honda NSX Series I Type R versions

Copyright. Under the Copyright, Designs and Patents Act 1988, the content, organization, graphics, design, compilation, magnetic translation, digital conversion and other matters related to the automobile-catalog.com site (including ProfessCars and automobile-catalog.com ) are protected under applicable copyrights, trademarks and other proprietary (including but not limited to intellectual property) rights. The automobile-catalog.com website is only for the on-line view using the internet browser. The commercial copying, redistribution, use or publication by you of any such matters or any part of this site is strictly prohibited. You do not acquire ownership rights to any content, document or other materials viewed through the site. Reproduction of part or all of the contents of this web-site in any form is prohibited and may not be recopied and shared with a third party. The incorporation of material or any part of it in any other web-site, electronic retrieval system, publication or any other work (whether hard copy, electronic or otherwise), also the storage of any part of this site on optical, digital or/and electronic media is strictly prohibited. Except as expressly authorized by automobile-catalog.com, you agree not to copy, modify, rent, lease, loan, sell, assign, distribute, perform, display, license, reverse engineer or create derivative works based on the Site or any Content available through the Site. Violations of copyright will be prosecuted under the fullest extent of the law.
The full Terms and Conditions of using this website and database can be found here.

Examples of the direct competition of Honda NSX Type-R in 1992:

(all performance data from ProfessCars simulation; top speed theoretical, without speed governor)

The same class cars with similar kind of fuel, power and type of transmission:

1992 Mitsubishi GTO Twin-Turbo
3-litre / 181 cui
206 kW / 280 PS / 276 hp (JIS net)

1992 Mitsubishi GTO Twin-Turbo Special
3-litre / 181 cui
206 kW / 280 PS / 276 hp (JIS net)

1992 BMW M3 Coupe
3-litre / 182 cui
210 kW / 286 PS / 282 hp (ECE)

1992 Nissan Fairlady Z 300ZX Twin Turbo 2seater 5-speed
3-litre / 181 cui
206 kW / 280 PS / 276 hp (JIS net)

1992 Nissan Fairlady Z 300ZX Twin Turbo 2seater T-Bar Roof 5-speed
3-litre / 181 cui
206 kW / 280 PS / 276 hp (JIS net)

1992 Nissan Fairlady Z 300ZX Twin Turbo 2by2 T-Bar Roof 5-speed
3-litre / 181 cui
206 kW / 280 PS / 276 hp (JIS net)

1992 Nissan 300ZX Twin Turbo 2+2 5-speed
3-litre / 181 cui
208 kW / 283 PS / 279 hp (ECE)

1992 Nissan 300ZX Twin Turbo 5-speed
North America
3-litre / 181 cui
223.7 kW / 304 PS / 300 hp (SAE net)

Mining of Massive Datasets


Mining of Massive Datasets

The book, like the course, is designed at the undergraduate computer science level with no formal prerequisites. To support deeper explorations, most of the chapters are supplemented with further reading references.

The Mining of Massive Datasets book has been published by Cambridge University Press. You can get a 20% discount by applying the code MMDS20 at checkout.

By agreement with the publisher, you can download the book for free from this page. Cambridge University Press does, however, retain copyright on the work, and we expect that you will obtain their permission and acknowledge our authorship if you republish parts or all of it.

We welcome your feedback on the manuscript.

The MOOC (Massive Open Online Course)

We are running the third edition of an online course based on the Mining Massive Datasets book:

The course starts September 12, 2015 and will run for 9 weeks, with 7 weeks of lectures. Additional information and registration.

The 3rd edition of the book (v3.0 beta)

We are developing the third edition of the book.

You can see the current state of the new edition, along with a description of the changes so far here.

The 2nd edition of the book (v2.1)

The following is the second edition of the book. There are three new chapters, on mining large graphs, dimensionality reduction, and machine learning. There is also a revised Chapter 2 that treats map-reduce programming in a manner closer to how it is used in practice.

Together with each chapter there is also a set of lecture slides that we use for teaching the Stanford CS246: Mining Massive Datasets course. Note that the slides do not necessarily cover all the material covered in the corresponding chapters.

Download the latest version of the book as a single big PDF file (511 pages, 3 MB).

Download the full version of the book with a hyper-linked table of contents that makes it easy to jump around: PDF file (513 pages, 3.69 MB).

The Errata for the second edition of the book: HTML.

Note to the users of provided slides: We would be delighted if you find our material useful for giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or a link to our web site: http://www.mmds.org/.

Comments and corrections are most welcome. Please let us know if you are using these materials in your course and we will list and link to your course.

Stanford big data courses


CS246: Mining Massive Datasets is a graduate-level course that discusses data mining and machine learning algorithms for analyzing very large amounts of data. The emphasis is on MapReduce as a tool for creating parallel algorithms that can process very large amounts of data.


CS341 Project in Mining Massive Data Sets is an advanced project-based course. Students work on data mining and machine learning algorithms for analyzing very large amounts of data. Both interesting big datasets and computational infrastructure (a large MapReduce cluster) are provided by the course staff. Generally, students first take CS246, followed by CS341.

CS341 is generously supported by Amazon, which gives us access to its EC2 platform.


CS224W: Social and Information Networks is a graduate-level course that covers recent research on the structure and analysis of large social and information networks, and on models and algorithms that abstract their basic properties. The class explores how to practically analyze large-scale network data and how to reason about it through models for network structure and evolution.

You can take Stanford courses!

If you are not a Stanford student, you can still take CS246 as well as CS224W or earn a Stanford Mining Massive Datasets graduate certificate by completing a sequence of four Stanford Computer Science courses. A graduate certificate is a great way to keep the skills and knowledge in your field current. More information is available at the Stanford Center for Professional Development (SCPD).

Supporting materials

If you are an instructor interested in using the Gradiance Automated Homework System with this book, start by creating an account for yourself here. Then, email your chosen login and the request to become an instructor for the MMDS book to [email protected]. You will then be able to create a class using these materials. Manuals explaining the use of the system are available here.

Students who want to use the Gradiance Automated Homework System for self-study can register here. Then, use the class token 1EDD8A1D to join the “omnibus class” for the MMDS book. See The Student Guide for more information.

Previous versions of the book

Version 1.0

The following materials are equivalent to the published book, with errata corrected to July 4, 2012.

T-SQL Programming Part 1 – Defining Variables, and IF...ELSE Logic


T-SQL Programming Part 1 – Defining Variables, and IF...ELSE Logic

This is the first of a series of articles discussing various aspects of T-SQL programming. Whether you are building a stored procedure or writing a small Query Analyzer script, you will need to know the basics of T-SQL programming. This first article will discuss defining variables and using IF...ELSE logic.

Local Variables

As with any programming language, T-SQL allows you to define and set variables. A variable holds a single piece of information, such as a number or a character string. Variables can be used for a number of things. Here is a list of a few common uses:

  • To pass parameters to stored procedures or functions
  • To control the processing of a loop
  • To test for a true or false condition in an IF statement
  • To programmatically control conditions in a WHERE statement

More than one variable can be defined with a single DECLARE statement; you simply separate each variable definition with a comma, like so:
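For example (the variable names here are illustrative):

```sql
-- A single variable
DECLARE @MyNumber INT

-- Multiple variables in one DECLARE statement, separated by commas
DECLARE @FirstName VARCHAR(50),
        @LastName  VARCHAR(50),
        @City      VARCHAR(30)
```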

Here is an example of how to use the SELECT statement to set the value of a local variable.
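A minimal sketch, using the Northwind Customers table referenced later in this article:

```sql
-- Set a local variable from a query result
DECLARE @RecordCount INT

SELECT @RecordCount = COUNT(*)
FROM   Northwind.dbo.Customers

PRINT @RecordCount
```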

One of the uses of a variable is to programmatically control the records returned from a SELECT statement. You do this by using a variable in the WHERE clause. Here is an example that returns all the Customers records in the Northwind database where the Customers Country column is equal to ‘Germany’:
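A sketch of such a query (assuming the standard Northwind schema):

```sql
-- Use a variable in the WHERE clause to control which rows come back
DECLARE @Country NVARCHAR(15)
SET @Country = 'Germany'

SELECT CustomerId, CompanyName, Country
FROM   Northwind.dbo.Customers
WHERE  Country = @Country
```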


IF...ELSE Logic

T-SQL provides the IF statement to allow different code to be executed based on the result of a condition. The IF statement lets a T-SQL programmer selectively execute a single line or block of code based upon a Boolean condition. There are two formats for the IF statement, both shown below:

Format one: IF condition, code to be executed when the condition is true

Format two: IF condition, code to be executed when the condition is true, ELSE code to be executed when the condition is false

In both of these formats, the condition is a Boolean expression or series of Boolean expressions that evaluate to true or false. If the condition evaluates to true, then the “then code” is executed. For format two, if the condition is false, then the “else code” is executed. If there is a false condition when using format one, then the next line following the IF statement is executed, since no else condition exists. The code to be executed can be a single T-SQL statement or a block of code. If a block of code is used, then it will need to be enclosed in BEGIN and END statements.

Let’s review how “Format one” works. This first example shows how the IF statement would look when executing a single statement if the condition is true. Here I will test whether a variable is set to a specific value; if it is, I print out the appropriate message.
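A sketch consistent with the description (the variable name is an assumption):

```sql
DECLARE @Number INT
SET @Number = 29

-- Only the first IF is true, so only its PRINT runs
IF @Number = 29 PRINT 'The number is 29'
IF @Number = 30 PRINT 'The number is 30'
```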

The above code prints out only the phrase “The number is 29”, because the first IF statement evaluates to true. Since the second IF is false, the second print statement is not executed.

Now the condition can also contain a SELECT statement. The SELECT statement will need to return a value or set of values that can be tested. If a SELECT statement is used, the statement needs to be enclosed in parentheses:
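For example, using the pubs sample database’s authors table (au_lname is its last-name column):

```sql
IF (SELECT COUNT(*)
    FROM pubs.dbo.authors
    WHERE au_lname LIKE '[A-D]%') > 0
   PRINT 'Found A-D Authors'
```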

Here I printed the message “Found A-D Authors” if the SELECT statement found any authors in the pubs.dbo.authors table that had a last name that started with an A, B, C, or D.

So far my two examples only showed how to execute a single T-SQL statement if the condition is true. T-SQL allows you to execute a block of code as well. A code block is created by using a BEGIN statement before the first line of code in the block, and an END statement after the last line. Here is an example that executes a code block when the IF statement condition evaluates to true.
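A sketch of such a block (the messages printed are illustrative):

```sql
IF DB_NAME() = 'master'
BEGIN
   PRINT 'This code is being run in the context'
   PRINT 'of the master database.'
END
```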

Above, a series of PRINT statements will be executed if this IF statement is run in the context of the master database. If the context is some other database, the print statements are not executed.

Sometimes you want to not only execute some code when you have a true condition, but also execute a different set of T-SQL statements when you have a false condition. If this is your requirement, then you will need to use the IF...ELSE construct, which I called format two above. With this format, if the condition is true then the statement or block of code following the IF clause is executed, but if the condition evaluates to false then the statement or block of code following the ELSE clause will be executed. Let’s go through a couple of examples.

For the first example, let’s say you need to determine whether to update or add a record to the Customers table in the Northwind database. The decision is based on whether the customer exists in the Northwind.dbo.Customers table. Here is the T-SQL code to perform this existence test for two different CustomerIds.
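A sketch of that test (assuming the standard Northwind Customers table; the CustomerId values follow the text):

```sql
IF EXISTS (SELECT * FROM Northwind.dbo.Customers
           WHERE CustomerId = 'ALFKI')
   PRINT 'Need to update Customer Record'
ELSE
   PRINT 'Need to add Customer Record'

IF EXISTS (SELECT * FROM Northwind.dbo.Customers
           WHERE CustomerId = 'LARSE')
   PRINT 'Need to update Customer Record'
ELSE
   PRINT 'Need to add Customer Record'
```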

The first IF...ELSE block checks to see if CustomerId ‘ALFKI’ exists. If it exists, it prints the message “Need to update Customer Record”; if it doesn’t exist, “Need to add Customer Record” is displayed. This logic is repeated for CustomerId = ‘LARSE’. When I run this code against my Northwind database I get the following output.

As you can see from the results, CustomerId ‘ALFKI’ existed, because the first print statement following the first IF statement was executed. Whereas in the second test, CustomerId ‘LARSE’ was not found, because the ELSE portion of the IF...ELSE statement was executed.

If you have complicated logic that needs to be performed prior to determining which T-SQL statements to execute, you can either use multiple conditions on a single IF statement, or nest your IF statements. Here is a script that determines whether the scope of the query is the ‘Northwind’ database and whether the “Customers” table exists. I have written this query two different ways: one with multiple conditions on a single IF statement, and the other with nested IF statements.
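The script itself is missing from this copy; a sketch consistent with the description below (the exact original code may have differed) is:

```sql
-- Version one: multiple conditions on a single IF statement
IF DB_NAME() = 'Northwind'
   AND EXISTS (SELECT * FROM sysobjects WHERE name = 'Customers' AND type = 'U')
    PRINT 'Table Customers Exists'
ELSE
    PRINT 'Not in Northwind database or Table Customer does not exist'

-- Version two: nested IF statements, so each false condition
-- can be reported separately
IF DB_NAME() = 'Northwind'
BEGIN
    IF EXISTS (SELECT * FROM sysobjects WHERE name = 'Customers' AND type = 'U')
        PRINT 'Table Customers Exists'
    ELSE
        PRINT 'Table Customer does not exist'
END
ELSE
    PRINT 'Not in Northwind database'
```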

As you can see, I tested whether the query was being run from the Northwind database and whether the “Customers” table could be found in sysobjects. If both were true, I printed the message “Table Customers Exists”. In the first example I had multiple conditions in a single IF statement. Since I was not able to determine which parts of the condition were false, the ELSE portion printed the message “Not in Northwind database or Table Customer does not exist”. In the second example, where I had a nested IF statement, I was able to determine whether I was in the wrong database or the object “Customers” did not exist. This allowed me to have two separate print statements to reflect exactly which condition was evaluating to false.

I hope that this article has helped you understand how to declare and use local variables, as well as IF...ELSE logic. Local variables are useful for holding the pieces of information related to your programming process, whereas the IF statement helps control the flow of your program so different sections of code can be executed depending on a particular set of conditions. As you can see, nesting IF statements and/or having multiple conditions on an IF statement allows you to further refine your logic flow to meet your programming requirements. My next article in this T-SQL programming series will discuss how to build a programming loop.

See All Articles by Columnist Gregory A. Larsen

GILD Stock Price & News – Gilead Sciences Inc #gilead #sciences #inc.


Gilead Sciences Inc. GILD (U.S. Nasdaq)

  • P/E Ratio (TTM): The price-to-earnings (P/E) ratio, a key valuation measure, is calculated by dividing the stock’s most recent closing price by the sum of the diluted earnings per share from continuing operations for the trailing 12-month period.
  • Earnings Per Share (TTM): A company’s net income for the trailing twelve-month period, expressed as a dollar amount per fully diluted share outstanding.
  • Market Capitalization: Reflects the total market value of a company. Market cap is calculated by multiplying the number of shares outstanding by the stock’s price. For companies with multiple common share classes, market capitalization includes both classes.
  • Shares Outstanding: The number of shares currently held by investors, including restricted shares owned by the company’s officers and insiders as well as those held by the public.
  • Public Float: The number of shares in the hands of public investors and available to trade. To calculate, start with total shares outstanding and subtract the number of restricted shares. Restricted stock is typically issued to company insiders with limits on when it may be traded.
  • Dividend Yield: A company’s dividend expressed as a percentage of its current stock price.
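In formula form, the calculated measures defined above reduce to:

```latex
\text{P/E (TTM)} = \frac{\text{most recent closing price}}{\text{diluted EPS (TTM)}}, \qquad
\text{Market Cap} = \text{shares outstanding} \times \text{price},
```
```latex
\text{Public Float} = \text{shares outstanding} - \text{restricted shares}, \qquad
\text{Dividend Yield} = \frac{\text{annual dividend}}{\text{current price}}
```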

Key Stock Data

P/E Ratio (TTM)
Market Cap
Shares Outstanding
Public Float
Latest Dividend
Ex-Dividend Date

  • Shares Sold Short: The total number of shares of a security that have been sold short and not yet repurchased.
  • Change from Last: The percentage change in short interest from the previous report to the most recent report. Exchanges report short interest twice a month.
  • Percent of Float: Total short positions relative to the number of shares available to trade.
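The last measure above is simply a ratio:

```latex
\text{Percent of Float} = \frac{\text{shares sold short}}{\text{public float}}
```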

Short Interest (06/30/17)

Shares Sold Short
Change from Last
Percent of Float

Money Flow Uptick/Downtick Ratio Money flow measures the relative buying and selling pressure on a stock, based on the value of trades made on an “uptick” in price and the value of trades made on a “downtick” in price. The up/down ratio is calculated by dividing the value of uptick trades by the value of downtick trades. Net money flow is the value of uptick trades minus the value of downtick trades. Our calculations are based on comprehensive, delayed quotes.
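The two money-flow calculations described above can be written as:

```latex
\text{Up/Down Ratio} = \frac{\text{value of uptick trades}}{\text{value of downtick trades}}, \qquad
\text{Net Money Flow} = \text{uptick value} - \text{downtick value}
```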

Stock Money Flow

Uptick/Downtick Trade Ratio

Real-time U.S. stock quotes reflect trades reported through Nasdaq only.

International stock quotes are delayed as per exchange requirements. Indexes may be real-time or delayed; refer to time stamps on index quote pages for information on delay times.

Quote data, except U.S. stocks, provided by SIX Financial Information.

Data is provided “as is” for informational purposes only and is not intended for trading purposes. SIX Financial Information (a) does not make any express or implied warranties of any kind regarding the data, including, without limitation, any warranty of merchantability or fitness for a particular purpose or use; and (b) shall not be liable for any errors, incompleteness, interruption or delay, action taken in reliance on any data, or for any damages resulting therefrom. Data may be intentionally delayed pursuant to supplier requirements.

All of the mutual fund and ETF information contained in this display was supplied by Lipper, A Thomson Reuters Company, subject to the following: Copyright © Thomson Reuters. All rights reserved. Any copying, republication or redistribution of Lipper content, including by caching, framing or similar means, is expressly prohibited without the prior written consent of Lipper. Lipper shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon.

Bond quotes are updated in real-time. Source: Tullett Prebon.

Currency quotes are updated in real-time. Source: Tullett Prebon.

Fundamental company data and analyst estimates provided by FactSet. Copyright FactSet Research Systems Inc. All rights reserved.

Free Backup Software & Data Protection: FBackup #backup #software,fbackup,free #backup #software,free #backup,free


Free Backup Software

Main Features

It’s free for personal and commercial purposes

FBackup is a backup software free for both commercial and personal use. This means that you can save some money by not having to buy another backup program.

  • Automatic backups

    You define a backup job, set it to run automatically, and forget about it. FBackup will automatically run the backup at the scheduled date, so you not only have your data protected but also save precious time.

  • Backup with standard zip compression

    When using “full backup”, the sources will be archived using standard zip compression. FBackup uses ZIP64 compression, which means it can create zip files over 2GB in size. You can also password-protect your backup zip files.

  • Exact copies of files

    If you don’t want to have the files stored in one zip file, FBackup can make exact copies of the backup sources using “mirror backup”. Since FBackup will also back up empty folders, you can use this backup type to create a “mirror” copy of the original files in the destination. So it is more than just file backup software.

  • Protection against WannaCry & other ransomware

    WannaCry Ransomware is one of the most aggressive crypto-viruses and FBackup protects your data against it. With FBackup you can create backups of your important data and store those online in Google Drive. This way, even if your data gets encrypted by WannaCry or other ransomware viruses, you’ll still have uninfected copies stored online.

    • Easy to use

      The main functions of a backup program are backing up and restoring. These are very easy to run with FBackup using the included backup wizard: just start the wizard, select What, Where, How and When to run the backup, and you’re all set. For restoring, you just open the restore wizard and you’ll be asked where you want the restored data to be saved (the original location or a different one).

    • Run actions before/after backup

      For each backup job, you can define an action to execute before or after the backup. For example, you can select “Clear backup” before the backup runs, so that all the previous backed up files will be cleared before loading the new ones. As an after-backup action, you can set it to stand by, log off, hibernate or even shut down the computer once the backup has successfully finished.

    • Automatic updates

      FBackup automatically checks for updates weekly, so you’ll know when a new version is released. The option to check for updates can be disabled, but we recommend that it is enabled so that FBackup will be up-to-date.

    • Multiple backup destinations

      By default, your backups will be stored on the local Windows partition. To be sure you have a secure backup, we highly recommend you to store the backups on other destinations supported by FBackup (such as an external USB/Firewire drive, or on a mapped network drive). This way, if your computer suffers a hardware failure, you’ll have your data safe on an external location.

    • Backups in the Cloud

      With FBackup you can back up your files and folders in the Cloud to your existing Google Drive account. Simply connect your account with FBackup and you’ll be able to use it as a Cloud destination. This lets you combine the best of both worlds: your favorite free backup program with world-renowned free cloud storage.

    • Backup plugins

      You can load plugins for backing up or restoring specific program settings or other custom data (like game saves, email data, etc.). Once loaded in FBackup, these plugins will list the sources needed to be backed up for that particular program in “Predefined Backups.” You can see a list of all the available backup plugins here: Free Backup Plugins

    • Backup open files

      If a file is in use by another program at the time of the backup, FBackup will still be able to back up that file, because it uses the Volume Shadow Service that Windows provides. So, as long as you’re using Windows 10, 8/8.1, 7, Vista, XP, 2012/2008/2003 Server (32/64-bit), FBackup will back up those open files. As an example, you will be able to back up your Outlook emails and settings without closing the program first.

    • Multi-language

      You can choose a language for the user interface from the languages currently supported. If you want to help us translate the website or its interface into another language, you can do so by visiting the Languages page.

    FBackup is free data backup software; it is not recommended for full system backup (disk image backups).

    NexStor – Data Systems Integration #data #integration #solution


    NexStor works with the world’s most innovative technology vendors to deliver solutions that help organisations manage the data explosion.

  • Providing you with the right solution

    As a vendor independent company, we are as open and forward thinking as possible in providing the right solution.

  • Experience is essential

    NexStor can deliver real business improvement: our data storage and data management solutions have helped speed up and secure global business operations since our establishment in 2004.

  • Support & maintenance

    With a number of service level agreements available, an engineer could be onsite within 2 hours. NexStor can support hardware from companies including EMC, HP, IBM, Cisco, NetApp, Dell, and more.

    Reliability – Redundancy: Understanding the High-Availability Data Center – The Data Center


    Reliability Redundancy: Understanding the High-Availability Data Center

    written by Chris Alberding April 21, 2016

    High availability in the data center refers to systems and components that are continuously operational for a long time. It typically means the systems have been thoroughly tested, are regularly maintained and have redundant components installed to ensure continuous operation.

    How does a data center ensure reliable power? What level of redundancy is necessary for a high-availability data center? These are two critical issues that weigh on the minds of data center and IT leaders. Why? Because they understand that uninterrupted power is the lifeblood of their operations. And they know the devastating effects an unplanned outage can have on the well-being of their organizations.

    Downtime can result from a power outage, equipment failure, natural disaster, human error, fire, flood or a wide range of other causes. It can lead to lost revenue, customers, productivity, equipment and brand loyalty. As a data center or IT leader, your goal is to provide continuous operation of your facility under all circumstances. Many factors contribute to data center reliability. People, processes and equipment all play a huge role in increasing availability.

    Data center managers address reliability by implementing many measures, such as hiring and training the right staff members and developing, implementing and testing proven procedures. They also make sure the data center infrastructure has built-in redundancy and reliability: for power, network connectivity, generator and UPS backup systems, sophisticated monitoring systems, fire detection and suppression, moisture detection, and lightning protection.

    To create higher levels of redundancy, for example, you can configure servers to switch responsibilities to a remote server when needed. This backup process is referred to as failover. Failover is a backup method that uses a secondary component to take over functioning whenever the primary component becomes unavailable. Secondary components can assume operation during scheduled maintenance or when an unexpected power outage occurs.

    Failover techniques make systems more fault-tolerant and are necessary to ensure constant availability of mission-critical operations. When a primary component offloads tasks to a secondary component, the procedure is seamless to end users.

    In addition to configuring failover components, high availability also involves good design factors. All aspects of data center infrastructure must be evaluated for durability, beginning with a thorough understanding of each component’s metrics as published by the manufacturer, including capacity limitations and life expectancy.

    Let’s examine three systems areas that data center managers should consider when looking to improve reliability.

    Redundant Systems and Components

    Providing redundant systems and components can help eliminate single points of failure in the IT infrastructure. But each data center manager must determine the appropriate level of redundancy for their operation. A thorough analysis is needed to arrive at an effective redundancy strategy.

    Certainly, incorporating redundancy into a data center operation is critical. Achieving 100 percent redundancy, however, comes with a hefty price tag. And it’s important to note that high levels of redundancy don’t always mean a system is more reliable. Although this point may seem counterintuitive, increasing component redundancy creates a much more complex infrastructure. As complexity increases, management of the infrastructure becomes more challenging. Working with local data center experts can help you arrive at the right redundancy strategy for your organization.

    Backup Systems

    Backup systems include the proper configuration of generator units and uninterruptible power supply (UPS) systems. In a generator system, every available generator unit can be programmed to start automatically during a loss of utility power. As long as sufficient fuel is available, the generators power the entire data center load until the utility power source is restored.

    When regular power is restored, the generators transfer the load back to the utility and stop operating. The transition to and from the backup-generator power is seamless when configured properly. The most effective designs will incorporate the necessary generators to supply power, as well as backup generators should any one unit fail.

    Redundancy should also be built into the UPS system so that one failing module won’t affect the overall capacity of the system. Both generator and UPS systems can be configured for automatic and manual power transfer. Automatic transfer is critical during unexpected outages. Manual transfers are used for scheduled maintenance and testing of data center equipment and procedures without interfering with normal operations.

    Detection and Monitoring Systems

    Although cyber-attacks get the bulk of publicity, environmental factors can be equally devastating to IT equipment and data center facilities. To minimize the impact of downtime, a data center operation must integrate detection systems. These systems can alert you to a problem before it becomes a crippling event.

    Detection and monitoring systems will monitor environmental factors such as the following:

    • Temperature: Sensors will measure the heat being generated by equipment as well as the air-conditioning system’s intake and discharge.
    • Humidity and moisture: Sensors ensure high moisture levels won’t corrode electronic components and low levels won’t cause static electricity. They also monitor for leaks inside cooling equipment, leaks in pipes and flooding from a disaster.
    • Airflow: Sensors ensure air is properly flowing through racks and to/from the air-conditioning system.
    • Voltage: Sensors detect the presence or absence of line voltage.
    • Power: Monitoring systems measure current coming into the facility and determine when electrical failures occur.
    • Smoke: In addition to advising data center personnel of a potential fire, smoke alarms can also be configured to report directly to the local fire department.
    • Video surveillance: Real-time surveillance of data center activities, especially in sensitive areas, provides data center managers with a first-hand look at what’s going on in the facility, including who’s entering and exiting.

    To meet an organization’s requirements and avoid costly consequences, data centers must deliver continuous uptime. Any unplanned downtime, even for just a few minutes, can disrupt your business operations and result in dire consequences. Even installing the best equipment available on the market cannot guarantee business continuity. A high-availability, reliable data center requires redundant designs, the right configuration of backup systems and advanced monitoring systems.

    About the Author

    Chris Alberding is the Vice President of Product Management at FairPoint Communications, a leading provider of advanced communications technology in northern New England and 14 other states across the U.S.


    How to unformat memory card #format #recovery, #recover #formatted #files, #recover #data

    Recover data after memory card and computer hard drive reformat

    How to recover deleted formatted photo video files from memory card/computer hard disk/sd card/usb drive

    Is there any data recovery program that can recover files after a Windows hard drive reformat? I formatted the wrong drive and want to get back the lost files. Is there format recovery software that can recover deleted files? I pressed the format button by mistake and formatted my video camcorder’s memory card; how can I recover the formatted files? How do I unformat a memory card, hard drive or removable device and restore the lost data? In this article we discuss format recovery.

    The format recovery solution discussed here supports memory storage including memory cards (SD, SD mini, SDHC, SDXC, MicroSD, MicroSDHC, MicroSDXC, CompactFlash CF, xD Picture Card, MultiMedia MMC), Memory Stick Pro, Duo, Pro Duo, Pro-HG and Micro (M2), USB drives and keys (SanDisk Cruzer, Transcend, PNY, Kingston, Lexar, OCZ, Patriot, Silicon Power), Xbox 360, GoPro, internal and external computer hard drives (Seagate, Western Digital WD, Maxtor, Hitachi, Samsung), DSC and DSLR digital cameras and video cameras (Nikon Coolpix, Canon PowerShot and EOS, Kodak, FujiFilm, Casio, Olympus, Sony Cybershot, Samsung, Panasonic, Fuji, Konica-Minolta, HP, Agfa, NEC, Imation, Sanyo, Epson, IBM, Goldstar, LG, Sharp, Mitsubishi, Kyocera, JVC, Leica, Philips, Toshiba, Chinon, Ricoh, Pentax, Verbatim, Vivitar, Yashica, Argus, Lumix, Polaroid, Sigma), and Android phones and tablets such as the Samsung Galaxy S5, S4, S3, S2, Tab, Note 3, Note 2, Ace 3, Pocket Neo, Gear, Trend, Ace 2, Express, Mini 2, Galaxy Y, Young, Ace and Nexus, Google Nexus 10, 7 and 5, HTC Touch and One X, Telstra One XL, Sony Xperia Z, Motorola Droid, Amazon 7″ and 8.9″ Kindle Fire HD, Kindle Fire 2, Nokia X, and more.

    How Can I Become a Data Modeler? #learn #data #modeling, #how #can


    How Can I Become a Data Modeler?

    Research what it takes to become a data modeler. Learn about job duties, education requirements, job outlook and salary to find out if this is the career for you.

    What Is a Data Modeler?

    Data modelers design computer databases that help bankers, scientists and other professionals organize data in computer systems. They then use these databases to run statistical analyses and extract meaningful information. This information is written up in reports that are presented to business executives, lead researchers or other employers. Learn more about this job, including career preparation, outlook and earning potential, below.

    Education Field of Study: Management information systems or other computer-related field

    Source: *U.S. Bureau of Labor Statistics, **PayScale.com

    What Do Data Modelers Do?

    Data modelers organize data in a way that makes databases easier to access. To accomplish this, data modelers often work with data architects to create the best applicable database design or structure for a system. They analyze and identify the key facts and dimensions to support the system requirements. Additional duties include restructuring physical databases, managing data, reorganizing database designs, and maintaining data integrity by reducing redundancy in a system.

    What Skills Do I Need?

    To work as a data modeler, you need to be familiar with the different types of data models. These models explore the domain, concepts of the domain, and the internal make-up of databases involving tables and charts.

    Since you must alter various database designs and domains, you need to be familiar with basic modeling tools such as ERwin or Embarcadero. You also need knowledge of SQL, the standard database language, and some experience implementing Oracle or Teradata database systems.
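As a small illustration of the SQL skills mentioned above, a data modeler might restructure a denormalized table so that repeated customer details are stored only once. All table and column names below are invented for illustration:

```sql
-- Store customer details once instead of repeating them on every order row.
CREATE TABLE Customer (
    CustomerId   INT PRIMARY KEY,
    CustomerName VARCHAR(100),
    City         VARCHAR(50)
);

CREATE TABLE CustomerOrder (
    OrderId    INT PRIMARY KEY,
    CustomerId INT REFERENCES Customer (CustomerId),
    OrderDate  DATE
);

-- A join reassembles the original flat view when it is needed for reporting.
SELECT o.OrderId, o.OrderDate, c.CustomerName, c.City
FROM CustomerOrder o
JOIN Customer c ON c.CustomerId = o.CustomerId;
```

This kind of restructuring is one way data modelers maintain data integrity by reducing redundancy in a system.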

    How Can I Prepare?

    While there are no specific degree requirements for this position, employers prefer candidates who have technical aptitude rather than a specific degree. In order to gain a deeper knowledge of database structures, algorithms and programming languages, students should focus on a degree that offers coursework in information technology, computer science or programming.

    How Much Will I Make?

    According to the U.S. Bureau of Labor Statistics, all types of computer occupations – including data modelers – should see 12% job growth from 2014-2024 (www.bls.gov ). PayScale.com reported that the median salary for data modelers was $82,385 per year in 2017.

    What Are Some Related Alternative Careers?

    Actuaries use statistics, finance theories and financial data to analyze the financial risks of business decisions and help come up with solutions to reduce those risks. Professionals in this field often hold a bachelor’s degree in statistics, mathematics or actuarial science. Computer systems analysts examine a company’s computer systems with the objective of finding solutions to problems or ways to make the systems more efficient. They typically have a bachelor’s degree in a computer-related field.


    ESupport UndeletePlus – Easily undelete, unerase, and recover deleted files #data #recovery,


    eSupport UndeletePlus Features

    Now includes Photo SmartScan for enhanced photo recovery.
    Lost a photo? Photo SmartScan will find it!

    • Easily recover documents, photos, video, music and email.
    • Quickly recover files – even those emptied from the Recycle Bin.
    • File recovery after accidental format – even if you have reinstalled Windows.
    • Recover files from Hard Drives, USB Thumb Drives, Camera Media Cards, Floppy Disks and other storage devices.

    Benefits of eSupport UndeletePlus

    • No more frustrating searches for deleted files. With eSupport UndeletePlus it’s easy!
    • Great Support. If you need assistance, our eSupport UndeletePlus support staff is here to help.
    • Trust that the eSupport UndeletePlus engineering staff is constantly working to develop the best file recovery technology.
    • Fast scan engine: a typical hard drive can be scanned for recoverable files within minutes.
    • Support for hard drives formatted with Windows FAT16, FAT32 and NTFS file systems.
    • eSupport UndeletePlus Supports standard IDE/ATA/SCSI hard drives.

    eSupport UndeletePlus
    A quick and effective way to restore deleted or lost files.

    It can also recover files that have been emptied from the Recycle Bin, files permanently deleted within Windows using Shift + Delete, and files that have been deleted from within a Command Prompt.

    “I must say your program is great. I was able to recover pictures of my baby that I thought I had lost forever. My wife loves me again. :)”

    Local 2627, DC 37, AFSCME – New York City Electronic Data Processing

    Welcome to Local 2627

    IMPORTANT: You Cannot Be Forced To Take A Position or Promotion

    Dear Brothers and Sisters,

    You cannot be forced to take any position or promotion. You cannot be forced to give up your civil service position to take a non-civil service position. If your agency changes or tries to change your civil service status without your permission, contact Local 2627 immediately. Be cautious: not all positions offered are protected DC 37 or union positions.

    Former Deputy Mayor Goldsmith supports Insourcing

    Buffett says Stop Coddling the Super-Rich

    Who gets paid overtime according to the Fair Labor Standards Act

    DCAS List Restoration Q & A

    1. What do I do if I’m called from a civil service list for a job interview?
      Follow the instructions on your interview. (more…)

    Where to Apply for Exams

    Where to Apply for Jobs

    • For a list of New York City jobs, click here and click here .
    • For a list of City University of New York jobs, click here .
    • For a list of Department Of Education NYC jobs, click here .

    Insourcing Job Postings

    Advanced Shop Steward Training

    For all who have attended Shop Steward training and wish to attend Advanced Shop Steward training you should call the DC 37 Education Fund at 212-815-1700. The classes start in September. Call for a reservation.


    Good and Welfare is generally flowers or a fruit basket sent to members who are seriously ill, who suffer the death of an immediate family member, or who have a new birth in their immediate family. As stated, that is the general practice; there are some exceptions, which we decide on as they occur.

    Per the Citywide Contract: Article 5, Section 5, Sub-Section 6:

    Family member shall be defined as: spouse; natural, foster or step parent; child, brother or sister; father-in-law; mother-in-law; any relative residing in the household; and domestic partner, provided such domestic partner is registered pursuant to the terms set forth in the New York City Administrative Code Section 3-240 et seq.

    When you send a request for Good and Welfare for a member you always have to include the member’s name, agency and phone number. We need to know the where to send, the what to send and the who to send to.

    Payday Loans: A Trap for Working People

    Recommended Temperatures at Work Site

    XSL-FO, XML to PDF, PostScript, AFP, HTML, SVG, Print #renderx, #xep, #visualxsl,

    CloudFormatter is a complete installation of RenderX XEP in the Cloud. A small client-side application bundles your document and sends it to RenderX’s Cloud, which returns a PDF.

    Now our users can leverage the best formatting solution for their documents while eliminating the complexity of installation, integration, and setup. Simple client-side code can accept XML+XSL or XSL-FO. The client-side application bundles all images referenced in the document and sends them to a remote formatter via a web message, returning the resulting PDF directly to your application. RenderX’s CloudFormatter is the ideal solution for on-demand formatting of documents to PDF.
    Learn more.

    At the heart of each RenderX publishing solution is the RenderX XEP Engine. XEP is continually improved in formatting quality, standards conformance, support for advanced features, and compliance with the strict requirements placed on print-ready output.

    XEP core product comes with support for output of PDF, PDF/X, PDF/A, PostScript and our own XML output capabilities. Additional output modules are available: PDF Forms, AFP, Microsoft XPS, PPML, SVG and HTML formats.
    Learn more.

    XEPWin is a combination of code and applications targeted at XEP users and programmers on the Windows platform. XEPWin installs surrounding the XEP Engine and wraps all functionality with a .NET service, exposing all core rendering functionality to .NET applications and programming interfaces.

    XEPWin core product comes with support for output of PDF, PDF/X, PDF/A, PostScript and our own XML output capabilities. Additional output modules are available as add-ons to the core product. These include PDF Forms, AFP, Microsoft XPS, PPML, SVG and HTML formats.
    Learn more.

    Visual-XSL (VisualXSL) is a graphical-based application for designing XSL style sheets primarily used as an overlay for forms. With an easy-to-use, drag-and-drop interface, Visual-XSL (VisualXSL) does all the hard work for you.

    VisualXSL comes bundled with XEPWin. XEPWin supports output of PDF, PDF/X, PDF/A, PostScript and our own XML output capabilities. Additional output modules are available as add-ons to the core product. These include AFP, Microsoft XPS, PDF Forms, HTML and SVG formats.
    Learn more.

    VDPMill is a complete solution with very high performance rendering of both large print files and singular large reports.

    VDPMill can generate very large batch print files: hundreds of thousands of pages in a single file. Through the use of a multi-threaded formatting grid for documents, the components of this print file can be formatted simultaneously to meet any performance demand.
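    The multi-threaded formatting grid described above can be pictured as a pool of workers, each formatting one slice of a large batch job in parallel. A minimal Python sketch of the idea (the chunk size, worker count and `format_chunk` stand-in are illustrative assumptions, not VDPMill's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def format_chunk(pages):
    # Stand-in for formatting one slice of a large batch job;
    # here we just "render" each page number to a string.
    return [f"rendered page {p}" for p in pages]

def format_batch(page_count, chunk_size=1000, workers=4):
    # Split the batch into chunks, format them concurrently,
    # then concatenate the results in the original order.
    chunks = [range(i, min(i + chunk_size, page_count))
              for i in range(0, page_count, chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(format_chunk, chunks)  # map preserves chunk order
    return [page for chunk in results for page in chunk]

pages = format_batch(2500, chunk_size=1000)
print(len(pages))  # 2500
```

    Because each chunk is independent, the slices can be formatted simultaneously and stitched back together in order, which is what lets a single print file of hundreds of thousands of pages meet throughput demands.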

    RenderX offers an effective application for delivering TransPromo variable marketing messages within a complete solution. TransPromo advertising in PDF, PostScript and AFP electronic and print formats can easily be injected into your application.

    TransPromo, also known as statement-based marketing, integrates a TRANSactional document with PROMOtional marketing. It provides an opportunity to blend marketing messages with must-read transaction documents, such as invoices and statements, to influence behavior and ultimately drive business volume.

    RenderX provides standalone software products as well as server and desktop components that can be integrated into larger business solutions, all based on XEP – our original commercial engine. All of our products support content in multiple languages and any level of layout complexity. They arrange and format text, tables, graphics, and images to generate professional, typeset-quality print products and enhanced electronic products for distribution with advanced features such as interactive links, bookmarks and electronic security.

    Based on patented XML-to-PDF technology, RenderX products are integral to three primary technical applications:

    Our patented software is used across many industries to generate database reports (batch reports). These dynamic reports can be displayed on the web, embedded as documents in a business system workflow, or streamed to printers for mailings such as bill and statement rendition.
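    Batch report generation of this kind usually means turning database rows into XSL-FO markup, which a formatter such as XEP then renders to PDF or PostScript. A minimal sketch of the first half of that pipeline (the row data is invented for illustration, and the render command in the closing comment is an assumption, not the documented XEP invocation):

```python
def rows_to_fo(rows):
    # Build a minimal XSL-FO document with one table row per database record.
    cells = "\n".join(
        f"<fo:table-row>"
        f"<fo:table-cell><fo:block>{name}</fo:block></fo:table-cell>"
        f"<fo:table-cell><fo:block>{amount:.2f}</fo:block></fo:table-cell>"
        f"</fo:table-row>"
        for name, amount in rows
    )
    return f"""<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="page">
      <fo:region-body/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <fo:page-sequence master-reference="page">
    <fo:flow flow-name="xsl-region-body">
      <fo:table><fo:table-body>
{cells}
      </fo:table-body></fo:table>
    </fo:flow>
  </fo:page-sequence>
</fo:root>"""

fo = rows_to_fo([("Invoice 1001", 250.0), ("Invoice 1002", 99.5)])
print(fo.count("<fo:table-row>"))  # 2
# The FO string would then be handed to the formatter, e.g. (shown only
# as an assumption): java -jar xep.jar -fo report.fo -pdf report.pdf
```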

    When implemented as a server component to combine structured and unstructured content, our software produces dynamically typeset documents. Many leading organizations use our software to create high volumes of documents such as applications, financial prospectuses, mortgages and loan packages.

    Used as standalone, turn-key publishing software, hundreds of our customers are creating a wide variety of static documents such as educational materials, technical manuals, user guides, legislation, and books.

    RenderX provides exceptional support to assist our customers in using our products in their applications. Our customers work in tight cooperation with our engineers and get professional advice about both RenderX software and information technology in general.

    December 22, 2016
    EnMasse 3.1 released

    Cache management option;
    Increased speed: up to +34%;
    Informative log format;
    New sample client.

    May 23, 2016
    XEP 4.25 released

    PDF/UA compliance, RGBA;
    new algorithms: font parsing,
    linearization, image caching;
    PDF forms: comb field support.

    April 19, 2016
    EnMasse 3.0 released

    New load balancer;
    Improved performance: +17%;
    Improved stability and security;
    HTTPS support for SOAP server.

    August 6, 2015
    EnMasse 2.4 released

    Improved stability on Linux;
    3rd-party XSLT-transformers;
    cross-domain formatting.


    Cyber Security Europe 2017 #ip #expo


    Securing the Digital Enterprise

    Global Head of Security Research

    James Lyne is global head of security research at the security firm Sophos. A self-professed ‘massive geek’, he has technical expertise spanning a variety of security domains, from forensics to offensive security. Lyne has worked with many organisations on security strategy, has handled a number of severe incidents and is a frequent industry advisor. He is a certified instructor at the SANS Institute and often a headline presenter at industry conferences.

    Lyne is a big believer that one of the biggest problems in security is making it accessible and interesting to those outside the security industry. As a result, he takes every opportunity to educate people about security threats and best practice, always featuring live demonstrations and showing how the cyber criminals do it.

    Lyne has given multiple TED talks, including at the main TED event. He’s also appeared on a long list of national TV programs to educate the public, including CNN, NBC, BBC News and Bill Maher.

    As a spokesperson for the industry, he is passionate about talent development, regularly participating in initiatives to identify and develop new talent for the industry.

    Global VP Security Research

    Rik Ferguson is Global VP Security Research at Trend Micro. He brings more than seventeen years of security technology experience to this role. Ferguson is actively engaged in research into online threats and the underground economy. He also researches the wider implications of new developments in the Information Technology arena and their impact on security both for consumers and in the enterprise, contributing to product development and marketing plans.

    Ferguson writes the CounterMeasures blog and is the lead spokesperson for Trend Micro. He is often interviewed by the BBC, CNN, CNBC, Channel 4, Sky News and Al-Jazeera and quoted by national newspapers and trade publications throughout the world. Ferguson also makes a regular appearance as a presenter at global industry events. In April 2011 he was formally inducted into the InfoSecurity Hall of Fame.

    Rik Ferguson holds a Bachelor of Arts degree from the University of Wales and is a Certified Ethical Hacker and CISSP-ISSAP in good standing.

    Chief Research Officer

    Mikko Hypponen is a worldwide authority on computer security and the Chief Research Officer of F-Secure. He has written on his research for the New York Times, Wired and Scientific American and lectured at the universities of Oxford, Stanford and Cambridge.

    Principal Security Strategist

    Wendy Nather is Principal Security Strategist at Duo Security. She was formerly a CISO in the public and private sectors, led the security practice at independent analyst firm 451 Research, and helped to launch the Retail Cyber Intelligence Sharing Center in the U.S. A co-author of the “Cloud Security Rules,” she was listed as one of SC Magazine’s Women in IT Security Power Players in 2014.

    Graham Cluley is an award-winning security blogger, researcher, podcaster, and public speaker. He has been a well-known figure in the computer security industry since the early 1990s when he worked as a programmer, writing the first ever version of Dr Solomon’s Anti-Virus Toolkit for Windows.

    Since then he has been employed in senior roles by companies such as Sophos and McAfee.

    Graham Cluley has given talks about computer security for some of the world’s largest companies, worked with law enforcement agencies on investigations into hacking groups, and regularly appears on TV and radio explaining computer security threats.

    Graham Cluley was inducted into the InfoSecurity Europe Hall of Fame in 2011.

    RSA, a Dell Technologies Business

    Rohit Ghai most recently served as president of Dell EMC’s Enterprise Content Division (ECD), where he was instrumental in setting a compelling vision, transforming go-to-market and revitalizing the portfolio for the digital era through strategic partnerships and acquisitions. Ghai was responsible for all aspects of the ECD business, including setting strategic vision, sales and services, channel strategy, product development, marketing, finance, support and customer success.

    Previously, Ghai was chief operating officer of ECD, and responsible for the division’s strategy, development and marketing of all products and solutions. He joined EMC in December 2009 to run product development.

    He has more than 20 years of experience in IT in both startup and big company settings, with expertise in digital transformation in highly regulated markets, and knowledge across software, and systems and security. Ghai joined Dell EMC from Symantec, where he held a variety of senior engineering and general management roles. Previously, he was at Computer Associates in a number of senior management roles in the BrightStor and eTrust business units, and led the CA India operations as chief technology officer. Ghai joined CA through the acquisition of Cheyenne Software – a startup in the backup and data protection space.

    Ghai holds a master’s degree in Computer Science from the University of South Carolina and a bachelor’s degree in Computer Science from the Indian Institute of Technology (IIT), Roorkee.

    Cyber Security Europe 2016 Highlights

    Big data is the killer app for the public cloud #big #data



    Big data analytics are driving rapid growth of public cloud computing. Why? It solves real problems, delivers real value, and is pretty easy to implement on public clouds.

    Don’t take my word for it: revenues for the top 50 public cloud providers shot up 47 percent in the fourth quarter of 2013, to $6.2 billion, according to Technology Business Research.

    TBR’s latest figures reveal the extent to which public cloud providers are using big data to drive their own operations, get new customers, and expand features and functions. Although public cloud customers want storage and compute services, many implementing big data systems these days find that the public cloud is also the best and most cost-effective platform for big data.

    Public cloud providers, such as Amazon Web Services, Google, and Microsoft, offer their own brands of big data systems in their clouds, whether NoSQL or SQL, that can be had by the drink. This contrasts with DIY big data, which means allocating huge portions of your data centers to the task and, in some cases, spending millions of dollars on database software.

    Big data is driving public cloud adoption for fairly obvious reasons:

    • Purchasing big data resources on demand in the cloud costs a fraction of owning them.
    • Cloud-to-cloud and cloud-to-enterprise data integration has gotten much better in the last few years, so it’s easy to set up massive databases in the clouds and sync them with any number of operational databases, cloud-based or on-premises.
    • Public clouds can often provide better performance and scalability for big data systems because they offer autoscaling and autoprovisioning.
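    The cost point in the first bullet can be made concrete with a simple break-even calculation. All prices below are hypothetical placeholders (real cloud and hardware costs vary widely):

```python
def breakeven_hours(cluster_purchase_cost, monthly_upkeep, hourly_cloud_rate):
    # Hours of on-demand usage per month at which renting cloud capacity
    # starts to cost more than owning equivalent hardware, assuming the
    # purchase is amortized over 36 months.
    monthly_ownership = cluster_purchase_cost / 36 + monthly_upkeep
    return monthly_ownership / hourly_cloud_rate

# Hypothetical numbers: $180,000 cluster, $2,000/month upkeep, $20/hour cloud.
hours = breakeven_hours(180_000, 2_000, 20)
print(round(hours))  # 350
```

    On this toy model, an organization using fewer than about 350 cluster-hours a month comes out ahead renting capacity by the drink.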

    So, big data + cloud = match made in heaven? There are always issues with new technologies, but in this case the bumps in the road have been slight. I suspect that big data will continue to drive more public cloud usage in the future.

    David S. Linthicum is a consultant at Cloud Technology Partners and an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing and also writes regularly for HPE Software’s TechBeacon site.

    Grandstream Networks #voip, #ip, #sip #devices, #voice, #data, #surveillance, #networking, #video, #grandstream


    Featured Products

    • Supports 2 SIP profiles through 2 FXS ports and a single 10/100Mbps port
    • TLS and SRTP security encryption technology to protect calls and accounts
    • Automated provisioning options include TR-069 and XML config files
    • Supports 3-way voice conferencing
    • Failover SIP server automatically switches to secondary server if main server loses connection
    • Supports T.38 Fax for creating Fax-over-IP
    • Supports a wide range of caller ID formats
    • Use with Grandstream’s UCM series of IP PBXs for Zero Configuration provisioning
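    The failover behavior listed above, switching to a secondary SIP server when the primary stops responding, can be sketched as a simple ordered retry. The `register` callback and server names below are illustrative stand-ins, not the device's firmware logic:

```python
def register_with_failover(servers, register):
    # Try each configured SIP server in order; `register` is a callback
    # that returns True on success and False (or raises) on failure.
    for server in servers:
        try:
            if register(server):
                return server
        except OSError:
            continue  # treat network errors as "server unreachable"
    raise RuntimeError("no SIP server reachable")

# Simulate a primary that is down and a secondary that answers.
responses = {"sip1.example.com": False, "sip2.example.com": True}
active = register_with_failover(["sip1.example.com", "sip2.example.com"],
                                lambda s: responses[s])
print(active)  # sip2.example.com
```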

    • Supported by Grandstream’s DP750 DECT Base Station
    • 5 DP720 handsets are supported by each DP750
    • Supports a range of 300 meters outdoors and 50 meters indoors from the DP750 base station
    • Supports up to 10 SIP accounts per handset
    • Full HD audio on both the speakerphone and handset
    • 3.5mm headset jack, 3-way voice conferencing
    • Automated provisioning options include TR-069 and XML config files
    • DECT authentication encryption technology to protect calls and account

    • 1080p Full-HD video, up to 9-way video conferences, support for 3 monitor outputs through 3 HDMI outputs
    • 9-way hybrid-protocol conferencing with no external MCUs/servers or extra software licenses
    • PTZ camera with 12x zoom
    • Can be installed in three simple steps
    • Plug and Play connection to Grandstream’s IPVideoTalk video conferencing service
    • Supports most SIP and H.323 video conferencing platforms

    • 6 lines, 6 SIP accounts, 7-way voice conferencing
    • Runs Android 4.4 and offers full access to the Google Play Store and all Android apps, such as Skype, Google Hangouts and more
    • Bluetooth to support syncing of headsets and mobile devices
    • Built-in 7-way conference bridge
    • 4.3 inch (800×480) capacitive touch screen for easy use
    • Auto-sensing Gigabit port, built-in PoE support
    • Built-in WiFi support offers mobility and network flexibility
    • Full HD audio support to maximize voice quality
    • Daisy-chain support to combine two GAC2500 units
    • TLS and SRTP security encryption

    • 12 lines, 6 SIP accounts, 5 soft keys and 5-way voice conferencing
    • 48 on-screen digitally customizable BLF/speed-dial keys
    • 4.3 inch (480×272) color-screen LCD
    • Dual Gigabit ports, integrated PoE
    • Integrated Bluetooth
    • Supports up to four GXP2200EXT Modules for BLF/speed-dial access to up to 160 contacts

    Latest news

  • By: Kate Clavet, Content Marketing Specialist | August 8, 2017 | The success of technology is based on constant

  • By: Phil Bowers, Senior Marketing Manager | August 4, 2017 | We have been talking about integration on our blog quite

  • Resellers and installers throughout the UK can now purchase Grandstream’s Award-Winning Communication Solutions from CIE Group

    Our events

    Meet Grandstream’s Video Conferencing Solutions

    RAID recovery #raid #data #recovery


    How to successfully recover data from a failed RAID

    Recovering data from a failed RAID can easily turn into a costly ordeal. Please read this page carefully before proceeding. If you would like to get help from experts, please consider using our fee-based RAID recovery service.

    First determine whether the RAID is hardware-based or software-based. The recovery procedures are very different.

    Recovering a hardware RAID

    First determine if the problem is caused by the underlying RAID mechanism. If it is not, follow the simpler recovery procedures for an ordinary drive. The following causes of problems are not related to the RAID:

    • Virus attacks.
    • The volume being deleted, resized, reformatted or otherwise changed in Disk Manager or other disk management utilities.

    If the problem seems to be in the RAID mechanism, determine the operating state of the RAID and take the appropriate actions.

    Avoid the most common mistakes that may cause data to become unrecoverable.

    Hardware RAID operating states

    Current status is normal.
    No controller or disk errors.
    No recent change in RAID configuration.

    RAID is displayed as a single disk.
    Volume configuration has not been changed (screenshot).

    Volume is inaccessible or accessible with missing files.

    RAID mechanism is operating normally.
    Problem may be unrelated to RAID.

    RAID is displayed as a single disk.
    No drive letter or unformatted volume (screenshot).

    RAID is displayed as a single disk.
    Volume has been deleted, reformatted, resized (screenshot).

    RAID is displayed as a single disk.
    Disk Manager is not aware of degradation (screenshot).

    Volume is accessible

    RAID is degraded due to a disk failure

    Current status is normal.
    RAID failed and was rebuilt unsuccessfully.

    RAID is displayed as a single disk (screenshot).

    Volume is inaccessible or accessible with missing files.

    Current status is normal.
    RAID settings have been changed.

    Current status is normal.
    Disks have been reconfigured and disk order may have changed.

    Abnormal RAID status such as “offline”, “inactive”, “undefined”, etc.
    There may be disk or controller hardware errors.

    RAID is not displayed. Sometimes the individual member disks are displayed as unformatted disks (screenshots).

    Volume is inaccessible.

    Recovering a software RAID

    First determine if the problem is caused by the underlying RAID mechanism. If it is not, follow the simpler recovery procedures for an ordinary drive. The following causes of problems are not related to the RAID:

    • Virus attacks.
    • The volume being reformatted.

    If the problem seems to be in the RAID mechanism, determine the operating state of the RAID and take the appropriate actions.

    Note that a RAID 0 is also referred to as a striped volume.
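    Striping means consecutive blocks of the volume are dealt out to the member disks in turn, which is also why losing a single member destroys the whole RAID 0 volume. A toy illustration (two disks and a 4-byte block size chosen only for readability):

```python
def stripe(data, disks=2, block=4):
    # Deal fixed-size blocks of the volume out to the member disks
    # round-robin -- the layout of a RAID 0 (striped) volume.
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    layout = [[] for _ in range(disks)]
    for i, b in enumerate(blocks):
        layout[i % disks].append(b)
    return layout

layout = stripe(b"ABCDEFGHIJKLMNOP")
print(layout[0])  # [b'ABCD', b'IJKL'] -- disk 0 holds every other block
print(layout[1])  # [b'EFGH', b'MNOP']
```

    Neither disk alone contains a contiguous copy of the data, so recovery tools must determine the disk order and block size correctly before the volume can be reassembled.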

    Software RAID operating states

    Cloud computing reduces HIPAA compliance risk in managing genomic data #hipaa #data


    There is no question that the resources required to process, analyze, and manage petabytes of genomic information represent a huge burden for even the largest academic research facility or healthcare institution. That burden becomes even greater when one factors in the need to handle these data in compliance with an alphabet soup of regulatory regimes: HIPAA, CLIA, GCP, GLP, 21 CFR Part 11, and their counterparts outside the United States, including data privacy laws in jurisdictions such as the European community.

    In this context, the use of cloud-based solutions to manage, analyze, store, and share data can provide some relief. Computer and storage resources are instantly available on demand. There is no need to lease brick-and-mortar facilities, purchase equipment, or hire staff to maintain them.

    Despite the advantages of cloud computing, organizations are often hesitant to use it because of concerns about security and compliance. Specifically, they fear potential unauthorized access to patient data and the accompanying liability and reputation damage resulting from the need to report HIPAA breaches. While these concerns are understandable, a review of data on HIPAA breaches published by the US Department of Health and Human Services (HHS) shows that they are misplaced. In fact, by using a cloud-based service with an appropriate security and compliance infrastructure, an organization can significantly reduce its compliance risk.

    The Health Insurance Portability and Accountability Act of 1996 (HIPAA), as amended by the Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH), protects individually identifiable health information, which the rule calls protected health information, or PHI.

    Opinions differ as to whether a human genome, stripped of identifiers such as name or social security number, constitutes PHI. Whether these data constitute PHI depends on whether there are sufficient publicly available reference data sets to create a reasonable basis to believe that a genome can be associated with an identified individual. Recent publications suggest that if these data are not currently classified as PHI, they will be soon [1]. As a consequence, organizations that handle genomic data are well advised to implement systems that treat a whole genome as PHI, even if public reference data sets are not yet common enough to make it PHI today.

    Entities that are obligated to comply with HIPAA are often particularly concerned with the obligation to report HIPAA breaches and the associated potential harm to their reputations. These reporting obligations create powerful incentives for organizations to implement systems and processes to reduce risk.

    How Have Large HIPAA Breaches Happened?

    Since 2009, some 21 million health records have been compromised in major HIPAA security breaches reported to the US government. Loss or theft of electronic equipment or storage media has been the source of more than 66% of all large HIPAA breaches during this period. The individuals affected by these breaches amount to nearly 73% of all individuals affected by large HIPAA breaches reported to HHS during the same time period. In most cases the theft or loss involved a laptop or electronic media, such as a flash drive, containing unencrypted PHI. In contrast, large breaches attributed to hacking amounted to 8% of the total incidents and affected 6% of the individuals whose PHI was disclosed.

    These data suggest that the implementation of IT systems that enable secure sharing of information without the need to transport it on a computer or storage media will go a long way toward eliminating the majority of large HIPAA breaches.

    Use the Cloud to Reduce HIPAA Risk

    The first, and perhaps the most important, step one can take in reducing the risk of HIPAA breaches is to make sure that users of PHI are not transporting unencrypted data on portable equipment (like laptops) or media (like flash drives). New genomic data management systems enable this goal by keeping data in the cloud and providing access to users via a web browser. In this architecture, only the PHI that the user is viewing in his or her web browser is resident on the user’s computer; all other data remains on secure servers.

    The use of the cloud can also facilitate enforcement of encryption requirements. For example, many of the new commercial systems encrypt all data while in transit and while at rest. This means that even if data somehow become accessible to an unauthorized person, they would be secured and could not be read unless the hacker also obtains the encryption key. While this is also possible using an on-premises data center, it is much harder to enforce where users download and store data.
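    The at-rest encryption property described above can be illustrated with a deliberately simplified sketch: data on disk is unreadable without the key, and the same key restores it for authorized access. This is a toy XOR one-time pad for illustration only; real systems use vetted ciphers such as AES through an audited library, never hand-rolled code like this:

```python
import os

def xor_bytes(data, key):
    # Toy cipher: XOR with a random key the same length as the data
    # (a one-time pad). For illustration of the at-rest concept only.
    return bytes(d ^ k for d, k in zip(data, key))

record = b"patient-123: BRCA1 variant"   # hypothetical PHI record
key = os.urandom(len(record))            # kept separately from the data

at_rest = xor_bytes(record, key)    # what an intruder sees without the key
restored = xor_bytes(at_rest, key)  # authorized access reverses it

print(restored == record)  # True
```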

    Further, the significant costs of security audits, certifications, and assessments to demonstrate best efforts to comply with HIPAA security requirements are more easily borne by cloud providers than by private data centers. Such certifications could provide meaningful defense against civil or criminal prosecution, even if there is an unavoidable breach.

    The use of an appropriately designed and developed cloud-based system for managing genomic PHI can also facilitate compliance with the physical and technical safeguards required by the HIPAA Security Rule. Most cloud service providers implement physical security measures that exceed those that are practical for all but the largest of single-institution data centers. In addition, systems designed to manage genomic data automatically implement technical and other safeguards to ensure data confidentiality and integrity, including encryption, multi-factor authentication, automatic session timeouts, and logging for auditability.

    While it may not be intuitively obvious, in most cases a user of genomic PHI can dramatically reduce its compliance risk by using a cloud-based solution consistent with the standards described in this article.

    [1] Rodriguez, L., et al., “The Complexities of Genomic Identifiability,” Science, vol. 339, no. 6117, p. 275 (January 18, 2013).

    Nathan Adelson Hospice – Modern Healthcare business news, research, data

    #nathan adelson hospice


    Nathan Adelson Hospice


    By Dawn Metcalfe | January 31, 2011

    Nathan Adelson Hospice developed a strategy that reduced our accounts receivable over 90 days by 80% in the first six months. Was the approach totally revolutionary? No. It was as simple as revisiting and recommitting to the basics of successfully managing accounts receivable.

    Nathan Adelson’s mission is to provide patients and their loved ones with comprehensive end-of-life care and influence better care for all in our community. We honor the importance of choice and control for those who are ill so they may define for themselves the most comfortable and dignified manner in which to live.

    As a not-for-profit hospice provider based in Las Vegas, Nathan Adelson strives to achieve a balance between the business aspects of hospice and our mission. Improved cash collections as a result of a strong accounts receivable management policy positions our organization to provide ongoing care to our patients and families at the most critical time in their lives.

    We determined several factors that were intrinsic to effective management of our receivables:

    • Establish a commonality between the finance objectives and our clinical mission.
    • Know our insurance contract potential.
    • Target specific challenges to timely billing and collection within our organization.
    • Develop simple tracking tools for monitoring activity and results.
    • Empower our billing team and set expectations for positive results.

    Expectations for positive results were set:

    • Reinforced the importance of effectively managing the accounts receivable to the continued success of the organization.
    • Created teamwork between departments by establishing ownership of common issues.
    • Encouraged consistency in following the process in spite of time constraints or conflicting priorities.
    • Acknowledged departments and employees whose extra efforts contributed to success.
    • Established and communicated the correlation between extra efforts and improved results.

    Commitment was organizationwide and started from the top. We used every opportunity to educate and garner support from the management team and staff by attending meetings at all levels. The accounts receivable goals were quantified in terms that each group could internalize.

    A periodic review of our existing contract base for negotiating potential was implemented. We communicated contract information to all areas of the organization and highlighted collaboration efforts in negotiating new or improved contracts. Relationships with case managers and human resources benefit team members were developed, and we identified and explained what differentiates our hospice from competitors. We educated employers on the value of providing hospice benefits to their employees and the potential financial savings to the company. We never assumed a specific payer or employer was off-limits.

    An impediment to effective accounts receivable management was the untimely billing of charges. We established timelines for submitting timesheets and billing charges within the finance and clinical divisions. Accomplishments and failures of each billing cycle were communicated to all responsible staff and management and we incorporated compliance into performance expectations. Consistent follow-up was vital to our ongoing success.

    Tracking tools focused on those issues that had been identified as challenges. For example, reports explaining variances between expected and actual billing dates and dollars billed were used to identify when snags were occurring. Dollar amounts by specific issue were highlighted. A weekly analysis of all balances over 90 days allowed us to review actions taken over a period of time and identify problems requiring additional investigation. Communication of issues between billing staff and the admission department was mandated.
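    The weekly over-90-day analysis described above amounts to bucketing open balances by age. A minimal sketch (the claim identifiers and dates are hypothetical):

```python
from datetime import date

def over_90_days(invoices, as_of):
    # Return open balances older than 90 days, with their age in days,
    # sorted oldest first -- the weekly review list described above.
    aged = [(inv, (as_of - billed).days)
            for inv, billed in invoices
            if (as_of - billed).days > 90]
    return sorted(aged, key=lambda pair: -pair[1])

invoices = [("claim-001", date(2011, 9, 1)),
            ("claim-002", date(2011, 12, 20)),
            ("claim-003", date(2011, 8, 15))]
flagged = over_90_days(invoices, as_of=date(2012, 1, 31))
print(flagged)  # [('claim-003', 169), ('claim-001', 152)]
```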

    The billing staff was trained to identify and proactively address external issues early in the billing process. They were supported in their efforts to resolve issues with co-workers and payers and recognized for their individual and team accomplishments.

    One example of how our new strategies and philosophy changed work functionality: historically, the finance department had been the only staff to contact and work with insurance companies. Through effective staff collaboration, and a shift toward establishing a commonality between the finance objectives and our clinical mission, the admissions team began making the initial contact with the insurance companies. While this may not seem like a major paradigm shift, we know that changing roles and responsibilities can create their own challenges. We found that connecting the admissions team with the insurance company eliminated some of the clinical documentation issues related to billing, thereby reducing authorization and claim payment delays.

    Getting back to the basics required a commitment from the whole organization. Was it worth it? We think so.

    Dawn Metcalfe is vice president of finance and administration for Nathan Adelson Hospice, Las Vegas.

    Guide to big data analytics tools, trends and best practices #big #data




    By now, many companies have decided that big data is not just a buzzword, but a new fact of business life — one that requires having strategies in place for managing large volumes of both structured and unstructured data. And with the reality of big data comes the challenge of analyzing it in a way that brings real business value. Business and IT leaders who started by addressing big data management issues are now looking to use big data analytics to identify trends, detect patterns and glean other valuable findings from the sea of information available to them.

    It can be tempting to just go out and buy big data analytics software, thinking it will be the answer to your company’s business needs. But big data analytics technologies on their own aren’t sufficient to handle the task. Well-planned analytical processes and people with the talent and skills needed to leverage the technologies are essential to carry out an effective big data analytics initiative. Buying additional tools beyond an organization’s existing business intelligence and analytics applications may not even be necessary depending on a project’s particular business goals.

    This Essential Guide consists of articles and videos that offer tips and practical advice on implementing successful big data analytics projects. Use the information resources collected here to learn about big data analytics best practices from experienced users and industry analysts — from identifying business goals to selecting the best big data analytics tools for your organization’s needs.

    1 Business benefits –

    Real-world experiences with big data analytics tools

    Technology selection is just part of the process when implementing big data projects. Experienced users say it’s crucial to evaluate the potential business value that big data software can offer and to keep long-term objectives in mind as you move forward. The articles in this section highlight practical advice on using big data analytics tools, with insights from professionals in retail, healthcare, financial services and other industries.

    Many data streaming applications don’t involve huge amounts of information. A case in point: an analytics initiative aimed at speeding the diagnosis of problems with Wi-Fi networking devices.

    Online advertising platform providers Altitude Digital and Sharethrough are both tapping Apache Spark’s stream processing capabilities to support more real-time analysis of ad data.

    To give healthcare providers a real-time view of the claims processing operations its systems support, RelayHealth is augmenting its Hadoop cluster with Spark’s stream processing module.

    Complexity can seem like a burden to already overworked IT departments — but when it comes to your organization’s big data implementation, there’s a good reason for all those systems.

    A number of myths about big data have proliferated in recent years. Don’t let these common misperceptions kill your analytics project.

    Learn how health system UPMC and financial services firm CIBC are adopting long-term strategies on their big data programs, buying tools as needed to support analytics applications.

    An executive from Time Warner Cable explains why it’s important to evaluate how big data software fits into your organization’s larger business goals.

    Allegiance Retail Services, a mid-Atlantic supermarket co-operative, is deploying a cloud-based big data platform in place of a homegrown system that fell short on analytics power.

    Users and analysts caution that companies shouldn’t plunge into using Hadoop or other big data technologies before making sure they’re a good fit for business needs.

    Compass Group Canada has started mining pools of big data to help identify ways to stop employee theft, which is a major cause of inventory loss at its food service locations.

    Big data projects must include a well-thought-out plan for analyzing the collected data in order to demonstrate value to business executives.

    Data analysts often can find useful information by examining only a small sample of available data, streamlining the big data analytics process.

    Shaw Industries had all the data it needed to track and analyze the pricing of its commercial carpeting, but integrating the information was a tall order.

    2 New developments –

    Opportunities and evolution in big data analytics processes

    As big data analytics tools and processes mature, organizations face additional challenges but can benefit from their own experiences, helpful discoveries by other users and analysts, and technology improvements. Big data environments are becoming a friendlier place for analytics because of upgraded platforms and a better understanding of data analysis tools. In this section, dig deeper into the evolving world of big data analytics.

    Technologies that support real-time data streaming and analytics aren’t for everyone, but they can aid organizations that need to quickly assess large volumes of incoming information.

    Although the main trends in big data for 2015 may not be a huge departure from the previous year, businesses should still understand what’s new in the world of big data analysis techniques.

    Before starting the analytical modeling process for big data analytics applications, organizations need to have the right skills in place — and figure out how much data needs to be analyzed to produce accurate findings.

    The Flint River Partnership is testing technology that analyzes a variety of data to generate localized weather forecasts for farmers in Georgia.

    Big data analytics processes on data from sensors and log files can propel users to competitive advantages, but a lot of refining is required first.

    Consultant Rick Sherman offers a checklist of recommended project management steps for getting big data analytics programs off to a good start.

    Big data analytics initiatives can pay big business dividends. But pitfalls can get in the way of their potential, so make sure your big data project is primed for success.

    Consultants Claudia Imhoff and Colin White outline an extended business intelligence and analytics architecture that can accommodate big data analysis tools.

    Big data experts Boris Evelson and Wayne Eckerson shared ideas for addressing the widespread lack of big data skills in a tweet jam hosted by SearchBusinessAnalytics.

    In the Hadoop 2 framework, resource and application management are separate, which facilitates analytics applications in big data environments.

    It’s important to carefully evaluate the differences between the growing number of query engines that access Hadoop data for analysis using SQL, says consultant Rick van der Lans.

    Marketers have a new world of opportunities thanks to big data, and data discovery tools can help them take advantage, according to Babson professor Tom Davenport.

    The Data Warehousing Institute has created a Big Data Maturity Model that lets companies benchmark themselves on five specific dimensions of the big data management and analytics process.


    3 News stories –

    News and perspectives on big data analytics technologies

    Big data analysis techniques have been getting lots of attention for what they can reveal about customers, market trends, marketing programs, equipment performance and other business elements. For many IT decision makers, big data analytics tools and technologies are now a top priority. These stories highlight trends and perspectives to help you manage your big data implementation.

    President Barack Obama has introduced proposals for data security, but not everyone thinks they will address key questions for businesses.

    What is holographic storage (holostorage)? #holographic #data #storage


    holographic storage (holostorage)

    Holographic data storage

    Holographic storage is computer storage that uses laser beams to store computer-generated data in three dimensions. Perhaps you have a bank credit card containing a logo in the form of a hologram. The idea is to use this type of technology to store computer information. The goal is to store a lot of data in a little bit of space. In the foreseeable future, the technology is expected to yield storage capacities up to a terabyte in drives the same physical size as current ones. A terabyte would be enough space for hundreds of movies or a million books.

    Although no one has yet mass-commercialized this technology, many vendors are working on it. InPhase Technologies, which was founded by Lucent, is working on a product capable of storing 200 gigabytes of data, written four times faster than the speed of current DVD drives. Although current versions are not rewritable, the company expects to make holographic storage that can be rewritten within the next few years.

    The first products are likely to be expensive, and only feasible for large organizations with unusual needs for storage. However, vendors expect to make holographic storage available and affordable for the average consumer within the next few years.

    Calculate Beta With Historical Market Data #yahoo #finance


    Calculate Beta With Historical Market Data

    This tool calculates beta across any time frame for any stock against any benchmark. It uses historical stock quotes downloaded from Yahoo Finance.

    But this spreadsheet goes one step beyond, giving you a value of beta for your specific requirements.

    Beta measures historical systematic risk against a specific benchmark, and the values given on Yahoo Finance and Google Finance aren't always quite what you need. For example, Yahoo gives beta for the trailing 3 years against the S&P 500, but you might need beta for the five years from 1995 to 2000 against the FTSE 100.

    If so, this spreadsheet is perfect for you. Just enter

    • a stock ticker whose beta you want
    • a benchmark ticker
    • and two dates

    After you click the prominently-placed button, the tool grabs the historical market data from Yahoo Finance and calculates beta.

    In the following screengrab, we've calculated the beta of Exxon Mobil (ticker: XOM) against the S&P 500 for the three years trailing 31st March 2015.

    You could, if you wanted, change the time period or swap out the benchmark for the NASDAQ 100 (ticker: ^NDX).

    The value of beta given by this tool (specifically, the beta of the close prices) matches that quoted by Yahoo Finance.
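The calculation behind the spreadsheet can be sketched independently of Excel. This is an illustrative Python version, not the workbook's actual VBA: beta is the covariance of the stock's periodic returns with the benchmark's returns, divided by the variance of the benchmark's returns.

```python
# Illustrative Python version of the beta calculation (the spreadsheet
# itself uses VBA). Beta = Cov(r_stock, r_bench) / Var(r_bench), computed
# on periodic returns derived from the price series.

def periodic_returns(prices):
    """Simple returns between consecutive prices."""
    return [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

def beta(stock_prices, benchmark_prices):
    rs = periodic_returns(stock_prices)
    rb = periodic_returns(benchmark_prices)
    n = len(rb)
    mean_s = sum(rs) / n
    mean_b = sum(rb) / n
    # Sample covariance and variance (n - 1 in the denominator)
    cov = sum((a - mean_s) * (b - mean_b) for a, b in zip(rs, rb)) / (n - 1)
    var = sum((b - mean_b) ** 2 for b in rb) / (n - 1)
    return cov / var
```

As a sanity check, any price series measured against itself has a beta of exactly 1, and a stock whose returns always move twice as far as the benchmark's comes out with a beta of 2.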

    7 thoughts on “Calculate Beta With Historical Market Data”

    Hello Samir Khan,

    I have downloaded the “Calculate Beta With Historical Market Data” spreadsheet to my laptop from your website http://www.investexcel.net, but it is not working in my OpenOffice even after enabling macros. Do I need to record macros, run macros, or organize macros?

    Would you guide me in this regard so that the spreadsheet starts working?

    Please send a reply quickly.

    It won't work in OpenOffice.

    Hello Samir Khan,

    Can I download the “Calculate Beta With Historical Market Data” spreadsheet and use it in Excel Online?

    Please send a reply. Thanks.

    Does not work on Excel for Mac, ver. 15.15 (latest update for Office for Mac 2016). Clicking the “Download Historical Stock Data and Calculate Beta” button gives me a “Compile error in hidden module: Module 1” message.

    This spreadsheet no longer works, I suspect because Yahoo changed the column order for historical price downloads. The macro reports “Run-time error 1004: Method Range of object _Global failed”.

    Could you update the algorithm for obtaining historical data from Yahoo in the “Calculate Beta With Historical Market Data” spreadsheet?

    In May, Yahoo! curtailed the free datafeed API that was supplying free data to Pairtrade Finder. Over the last month, we've been able to build a new bridge for Pairtrade Finder to enable our users to continue to use Yahoo! to source free data. We've also upgraded the IQ Feed connection in the latest version to ensure that no matter what happens with Yahoo! (they just merged with Verizon) you always have access to high-quality data.

    How to import historical stock data from Yahoo Finance into Excel using VBA

    In this article, we are going to show you how to download historical stock prices using the Yahoo Finance API called table.csv, and discuss the following topics that will allow you to successfully import data into Excel from Yahoo in a simple and automatic way:

    • Yahoo Finance API
    • Import external data into Excel
    • Import data automatically using VBA
    • Dashboard and user inputs
    • Conclusion

    Before we delve into the details of this subject, we assume that you know how to program in VBA or, at least, that you already have some notions of programming, as we are not going to introduce any basic programming concepts in this article. However, we hope to post other articles in the near future about simpler aspects of programming in VBA.

    You can find the whole code source of this tutorial on GitHub, or you can download the following Excel file that contains the VBA code together with the Dashboard and a list of stock symbols: yahoo-hist-stock.xlsm.

    Yahoo Finance API

    Yahoo has several online APIs (Application Programming Interfaces) that provide financial data related to quoted companies: Quotes and Currency Rates, Historical Quotes, Sectors, Industries, and Companies. VBA can be used to import data automatically into Excel files using these APIs. In this article, we use the API for Historical Quotes only.

    It is of course possible to access Yahoo Finance’s historical stock data using a Web browser. For example, this is a link to access Apple’s historical data, where you can directly use the Web interface to display or hide stock prices and volumes.

    Now, the first step before writing any line of code is to understand how the Yahoo Finance API for Historical Stock Data works, which means first learning how URLs (Uniform Resource Locators) are built and used to access content on the Web. A URL is a Web address that your browser uses to access a specific Web page. Here are two examples of Yahoo URLs.

    • This is how a URL looks when you are navigating on Yahoo Finance’s Web site:

  • And this is how the URL of the Yahoo Finance API looks when accessing historical stock data:

    Note that these URLs both access stock data related to symbol GOOGL, i.e. the stock symbol for Google.

    However, besides being two distinct URLs, they are also quite different in the result they return. If you click on them, the first simply returns a Web page (as you would expect), while the second returns a CSV file called table.csv that can be saved on your computer. In our case, we are going to use the table.csv file.

    Here is an example of data contained in this file. The table.csv file contains daily stock values of Google from the 24th to 30th December 2015.

    As you can see, the returned CSV file always contains headers (column names) in the first line, columns are separated by commas, and each line contains measurements for a specific day. The number of lines contained in the file depends on the given parameters (start date, end date, and frequency). However, the number of columns (7) will always be the same for this API.

    The first column [Date] contains the date for every measurement. Columns 2 to 5 [Open,High,Low,Close] contain stock prices, where [Open] represents the recorded price when the market (here NasdaqGS) opened, [High] is the highest recorded price for a specific time interval (e.g. a day), [Low] is the lowest price for that interval, and [Close] is the price after the market closed. [Volume] represents the number of shares traded during the given time interval (e.g. for a day, between market opening and closure). Finally, [Adj Close] stands for Adjusted Close Price and represents the closing price adjusted retroactively for corporate actions such as dividend payments and stock splits. It may therefore not be equal to the [Close] price, which is why we usually prefer to use the Adjusted Close Price instead of the Close Price.
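The seven-column layout described above is easy to read programmatically. Here is a minimal sketch in Python rather than the article's VBA; the two sample rows are made up for illustration and are not real Google quotes.

```python
# Minimal sketch of reading the table.csv layout described above.
# The sample rows are illustrative, not actual Google quotes.
import csv
import io

SAMPLE = """Date,Open,High,Low,Close,Volume,Adj Close
2015-12-30,776.60,777.60,766.90,771.00,1293300,771.00
2015-12-29,766.69,779.98,766.43,776.60,1765000,776.60
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))

# Prefer the adjusted close over the raw close, as recommended above.
adj_close_by_date = {row["Date"]: float(row["Adj Close"]) for row in rows}
```

With a real download, the same code would read the saved table.csv file instead of the in-memory sample.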

    Now, if you look more closely at the URL, you can see that it is composed of two main parts (one fixed and one variable): (1) the Web site and file name [http://ichart.finance.yahoo.com/table.csv], which is fixed, and (2) the parameters that can be modified in order to get historical data for other companies and/or different periods of time [s=GOOGL&a=0&b=1&c=2014&d=5&e=30&f=2016&g=d]. These two parts are separated by a special character: the question mark [?].

    Let’s take a closer look at the URL parameters following the question mark. Parameter name and value must always be passed together and are separated by an equal sign “=”. For example, parameter name “s” with value “GOOGL” gives parameter “s=GOOGL”, which is then attached to the URL just after the question mark “?” as follows:

    Note here that only parameter “s” is mandatory. This means that the above URL is valid and will download a file containing all historical stock data from Google (on a daily basis), i.e. since the company was first quoted on the stock exchange market. All other parameters are optional.

    Additional parameters are separated from each other with the symbol “&”. For example, if the next parameter is “g=d” (for daily report), the URL becomes:

    Note the difference between “g” and “d”, where “g” is the parameter name whereas “d” is the parameter value (meaning “daily”). Note also that the order in which parameters are appended to the URL is NOT important: “s=GOOGL&g=d” is equivalent to “g=d&s=GOOGL”.

    For the Stock Symbol parameter “s”, a list of symbols (called tickers) can be found here.

    Additionally, one might also want to target a specific period of time. Fortunately, the API also accepts parameters that allow us to reduce or increase the time window. Here is an exhaustive list of all parameters from the Historical Stock API:
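Putting the pieces above together, the URL assembly can be sketched as a small Python helper. This is purely illustrative: Yahoo has since retired the ichart.finance.yahoo.com endpoint, and the zero-based month convention for the "a" and "d" parameters is inferred from the article's example (a=0, b=1, c=2014 for 1st January 2014).

```python
# Illustrative helper that assembles a table.csv request URL from the
# parameters described above (s, a/b/c, d/e/f, g). The ichart endpoint
# has since been retired by Yahoo, so treat this purely as a sketch.
from urllib.parse import urlencode

BASE_URL = "http://ichart.finance.yahoo.com/table.csv"

def history_url(symbol, start=None, end=None, freq="d"):
    """Build the historical-quotes URL. start/end are (year, month, day)
    tuples; the API's month parameters (a and d) are zero-based, which is
    why the article's January example uses a=0."""
    params = {"s": symbol, "g": freq}  # "s" is the only mandatory parameter
    if start is not None:
        year, month, day = start
        params.update({"a": month - 1, "b": day, "c": year})
    if end is not None:
        year, month, day = end
        params.update({"d": month - 1, "e": day, "f": year})
    return BASE_URL + "?" + urlencode(params)
```

For example, `history_url("GOOGL", (2014, 1, 1), (2016, 6, 30))` reproduces the parameter set from the example URL above; as noted, the order in which the parameters appear does not matter.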

  • Home Health Care Agencies – Ratings and Performance Data #hotel #direct

    #home health agencies


    Compare Home Health Care Services

    Guide to Choosing Home Health Care Agencies

    At-home health care service providers offer a number of options for care within the patient’s home. When looking for the right home health care, there are several factors to consider before making your final decision. Since the service will be coming to your home, it is important to first locate a group of services that are geographically close to you before analyzing their amenities further. Next, compare each program’s services and make sure you choose a company that will provide care specific to your needs. In addition, make sure that the program scores well in performance measures: managing pain and treating symptoms, treating wounds and preventing bed sores, and preventing harm and hospital stays. Discuss your selection with your physician and family before making your final decision.

    Ownership Type

    The location of the service and who provides care: non-profit, corporate, government, or voluntary.

    • Combination Government Voluntary: This type of ownership provides care services that are run by the government at either the state or local level.
    • Hospital Based Program: Ownership of this type provides programs that are based within a hospital.
    • Local: This type of ownership provides care services that are run locally in your city, and serve a more limited area.
    • Official Health Agency: This type of ownership often provides a variety of services, giving you the ability to work with one company to cover all of your at home health care needs.
    • Rehabilitation Facility Based Program: These programs focus on at home physical rehabilitation.
    • Skilled Nursing Facility Based Program: These programs focus on providing skilled nurses for at home health care.
    • Visiting Nurse Association: These programs provide skilled nurses for at home care.

    What to Watch for in Home Health Care Agencies

    To ensure you receive the highest quality of care, avoid providers with low scores in performance measures, managing pain and treating symptoms, treating wounds and preventing bed sores, and preventing harm and hospital stays. In addition, avoid trying to work with a company that is located far away from you, as it might be more difficult to find a nurse or other care provider you like that is also willing or able to travel a longer distance.

    Dataladder – The Leader in Data Cleansing Software #data #deduplication


    DataMatch is our complete and affordable data quality, data cleaning, matching and deduplication software in one easy-to-use software suite. Start your free trial today, and feel free to contact us to arrange a customized WebEx tailored to your specific data quality tool needs.

    DataMatch Enterprise: Advanced Fuzzy Matching Algorithms for up to 100 Million Records

    > Unparalleled matching accuracy and speed for enterprise-level data cleansing
    > Proprietary matching algorithms with a high level of matching accuracy at blazing fast speeds on a desktop/laptop
    > Big data capability with data sets up to 100 million records

    Best-in-class semantic technology to recognize and transform unstructured and unpredictable data. Ideal for product data.

    > Transform complex and unstructured data with semantic technology
    > Machine learning and automatic rule creation significantly improve classification
    > Ideal for product, parts, and other unstructured data

    Integrate fuzzy/phonetic matching algorithms and standardization procedures into your applications. Built on the DataMatch Enterprise matching engine, with industry-leading speed and accuracy.

    The DataLadder Decision Engine ‘learns’ from human input on what is …

    US big-data company expands into China #cloudera #bigdata #china


    US-based tech company Cloudera, which specializes in big data services, will open three offices in China, the company announced.

    The Palo Alto, California-based company already is in Asia with an office in Japan, but offices in Beijing, Shanghai and Guangzhou will be the company’s first in China.

    Cloudera said it will provide local customers with technology to build their own data hubs using Apache Hadoop, software that helps store and distribute large data sets.

    “We are making a big investment in a big opportunity,” said Cloudera CEO Tom Reilly in a statement on Dec 10. “With the interest in open source software and big data being so strong, we expect fast growth and adoption in China.”

    Cloudera joins a number of other American tech firms that have expanded into China as US and Chinese companies try to leverage the large amount of data collected from the country’s 1.3 billion consumers.

    “Big data has now reached every sector in the global economy. Like other essential factors of production such as hard assets and human capital, much of modern economic activity simply couldn’t take place without it,” said Justin Zhan, director of the Interdisciplinary Research Institute at the North Carolina Agricultural and Technical State University.

    “Looking ahead, there is huge potential to leverage big data in developing economies as long as the right conditions are in place,” added Zhan, who is also the chair of the International Conference on Automated Software Engineering.

    “With a strong economy, successful enterprises and local developers, China is a place for great products and services powered by big data technologies like Cloudera,” said George Ling, general manager of Cloudera China, in the company statement. “The new China offices give us an opportunity to showcase our local talent in an important and savvy market, with the ability to address changes in the local economy with sensitivity to cultural dynamics ultimately ensuring our customers’ success.”

    Zhan said that China is already leading the region for personal location data in the area of mobile technology, given the sheer number of mobile phone users in the country. According to December 2013 estimates from Reuters, there were about 1.23 billion mobile phone users in China.

    “The possibilities of big data continue to evolve rapidly, driven by innovation in the underlying technologies, platforms, and analytic capabilities for handling data, as well as the evolution of behavior among its users as more and more individuals live digital lives,” Zhan said, adding that US companies can help China in this aspect because tech firms here lead big data research and will eventually provide the foundation for big data business in countries like China.

    Ge Yong, assistant professor of computer science at University of North Carolina at Charlotte, said that there are many opportunities for US companies to help with big data management in China, particularly in the information technology sector.

    “There are areas in which Chinese companies do really well – such as Alibaba with e-commerce big data – but in certain sectors, US companies have more mature analytic tools to apply to Chinese businesses,” Ge said.

    NetApp, another California-based data management company, entered China in 2007 and has since opened 15 branches in China across Beijing, Shanghai, Guangzhou, Chengdu, and Shenzhen.

    “With our innovation, our selling relationships, and our great work environments and employees, NetApp believes it has the recipe to continue to grow successfully in China,” wrote Jonathan Kissane, senior vice-president and chief strategy officer at NetApp, in an e-mail to China Daily.

    “Companies around the globe have common challenges meeting their business needs with increasing volumes of critical data and the need for business flexibility. Data is the heart of business innovation and customer want to free data to move unbound across private and public clouds,” he wrote.

    Solutions for the Next Generation Data Center – 42U: Cooling, Power, Monitoring


    42U Data Center Solutions

    We commit to providing solutions that best meet your needs and improve overall efficiency.

    High-Density Cooling

    Increase rack density with row-based, high-density cooling solutions. Precision Cooling that optimizes your server cooling strategies.

    • Up to 57 kW cooling output in under a 4 ft² footprint.
    • Save energy with high water inlet temperatures.
    • EC fan technology minimizes operating costs.

    Aisle Containment

    Cool higher heat loads by isolating cooled air from hot equipment exhaust. Reduce or eliminate bypass air and recirculation.

    • Optimize cooling infrastructure efficiency.
    • Maximize airflow dynamics.
    • Customized solutions for any environment.

    Smart Power

    Proactively monitor and protect mission-critical equipment. Customizable power control and active monitoring down to the outlet level.

    • Smart load shedding capability.
    • Customized alert thresholds and alarms.
    • Build-Your-Own PDUs for the perfect fit.

    Why IT professionals trust 42U

    We are vendor and technology independent, providing complete unbiased guidance on developing your data center solution. This allows us to assess each technology and evaluate and recommend the best solution for your application.

    Customer Focused Approach

    We believe in developing a true business relationship with all of our customers. Our complete discovery process helps us understand the unique requirements of your environment, allowing us to make educated recommendations and develop the best solution for you.

    Our team of experts understands not only facilities management, but also the special requirements of mission-critical facilities. We are dedicated to helping you create the most cost-effective solution that meets the demanding availability needs of your business.

    Commitment to Energy Efficiency

    Leveraging our best-practice expertise in monitoring, airflow analysis, power, measurement, cooling, and best-of-breed efficiency technologies, we help data center managers improve energy efficiency, reduce power consumption, and lower energy costs.


    Office of Marine and Aviation Operations #national #flight #data #center


    Office of Marine and Aviation Operations Headquarters

    Office of Marine and Aviation Operations

    National Oceanic and Atmospheric Administration

    8403 Colesville Road, Suite 500

    Silver Spring, MD 20910-3282

    Front Desk: 1-301-713-1045


    Photo: Steve de Blois / NOAA

    The NOAA Ship Fleet

    Learn how NOAA research and survey ships support safe navigation, commerce, and resource management

    Photo: © Sean Michael Davis – used with permission

    The NOAA Aircraft Fleet

    Learn how NOAA aircraft support hurricane and flood forecasts, coastal mapping, and emergency response

    Photo: Robert Schwemmer / NOAA

    Migration Data Hub #data #center #migration #steps


    Migration Data Hub

    Remittances are among the most tangible links between migration and development. According to World Bank projections, international migrants are expected to remit more than $582 billion in earnings in 2015, of which $432 billion will flow to low- or middle-income countries. Use the interactive data tools to find global remittance flows numerically, as a share of GDP, and more.

    Use these interactive tools, data charts, and maps to learn the origins and destinations of international migrants, refugees, and asylum seekers; the current-day and historical size of the immigrant population by country of settlement; top 25 destinations for migrants; and annual asylum applications by country of destination.

    Use our interactive maps to learn about international migration, including immigrant and emigrant populations by country and trends in global migration since 1960. One of these maps was referred to by a news organization as “addictive” and “a font of fun facts.”

    Use our interactive maps, with the latest available data, to learn where immigrant populations, by country or region of birth, are concentrated in the United States—at state, county, and metro levels. And explore settlement patterns and concentration of various immigrant populations in the United States in 2010 and 2000 with static maps.

    Frequently Requested Statistics on Immigrants and Immigration in the United States
    This feature presents the latest, most sought-after data on immigrants in the United States—by origin, residence, legal status, deportations, languages spoken, and more—in one easy-to-use resource.

    Immigration: Data Matters
    This pocket guide compiles some of the most credible, accessible, and user-friendly government and nongovernmental data sources pertaining to U.S. and international migration. The guide also includes additional links to relevant organizations, programs, research, and deliverables, along with a glossary of frequently used immigration terms.

    Media Resources


    Jeanne Batalova is a Senior Policy Analyst at MPI and Manager of the MPI Data Hub. Full Bio >

    Microsoft to open UK data centres – BBC News #microsoft #data #centre


    Microsoft to open UK data centres

    Microsoft has announced plans to build two data centres in the UK next year.

    The move will allow the tech company to bid for cloud computing contracts involving sensitive government data, which it was restricted from providing before.

    Consumers should also benefit from faster-running apps.

    The announcement, made by Microsoft chief executive Satya Nadella in London, follows a similar declaration by Amazon last week.

    The two companies vie to provide online storage and data crunching tools via their respective platforms Microsoft Azure and Amazon Web Services.

    Microsoft’s existing clients include:

    Amazon’s corporate customers include:

    One expert said the companies’ latest efforts should address highly regulated organisations’ privacy concerns.

    In a related development, the firm has also announced plans to offer its Azure and Office 365 cloud services from two German data centres controlled by a third party, a subsidiary of Deutsche Telekom.

    “Microsoft will not be able to access this data without the permission of customers or the data trustee, and if permission is granted by the data trustee, will only do so under its supervision,” it said.

    The move will make it even harder for overseas authorities to gain access to the files.

    Microsoft is currently engaged in a legal battle with the US Department of Justice, which is trying to make it hand over emails stored on a server in Ireland – the tech firm says the government is trying to exceed its authority.

    ‘Huge milestone’

    Mr Nadella announced the plan to open a data centre near London and another elsewhere in the UK – at a location yet to be named – in 2016.

    They will bring the company’s tally of regional data centres to 26.

    He added Microsoft had also just completed the expansion of existing facilities in Ireland and the Netherlands.

    “[It] really marks a huge milestone and a commitment on our part to make sure that we build the most hyperscale public cloud that operates around the world with more regions than anyone else,” he told the Future Decoded conference.

    Scott Guthrie, Microsoft’s cloud enterprise group chief, added that the move would address privacy watchdogs’ concerns about “data sovereignty”.

    “We’re always very clear that we don’t move data outside of a region that customers put it in,” he told the BBC.

    “For some things like healthcare, national defence and public sector workloads, there’s a variety of regulations that says the data has to stay in the UK.

    “Having these two local Azure regions means we can say this data will never leave the UK, and will be governed by all of the local regulations and laws.”

    Amazon has also committed itself to multiple UK data centres, but has not said how many at this stage. It will make the UK its 15th regional base.

    Although that is fewer than Microsoft’s, the company is currently the global leader in this field in terms of market share.

    Image caption: Microsoft and Amazon will compete to provide local cloud computing services to UK-based organisations

    Announcing its move, Amazon said an added benefit of having a local data centre was that the public would experience less lag when using net-based services.

    “It will provide customers with quick, low-latency access to websites, mobile applications, games, SaaS [software as a service] applications, big data analysis, internet of things (IoT) applications, and more,” wrote Amazon’s chief technology officer, Werner Vogels.

    Amazon’s other EU-based data centres are in Ireland and Germany.

    Safe Harbour

    The recent legal battle over Safe Harbour highlighted the benefits of storing and processing data locally.

    Image caption: Regulations sometimes dictate that sensitive data must not be held outside of the UK

    The trade agreement – which used to make it easy to send EU-sourced personal information to the US – was ruled invalid, causing companies to take on additional administrative work if they wanted to continue using US-based cloud services.

    One expert said that the latest move should allay many IT chiefs’ concerns.

    “Microsoft’s new UK data centre will be a big deal for enterprises here – especially in highly regulated industries,” said Nick McGuire, from the tech research company CCS Insight.

    “It unlocks one of the key restraints on those bodies wishing to embrace cloud services.”

    Although outsourcing computing work to one of the big tech companies offers the potential for savings – as they do not have to build and maintain their own equipment – there are also risks involved.

    A fault with Azure knocked many third-party websites offline last year, and Amazon has experienced glitches of its own.

    However, major faults taking clients’ services offline are a relatively rare occurrence.


    UCE International Cellular Network Engineering Group, cellular data services.#Cellular #data #services


    cellular data services

    • Cellular data services – provide on-the-job training and know-how knowledge transfer to your engineers.

    • Cellular data services – UCE will audit the network, from technical aspects to market analysis, to provide a sound long-term network.

    • Cellular data services – within telephone facilities, high-rise buildings, commercial buildings, hotels, hospitals, universities, residential areas and shopping complexes.

    • Cellular data services – and engineering services providers to increase the skills of their workforce, enhance their efficiency, reduce their operation costs and increase their operating profit.


    Core Business

    UCE has been transformed into a regional powerhouse with its core business focused on telecommunications.



    Our people are the ‘building blocks’ of our business and the international diversity allows UCE to be the success it is today.


    Join Us

    We are always looking for creative, flexible, self-motivated contributors who possess the necessary skills to perform at the highest level.


    Latest News in Facebook

    Connect with us TODAY!

    Latest news and events can be found here. Like our Facebook page.


    Management Team

    UCE International is committed to being a responsible business and our environment is driven by our corporate values.

    Welcome to UCE International Group

    In the Telecommunication Industry, technologies are changing rapidly, and methods for dealing with these changes are emerging just as quickly. Coupled with the burgeoning demands of smartphones and mobile data, network operators are under pressure from all sides to meet these requirements.

    To ensure the greatest return to investors, the network operators are constantly looking to lower operation expenditures. They will always leverage and outsource the engineering works to professional technology services firms to complement their product offerings. The outsourcing strategy not only provides higher returns for the organization but also ensures higher efficiency to achieve its objectives and product offering deadline.


    The Management believes in working hard but keeping it fun. The founders believe in a win-win scenario for both staff and management: the company treats people fairly and rewards innovation, because it recognizes that its people are its best assets.

    As an innovative and forward-thinking company, UCE has a flat management structure, preferring to keep its staff together like a close-knit family. Apart from selecting people with the best skill sets, UCE looks for people with enthusiasm, keenness to learn and the ability to think critically.

  • General Data Protection Regulation #data #protection #certification


    The General Data Protection Regulation is Coming fast: Will you be ready for 2018?

    Data transparency

    Your company has no shortage of data about customers and employees. But without a doubt, you don’t have complete knowledge as to all its whereabouts, its composition, its usage, how it was captured and how well it is being protected – at least not at the level of detail required by GDPR. Software AG gives you the means to fully comply with GDPR restrictions on personal data with solutions to properly classify the data you have and build a comprehensive record of processing activities and business processes. You’ll be able to satisfy customer inquiries and requests competently, and react quickly and effectively in the event of a data breach.

    Reporting efficiency

    Communication will be both a strategic and a tactical weapon against compliance violations. If you can ensure that stakeholders both internal (employees, subsidiaries, outsourcers) and external (customers, auditors and business partners) get the information they need, when they need it and in a palatable form, you’ve won half the battle. Use Software AG’s powerful reporting capabilities to deliver compliance status and progress reports for every audience, compile evidence of lawful processing for auditors and certification boards, and fulfill disclosure requests from data subjects.

    Company-wide commitment

    Even the slightest misstep in handling personal data could put your company at risk of non-compliance. Make sure everyone in your company understands the basic underpinnings of GDPR, their specific role in the matter, and, especially, what’s at stake – huge fines and a damaged reputation. Software AG’s GDPR solution, with its enterprise-wide reach, ensures you can effectively communicate and enforce your policies, principles, and procedures for compliance. Conduct readiness surveys and regular trainings – in particular, what to do in case of a data breach – to help foster personal engagement.

    Risk sensitivity

    The frenetic pace of our highly competitive digital marketplace and daily pressures to meet work demands can make risk seem like an afterthought. Yet, as GDPR demonstrates, data protection and security demand greater attention in the digital age – ignore it at your own risk! Make risk awareness universal to your business operations with Software AG’s solution to integrate impact analysis, risk assessment and mitigation into business processes. We’ll even help you identify where to direct your energy with issue and incident tracking capabilities.

    Informed transformation

    The authors of the GDPR recognize that the business world keeps evolving. They mandate privacy impact assessments when you introduce new technologies. This means for every software tool and process you add, you need to establish a risk-aware IT planning procedure for GDPR assessment. You also have to assess existing projects for GDPR-relevance and revise them accordingly. Use Software AG’s GDPR solution to implement privacy-by-design requirements, coordinate and synchronize all parts of the enterprise on planned changes, and work collaboratively with business to assess impact of GDPR on digitalization strategy. Move forward confidently on business and IT innovation with Software AG’s “whole-view” business and IT strategic planning and compliance platform.

    Customer intimacy

    Some are concerned that GDPR will put a dent in companies’ digitalization strategies. But the truth of the matter is that when it comes to delivering a superior customer experience, GDPR presents the opportunity to add data protection rights to your portfolio of personalized services. Software AG’s strong business process analysis and customer journey mapping capabilities help you assess the impact of GDPR on your digitalization strategy and the customer experience you offer. It will also show you where data capture occurs to provide GDPR-mandated information and where to implement “right-to-know” touchpoints.

    Software AG offers the world’s first Digital Business Platform. Recognized as a leader by the industry’s top analyst firms, Software AG helps you combine existing systems on premises and in the cloud into a single platform to optimize your business and delight your customers. With Software AG, you can rapidly build and deploy digital business applications to exploit real-time market opportunities. Get maximum value from big data, make better decisions with streaming analytics, achieve more with the Internet of Things, and respond faster to shifting regulations and threats with intelligent governance, risk and compliance. The world’s top brands trust Software AG to help them rapidly innovate, differentiate and win in the digital world. Learn more at www.SoftwareAG.com .

    Your personal data is protected by Software AG in accordance with our privacy policy. You will be contacted only with your permission. Your personal data will only be processed within Software AG group and will not be made available to any third parties.

    What is network topology? Definition from #data #center #network #topology


    network topology

    A network topology is the arrangement of a network, including its nodes and connecting lines. There are two ways of defining network geometry: the physical topology and the logical (or signal) topology.

    The physical topology of a network is the actual geometric layout of workstations. There are several common physical topologies, as described below and as shown in the illustration.

    In the bus network topology, every workstation is connected to a main cable called the bus. Therefore, in effect, each workstation is directly connected to every other workstation in the network.

    In the star network topology, there is a central computer or server to which all the workstations are directly connected. Every workstation is indirectly connected to every other through the central computer.

    In the ring network topology, the workstations are connected in a closed loop configuration. Adjacent pairs of workstations are directly connected. Other pairs of workstations are indirectly connected, the data passing through one or more intermediate nodes.

    If a Token Ring protocol is used in a star or ring topology, the signal travels in only one direction, carried by a so-called token from node to node.

    The mesh network topology employs either of two schemes, called full mesh and partial mesh. In the full mesh topology, each workstation is connected directly to each of the others. In the partial mesh topology, some workstations are connected to all the others, and some are connected only to those other nodes with which they exchange the most data.

    The tree network topology uses two or more star networks connected together. The central computers of the star networks are connected to a main bus. Thus, a tree network is a bus network of star networks.

    Logical (or signal) topology refers to the nature of the paths the signals follow from node to node. In many instances, the logical topology is the same as the physical topology. But this is not always the case. For example, some networks are physically laid out in a star configuration, but they operate logically as bus or ring networks.
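    The wiring cost implied by each physical topology is easy to compare. Here is a minimal illustrative sketch in Python (the function name and the exact link-counting conventions are my own, not from the article), counting point-to-point links for n workstations:

    ```python
    def link_counts(n):
        """Return the number of links each physical topology needs for n workstations."""
        return {
            "bus": n,                       # one tap per workstation onto the shared cable
            "star": n,                      # one link from each workstation to the hub
            "ring": n,                      # each workstation links to the next, closing the loop
            "full mesh": n * (n - 1) // 2,  # every pair of workstations directly connected
        }

    print(link_counts(10))  # full mesh needs 45 links for just 10 workstations
    ```

    The full mesh grows quadratically with the number of nodes, which is one reason partial mesh is preferred in larger networks.
    
    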

    This was last updated in October 2016


    Data Encryption #data #encryption #standard #des


    Data Encryption – Overview

    Data Encryption provides the ability to encrypt data both for transmission over non-secure networks and for storage on media. The flexibility of key management schemes makes data encryption useful in a wide variety of configurations.

    Encryption can be specified at following levels:

    • Client level (for backup)

    Client level encryption allows users to protect data before it leaves the computer. You can set up client level encryption if you need network security.

    The data encryption keys are randomly generated per archive file.

  • Replication Set level

    Encryption for replication is specified on the Replication Set level, and applies to all of its Replication Pairs. For a given Replication Set, you can enable or disable encryption between the source and destination machines.

    With Replication Set level encryption, data is encrypted on the source computer, replicated across the network, and decrypted on the destination computer.

  • Auxiliary Copy level (for copies)

    Auxiliary Copy level encryption encrypts data during auxiliary copy operations enabling backup operations to run at full speed. If you are concerned that media may be misplaced, data can be encrypted before writing it to the media and keys stored in the CommServe database. In this way, recovery of the data without the CommServe is impossible – not even with Media Explorer.

    Here, data encryption keys are generated per storage policy copy of the archive file. Thus, if there are multiple copies in a storage policy, the same archive file in each copy gets a different encryption key; individual archive files within a copy also have different keys.

  • Hardware level (all data)

    Hardware Encryption allows you to encrypt media used in drives with built-in encryption capabilities, which provides considerably faster performance than data or auxiliary copy encryption. The data encryption keys are generated per chunk on the media. Each chunk will have a different encryption key.
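    The per-chunk key scheme described above can be sketched as follows. This is an illustrative Python sketch only, not the product's actual implementation; the function name is hypothetical.

    ```python
    import secrets

    def generate_chunk_keys(num_chunks, key_bytes=32):
        """Generate an independent random encryption key for each chunk on the media."""
        return [secrets.token_bytes(key_bytes) for _ in range(num_chunks)]

    keys = generate_chunk_keys(4)
    # each chunk gets its own key, so the keys are pairwise distinct
    assert len(set(keys)) == len(keys)
    ```

    Because every chunk is protected by its own key, compromising one key exposes only that chunk rather than the whole medium.
    
    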

    Data Encryption Algorithms

    Supported algorithms and key lengths are listed in the following table.

  • Data Visualization in R, visualization data.#Visualization #data


    Data Visualization in R

    This course is part of these tracks:

    Visualization data

    Ronald Pearson

    PhD in Electrical Engineering and Computer Science from M.I.T.

    Ron has been actively involved in data analysis and predictive modeling in a variety of technical positions, both academic and commercial, including the DuPont Company, the Swiss Federal Institute of Technology (ETH Zurich), the Tampere University of Technology in Tampere, Finland, the Travelers Companies and DataRobot. He holds a PhD in Electrical Engineering and Computer Science from M.I.T. and has written or co-written five books, including Exploring Data in Engineering, the Sciences, and Medicine (Oxford University Press, 2011) and Nonlinear Digital Filtering with Python (CRC Press, 2016, with Moncef Gabbouj). Ron is the author and maintainer of the GoodmanKruskal R package, and one of the authors of the datarobot R package.


    • Visualization data

  • Visualization data


    Course Description

    This course provides a comprehensive introduction on how to plot data with R’s default graphics system, base graphics.

    After an introduction to base graphics, we look at a number of R plotting examples, from simple graphs such as scatterplots to plotting correlation matrices. The course finishes with exercises in plot customization. This includes using R plot colors effectively and creating and saving complex plots in R.

    Base Graphics Background

    R supports four different graphics systems: base graphics, grid graphics, lattice graphics, and ggplot2. Base graphics is the default graphics system in R, the easiest of the four systems to learn to use, and provides a wide variety of useful tools, especially for exploratory graphics where we wish to learn what is in an unfamiliar dataset.

    A quick introduction to base R graphics

    This chapter gives a brief overview of some of the things you can do with base graphics in R. This graphics system is one of four available in R and it forms the basis for this course because it is both the easiest to learn and extremely useful both in preparing exploratory data visualizations to help you see what’s in a dataset and in preparing explanatory data visualizations to help others see what we have found.

    • Visualization data

    The world of data visualization
    Creating an exploratory plot array
    Creating an explanatory scatterplot
    The plot() function is generic
    A preview of some more and less useful techniques
    Adding details to a plot using point shapes, color, and reference lines
    Creating multiple plot arrays
    Avoid pie charts

    Different plot types

    Base R graphics supports many different plot types and this chapter introduces several of them that are particularly useful in seeing important features in a dataset and in explaining those features to others. We start with simple tools like histograms and density plots for characterizing one variable at a time, move on to scatter plots and other useful tools for showing how two variables relate, and finally introduce some tools for visualizing more complex relationships in our dataset.

    • Visualization data

    Characterizing a single variable
    The hist() and truehist() functions
    Density plots as smoothed histograms
    Using the qqPlot() function to see many details in data
    Visualizing relations between two variables
    The sunflowerplot() function for repeated numerical data
    Useful options for the boxplot() function
    Using the mosaicplot() function
    Showing more complex relations between variables
    Using the bagplot() function
    Plotting correlation matrices with the corrplot() function
    Building and plotting rpart() models

    Adding details to plots

    Most base R graphics functions support many optional arguments and parameters that allow us to customize our plots to get exactly what we want. In this chapter, we will learn how to modify point shapes and sizes, line types and widths, add points and lines to plots, add explanatory text and generate multiple plot arrays.

    • Visualization data

    The plot() function and its options
    Introduction to the par() function
    Exploring the type option
    The surprising utility of the type n option
    Adding lines and points to plots
    The lines() function and line types
    The points() function and point types
    Adding trend lines from linear regression models
    Adding text to plots
    Using the text() function to label plot features
    Adjusting text position, size, and font
    Rotating text with the srt argument
    Adding or modifying other plot details
    Using the legend() function
    Adding custom axes with the axis() function
    Using the supsmu() function to add smooth trend curves

    How much is too much?

    As we have seen, base R graphics provides tremendous flexibility in creating plots with multiple lines, points of different shapes and sizes, and added text, along with arrays of multiple plots. If we attempt to add too many details to a plot or too many plots to an array, however, the result can become too complicated to be useful. This chapter focuses on how to manage this visual complexity so the results remain useful to ourselves and to others.

    • Visualization data

    Managing visual complexity
    Too much is too much
    Deciding how many scatterplots is too many
    How many words is too many?
    Creating plot arrays with the mfrow parameter
    The Anscombe quartet
    The utility of common scaling and individual titles
    Using multiple plots to give multiple views of a dataset
    Creating plot arrays with the layout() function
    Constructing and displaying layout matrices
    Creating a triangular array of plots
    Creating arrays with different sized plots

    Advanced plot customization and beyond

    This final chapter introduces a number of important topics, including the use of numerical plot details returned invisibly by functions like barplot() to enhance our plots, and saving plots to external files so they don’t vanish when we end our current R session. This chapter also offers some guidelines for using color effectively in data visualizations, and it concludes with a brief introduction to the other three graphics systems in R.

    • Visualization data

  • Data center dedicated server #data #center #dedicated #server

    Launch of the new Data Center building

    CAT data center provides comprehensive Data Center services, including server co-location and temporary office (Temp Office) rental. Backed by TSI Level 3 and ISO 27001:2013 certification, the facilities offer strict physical security, dual power feeds, and a connection to the country's largest Internet Gateway, fully equipped with premium-grade hardware and software. Coverage spans the whole country, with eight Data Center sites, the most in Thailand.

    Server co-location service for customers who prefer to administer their own systems, with Internet connectivity over a high-speed network.

    Temporary office rental at CAT Tower, 14th floor, adjacent to the Server Room, equipped with computers and office equipment, for use when an emergency renders the main site unusable, as part of a Disaster Recovery Site (DR Site) plan.

    Launch of CAT data center Nonthaburi II, the first and only TSI Level 3 certified facility in ASEAN. Thu, 08/20/2015 – 14:58

    CAT data center launches its new Data Center building in Nonthaburi, ready for full-scale service. Tue, 08/04/2015 – 14:54



    LG Get Product Support #lg #customer #service, #lg #support, #lg #firmware #update,


    Get Product Support


    • Manuals & Documents View and download information for your LG product.
    • Software & Drivers Update your LG product with the latest version of software, firmware, or drivers.
    • Easy TV Connect Guide Step-by-step guide by device and cable, to get your new LG TV connected.
    • Easy Bluetooth Connect Guide Step-by-step guide by device pairs, to get your new Bluetooth devices connected.
    • Request a Repair Fast and easy way to submit a request online 24/7.
    • LG Bridge Move pictures, music, and other files between your phone, tablet and computer.
    • LG PC Suite Move pictures, music, and other files between your phone, tablet and computer.
    • Smart Share Connect devices to your smart TV through a Wi-Fi network or USB connection to view photos, music and videos.
    • LG Premium Care Extend your protection for years to come with the additional peace of mind of LG Premium Care.
    • LG G6 Support Find available guides, manuals, tutorials, and more for your LG G6 device.
    • Water Filter Finder Need help finding the correct Water Filter for your LG Refrigerator?
    • LG TVs Support Need support for your TV, but don’t know where to start? LG TVs Support will help.

    Product Help

    Repair Services

    Contact Us

    *NO PURCHASE NECESSARY. The LG Electronics “Product Registration” Sweepstakes is open to legal residents of the 50 United States and D.C. age 18 or older at the time of entry. Void outside the U.S., in Puerto Rico, and wherever else prohibited by law. Sweepstakes begins at 12:00:01 AM ET on 01/01/17 and ends at 11:59:59 PM ET on 12/30/17, with four (4) separate Sweepstakes Periods: Period 1 begins on 01/01/17 and ends on 03/31/17; Period 2 begins on 04/01/17 and ends on 06/30/17; Period 3 begins on 07/01/17 and ends on 09/30/17; Period 4 begins on 10/01/17 and ends on 12/30/17. Click here for how to enter without purchasing or registering a product and Official Rules. Sponsor: LG Electronics Alabama, Inc. 201 James Record Road, Huntsville, AL 35824.

    Outsourcing Data Entry Services to ARDEM to Improve ROI #data #entry #outsourcing



    Accurate, Cost-Effective End to End Outsourcing Solutions

    There’s no room for error in the data that drives your business. ARDEM is committed to the accuracy of your data and passionate about delivering with precision every time. Our professional, accessible account management team works tirelessly to ensure that your custom solutions are flawlessly executed. Demand better data? Trust ARDEM to deliver data solutions to aid the growth and success of your company.


    Next programming language #programming, #software #development, #devops, #java, #agile, #web, #iot, #database,


    Why .NET Core Made C# Your Next Programming Language to Learn



    For years I have read about polyglot programmers and how some new language was the new cool thing. Over time, it has been programming languages like Ruby, Python, Scala, Go, Node.js, Swift, and others. It is amazing to see what Microsoft, and the community, have done with .NET Core and how it has become the cool new thing.

    The problem with many of the existing programming languages is they are good at one use case. Ruby and PHP are awesome for web applications. Swift or Objective-C are great for creating iOS or MacOS applications. If you wanted to write a background service you could use Python, Java, or other languages. Besides C#, JavaScript and Java may be the only languages that can be applied to a wide set of use cases.

    It is hard for me to apply my skills to a broad set of problems if I have to learn many programming languages. It limits my job opportunities. The awesome thing about C# is its versatility: it can be used for a wide variety of application types. Now with .NET Core working on MacOS and Linux, there truly is no limit to what you can do. We will explore this in more detail below.

    Why C# and .NET Core Are the Next Big Thing

    I have been playing with .NET Core for over a year now and have been very impressed with it. I have even ported a .NET app over to run on a Mac, which was pretty amazing to see in action after all these years!

    Since our company creates developer tools that also work with .NET Core, I feel like we are more plugged in to what is going on. It feels like .NET Core is picking up steam fast and I predict there will be a huge demand for .NET Core developers in 2018. We talk to customers every day who are already running .NET Core apps in production.

    According to the TIOBE programming index, C# is already one of the top 5 programming languages.

    Top 6 Things to Know About C# and .NET Core

    If you are thinking about learning a new programming language, I want to provide you some of my insights as to why C# and .NET Core should be on the top of your list.

    Easy to Learn

    If you have done any programming in C, Java, or even JavaScript, the syntax of C# will feel very familiar to you. The syntax is simple to understand and read. Based on the TIOBE index I posted above, there are millions of developers who could easily make the switch from Java or C.

    There are lots of online resources to help you learn C#. Many are free and there are some that are low cost as well.

    Modern Language Features

    .NET has been around a long time now and has steadily changed and improved over 15 years. Over the years I have seen awesome improvements like MVC, generics, LINQ, async/await, and more. As someone who has personally dedicated myself to the language, it is awesome to see it improve over time. With .NET Core, a lot has changed, including the entire ASP.NET stack being completely overhauled.

    Here are some of the top features:

    • Strongly typed.
    • Robust base class libraries.
    • Asynchronous programming – easy to use async/await pattern.
    • Garbage collection, automatic memory management.
    • LINQ – Language Integrated Queries.
    • Generics – List<T>, Dictionary<TKey, TValue>.
    • Package management.
    • The ability to share binaries across multiple platforms and frameworks.
    • Easy to use frameworks to create MVC web apps and RESTful APIs.
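    Several of these features can be seen together in one short program. This is an illustrative sketch of my own, not code from the article:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class FeatureTour
{
    // Generics: a strongly typed collection, no casting required.
    static readonly List<int> Numbers = new List<int> { 1, 2, 3, 4, 5 };

    // LINQ: a declarative query over an in-memory collection.
    public static IEnumerable<int> EvenSquares() =>
        Numbers.Where(n => n % 2 == 0).Select(n => n * n);

    // Async/await: asynchronous work written in a synchronous style.
    public static async Task<int> SumAsync()
    {
        await Task.Delay(10);          // stand-in for real asynchronous I/O
        return Numbers.Sum();
    }

    static async Task Main()
    {
        Console.WriteLine(string.Join(", ", EvenSquares())); // 4, 16
        Console.WriteLine(await SumAsync());                 // 15
    }
}
```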

    Versatility: Web, Mobile, Server, Desktop

    One of the best things about C# and .NET is their versatility. I can write desktop apps, web applications, background services, and even mobile apps thanks to Xamarin. Besides C#, all I really have to know is a little JavaScript (aided by TypeScript) to hack some UI code together (which I still try to avoid!). ASP.NET Core templates even make use of Bootstrap layouts and npm for pulling in client-side libraries.

    The versatility is a big deal because your investment in learning the language can be used for a wide array of things. Your skillset is highly portable. You can also jump from building web apps to mobile apps if you want to mix up what you are doing. This is a stark difference to most other programming languages that only work server side.

    And let’s not forget the first class support for Microsoft Azure. It’s never been easier to get up and running and then deployed to the cloud in just a few clicks. Docker containers are also supported which makes it easy to deploy your app to AWS or other hosting providers as well.

    Awesome Developer Tools

    Visual Studio has always been regarded as one of the best IDEs available for developers. It is a great code editor that supports features like code completion, debugging, profiling, git integration, unit testing, and much more. Visual Studio now offers a full-featured, free Community edition.

    It is also possible to write code for .NET Core as basic text files with your favorite text editor. You can also use Visual Studio Code on any OS as a great basic code editor. For those of you who will never give up your vim or emacs, you can even do C# development too. You could also install a plug-in for Visual Studio to add all of your favorite shortcut keys.

    The whole .NET ecosystem is also full of amazing developer tools. For example, I couldn’t imagine living without ReSharper from JetBrains. There are dozens of awesome tools, including a mixture of open source and commercial products.

    Standardization of Skills

    .NET comes with a very good set of base class libraries. Unlike Node.js, simple string functions like PadLeft() are built in. The wide array of base classes really decreases the need for external packages. Microsoft does lean on some community projects as well, like JSON.NET, which have become key libraries widely used in most projects.
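    Padding a string, for instance, requires no external package at all; PadLeft ships with System.String. A trivial sketch:

```csharp
using System;

class PadDemo
{
    static void Main()
    {
        // PadLeft is built into System.String — no left-pad package required.
        Console.WriteLine("42".PadLeft(5, '0')); // 00042
    }
}
```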

    Microsoft provides a very good set of patterns and practices for .NET. For example, there are standard data access (Entity Framework) and model-view-controller (MVC) frameworks built in. Most developers use these standard frameworks, which makes it easy for a developer to move between teams and quickly understand how things work. Your knowledge and skills become more portable as a result.

    .NET Core Is Open Source

    One of the biggest changes to ever happen to .NET was the open sourcing of the code. Virtually all of the code is now on GitHub for anyone to review, fork, and contribute to. This is a huge change that most people in the industry never thought would happen.

    As a developer, from time to time you need to look under the covers to see what your code is really doing. For example, in the past I wondered whether calling Dispose() on a database connection also closes the connection. If you can access the source code, you can quickly answer these kinds of questions.
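    The mechanics are easy to demonstrate with a stand-in type. The FakeConnection class below is hypothetical; whether a particular library's Dispose() also calls Close() is exactly the kind of question the open source lets you verify:

```csharp
using System;

class FakeConnection : IDisposable
{
    public bool IsOpen { get; private set; }
    public void Open() => IsOpen = true;

    // Mimics what many database connections do: Dispose closes the connection.
    public void Dispose() => IsOpen = false;
}

class Program
{
    static void Main()
    {
        var conn = new FakeConnection();
        using (conn)
        {
            conn.Open();
            Console.WriteLine(conn.IsOpen);  // True
        } // Dispose() runs here, at the end of the using block
        Console.WriteLine(conn.IsOpen);      // False
    }
}
```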

    Even if you don’t contribute to the source code, you benefit from the huge community behind it. Problems and improvements are quickly discussed, coded, and released for you to use on a regular basis. Gone are the days of waiting years between releases for major improvements or minor bug fixes.

    GeSI home: thought leadership on social and environmental ICT sustainability #global #e-sustainability


    Building a sustainable world through responsible, ICT-enabled transformation

    Developing key tools, resources and best practices to be part of the sustainability solution

    Providing a unified voice for communicating with ICT companies, policymakers and the greater sustainability community worldwide

    UNFCCC / Momentum for Change

    How digital solutions will drive progress towards the sustainable development goals

    SMARTer2030 Action Coalition


    Project Portfolio

    Thought Leadership

    News & Events

    Interview with Carmen Hualda, CSR Manager at Atlinks Holding – Atlinks Holding is the winner of this year’s Leadership Index in the Manufacture & Assembly of ICT Equipment sector (SMEs). We speak to their CSR-QHSE Manager, Carmen Hualda.

    Big Data for big impact: Let’s accelerate sustainability progress – We now live in an era of exponential growth for data flows, driven by the proliferation of connected objects in the Internet of Things (IoT) ecosystem.

    Innovating our way to the SDGs – a forum summary report – The Global e-Sustainability Initiative (GeSI) and Verizon recently hosted a multi-stakeholder forum to identify the potential for information and communications technology (ICT) to catalyze progress towards the 17 UN Sustainable Development Goals (SDGs). Leaders from the ICT industry, other industry sectors, the technology startup sector, the financial community, sustainability NGOs, academia, multilateral organizations, government, and media convened at the Verizon Innovation Center in San Francisco to spend a day focusing on the potential for innovative technology to address four priority solutions core to advancing the SDGs: (1) food and agriculture; (2) energy and climate; (3) smart, sustainable communities; (4) public health.

    To practise what we preach the GeSI website is hosted on an environmentally-friendly data centre located in Toronto, Canada. Green methods were employed wherever possible in the construction and for the ongoing and future operation of the data centre.

    Become a Member

    Each of us has the opportunity to help change the world. Join GeSI to work directly with members of the ICT sector and the greater sustainability community worldwide to alter the direction of how technology influences sustainability.



    ABB data center technology earns acclaim and $50 million string of bundled orders

    2013-05-08 – The approximately half million data centers operating globally are the backbone of our digital society and must be efficient, safe and dependable. ABB has supplied the highest quality, most reliable components to data centers for many years and we recently initiated a concerted approach toward expanding our data center capabilities. Today we are well positioned as a single-source supplier for integrated data center systems and packaged solutions.

    We have focused resources through a dedicated data center industry sector initiative and accompanying growth strategy that has led to:

    • Increased bundling of our offerings through collaboration across all divisions
    • Increased overall R&D investment
    • Leveraging of offerings from acquired companies, including Baldor, Thomas & Betts, Ventyx, Validus DC and Newave
    • Partnerships with world-leading manufacturers of IT hardware
    • Co-development with innovative newcomers such as Power Assure and Nlyte
    • Broader advancements in electrical distribution, grid connections, infrastructure management and emergency power systems

    These investments have been validated with a string of key market successes. In addition to component equipment orders, during a recent six-week period ABB was awarded expanded projects totaling $50 million:

    • Belgium: a global Internet company will expand its data center with ABB medium- and low-voltage (MV and LV) switchgear, transformers, a power management and control system, and comprehensive site services.
    • China: a telecom and two Internet companies have entered into multi-year frame agreements with ABB for the supply of LV switchgear and power distribution units (PDUs).
    • U.K.: a world-leading biomedical research and innovation center has called on ABB for MV and LV switchgear, transformers, battery systems and site services.
    • Germany: ABB LV switchgear has been incorporated into the data center solution of a competitor at the request of the customer, a public agency.
    • India: a new data center ordered advanced LV switchgear and PDUs jointly developed by ABB’s Low Voltage Systems business unit and the Thomas & Betts Power Solutions unit.
    • Mexico: a large financial institution contracted with ABB for a high-voltage gas-insulated substation, MV and LV switchgear, transformers and a two-year service agreement – also delivered in collaboration with the Thomas & Betts Power Solutions unit.
    • Singapore: a global software company’s new data center will rely on our MV and LV switchgear, transformers, station battery system and service.

    Our global reach and project execution abilities have been primary motivators for these customers to choose ABB. In addition, today’s data centers need to ensure the safety of personnel, facilities and equipment while simultaneously maintaining 24/7/365 availability of mission critical systems. Here ABB quality has a key advantage with our full suite of offerings.

    Furthermore, ABB has been extending its expertise in AC power systems to pioneer DC systems, as well. Our purpose is to offer a proven DC alternative for data centers. “ABB believes both AC and DC are relevant in today’s world,” said Tarak Mehta, who heads ABB’s Low Voltage Products Division. “Our customers benefit from our optimized solutions that help them achieve capital savings, and improve energy efficiency and reliability.”

    Currently, ABB is working with industry thought leaders to create a complete DC-enabled infrastructure for data centers, with a comprehensive solutions suite for both UL and IEC markets.

    Whether delivered as AC or DC, power and environmental compatibility are primary concerns of data center managers. On average, a single facility consumes power equivalent to 25,000 homes, and collectively the amount of CO2 emissions resulting worldwide is rapidly approaching levels generated by nations the size of Argentina or the Netherlands.

    ABB has developed Decathlon™, a highly intelligent data center infrastructure monitoring (DCIM) solution. Decathlon automates power and energy management, asset and capacity planning, alarm management, remote monitoring and other key data center functions, integrating every aspect of monitoring and control into a unified, open platform.

    “There are few suppliers with offerings spanning the utility all the way to the power distribution system in the data center and also encompass infrastructure control,” said ABB Data Center Global Leader Valerie Richardson. “Combining this breadth with our global manufacturing, local project execution and service capabilities, we streamline the purchase process and deliver solutions to our customers through a single point of contact.”

    Stay in the loop:

    Security Assessment, VAPT, ECSA Training in Bangalore, Chennai, Mumbai, Pune, Delhi, Gurgaon,


    A penetration test is done to evaluate the security of a computer system or network by simulating an attack by a malicious user / hacker. The process involves active exploitation of security vulnerabilities that may be present due to poor or improper system configuration, known and / or unknown hardware or software flaws, or operational weaknesses in process or design.

    This analysis is carried out from the position of a potential attacker, to determine feasibility of an attack and the resulting business impact of a successful exploit. Usually this is presented with recommendations for mitigation or a technical solution.

    About this workshop

    This workshop gives an in-depth perspective of penetration testing approach and methodology that covers all modern infrastructure, operating systems and application environments.

    This workshop is designed to teach security professionals the tools and techniques required to perform comprehensive information security assessment.

    Participants will learn how to design, secure and test networks to protect their organization from the threats that hackers and crackers pose. This workshop will help participants effectively identify and mitigate risks to the security of their organization's infrastructure.

    This 40-hour, highly interactive workshop will give participants hands-on understanding and experience of security assessment.

    A proper understanding of Security Assessment is an important requirement to analyze the integrity of the IT infrastructure.

    Expertise in security assessment is an absolute requirement for a career in information security management and could be followed by management level certifications like CISA, CISSP, CISM, CRISC and ISO 27001.

    There are many reasons to understand Security Assessment:

    • Prepare yourself to handle penetration testing assignments with more clarity
    • Understand how to conduct Vulnerability Assessment
    • Expand your present knowledge of identifying threats and vulnerabilities
    • Bring security expertise to your current occupation
    • Become more marketable in a highly competitive environment

    Therefore this workshop will prepare you to handle VA / PT assignments and give you a better understanding of various security concepts and practices that will be of valuable use to you and your organization.

    This workshop will significantly benefit professionals responsible for security assessment of the network / IT infrastructure.

    • IS / IT Specialist / Analyst / Manager
    • IS / IT Auditor / Consultant
    • IT Operations Manager
    • Security Specialist / Analyst
    • Security Manager / Architect
    • Security Consultant / Professional
    • Security Officer / Engineer
    • Security Administrator
    • Security Auditor
    • Network Specialist / Analyst
    • Network Manager / Architect
    • Network Consultant / Professional
    • Network Administrator
    • Senior Systems Engineer
    • Systems Analyst
    • Systems Administrator

    Anyone aspiring for a career in Security Assessment would benefit from this workshop. The workshop is restricted to participants who have knowledge of ethical hacking countermeasures.

    The entire workshop is a combination of theory and hands-on sessions conducted in a dedicated ethical hacking lab environment.

    • The Need for Security Analysis
    • Advanced Googling
    • TCP/IP Packet Analysis
    • Advanced Sniffing Techniques
    • Vulnerability Analysis with Nessus
    • Advanced Wireless Testing
    • Designing a DMZ
    • Snort Analysis
    • Log Analysis
    • Advanced Exploits and Tools
    • Penetration Testing Methodologies
    • Customers and Legal Agreements
    • Rules of Engagement
    • Penetration Testing Planning and Scheduling
    • Pre Penetration Testing Checklist
    • Information Gathering
    • Vulnerability Analysis
    • External Penetration Testing
    • Internal Network Penetration Testing
    • Routers and Switches Penetration Testing
    • Firewall Penetration Testing
    • IDS Penetration Testing
    • Wireless Network Penetration Testing
    • Denial of Service Penetration Testing
    • Password Cracking Penetration Testing
    • Social Engineering Penetration Testing
    • Stolen Laptop, PDAs and Cell phones Penetration Testing
    • Application Penetration Testing
    • Physical Security Penetration Testing
    • Database Penetration testing
    • VoIP Penetration Testing
    • VPN Penetration Testing
    • War Dialing
    • Virus and Trojan Detection
    • Log Management Penetration Testing
    • File Integrity Checking
    • Blue Tooth and Hand held Device Penetration Testing
    • Telecommunication and Broadband Communication Penetration Testing
    • Email Security Penetration Testing
    • Security Patches Penetration Testing
    • Data Leakage Penetration Testing
    • Penetration Testing Deliverables and Conclusion
    • Penetration Testing Report and Documentation Writing
    • Penetration Testing Report Analysis
    • Post Testing Actions
    • Ethics of a Penetration Tester
    • Standards and Compliance

    Five Steps to Aligning IT and Business Goals for Data Governance #data


    Five Steps to Aligning IT and Business Goals for Data Governance

    Address a complex, difficult issue: aligning IT and business around a central data management plan.

    Here’s an interesting question: How do you create a successful data governance strategy across a large organization?

    The International Association for Information and Data Quality recently published a lengthy piece that explains how you can coordinate such a data governance and master data management strategy by using a very specific tool: an alignment workshop.

    Kelle O’Neal founded the MDM and customer data integration consultancy First San Francisco Partners, but she’s also worked for Siperian, GoldenGate Software, Oracle and Siebel Systems.

    “Alignment is a key first step in any change management initiative and is especially important to an organization that is trying to better govern and manage data,” writes O’Neal. “Many organizations struggle with launching and sustaining a data program because of a lack of initial alignment.”

    Kelle suggests the Alignment Workshop as a proactive approach to the problem.

    As you might imagine, it involves bringing everyone together – the lines of business, IT and various stakeholders. Kelle writes that the benefits to such a meeting are two-fold:

    • You can educate everyone about data quality, MDM and data governance from the get-go.
    • It supports buy-in and helps maintain long-term interest.

    That part about maintaining long-term interest should be your first clue that this is NOT a one-time event. In fact, she describes it as five components, each building upon the previous one.

    In brief, the five components are:

    1. Confirm the value of the data initiative to IT and the business/operational groups separately. Everybody lists what they see as the benefits, and then you prioritize and map these values. She includes a value-mapping matrix to help you visualize this process, but the gist is that you’re pairing up what matters to IT with the business values.

    So, for instance, an IT value might be to create a single data brokerage architecture, but that’s tied to the business values of focusing on value-added activities, creating consistency in reporting, adhering to regulations and more efficient support processes.

    “This serves to identify, illustrate and confirm the overlap between what is important to the business and what is important to IT,” Kelle writes.

    2. Identify the stakeholders’ goals. You might assume that the stakeholders’ goals would align with either IT or business/operations. That’s not necessarily true. Even if the end goals are the same, it doesn’t mean the stakeholder will share your priorities or have the same concerns about the project.

    So, this is your chance to hear from the people who really will be handling the day in, day out management of any governance or MDM project. Part of this process is also clearly defining what the consequences are if the goals are not achieved.

    Personally, I love this kind of if-then logic, because I think it makes it very clear why individual employees should support concepts that can often seem overly vague – like data governance.

    3. Create linkages between the delivery of the solution and what’s important to individual stakeholders. Here, you’re really drilling down and assigning tasks to individuals, and explaining how those tasks relate to the broader goals. The top data program deliverables are identified for each stakeholder and mapped to their goals.

    “Stakeholders can now clearly see and articulate how those deliverables can help them achieve their business goals,” Kelle writes.

    Again, she offers a sample chart if you’re having trouble visualizing what that means.

    4. Determine success criteria and metrics. We all know the maxim about what gets measured gets done, but this brings it a step closer to home by setting targets for specific stakeholders so they can measure and monitor their own progress.

    5. Establish a communication plan. She goes into some detail about this, but, basically, this translates into documenting what’s been said, as well as how progress should be reported and to whom.

    As I said, it’s very detailed and lengthy, but it addresses a complex, difficult issue: aligning IT and business around a central data management plan.

    Data protection #continuous #data #protection



    BASF Online Data Protection Rules

    BASF is delighted that you have visited our website and thanks you for your interest in our company.

    At BASF, data protection has the highest priority. This document is designed to provide you with information on how we are following the rules for data protection at BASF, which information we gather while you are browsing our website, and how this information is used. First and foremost: your personal data is only used in the following cases and will not be used in other cases without your explicit approval.

    Collecting data

    When you visit the BASF website, general information is collected automatically (in other words, not by means of registration) and is not stored as personal data. The web servers in use store the following data by default:

    • The name of your internet service provider
    • The website from which you visited us
    • The websites which you visit when you are with us
    • Your IP address

    This information is analyzed in an anonymous form. It is used solely for the purpose of improving the attractiveness, content, and functionality of our website. Where data is passed on to external service providers, we have taken technical and organizational measures to ensure that the data protection regulations are observed.

    Collecting and processing personal data

    Personal data is only collected when you provide us with this in the course of, say, registration, by filling out forms or sending emails, and in the course of ordering products or services, inquiries or requests for material.

    Your personal data remains with our company, our affiliates, and our provider and will not be made available to third parties in any form by us or by persons instructed by us. The personal data that we do collect will only be used in order to perform our duties to you and for any other purpose only when you have given specific consent. You can adjust your consent for the use of your personal data at any time with an email to the effect that you revoke your consent in the future to either the email address listed in the imprint or to the data protection representative (contact information listed below).

    Data retention

    We store personal data for as long as it is necessary to perform a service that you have requested or for which you have granted your permission, providing that no legal requirements exist to the contrary such as in the case of retention periods required by trade or tax regulations.


    BASF deploys technical and organizational security measures to protect the information you have made available from being manipulated unintentionally or intentionally, lost, destroyed or accessed by unauthorized persons. Where personal data is being collected and processed, the information will be transferred in encrypted form in order to prevent misuse of the data by a third party. Our security measures are continuously reviewed and revised in line with the latest technology.

    Right to obtain and correct information

    You have the right to obtain information on all of your stored personal data: to receive it, review it, and if necessary amend or erase it. To do this, just send an email to the email address indicated in the imprint or to the person in charge of data protection (see below for the relevant contact details). Your personal data will be deleted unless we are legally obligated to retain it.


    On our corporate website, we only use cookies if they are required for an application or service which we provide. If you would like to opt out of the advantages of these cookies, you can read in the help function on your browser how to adjust your browser to prevent these cookies, accept new cookies, or delete existing cookies. You can also learn there how to block all cookies or set up notifications for new cookies.

    The cookies which we currently use on the website are listed in the following table.

    • Webtrends: with this cookie, the web analytics tool Webtrends gathers anonymous information about how our website is used. The information collected helps us to continually address the needs of our visitors; it includes, for example, how many people visit our site, from which websites they come and what pages they view. Further information can be found in the statement on data protection from Webtrends. Erased two years after the site visit.

    • DoubleClick: we use the DoubleClick cookie to compile data regarding user interactions with ad impressions and other ad service functions as they relate to our website. Erased two years after the site visit.


    If you have any questions or ideas, please refer to the data protection representative at BASF SE, who will be pleased to help you. The continuous development of the Internet makes it necessary for us to adjust our data protection rules from time to time. We reserve the right to implement appropriate changes at any time.

    General Contact

    Ralf Herold Data Protection Officer at BASF SE, COA – Z 36 67056 Ludwigshafen +49 (0) 621 60-0

    • Contact the Data Protection Officer at BASF

    Join the conversation





    Copyright © BASF SE 2017

    5 ways hospitals can use data analytics #data #analytics #healthcare


    When it comes to healthcare analytics, hospitals and health systems can benefit most from the information if they move towards understanding the analytic discoveries, rather than just focusing on the straight facts.

    George Zachariah, a consultant at Dynamics Research Corporation in Andover, Mass., explains the top five ways hospital systems can better use health analytics to get the most out of the information.

    1. Use analytics to help cut down on administrative costs.

    “Reducing administrative costs is really one of the biggest challenges we face in the industry,” said Zachariah. “One-fourth of all healthcare budget expenses go to administrative costs, and that is not a surprise, because you need human resources in order to perform.”

    Zachariah suggests that hospital systems begin to better utilize and exchange the information they already have by making sure their medical codes are properly used, and thus, the correct reimbursements are received.

    “Right now, with electronic medical records, you can see that automated coding can significantly enhance how we turn healthcare encounters into cash flow by decreasing administrative costs,” he said.

    2. Put key clinical information on one electronic dashboard.

    Zachariah said that having all medical tests, lab reports and prescribed medications for patients on one electronic dashboard can significantly improve the way clinicians make decisions about their patients while at the same time cutting costs for the organization.

    “If all the important information is on one electronic dashboard, clinicians can easily see what needs to get done for a patient, and what has already been done. They can then make clinical decisions right on the spot,” he said. In addition, clinicians will not double-prescribe certain medications due to a lack of information about the patient.

    3. Cut down on fraud and abuse.

    Zachariah said that with such a significant amount of money lost in the healthcare industry due to fraud and abuse, it’s important for organizations to use analytics for insight into patient information and what physicians are doing for their patients.

    “Analytics can track fraudulent and incorrect payments, as well as the history of an individual patient,” he said. “However, it’s not just about the analytic tool itself but understanding the tool and how to use it to get the right answers.”

    4. Use analytics for better care coordination.

    Zachariah believes that the use of healthcare analytics in the next 10 years is going to be extremely important for hospital systems.

    “Even within the same hospital system, care can be very disjointed,” he said. “I think we need to use analytics to help with patient handoff, both within systems and between all types of healthcare organizations across the country. Historically, within many organizations different specialties just didn’t communicate with one another about a patient, and I think we can really work to have all records reachable across the country.”

    5. Use analytics for improved patient wellness.

    Analytics can help healthcare organizations remind patients to keep up with a healthy lifestyle, as well as keep track of where a patient stands in regard to their lifestyle choices, said Zachariah.

    “Analytics can be used to provide information on ways a certain patient can modify his or her lifestyle,” he said. “This makes a patient’s health a huge priority, and I don’t think people will mind being reminded to take care of themselves.”