
Analysis of Tools for Data Cleaning and Quality Management


Data cleaning is needed when combining heterogeneous data sources with relations or tables in databases. Data cleaning, also called data cleansing or data scrubbing, is defined as detecting and removing errors and ambiguities in files and log tables, with the aim of improving the quality of the data. Data quality and data cleaning are closely related terms: the more regularly data is cleansed, the more its quality improves over time. Various data cleaning tools are freely available on the internet, including WinPure Clean and Match, OpenRefine, Wrangler, DataCleaner and many more. This thesis presents information about the WinPure Clean and Match data cleaning tool, and its benefits and applications in a running environment owing to its three-phase filtered mechanism for cleaning data. It has been applied to a user-defined database, and the results are presented in this chapter.

WinPure Clean and Match

It is one of the easiest and simplest three-phase filtered cleaning tools for performing data cleansing and data de-duplication. It is designed so that running the application saves time and money. The main benefit of this tool is that two tables or lists can be imported at the same time. The software uses a fuzzy matching algorithm to perform powerful data de-duplication. Its functions are as follows (an illustrative sketch follows the list):

  • Removes redundant data from databases quickly.
  • Corrects misspellings and incorrect email addresses, and converts words to uppercase or lowercase according to the user's needs.
  • Removes unwanted punctuation and spelling errors.
  • Helps to locate missing data and presents statistics in the form of a 3D chart; this can be useful, for example, in finding the population percentage of a particular area.
  • Automatically capitalizes the first letter of every word.
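
WinPure is a closed, GUI-driven tool, so the exact implementation of these functions is not public. Purely as an illustration, the following sketch reproduces a few of the listed operations (whitespace and punctuation clean-up, word capitalization, a rough e-mail check and exact de-duplication) with pandas; the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical contact list; column names and values are for illustration only.
df = pd.DataFrame({
    "name":  ["  joHN smith ", "jane   doe", "jane   doe"],
    "email": ["john@@example.com", "jane@example.com", "jane@example.com"],
})

# Trim whitespace, collapse repeated spaces and strip stray punctuation.
df["name"] = (df["name"].str.strip()
                        .str.replace(r"\s+", " ", regex=True)
                        .str.replace(r"[^\w\s.'-]", "", regex=True))

# Capitalize the first letter of every word (the "case converter" idea).
df["name"] = df["name"].str.title()

# Flag syntactically malformed e-mail addresses with a very rough pattern.
df["email_valid"] = df["email"].str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Remove exact duplicate rows.
df = df.drop_duplicates()
print(df)
```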

Advantages

  • Increases the accuracy and utilization of a database (whether a professional, user-defined or consumer database).
  • Eliminates duplicates from databases using the fuzzy-matching de-duplication technique.
  • Improves industry perspectives by applying standard naming conventions, with the facility of removing duplicate data from the original data.
  • Exports a given file into various formats such as Access, Excel (95), Excel (2007), Outlook, etc.

Applications

  • The software is designed for everyone from ordinary users to IT professionals. It is ideal for marketing, banking, universities and various IT organizations.

Working of WinPure Clean and Match

Clean and Match is made up of three components: Data, Clean and Match. The Data component shows the imported list of tables. The Clean section consists of seven modules, each serving a different purpose; it is used to analyze, clean, correct and correctly populate a given table without removing duplicates. Its separate cleansing modules are the Statistics module, Case Converter, Text Cleaner, Column Cleaner, E-mail Cleaner, Column Splitter and Column Merger (a short sketch of the column-oriented modules follows).
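
Again as an illustration only, and not WinPure's own code, the sketch below mimics what a column splitter and a column merger do, using pandas on hypothetical columns.

```python
import pandas as pd

df = pd.DataFrame({"full_name": ["John Smith", "Jane Doe"],
                   "city": ["Leeds", "York"],
                   "postcode": ["LS1 4DY", "YO1 7HH"]})

# Column splitter: break one column into several on a delimiter.
df[["first_name", "last_name"]] = df["full_name"].str.split(" ", n=1, expand=True)

# Column merger: combine several columns into a single one.
df["address"] = df["city"] + ", " + df["postcode"]

print(df[["first_name", "last_name", "address"]])
```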

The Match section detects duplicates using the fuzzy-matching de-duplication technique. WinPure Clean and Match follows a three-step approach to finding duplicates in a given list or database (a small matching sketch follows the steps):

Step 1: The first step is to specify which table(s) and columns to search for possible duplicates.

Step 2: The second step is to specify which matching technique to use: basic (telephone numbers, emails, etc.) or advanced de-duplication with or without fuzzy matching (names, addresses, etc.).

Step 3: The final step is to specify which viewing screen to use; WinPure Clean & Match offers two viewing screens for managing the duplicated records.
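
To make the matching idea in Step 2 concrete, the following sketch scores pairwise similarity between name strings with Python's standard-library difflib and flags pairs above a threshold. This is only a simplified stand-in for WinPure's proprietary fuzzy matching; the field choice and threshold are assumptions.

```python
from difflib import SequenceMatcher
from itertools import combinations

records = ["John Smith", "Jon Smith", "Jane Doe", "J. Smith"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two strings, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.8  # assumption: tune per dataset
for a, b in combinations(records, 2):
    score = similarity(a, b)
    if score >= THRESHOLD:
        print(f"possible duplicate: {a!r} ~ {b!r} (score {score:.2f})")
```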

Limitations of WinPure Clean and Match

(a) It does not deal with the connectivity or networking of datasets. It simply removes redundant words by cleaning and matching data.

(b) It is not derived from any expert system such as Simile Longwell CSI and lacks a client-server architecture.

(c) Modifying or updating the dataset is not possible once the data has been imported into the tool.

Google Refine

Google Refine, now known as OpenRefine, overcomes the limitations of WinPure Clean and Match. It is a powerful tool for working with dirty data: it cleans and transforms data and offers services to link it to databases such as Freebase. OpenRefine understands a variety of data file formats. Currently, it tries to guess the format based on the file extension. For example, .xml files are of course in XML. By default, an unknown file extension is assumed to be either tab-separated values (TSV) or comma-separated values (CSV).

Once imported, the data is stored in OpenRefine's own format, and the original data file is left undisturbed.
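
The extension-based guess with a delimited-text fallback can be pictured with a short sketch. This is not OpenRefine's importer code; it is a hypothetical illustration of the same idea using Python's standard library.

```python
import csv
import os

def guess_format(path: str) -> str:
    """Guess an import format from the file extension, falling back to CSV/TSV sniffing."""
    known = {".xml": "xml", ".json": "json", ".xls": "excel", ".xlsx": "excel"}
    ext = os.path.splitext(path)[1].lower()
    if ext in known:
        return known[ext]
    # Unknown extension: assume a delimited text file and sniff the delimiter.
    with open(path, newline="", encoding="utf-8") as f:
        dialect = csv.Sniffer().sniff(f.read(2048), delimiters=",\t")
    return "tsv" if dialect.delimiter == "\t" else "csv"

# e.g. guess_format("records.dat") -> "csv" or "tsv", guess_format("data.xml") -> "xml"
```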

Google Refine Architecture

OpenRefine is a web application that is intended to be run on one's own machine and used by oneself. It therefore has both a server side and a client side. The server side maintains the state of the data (undo/redo history, long-running processes, etc.), while the client side maintains the state of the user interface (facets and their selections, view pagination, etc.). The client side makes GET and POST Ajax calls to fetch and modify data-related information on the server side (a small sketch of such a call follows).
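
The same split can be exercised outside the browser by calling the local server directly. The sketch below issues a GET request to what I understand to be OpenRefine's command endpoint for project metadata on its default port (3333); the endpoint name, port and response shape should be verified against the OpenRefine documentation for the installed version.

```python
import json
from urllib.request import urlopen

# Assumption: an OpenRefine instance is running locally on its default port 3333.
URL = "http://127.0.0.1:3333/command/core/get-all-project-metadata"

with urlopen(URL) as response:
    metadata = json.load(response)

# Each key under "projects" is a project id; the value holds its name and other details.
for project_id, info in metadata.get("projects", {}).items():
    print(project_id, info.get("name"))
```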

The architecture is derived from expert systems such as Simile Longwell CSI, a faceted browser for RDF data. It provides a good separation of concerns (data vs. user interface) and also makes it quick and easy to implement user-interface features using familiar web technologies.

Using Data Quality Services in connecting databases

This section describes how to provide high-quality data by introducing Data Quality Services (DQS) in Microsoft SQL Server. The data-quality solution provided by DQS enables an IT professional to maintain the quality of their data and ensure that the data is suited to its business usage. DQS is a knowledge-driven solution that provides both computer-assisted and interactive ways to manage the integrity and quality of data sources. It enables you to discover, build and manage knowledge about your data, and then to use that knowledge to perform data cleansing, matching and profiling. It is based on building a knowledge base, or test bed, to identify the quality of data as well as to correct bad-quality data. Data Quality Services is an important component of SQL Server.

Utilisation of data cleaning and quality phases

The process of data cleaning starts at the initial phase, when the user chooses data from a random dataset taken from the internet or from books. A framework showing the use of these processes is described below as a sequence of steps (a sketch of the resulting pipeline follows the steps):

Step 1) Choose a random dataset.

Step 2) Shorten it according to user requirements.

Step 3) Determine whether the data contains dirty bits or not.

Step 4) Cleanse the data by testing it on application platforms such as WinPure Clean and Match and Google Refine.

Step 5) The task of creating high-quality data is then initiated.

Step 6) Connect the refined database to SQL Server.

Step 7) Install Data Quality Services (DQS).

Step 8) Build a knowledge base through the DQS interface.

Step 9) After the knowledge base has been built, the knowledge discovery process is started.

Step 10) In the knowledge discovery process, string values are normalized to replace incorrect spellings and errors.

Step 11) This leads to the production of high-quality data by removing dirty bits of data.
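
Steps 4 and 6 can be sketched in a few lines of Python. This is a minimal sketch, assuming pandas, SQLAlchemy and a local SQL Server instance reachable through ODBC Driver 17; the file name, database name and table name are placeholders, and DQS itself (Steps 7 to 11) is then driven from its own client interface.

```python
import pandas as pd
from sqlalchemy import create_engine

# Step 4 (simplified): load the chosen dataset and apply a basic cleanse.
df = pd.read_csv("dataset.csv")                 # placeholder file name
df.columns = [c.strip().lower() for c in df.columns]
df = df.drop_duplicates()

# Step 6: push the refined table into SQL Server so that DQS can work on it.
# Assumption: Windows-authenticated local instance with a database named CleansedData.
engine = create_engine(
    "mssql+pyodbc://@localhost/CleansedData"
    "?driver=ODBC+Driver+17+for+SQL+Server&trusted_connection=yes"
)
df.to_sql("refined_dataset", engine, if_exists="replace", index=False)
```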

Shortcomings of the existing tools

  • WinPure Clean and Match simply cleans data by removing redundant words. It gives no information about synonyms and homophones.
  • This data cleaning tool produces only a moderate level of correctness. It gives details of incorrect words and matched words instead of removing similar words, which wastes memory and lowers accuracy.
  • Data Quality Services (DQS) is somewhat complex for non-technical users. An ordinary person cannot use this quality software without knowledge of databases.
  • DQS improves data quality with human intervention: if the user selects the correct spelling of a given word, DQS approves it; otherwise it rejects it.
  • There is no automatic system for the detection of strings and synonyms, and SQL Server has to be set up on the machine before DQS can be used.
  • Both tools work syntactically rather than semantically, which is why they are unable to find synonyms.
  • These tools correct the given data according to predefined syntaxes, such as spelling errors and omitted commas.

Keeping the above shortcomings in consideration, the study proposes a data cleaning algorithm based on a string-detection matching technique via WordNet.
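
To indicate the direction only, the sketch below shows how WordNet (accessed here through NLTK) exposes synonym sets, which is the ingredient the proposed technique relies on. It is not the proposed algorithm itself, and it assumes NLTK is installed and the wordnet corpus has been downloaded.

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # one-time corpus download

def synonyms(word: str) -> set:
    """Collect lemma names from every WordNet synset of the given word."""
    return {lemma.name().replace("_", " ")
            for synset in wn.synsets(word)
            for lemma in synset.lemmas()}

# Two values that differ syntactically can still be semantic duplicates:
# a non-empty intersection of synonym sets flags them as likely synonyms.
print(synonyms("car") & synonyms("automobile"))
```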

 
