Annotation Using Content and Querying Value

A large number of organizations today generate and share textual descriptions of their products, services, and actions. Such collections of textual data contain a significant amount of structured information, which remains buried in the unstructured text. While information extraction algorithms facilitate the extraction of structured relations, they are often expensive and inaccurate, especially when operating on text that does not contain any instances of the targeted structured information.

We present a novel alternative approach that facilitates the generation of structured metadata by identifying documents that are likely to contain information of interest, information that will subsequently be useful for querying the database. Our approach relies on the idea that humans are more likely to add the necessary metadata at creation time if prompted by the interface, and that it is much easier for humans (and/or algorithms) to identify the metadata when such information actually exists in the document, instead of naively prompting users to fill in forms with information that is not available in the document.
As a major contribution of this paper, we present algorithms that identify structured attributes that are likely to appear within a document, by jointly utilizing the content of the text and the query workload. Our experimental evaluation shows that our approach generates superior results compared to approaches that rely only on the textual content, or only on the query workload, to identify attributes of interest.
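As a rough illustration of the joint idea, an attribute can be ranked by combining how strongly the document's content supports it with how often the query workload asks for it. The multiplicative scoring rule and the attribute names below are illustrative assumptions, not the paper's actual model:

```python
from collections import Counter

def rank_attributes(content_scores, query_workload, top_k=3):
    """Rank candidate attributes by combining content evidence
    (e.g. a classifier's confidence that the attribute's value
    appears in the document) with querying value (how often the
    workload filters on that attribute). Both factors are in
    [0, 1] and are simply multiplied."""
    usage = Counter(attr for query in query_workload for attr in query)
    max_usage = max(usage.values()) if usage else 1
    scored = {
        attr: content_scores.get(attr, 0.0) * (usage.get(attr, 0) / max_usage)
        for attr in content_scores
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

# Content evidence for a document describing, say, a laptop listing
content = {"brand": 0.9, "price": 0.8, "weight": 0.6, "color": 0.2}
# Past queries, each represented by the set of attributes it filtered on
workload = [{"brand", "price"}, {"price"}, {"brand"}, {"weight"}]

print(rank_attributes(content, workload))  # → ['brand', 'price', 'weight']
```

Note that "color" has some content support but is never queried, so it is ranked last; this is exactly the behavior that distinguishes the joint approach from a content-only one.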
Existing System
There are several systems that favor the collaborative annotation of objects and use previous annotations or tags to annotate new objects. There has been a significant amount of work on predicting tags for documents and other resources (web pages, images, videos). Depending on the object and the user involvement, these approaches make different assumptions about what is expected as input; nevertheless, the goals are similar, as all expect to find missing tags that are related to the object.
Disadvantages Of Existing System:
• The cost of creating annotation information is high.
• The existing system produces erroneous suggestions.

Proposed System
In this project, we propose the Collaborative Adaptive Data Sharing platform (CADS), an “annotate-as-you-create” infrastructure that facilitates fielded data annotation. A key contribution of our system is the direct use of the query workload to guide the annotation process, in addition to examining the content of the document. In other words, we prioritize the annotation of documents toward generating values for attributes that are often used by querying users.
Proposed Work
The Collaborative Adaptive Data Sharing platform (CADS) is an “annotate-as-you-create” infrastructure that facilitates fielded data annotation. The aim of CADS is to minimize the cost of creating annotated documents that are useful for commonly issued semi-structured queries. Fig. 1 represents the workflow of CADS. The CADS system has two types of actors: producers and consumers. Producers upload data into the CADS system using interactive insertion forms, and consumers search for relevant information using adaptive query forms.
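The producer-side insertion form can be sketched minimally as follows. The frequency-based field selection and the attribute names are illustrative assumptions, not the system's actual form-generation logic:

```python
from collections import Counter

def build_insertion_form(query_workload, all_attributes, num_fields=2):
    """Select which annotation fields to show a producer at upload
    time, preferring the attributes that consumers query most often.
    Ties and never-queried attributes fall back to their order in
    all_attributes (Python's sort is stable)."""
    usage = Counter(attr for query in query_workload for attr in query)
    ranked = sorted(all_attributes, key=lambda a: -usage.get(a, 0))
    return ranked[:num_fields]

# Each past consumer query is the set of attributes it filtered on
workload = [{"author", "year"}, {"year"}, {"topic"}]
fields = build_insertion_form(workload, ["author", "topic", "year", "venue"])
print(fields)  # → ['year', 'author']
```

Because "year" appears in two past queries and "venue" in none, the form asks the producer for "year" first and omits "venue" entirely, keeping the annotation effort focused where it pays off at query time.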

Advantages Of Proposed System:
• We present an adaptive technique for automatically generating data input forms for annotating unstructured textual documents, such that the utilization of the inserted data is maximized given the users' information needs.
• We create principled probabilistic methods and algorithms to seamlessly integrate information from the query workload into the data annotation process, in order to generate metadata that are not just relevant to the annotated document, but also useful to the users querying the database.
• We present extensive experiments with real data and real users, showing that our system generates accurate suggestions that are significantly better than the suggestions from alternative approaches.
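The kind of comparison described above can be quantified with a simple precision-at-k over the suggested attributes. The metric choice and the toy data below are illustrative assumptions, not the paper's actual evaluation:

```python
def precision_at_k(suggested, relevant, k):
    """Fraction of the top-k suggested attributes that the
    document actually contains (the relevant set)."""
    top = suggested[:k]
    return sum(1 for attr in top if attr in relevant) / k

relevant = {"brand", "price", "weight"}          # attributes truly present
joint = ["brand", "price", "weight", "color"]    # content + workload ranking
content_only = ["brand", "color", "price", "weight"]

print(precision_at_k(joint, relevant, 2))         # → 1.0
print(precision_at_k(content_only, relevant, 2))  # → 0.5
```

Averaging such per-document scores over a test collection is one straightforward way to compare the joint approach against content-only and workload-only baselines.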
