Multimedia Question Answering

Liqiang Nie

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
SCHOOL OF COMPUTING
NATIONAL UNIVERSITY OF SINGAPORE
2013

© 2013 Liqiang Nie. All Rights Reserved.

Publications

1. Liqiang Nie, Yi-Liang Zhao, Xiangyu Wang, Jialie Shen, Tat-Seng Chua. Learning to Recommend Descriptive Tags for Questions in Social Forums. ACM Transactions on Information Systems, 2013. Full journal paper.
2. Yi-Liang Zhao, Liqiang Nie, Xiangyu Wang, Tat-Seng Chua. Personalized Recommendations of Locally Interesting Venues to Tourists via Cross-Region Community Matching. ACM Transactions on Intelligent Systems and Technology, 2013. Full journal paper.
3. Liqiang Nie, Meng Wang, Gao Yue, Zheng-Jun Zha, Tat-Seng Chua. Beyond Text QA: Multimedia Answer Generation by Harvesting Web Information. IEEE Transactions on Multimedia, 2013. Full journal paper.
4. Liqiang Nie, Shuicheng Yan, Meng Wang, Richang Hong, Tat-Seng Chua. Harvesting Visual Concepts for Image Search with Complex Queries. In Proceedings of the ACM International Conference on Multimedia, 2012. Full conference paper, oral.
5. Liqiang Nie, Meng Wang, Zheng-Jun Zha, Tat-Seng Chua. Oracle in Image Search: A Content-Based Approach to Performance Prediction. ACM Transactions on Information Systems, 2012. Full journal paper.
6. Richang Hong, Meng Wang, Guangda Li, Liqiang Nie, Tat-Seng Chua. Multimedia Question Answering. IEEE Multimedia, 2012. Full magazine paper.
7. Weinan Zhang, Zhaoyan Ming, Yu Zhang, Liqiang Nie, Ting Liu and Tat-Seng Chua. The Use of Dependency Relation Graph to Enhance the Term Weighting in Question Retrieval. In Proceedings of the International Conference on Computational Linguistics, 2012. Full conference paper, oral.
8. Yan Chen, Zhoujun Li, Liqiang Nie, Xia Hu, Xiangyu Wang and Tat-Seng Chua. A Semi-Supervised Bayesian Network Model for Microblog Topic Classification. In Proceedings of the International Conference on Computational Linguistics, 2012. Full conference paper, oral.
9. Liqiang Nie, Meng Wang, Zheng-Jun Zha, Guangda Li, Tat-Seng Chua. Multimedia Answering: Enriching Text QA with Media Information. In Proceedings of the International ACM SIGIR Conference, 2011. Full conference paper, oral.
10. Xiangyu Chen, Jin Yuan, Liqiang Nie, Zhengjun Zha, Tat-Seng Chua. NUS-LMS Known-Item Search in TRECVID 2010. Full research paper.
11. Richang Hong, Guangda Li, Liqiang Nie, Jinhui Tang, Tat-Seng Chua. Exploring Large-Scale Data for Multimedia QA: An Initial Study. In Proceedings of the International ACM Conference on Image and Video Retrieval, 2010. Full conference paper, oral.

Acknowledgements

This dissertation would not have been completed, or at least would not be what it is now, without the support, direction and help of many people. I am honored to take this opportunity to thank them.

My first and foremost thanks undoubtedly go to my supervisor, Prof. Tat-Seng Chua, a respectable, responsible and resourceful professor, who took me into his research group in mid-2009. Since then, whenever I have had questions, his door has always been open for discussion. His creative ideas and unique angle of research observation have consistently inspired me to devote my efforts to the area of media search. Prof. Chua always sets a high standard for our research, insists on impactful work targeted at premier forums, and advocates the value of building commercializable systems. Besides research, he leads us on long-distance jogging almost once per week, which greatly strengthens our bodies.

Second, I would like to express my heartfelt gratitude to Prof. Meng Wang and Prof. Richang Hong, who have influenced me in many ways and deserve my special appreciation. During the first two years of my Ph.D. pursuit, they constantly provided insightful suggestions and discerning comments on my research work and paper drafts. Their heuristic guidance in our discussions has made me think and work very independently.

I sincerely extend my thanks to my doctoral committee (Prof. Mohan S. Kankanhalli, Prof. Chew Lim Tan and Prof. Anthony K. H. Tung). Their constructive feedback and comments at various stages have been significantly helpful in shaping the thesis to completion. I would also like to thank the external examiner, Prof. Winston H. Hsu, for his critical reading and constructive criticisms, which have helped make the thesis as sound as possible.

Lastly, my thanks are reserved for my beloved family: my parents, my wife, and our twins. For their selfless consideration, endless love and unconditional support, I cannot find words to express my gratitude.

Contents

List of Figures
List of Tables

Chapter 1: Introduction
  1.1 Background
  1.2 Motivation
  1.3 Challenges
  1.4 Strategies
    1.4.1 Question Analysis
    1.4.2 Answer Medium Determination
    1.4.3 Web Media Answer Selection
  1.5 Contributions
  1.6 Outline of the Thesis
Chapter 2: Literature Review
  2.1 Automatic Textual Question Answering
  2.2 Community-based Question Answering
  2.3 Multimedia Question Answering
  2.4 Summary
Chapter 3: Question Understanding
  3.1 Introduction
  3.2 Related Work
    3.2.1 Annotation of Media Entities
    3.2.2 Annotation of Textual Entities
  3.3 Question Annotation Scheme
  3.4 Question Space Inference
    3.4.1 Probabilistic Hypergraph Construction
    3.4.2 Adaptive Probabilistic Hypergraph Learning
    3.4.3 Discussions
  3.5 Relevant Tag Selection
    3.5.1 Tag Relevance Estimation
    3.5.2 Complexity Analysis
  3.6 Query Generation for Multimedia Search
  3.7 Experiments
    3.7.1 Experimental Settings
    3.7.2 First-order Analytics on Our Dataset
    3.7.3 On Learning Performance Comparison
    3.7.4 On Relevant Tag Selection
    3.7.5 On the Sensitivity of Parameters
    3.7.6 Ontology Generation
      3.7.6.1 Application Scenario
      3.7.6.2 Experiments
    3.7.7 Evaluation of Query Generation
  3.8 Summary
Chapter 4: Answer Medium Determination
  4.1 Introduction
  4.2 Related Work
  4.3 Answer Medium Selection
    4.3.1 Question-Based Classification
    4.3.2 Answer-Based Classification
  4.4 Answer Availability Prediction
    4.4.1 Probabilistic Analysis of AP and NDCG
    4.4.2 Query-Adaptive Graph-Based Learning
      4.4.2.1 Ranking-Based Relevance Analysis
      4.4.2.2 Query-Adaptive Graph-Based Learning
      4.4.2.3 Discussion
  4.5 Experiments
    4.5.1 Experimental Settings
    4.5.2 On Answer Medium Selection
    4.5.3 On Query Classification
    4.5.4 On Media Search Performance Prediction
    4.5.5 Discussion
  4.6 Applications
    4.6.1 Image Metasearch
      4.6.1.1 Application Scenario
      4.6.1.2 Experiments
    4.6.2 Multilingual Image Search
      4.6.2.1 Application Scenario
      4.6.2.2 Experiments
    4.6.3 Boolean Image Search
      4.6.3.1 Application Scenario
      4.6.3.2 Experiments
  4.7 Summary
Chapter 5: Multimedia Answer Selection
  5.1 Introduction
  5.2 Related Work
    5.2.1 Complex Queries in Text Search
    5.2.2 Complex Queries in Media Search
  5.3 Relevant Media Answer Selection Scheme
  5.4 Visual Concept Detection
  5.5 Heterogeneous Network
    5.5.1 Semantic Relatedness Estimation
    5.5.2 Visual Relatedness Estimation
    5.5.3 Cross-Modality Relatedness Estimation
      5.5.3.1 KDE Approach
      5.5.3.2 NRCC Approach
    5.5.4 Discussions
  5.6 Experiments
    5.6.1 Experimental Settings
    5.6.2 On Visual Concept Detection
    5.6.3 On Query Performance Comparison
    5.6.4 On Media Answer Selection
    5.6.5 On the Sensitivity of Parameters
  5.7 Applications
    5.7.1 Photo-based Question Answering
    5.7.2 Textual News Visualization
  5.8 System Evaluation
    5.8.1 Data Presentation
    5.8.2 On Informativeness of Enriched Media Data
    5.8.3 Subjective Test of Multimedia Answering

Figure 2.6: Three-layer system architecture for photo-based QA.

... multimodal QA technically challenging. Zhang [155] observed and addressed this problem in 2012 through the paradigm of answering-by-search in a two-stage computational framework, composed of instance search (IS) and similar question ranking (QR), as illustrated in Figure 2.7. In IS, the names of the instances are inferred from similar visual examples searched over a million-scale image dataset. To recall instances of non-planar and non-rigid shapes, spatial configurations that emphasize topology consistency while allowing for local variations in matches have been incorporated. In QR, the candidate names of the instance are statistically identified from the search results and directly utilized to retrieve similar questions from community-contributed QA (cQA) archives. By parsing questions into syntactic trees, fuzzy matching between the inquirer's question and cQA questions is performed to locate answers and recommend related questions to the inquirer. The proposed framework is evaluated on a wide range of visual instances (e.g., fashion, art, food, pet, logo, and landmark) over various QA categories (e.g., factoid, definition, how-to, and opinion).

Figure 2.7: The framework of answering a multimodal question by naming the visual instance.

Chua et al. in 2009 first systematically proposed an approach to extend text-based QA research to MMQA in order to tackle a range of factoid, definitional and "how-to" QA in a common framework [29]. Their system was designed to directly find multimedia answers from web-scale media resources such as Flickr and YouTube. Moreover, they found that MMQA uses a retrieval pipeline similar to that of text-based QA, and gave the correspondence for video QA: word to video frame, sentence to shot, paragraph to scene, and document to video sequence, as depicted in Figure 2.8. However, their scheme can hardly answer complex questions. This is because automatically accomplishing MMQA essentially requires understanding the questions, which is not an easy task.

Figure 2.8: Comparison of the retrieval pipelines of automatic QA (aQA) and MMQA. It is observed that the procedures of the two QAs are analogous to each other.

The existing work related to MMQA is summarized in Table 2.3. These approaches usually focus on a specific domain and simple factoid questions. Also, all of them handle only a single answer medium, i.e., image or video. What is more, few of them sufficiently incorporate community-contributed knowledge.

Table 2.3: Summarization of the previous multimedia question answering research work.

| Work | Specific Domain | Textual Infor. | Visual Knowledge | Answer Medium | Community-contributed Knowledge |
|---|---|---|---|---|---|
| (Yang, 2003, [145]) | News | Yes | Yes | Video | No |
| (Wu, 2004, [136]); (Wu, 2008, [137]) | Movie; News | Yes | No | Video | No |
| (Lee, 2009, [67]); (Li, 2010, [70]) | Cooking | Yes | Yes | Video | Partial |
| (Chua, 2009, [29]) | How-to; Definition | Yes | Yes | Video | No |
| (Gary, 2005, [61]) | Game | Yes | Yes | Images | No |
| (Yeh, 2008, [148]) | Object | Yes | Yes | Images | Partial |
| (Zhang, 2012, [155]) | Object | Yes | Yes | Text | Yes |

2.4 Summary

As previously introduced, both traditional automatic textual QA and community-based QA (including social-oriented QA) have achieved great success. The former mainly addresses simple and factoid questions, while the latter makes it possible to answer verbose and complex questions by utilizing the intelligence of grassroots Internet users. However, to date, QA research has largely focused on text, and the existing MMQA research work either can only lightly handle certain questions in narrow domains or only supports a question-independent monolithic media type, such as pure video or pure image. Further, none of the existing work deeply utilizes socially contributed resources to assist question understanding, which is the key to handling complex and general questions in broad domains. Besides, answer medium determination, answer availability prediction, and media answer selection have not been scientifically touched before.

Chapter 3: Question Understanding

3.1 Introduction

As mentioned before, it would be much more intuitive and descriptive to answer some questions with appropriate media data, especially for appearance-oriented and procedure-oriented questions. To collect relevant visual data, effective query formulation plays an essential role. However, formulating informative queries from questions is non-trivial, due to the deep question understanding problem resulting from the following facts. First, most questions are expressed by only one sentence, which is extremely sparse for conveying rich information [28]. Second, the extracted or generated key phrases should summarize at least one aspect/sub-topic of the given natural language question, and should be neither over-general nor over-specific. Third, some key phrases are exactly what the questions ask for, and are not explicitly highlighted in the question content. One typical example is "Who is the current CEO of Facebook?"

With the exploding growth of social networks, question annotation has appeared on some social-oriented QA sites, such as Quora and Zhihu. It provides an efficient route to question understanding by enabling each question to have a few manually assigned and collectively editable tags without constraints on the vocabulary. These tags summarize the question content at a coarse-grained but semantically meaningful level. One typical example of question tagging is demonstrated in Figure 3.1.

Figure 3.1: Illustration of question tagging selected from Stack Overflow.

Naturally, tags summarize multi-faceted aspects of question content; they can be viewed as keyword candidates for the given question and can also be utilized to expand the generated query. Further, taxonomies of questions based on tag mapping can greatly reduce answer generation cost through direct vertical search. For example, given a question, its textual answer can be relatively easily and precisely obtained by question matching within specific sub-trees or question categories rather than over the whole ontology of QA knowledge. Therefore, question annotation and its generated ontology benefit query formulation and inherent knowledge discovery within domains.

Besides question understanding, question tags greatly facilitate the processing of user-generated content from various angles. First, tags immediately put a question into the feeds of related topic-followers, which attracts more attention and leads to a faster answer. Second, tags precisely connect experts to questions by signaling the recommender systems with brief but informative cues.

Furthermore, tags also benefit other fundamental applications such as statistics, expert badges, indexing, searching and organizing.

However, the incompleteness of question tagging is statistically observed to be a noticeable phenomenon, as illustrated in Table 3.1, which is gathered from approximately 200K questions crawled from Zhihu. It shows that more than 37% of questions contain zero or at most one tag.

Table 3.1: The distribution of the number of tags annotated for questions.

| Tag Number | Zero Tags | One Tag | Two Tags | Above Two Tags |
|---|---|---|---|---|
| Question Percentage | 18.08% | 19.34% | 18.89% | 43.69% |

The incompleteness of tags is caused by incomprehensive question understanding or informal tagging behaviours. This severely hinders the performance of tag-based systems. For example, both the precision and recall of tag-based question search will be degraded because of the absence of potentially relevant tags. It also limits higher-level tasks such as ontological question organization and effective question routing.

Automatic question annotation with the available tag vocabulary is the most straightforward approach to tackle this difficulty. However, some user-provided tags in the vocabulary are often biased towards personal perspectives or specific contextual information [3]; they are usually subjective and inconsistent with the more frequently used terms. Tags of this kind are not very stable or reliable. Conversely, some tags with extremely high frequencies are too broad to describe individual question content. Therefore, new approaches towards automatically refilling or enlarging questions with objective and descriptive tags are highly desired.

It is worth mentioning that the literature on automatic question annotation is relatively sparse, even though several prior efforts have been dedicated to automatic annotation of social media and textual entities [157, 38, 93, 39]. These algorithms can hardly be applied to question annotation directly, due to differences in modalities and content structures. To solve this problem, we hypothesize that similar questions from the same semantic space share the same tag space, and that tags with excessive or rare occurrences are less informative or stable. These two assumptions have motivated us to propose a novel scheme that enhances automatic question annotation by exploring cues from both content analytics and social tagging behaviours.

The scheme comprises two main components, as demonstrated in Figure 3.2. The first component roughly identifies a collection of probably relevant tags by finding a similar question space, which aims to narrow down the suggested tag candidates. This task is accomplished through adaptive probabilistic hypergraph learning, where the vertices denote the questions, and hyperedges are generated based on QA content analysis, tag sharing networks as well as users' social behaviours. The distinction among hyperedge influences is accounted for with a regularizer on the hyperedge weights. The learning process iteratively and alternately updates the vertex relevances and hyperedge weights until convergence. The second component deals with relevant tag selection by simultaneously taking into consideration informativeness, stability and question closeness. It comprehensively evaluates each tag candidate and selects the most appropriate tags for annotation. The whole process of our approach is unsupervised and can be extended to handle large-scale data.
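
To make the interplay of the two components concrete, the following is a minimal, self-contained Python sketch of the pipeline. The token-overlap similarity, the three scoring heuristics and the toy archive are stand-in assumptions for illustration; the actual scheme uses the adaptive probabilistic hypergraph learning of Section 3.4 and the tag-selection criteria of Section 3.5.

```python
import math
from collections import Counter

def annotate_question(q, questions, question_tags, top_k=5, n_space=50):
    """Two-stage annotation sketch: infer a similar-question space, then
    rank its tags by informativeness, stability and closeness.
    All scoring functions below are simple stand-ins, not the thesis's models."""
    # --- Component 1: question space inference (stand-in: token overlap). ---
    def sim(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    scored = sorted(((sim(q, qi), qi) for qi in questions), reverse=True)[:n_space]
    space = {qi: s for s, qi in scored if s > 0}

    # --- Component 2: relevant tag selection. ---
    global_freq = Counter(t for tags in question_tags.values() for t in tags)
    n_total = max(len(questions), 1)
    candidates = {t for qi in space for t in question_tags.get(qi, [])}

    def informativeness(t):
        # Assumed heuristic: down-weight both very rare and very frequent tags.
        p = global_freq[t] / n_total
        return -p * math.log(p + 1e-12)

    def stability(t):
        # Neighbour voting: fraction of the inferred space carrying the tag.
        return sum(1 for qi in space if t in question_tags.get(qi, [])) / len(space)

    def closeness(t):
        # Similarity-weighted votes from the inferred space.
        return sum(s for qi, s in space.items() if t in question_tags.get(qi, []))

    ranked = sorted(candidates,
                    key=lambda t: informativeness(t) + stability(t) + closeness(t),
                    reverse=True)
    return ranked[:top_k]

# Toy archive loosely modeled on the Figure 3.2 example.
archive = {
    "What is the business model of Quora?": ["Quora", "Monetization", "Startups"],
    "How does company X make money?": ["Monetization", "Business Models"],
    "Best places to ski in Europe?": ["Travel", "Skiing"],
}
print(annotate_question("How does Quora make money?", list(archive), archive, top_k=3))
```

In the thesis, the similarity scores come from the hypergraph learning described in Section 3.4 rather than token overlap, and the three criteria are combined by the heuristic ranking of Section 3.5.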

Based on the proposed scheme, we introduce one potential application scenario: a knowledge organizer. The knowledge structures of conventional cQA forums are predefined and suffer from the issues associated with fixed taxonomies, such as being centralized, conservative and ambiguous [54]. By leveraging question tags, the ontology of QA pairs can be flexibly and effortlessly reorganized by mapping associated tags into user-needs-aware categories. The newly generated knowledge hierarchy is user-navigable and reconfigurable, which greatly improves users' web surfing experience.

By conducting experiments on a representative real-world dataset, we demonstrate that our proposed scheme achieves significant gains in question annotation. Meanwhile, the satisfactory results obtained by the knowledge organizer further verify the effectiveness of the proposed scheme.

The remainder is structured as follows. Sections 3.2 and 3.3 respectively review the related work and briefly introduce the annotation scheme. In Section 3.4, we discuss our adaptive probabilistic hypergraph learning approach for question space inference. Section 3.5 details the proposed heuristic approach for relevant tag selection. Experimental results and analysis are presented in Section 3.6, followed by our concluding remarks in Section 3.7.

3.2 Related Work

3.2.1 Annotation of Media Entities

The prevalence of visual and audio capture devices and the growing popularity of media sharing technologies have created massive multimedia content online, distributed across community-contributed sites such as Flickr, YouTube and MeeMix. Meanwhile, user-generated tags play an essential role in making these media entities better accessible to the general public [9] by summarizing low-level features with semantic descriptors [134]. However, the grassroots-provided social media tags suffer from being labor-intensive [127], incomplete [80], biased [42] and imprecise [30, 81]. Several recent research efforts from the multimedia, computer vision and machine learning domains have been conducted to tackle these issues. These efforts can be broadly categorized into generative-model-based [118, 50, 38, 93, 59] and discriminative-model-based approaches [39, 138, 144, 111, 62, 25].

The idea behind the generative approaches is to annotate visual or audio entities by estimating the correlations or joint probabilities between a tag and the given extracted features. The tags with the highest probabilities are then reserved as the final annotations. A variety of statistical machine learning models have been successfully applied to automatic media data annotation, such as the co-occurrence model [50], the machine translation model [38], latent space analysis [93], as well as the relevance language model [59]. The co-occurrence model proposed in 1999 [50] counted co-occurrences of words with image regions created using a regular grid. Three years later, it was improved by a classical machine translation model that translates a vocabulary of visual blobs into textual tags [38]. Following that, Monay et al. utilized latent semantic analysis in 2003 to capture the co-occurrence information between low-level features and concepts [93]. More recently, a cross-media relevance model [59] was introduced to annotate media data and observed improved effectiveness compared to translation models.
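
As a toy illustration of the generative idea just described — scoring each tag by its co-occurrence with visually similar items — the sketch below uses a simple kernel-weighted estimate. It is not an implementation of the cited co-occurrence, translation or cross-media relevance models; the feature dimensionality, bandwidth and data are invented for illustration.

```python
import numpy as np

def generative_tag_scores(query_feat, train_feats, train_tags, vocab, bandwidth=1.0):
    """Estimate P(tag | features) by smoothing tag occurrences over
    visually similar training items (simplified kernel estimator)."""
    # Gaussian kernel weight of every training item w.r.t. the query features.
    d2 = np.sum((train_feats - query_feat) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))

    scores = {}
    for t in vocab:
        has_tag = np.array([1.0 if t in tags else 0.0 for tags in train_tags])
        # Weighted co-occurrence of the tag with visually similar items.
        scores[t] = float(np.dot(w, has_tag) / (w.sum() + 1e-12))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example with random 8-D features and a tiny vocabulary.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
tags = [{"beach", "sea"}, {"beach"}, {"city"}, {"city", "night"}]
print(generative_tag_scores(feats[0], feats, tags, vocab={"beach", "sea", "city", "night"}))
```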

Alternatively, the discriminative approaches apply classification techniques by treating tags as classes and employing trained classifiers to annotate an input entity. Earlier studies were devoted to developing binary classifiers, while more recent works view the tagging problem as a multi-class classification task [138]. Yang et al. [144] presented an asymmetrical support vector machine for region-based image annotation. Carneiro et al. [25] formulated image annotation as a supervised multi-class problem and learned a distribution model for each class. However, the aforementioned methods do not explicitly investigate the discriminative information between different classes. Kang et al. [62] noticed this research issue and deeply exploited the correlations between class labels by extending standard label propagation algorithms to propagate multiple labels. Furthermore, classification for the automated detection of video and audio concepts has also been comprehensively studied [39, 111].

Though great success has been achieved for entity annotation in the media domain, these techniques cannot be directly applied to the general textual domain, due to the different modalities of the tags and entities.

3.2.2 Annotation of Textual Entities

Tags are also promising for organizing, indexing and searching textual resources such as blogs and tweets. Typically, two general approaches exist in text annotation. One is individual tagging, which extracts interesting terms from the post itself. Brooks et al. [21] developed a system to automatically extract the three terms with the top TF-IDF scores from each post and suggest them as tags. A more sophisticated work was proposed in [97], which utilizes advanced natural language processing to distill semantic annotations from Twitter and transform them into a reusable knowledge base. Wu et al. [135] designed a novel system applying the TextRank algorithm to extract personalized tags that label a Twitter user's interests and concerns. However, tags extracted from a single individual encounter the challenge of vocabulary variability [123]. To overcome this problem, the second type of approach considers the tags collaboratively contributed by the crowd over a large collection of posts. Xu et al. [141] developed a method to globally select tags from the whole collection by simultaneously considering the criteria of high coverage, least effort and high popularity. A system called "AutoTag" automatically suggested tags for a given blog post by finding similar blog posts [90]; the principle of this work is concordant with our first assumption. An improved version of this system, named "TagAssist" [122], annotated a post by generating search queries from the given post content, searching a collection of blog posts using those queries, and selecting suitable tags from the retrieved posts.

Overall, the literature on text annotation is still relatively sparse, and the existing approaches either view the entities independently or overlook the social connections of the entities' attributes. Most importantly, no reported work touches the annotation problem for one of the dominant thought-exchanging platforms, i.e., social QA services.
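
The individual-tagging idea mentioned above (suggesting a post's top TF-IDF terms as its tags) can be sketched in a few lines. The tokenizer, the smoothed IDF term and the toy corpus are assumptions for illustration, not a reproduction of the cited systems.

```python
import math
import re
from collections import Counter

def tfidf_tags(post, corpus, k=3):
    """Suggest k tags for `post` by picking its top TF-IDF terms."""
    def tokens(text):
        return re.findall(r"[a-z]+", text.lower())

    docs = [tokens(d) for d in corpus]
    n_docs = len(docs)
    # Document frequency of every term in the background corpus.
    df = Counter(t for d in docs for t in set(d))

    post_tokens = tokens(post)
    tf = Counter(post_tokens)
    scores = {
        t: (tf[t] / len(post_tokens)) * math.log((1 + n_docs) / (1 + df[t]))
        for t in tf
    }
    return [t for t, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]]

# Example usage with a toy background corpus.
corpus = [
    "question answering over community archives",
    "image search with complex queries",
    "how to tag blog posts automatically",
]
print(tfidf_tags("automatic tag suggestion for community question answering", corpus))
```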

3.3 Question Annotation Scheme

Let $\mathcal{Q} = \{q_1, q_2, \ldots, q_N\}$ and $\mathcal{T} = \{t_1, t_2, \ldots, t_M\}$ respectively represent a repository of questions and their associated tags. The target of this chapter is to automatically select appropriate tags from $\mathcal{T}$ to annotate a given question $q$. To accomplish this task, two components are involved, as illustrated in Figure 3.2.

Figure 3.2: The schematic illustration of the proposed automatic question annotation scheme for social QA services. It contains two components, i.e., question space inference and relevant tag selection.

The first component is question space inference. It aims to identify a subset of questions $\mathcal{Q}_s = \{q_1^s, q_2^s, \ldots, q_n^s\}$ from $\mathcal{Q}$, each of which is semantically close to $q$. This space is constructed by estimating the semantic similarity score $f_i$ between each $q_i \in \mathcal{Q}$ and $q$ via our proposed adaptive probabilistic hypergraph learning approach. All the associated tags of the inferred question space then straightforwardly form a tag space $\mathcal{T}_s = \{t_1^s, t_2^s, \ldots, t_m^s\}$, $\mathcal{T}_s \subseteq \mathcal{T}$. As a byproduct, this component quantitatively outputs the semantic closeness between each question in $\mathcal{Q}_s$ and the to-be-annotated question $q$.

The other component is relevant tag selection. It ranks the tags in $\mathcal{T}_s$ by seamlessly integrating multiple analyses, i.e., informativeness obtained from user tagging behaviours, stability defined based on the neighbour voting approach, and question closeness learned by the first component. Based on the ordered tag list, the top tags are truncated for annotation suggestion.

3.4 Question Space Inference

In this section, we present our proposed adaptive probabilistic hypergraph learning approach, which identifies the semantically similar question space by jointly considering QA content analysis, tag sharing networks as well as the users' social behaviours. We first introduce hypergraph learning theory, and then detail the alternating optimization process for our proposed model. Finally, we prove the learning consistency between the simple graph and the hypergraph.

3.4.1 Probabilistic Hypergraph Construction

A hypergraph $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathbf{W})$ is composed of the vertex set $\mathcal{V}$, the hyperedge set $\mathcal{E}$, and the diagonal matrix of hyperedge weights $\mathbf{W}$. Here, $\mathcal{E}$ is a family of arbitrary subsets $e$ of $\mathcal{V}$ such that $\cup_{e \in \mathcal{E}}\, e = \mathcal{V}$, and each hyperedge $e$ is assigned a weight $W(e)$. Unlike a simple graph, where each edge only conveys pairwise relations and overlooks higher-order relations, a hypergraph captures summarized local grouping information by allowing each hyperedge to connect more than two vertices simultaneously. A probabilistic hypergraph $\mathcal{G}$ can be represented by a $|\mathcal{V}| \times |\mathcal{E}|$ incidence matrix $\mathbf{H}$ with the following entries:

$$
h(v_i, e_j) = \begin{cases} P(v_i, e_j), & \text{if } v_i \in e_j, \\ 0, & \text{otherwise,} \end{cases} \qquad (3.1)
$$

where $P(v_i, e_j)$ describes the probability that vertex $v_i$ falls into the hyperedge $e_j$. Based on $\mathbf{H}$, the vertex degree of $v_i \in \mathcal{V}$ is estimated as

$$
d(v_i) = \sum_{e_j \in \mathcal{E}} W(e_j)\, h(v_i, e_j). \qquad (3.2)
$$

For a hyperedge $e_j \in \mathcal{E}$, its degree is defined as

$$
\delta(e_j) = \sum_{v_i \in e_j} h(v_i, e_j). \qquad (3.3)
$$

We denote the vertex degrees and hyperedge degrees by $\mathbf{D}_v$ and $\mathbf{D}_e$, respectively.
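
As a small numerical illustration of Equations (3.1)–(3.3), the snippet below computes the vertex and hyperedge degrees from a toy incidence matrix; the entries of H and the weights w are arbitrary example values.

```python
import numpy as np

# Toy probabilistic hypergraph: 4 vertices, 3 hyperedges.
# H[i, j] = h(v_i, e_j) as in Eq. (3.1): a membership probability, or 0.
H = np.array([
    [1.0, 0.8, 0.0],
    [1.0, 0.5, 0.0],
    [0.0, 0.9, 1.0],
    [0.0, 0.0, 1.0],
])
w = np.array([1.3, 2.2, 2.0])          # hyperedge weights W(e_j)

d_v = H @ w                            # vertex degrees, Eq. (3.2)
delta_e = H.sum(axis=0)                # hyperedge degrees, Eq. (3.3)

D_v, D_e, W = np.diag(d_v), np.diag(delta_e), np.diag(w)
print(d_v)      # [3.06 2.4  3.98 2.  ]
print(delta_e)  # [2.  2.2 2. ]
```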

In our work, the given question $q$ and the $N$ questions from $\mathcal{Q}$ are regarded as vertices, and thus the generated hypergraph has $N+1$ vertices. Meanwhile, three kinds of hyperedges are constructed, as shown in Figure 3.3. The first type takes each vertex as a centroid and forms a hyperedge by circling around its k-nearest neighbors based on semantic QA content similarities; this procedure was first adopted in [54]. The second kind is tags-based, grouping all the questions that share the same tag. The third kind takes the users' social behaviours into consideration by rounding up all the questions asked by the same user and its followees¹. Therefore, up to $N+M+1+U$ hyperedges are generated in our hypergraph, where $U$ denotes the number of involved users.

For each hyperedge, the likelihood of each of its constituent questions belonging to its local group is defined according to the hyperedge type,

$$
P(v_i, e_j) = \begin{cases} 1, & \text{tags-based hyperedges,} \\ K(q_i, q_j), & \text{QA-based hyperedges,} \\ 1, & \text{users-based hyperedges,} \end{cases} \qquad (3.4)
$$

where $K(\cdot, \cdot)$ is the Gaussian similarity function [103], defined as

$$
K(q_i, q_j) = \exp\!\left(-\frac{\|q_i - q_j\|^2}{\sigma^2}\right), \qquad (3.5)
$$

where the radius parameter $\sigma$ is simply set to the median of the Euclidean distances among all QA pairs. Then the initial weight for each hyperedge is computed as

$$
W(e_j) = \sum_{v_i \in e_j} h(v_i, e_j). \qquad (3.6)
$$

The magnitude of the hyperedge weight indicates to what extent the vertices in a hyperedge belong to the same group [3].

¹ If A follows B, then A is B's follower and B is A's followee.
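
The construction above can be sketched end to end as follows. The feature vectors, neighbourhood size and archive metadata are toy assumptions; the actual scheme builds the hyperedges from the QA content representations, the tag-sharing network and the follower relations of the crawled dataset.

```python
import numpy as np

def build_hypergraph(X, tags, askers, follows, k=3):
    """Construct the probabilistic incidence matrix H and initial weights
    following Eqs. (3.1) and (3.4)-(3.6). X holds one feature vector per
    question (the to-be-annotated question included); tags/askers/follows
    are toy stand-ins for the real archive metadata."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    sigma = np.median(dist[np.triu_indices(n, k=1)])     # median heuristic, Eq. (3.5)
    K = np.exp(-(dist ** 2) / (sigma ** 2 + 1e-12))

    columns = []

    # 1) QA-content hyperedges: each vertex plus its k nearest neighbours,
    #    with membership probability K(q_i, q_centroid).
    for c in range(n):
        members = np.argsort(dist[c])[: k + 1]           # includes the centroid itself
        col = np.zeros(n)
        col[members] = K[c, members]
        columns.append(col)

    # 2) Tags-based hyperedges: all questions sharing a tag, probability 1.
    for t in {t for ts in tags for t in ts}:
        columns.append(np.array([1.0 if t in ts else 0.0 for ts in tags]))

    # 3) Users-based hyperedges: questions asked by a user or the user's followees.
    for u in sorted(set(askers)):
        circle = {u} | follows.get(u, set())
        columns.append(np.array([1.0 if a in circle else 0.0 for a in askers]))

    H = np.stack(columns, axis=1)
    w = H.sum(axis=0)                                    # initial weights, Eq. (3.6)
    return H, w

# Toy usage: 5 questions with 4-D features, two tags, three users.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))
tags = [{"python"}, {"python", "pandas"}, {"pandas"}, set(), {"python"}]
askers = ["u1", "u1", "u2", "u3", "u2"]
follows = {"u1": {"u2"}, "u2": set(), "u3": {"u1"}}
H, w = build_hypergraph(X, tags, askers, follows, k=2)
print(H.shape, w)
```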
