Bottleneck Attention Module (BAM)
Recent advances in deep neural networks have been developed through architecture search for stronger representational power. In this work, we focus on the effect of attention in general deep neural networks. We propose a simple and effective attention module, named Bottleneck Attention Module (BAM), that can be integrated with any feed-forward convolutional neural network. Our module infers an attention map along two separate pathways, channel and spatial. We place our module at each bottleneck of models where the downsampling of feature maps occurs. Our module constructs hierarchical attention at bottlenecks with a small number of parameters, and it is trainable in an end-to-end manner jointly with any feed-forward model. We validate BAM through extensive experiments on the CIFAR-100, ImageNet-1K, VOC 2007 and MS COCO benchmarks. Our experiments show consistent improvements in classification and detection performance across various models, demonstrating the wide applicability of BAM. The code and models will be publicly available. …
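The following is a minimal PyTorch sketch of the idea described above: a channel branch (global pooling plus a small MLP) and a spatial branch (dilated convolutions) each produce an attention map, the two are combined through a sigmoid, and the bottleneck feature map is refined as F * (1 + M(F)). The reduction ratio, dilation value, and omission of batch normalization are illustrative simplifications, not values mandated by the paper.

```python
import torch
import torch.nn as nn

class BAMSketch(nn.Module):
    """Simplified Bottleneck Attention Module: channel + spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, dilation: int = 4):
        super().__init__()
        # Channel branch: global average pooling followed by a small MLP.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial branch: 1x1 reduction, dilated 3x3 conv, 1x1 to a single map.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.Conv2d(channels // reduction, channels // reduction,
                      kernel_size=3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        ch = self.channel_att(x).view(b, c, 1, 1)   # (B, C, 1, 1)
        sp = self.spatial_att(x)                    # (B, 1, H, W)
        att = torch.sigmoid(ch + sp)                # broadcast to (B, C, H, W)
        return x * (1 + att)                        # refine the bottleneck features
```

Because the module only adds a pooled MLP and a few thin convolutions, it can be dropped in after each downsampling stage of a backbone without changing the rest of the network.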
Sapphire
RDF data in the linked open data (LOD) cloud is very valuable for many different applications. To unlock the full value of this data, users should be able to issue complex queries against the RDF datasets in the LOD cloud. SPARQL can express such complex queries, but constructing SPARQL queries can be a challenge for users because it requires knowing the structure and vocabulary of the datasets being queried. In this paper, we introduce Sapphire, a tool that helps users write syntactically and semantically correct SPARQL queries without prior knowledge of the queried datasets. Sapphire interactively assists the user while typing a query by providing auto-complete suggestions based on the queried data. After a query is issued, Sapphire provides suggestions on how to change the query to better match the user's needs. We evaluated Sapphire through performance experiments and a user study and showed it to be superior to competing approaches. …
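To make the data-driven auto-completion idea concrete, here is a small sketch (not Sapphire's actual implementation or API) that harvests predicate IRIs from a SPARQL endpoint and suggests completions for a typed prefix. The endpoint URL and the SPARQLWrapper-based harvesting are assumptions chosen purely for illustration.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint used only for this sketch.
ENDPOINT = "https://dbpedia.org/sparql"

def harvest_predicates(endpoint: str, limit: int = 500) -> list[str]:
    """Collect predicate IRIs actually used in the data to drive suggestions."""
    client = SPARQLWrapper(endpoint)
    client.setQuery(f"SELECT DISTINCT ?p WHERE {{ ?s ?p ?o }} LIMIT {limit}")
    client.setReturnFormat(JSON)
    results = client.query().convert()
    return [b["p"]["value"] for b in results["results"]["bindings"]]

def suggest(prefix: str, vocabulary: list[str], k: int = 5) -> list[str]:
    """Return up to k IRIs whose local name starts with the typed prefix."""
    prefix = prefix.lower()
    matches = [iri for iri in vocabulary
               if iri.rsplit("/", 1)[-1].lower().startswith(prefix)]
    return matches[:k]

if __name__ == "__main__":
    vocab = harvest_predicates(ENDPOINT)
    print(suggest("birth", vocab))   # e.g. predicates whose names begin with "birth"
```

The point of the sketch is that suggestions come from the queried data itself, so the user does not need prior knowledge of the dataset's vocabulary.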
TextEnt
In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB). Given a document in a KB consisting of words and entity annotations, we train our model to predict the entity that the document describes, mapping the document and its target entity close to each other in a continuous vector space. Our model is trained using a large number of documents extracted from Wikipedia. The performance of the proposed model is evaluated on two tasks, namely fine-grained entity typing and multiclass text classification. The results demonstrate that our model achieves state-of-the-art performance on both tasks. The code and the trained representations are made available online for further academic research. …
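A minimal sketch of this training setup is shown below: a document encoder (here simplified to an averaged bag-of-words embedding) is scored against an entity embedding table, and cross-entropy on the target entity pulls each document towards the entity it describes in a shared vector space. The encoder choice and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TextEntSketch(nn.Module):
    """Score a document against all entity embeddings in a shared space."""

    def __init__(self, vocab_size: int, num_entities: int, dim: int = 300):
        super().__init__()
        self.word_emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")  # document encoder
        self.entity_emb = nn.Embedding(num_entities, dim)              # entity table
        self.proj = nn.Linear(dim, dim)

    def forward(self, word_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        doc_vec = torch.tanh(self.proj(self.word_emb(word_ids, offsets)))  # (B, dim)
        return doc_vec @ self.entity_emb.weight.t()                        # (B, num_entities)

# Training would minimise nn.CrossEntropyLoss()(scores, target_entity_ids),
# which places each document and its target entity close together.
```

Once trained, the entity and document vectors can be reused for downstream tasks such as entity typing or text classification.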
Neural Tensor Network (NTN)
The Neural Tensor Network (NTN) replaces a standard linear neural network layer with a bilinear tensor layer that directly relates two entity vectors across multiple dimensions. The model computes a score for how likely it is that two entities stand in a certain relationship. …
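A small sketch of the bilinear tensor layer, following the form g(e1, R, e2) = u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b) from the original NTN formulation, is given below. The entity dimension d and number of tensor slices k are illustrative choices.

```python
import torch
import torch.nn as nn

class NeuralTensorLayer(nn.Module):
    """One relation's NTN scoring layer: k bilinear slices plus a linear term."""

    def __init__(self, d: int = 100, k: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, d, d) * 0.01)   # k bilinear slices
        self.V = nn.Parameter(torch.randn(k, 2 * d) * 0.01)  # standard linear term
        self.b = nn.Parameter(torch.zeros(k))
        self.u = nn.Parameter(torch.randn(k) * 0.01)         # combines the k slices

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # Bilinear term: one scalar per slice, e1^T W_i e2.
        bilinear = torch.einsum("d,kde,e->k", e1, self.W, e2)
        linear = self.V @ torch.cat([e1, e2])
        return self.u @ torch.tanh(bilinear + linear + self.b)  # relationship score
```

Each tensor slice lets the two entity vectors interact multiplicatively, which is what allows the layer to relate them "across multiple dimensions" rather than only through a concatenation fed to a linear layer.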