Mixed Variational Inference
The Laplace approximation has been one of the workhorses of Bayesian inference. It typically delivers good approximations in practice even though it does not strictly take into account where the volume of posterior density lies. Variational approaches avoid this problem by explicitly minimising the Kullback-Leibler divergence DKL between a postulated posterior and the true (unnormalised) logarithmic posterior. However, they rely on a closed-form DKL in order to update the variational parameters. To address this, stochastic versions of variational inference have been devised that approximate the intractable DKL with a Monte Carlo average. This approximation allows calculating gradients with respect to the variational parameters. However, variational methods often postulate a factorised Gaussian approximating posterior. In doing so, they sacrifice a-posteriori correlations. In this work, we propose a method that combines the Laplace approximation with the variational approach. The advantages are that we retain: applicability to non-conjugate models, posterior correlations, and a reduced number of free variational parameters. Numerical experiments demonstrate improvement over the Laplace approximation and variational inference with factorised Gaussian posteriors. …
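The stochastic step described above, replacing the intractable expectation with a Monte Carlo average under a reparameterised factorised Gaussian, can be sketched as follows. This is the generic stochastic variational setup rather than the paper's mixed Laplace-variational method, and the `log_joint` below is a made-up two-Gaussian example for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_joint(theta):
    # Illustrative unnormalised log posterior (assumption, not from the paper):
    # standard Gaussian prior times a Gaussian likelihood centred at 1.
    return -0.5 * np.sum(theta**2) - 0.5 * np.sum((theta - 1.0) ** 2)

def elbo_estimate(mu, log_sigma, n_samples=256):
    """Monte Carlo estimate of the ELBO for a factorised Gaussian q."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples, mu.size))
    theta = mu + sigma * eps  # reparameterisation: gradients flow to mu, log_sigma
    # E_q[log p(theta, data)] approximated by a Monte Carlo average
    expected_log_joint = np.mean([log_joint(t) for t in theta])
    # Entropy of the factorised Gaussian is available in closed form
    entropy = np.sum(log_sigma + 0.5 * np.log(2 * np.pi * np.e))
    return expected_log_joint + entropy

print(elbo_estimate(np.zeros(2), np.zeros(2)))
```

In an actual implementation the gradient of this estimate with respect to `mu` and `log_sigma` would drive the updates, e.g. via automatic differentiation.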
Hyperspherical Prototype Network (HPN)
This paper introduces hyperspherical prototype networks, which unify regression and classification through prototypes on hyperspherical output spaces. Rather than defining prototypes as the mean output vector over training examples per class, we propose hyperspheres as output spaces to define class prototypes a priori with large margin separation. By doing so, we do not require any prototype updating, we can handle any training size, and the output dimensionality is no longer constrained to the number of classes. Furthermore, hyperspherical prototype networks generalize to regression by optimizing outputs as an interpolation between two prototypes on the hypersphere. Since both tasks are now defined by the same loss function, they can be jointly optimized for multi-task problems. Experimental evaluation shows the benefits of hyperspherical prototype networks for classification, regression, and their combination. …
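A minimal sketch of classification against fixed prototypes on the unit hypersphere, assuming a cosine-based loss. The random prototype placement below is only a stand-in for the paper's large-margin construction, and the loss form is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_prototypes(n_classes, dim):
    # A priori class prototypes on the unit hypersphere.
    # (The paper places these with large-margin separation; random unit
    # vectors serve as a stand-in here.)
    p = rng.standard_normal((n_classes, dim))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def prototype_loss(outputs, labels, prototypes):
    """Squared cosine loss pulling each output toward its class prototype."""
    z = outputs / np.linalg.norm(outputs, axis=1, keepdims=True)
    cos = np.sum(z * prototypes[labels], axis=1)
    return np.mean((1.0 - cos) ** 2)

def predict(outputs, prototypes):
    # Classify by nearest prototype in cosine similarity.
    z = outputs / np.linalg.norm(outputs, axis=1, keepdims=True)
    return np.argmax(z @ prototypes.T, axis=1)

protos = random_prototypes(n_classes=5, dim=16)
outputs = protos[[0, 3]] + 0.05 * rng.standard_normal((2, 16))
print(prototype_loss(outputs, np.array([0, 3]), protos))
print(predict(outputs, protos))
```

Since the prototypes are fixed, only the network producing `outputs` is trained; regression would replace the class prototype with an interpolation between two poles on the same sphere.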
Apache Gearpump
Apache Gearpump is a real-time big data streaming engine. The name Gearpump is a reference to the engineering term 'gear pump', which is a super simple pump consisting of only two gears, yet is very powerful at streaming water. Unlike other streaming engines, Gearpump's engine is event/message based. Per initial benchmarks, we are able to process 18 million messages per second (message length of 100 bytes) with an 8 ms latency on a 4-node cluster. …
Discrete False Discovery Rate (DFRD+/-)
This article introduces a discrete false discovery rate (DFRD+/-) controlling method for data snooping testing. Using DFRD+/-, we examine the performance of dynamic portfolios constructed upon over 21,000 technical trading rules on 12 categorical and country-specific markets over the study period 2004-2017. The profitability, robustness and persistence of the technical rules are examined. We find that technical analysis still has short-term value in advanced, emerging and frontier markets. A cross-validation exercise highlights the importance of frequent rebalancing and the variability of profitability in trading with technical analysis. …
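The DFRD+/- procedure itself is not specified in this abstract. As a point of reference, the classical Benjamini-Hochberg step-up procedure, the continuous baseline that discrete FDR methods refine for data-snooping settings, can be sketched as:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Classical Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of rejected hypotheses at FDR level alpha.
    (Illustrative baseline only; the article's DFRD+/- method is a
    discrete refinement tailored to data-snooping tests.)
    """
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m  # per-rank cutoffs i*alpha/m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank meeting its cutoff
        rejected[order[: k + 1]] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
```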