Additive multi-task learning models and task diagnostics
In: Communications in statistics. Simulation and computation, Volume 53, Issue 12, pp. 6120-6137
ISSN: 1532-4141
In: CAIE-D-23-03816
SSRN
In: FRL-D-24-02240
SSRN
In: Computers and electronics in agriculture: COMPAG online ; an international journal, Volume 207, p. 107726
In: Schriftenreihe Wirtschafts- und Sozialwissenschaften 43
In: Computers and electronics in agriculture: COMPAG online ; an international journal, Volume 229, p. 109747
ISSN: 1872-7107
In: ISPRS journal of photogrammetry and remote sensing: official publication of the International Society for Photogrammetry and Remote Sensing (ISPRS), Volume 166, pp. 213-227
ISSN: 0924-2716
Vehicle detection in remote sensing images has attracted remarkable attention in recent years for its applications in traffic, security, military, and surveillance fields. Motivated by the success of deep learning in the object detection community, we apply convolutional neural networks (CNNs) to vehicle detection in remote sensing images. Specifically, we take advantage of a deep residual network, multi-scale feature fusion, hard example mining, and homography augmentation, integrating many of the advanced techniques from the deep learning community. Furthermore, we simultaneously address the super-resolution (SR) and detection problems of low-resolution (LR) images in an end-to-end manner. Because paired low-/high-resolution data are generally time-consuming and cumbersome to collect, we leverage a generative adversarial network (GAN) for unsupervised SR. The detection loss is back-propagated to the SR generator to boost detection performance. We conduct experiments on representative benchmark datasets and demonstrate that our model yields significant improvements over state-of-the-art methods in the deep learning and remote sensing areas.
BASE
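Of the techniques the abstract above combines, hard example mining is the most self-contained to illustrate. The sketch below is a minimal NumPy version of online hard example mining; the function name, the 3:1 negative-to-positive ratio, and the anchor-mask interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mine_hard_examples(losses, is_positive, neg_pos_ratio=3):
    """Keep all positive anchors plus only the highest-loss negatives.

    losses      : per-anchor loss values, shape (N,)
    is_positive : boolean mask of anchors matched to a vehicle, shape (N,)
    Returns a boolean mask selecting the anchors used in the final loss.
    """
    losses = np.asarray(losses, dtype=float)
    is_positive = np.asarray(is_positive, dtype=bool)
    num_pos = int(is_positive.sum())
    # Cap the negatives at neg_pos_ratio times the positives.
    num_neg = min(neg_pos_ratio * max(num_pos, 1), int((~is_positive).sum()))

    keep = is_positive.copy()
    neg_idx = np.flatnonzero(~is_positive)
    # Sort negatives by loss, descending, and keep only the hardest ones.
    hardest = neg_idx[np.argsort(losses[neg_idx])[::-1][:num_neg]]
    keep[hardest] = True
    return keep
```

Restricting the loss to hard negatives keeps the overwhelming number of easy background anchors from dominating training, which matters in remote sensing images where vehicles occupy a tiny fraction of the frame.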
In the context of neural machine translation, data augmentation (DA) techniques may be used to generate additional training samples when the available parallel data are scarce. Many DA approaches aim to expand the support of the empirical data distribution by generating new sentence pairs that contain infrequent words, thus bringing it closer to the true distribution of parallel sentences. In this paper, we follow a completely different route and present a multi-task DA approach in which we generate new sentence pairs with transformations, such as reversing the order of the target sentence, that produce unfluent target sentences. During training, these augmented sentences are used as auxiliary tasks in a multi-task framework with the aim of providing new contexts where the target prefix is not informative enough to predict the next word. This strengthens the encoder and forces the decoder to pay more attention to the source representations of the encoder. Experiments carried out on six low-resource translation tasks show consistent improvements over the baseline and over DA methods aimed at extending the support of the empirical data distribution. The systems trained with our approach rely more on the source tokens, are more robust against domain shift, and suffer fewer hallucinations. Funding: work funded by the European Union's Horizon 2020 research and innovation programme under grant agreement number 825299, project Global Under-Resourced Media Translation (GoURMET), and by the Generalitat Valenciana through project GV/2021/064. The computational resources used for the experiments were funded by the European Regional Development Fund through project IDIFEDER/2020/003.
BASE
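The core transformation in the multi-task DA abstract above can be sketched in a few lines: each parallel pair spawns an auxiliary pair whose target tokens are reversed, tagged so the model can tell the fluent task from the unfluent one. The `<rev>` tag token, whitespace tokenization, and source-side tagging are illustrative assumptions, not the paper's exact setup.

```python
def augment_reverse(pairs, tag="<rev>"):
    """Return the original (source, target) pairs plus auxiliary pairs
    whose target word order is reversed, producing unfluent targets."""
    augmented = list(pairs)
    for src, tgt in pairs:
        reversed_tgt = " ".join(tgt.split()[::-1])
        # Prepend the tag to the source so the decoder knows which task
        # (fluent vs. reversed target) it is being trained on.
        augmented.append((f"{tag} {src}", reversed_tgt))
    return augmented
```

On a reversed target the prefix carries little predictive signal for the next word, so the decoder is pushed to lean on the encoder's source representations rather than on target-side language modeling.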
In: ISPRS journal of photogrammetry and remote sensing: official publication of the International Society for Photogrammetry and Remote Sensing (ISPRS), Volume 144, pp. 48-60
ISSN: 0924-2716
In: EAAI-24-9414
SSRN
In: Scientific African, Volume 21, p. e01799
ISSN: 2468-2276
In: Computers and electronics in agriculture: COMPAG online ; an international journal, Band 218, S. 108719
In: ECM-D-24-07515
SSRN
In: APEN-D-24-12425
SSRN