Shannon Seminar Room, Place du Levant 3, Maxwell Building, 1st floor -- Thursday, 08 December 2016 at 16:15 (45 min.)
Deep learning and structured output problems
Deep neural networks have proven to be efficient models for learning complex mapping functions. Adding more layers is known to improve performance, but it comes at the cost of more parameters, which in turn requires more labeled training data. Pre-training techniques have been shown to help in training deeper networks by exploiting unlabeled data. In practice, however, this approach requires considerable effort to tune the additional hyper-parameters it introduces. In this talk, we present a regularization scheme to alleviate this problem, and we extend our approach to structured output problems.
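To give a rough flavour of the kind of setup the abstract alludes to, the sketch below jointly trains a supervised loss on labeled data with an unsupervised reconstruction loss on unlabeled data, used as a regularizer in place of a separate pre-training phase. This is an illustrative assumption, not the speaker's actual method; the architecture, data, and trade-off weight lambda_rec are all made up for the example.

# Hypothetical sketch (PyTorch): supervised loss on labeled data plus a
# reconstruction term on unlabeled data acting as a regularizer. All
# names, sizes, and the weight `lambda_rec` are illustrative assumptions.
import torch
import torch.nn as nn

class RegularizedNet(nn.Module):
    def __init__(self, in_dim=64, hid_dim=32, out_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.classifier = nn.Linear(hid_dim, out_dim)  # supervised head
        self.decoder = nn.Linear(hid_dim, in_dim)      # reconstruction head

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), self.decoder(h)

model = RegularizedNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lambda_rec = 0.1  # assumed trade-off between the two terms

# Toy data: a small labeled batch and a larger unlabeled one.
x_lab, y_lab = torch.randn(16, 64), torch.randint(0, 10, (16,))
x_unlab = torch.randn(128, 64)

for step in range(100):
    opt.zero_grad()
    logits, _ = model(x_lab)      # labeled data feeds the classifier
    _, recon = model(x_unlab)     # unlabeled data feeds the decoder
    # One objective: supervised term + weighted reconstruction term.
    loss = ce(logits, y_lab) + lambda_rec * mse(recon, x_unlab)
    loss.backward()
    opt.step()

Because both terms are optimized in a single phase, the only extra hyper-parameter in this toy setup is lambda_rec, as opposed to the per-layer schedules a separate pre-training stage would add.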