EmbeddingBagPackedSum
Versioned name: EmbeddingBagPackedSum-3
Category: Sparse
Short description: Computes sums of "bags" of embeddings, without instantiating the intermediate embeddings.
Detailed description: This operation corresponds to the "packed" case of the PyTorch EmbeddingBag: the indices are given as a 2D tensor of shape [batch, indices_per_bag]. If the third input is not provided, this operation is equivalent to Gather followed by ReduceSum(axis=0) for each bag. However, EmbeddingBagPackedSum is much more time- and memory-efficient than a chain of these operations.
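The semantics above can be sketched in NumPy (this is only an illustrative reference, not the OpenVINO implementation; the function name is made up for this sketch):

```python
import numpy as np

def embedding_bag_packed_sum(emb_table, indices, per_sample_weights=None):
    # Gather one row of the table per index:
    # result shape is [batch, indices_per_bag, emb_dim1, emb_dim2, ...].
    gathered = emb_table[indices]
    if per_sample_weights is not None:
        # Broadcast the [batch, indices_per_bag] weights over the
        # trailing embedding dimensions before summing.
        w = np.asarray(per_sample_weights)
        w = w.reshape(w.shape + (1,) * (gathered.ndim - w.ndim))
        gathered = gathered * w
    # Sum over the indices_per_bag axis -> [batch, emb_dim1, emb_dim2, ...].
    return gathered.sum(axis=1)
```

Applied to the table, indices, and weights from the example below, this sketch reproduces the output values shown there.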
Inputs:
- 1: emb_table, a tensor containing the embedding lookup table of the module, of shape [num_emb, emb_dim1, emb_dim2, ...] and of type T. Required.
- 2: indices, a tensor of shape [batch, indices_per_bag] and of type T_IND. Required.
- 3: per_sample_weights, a tensor of the same shape as indices and of type T. Each value in this tensor is multiplied with the value pooled from the embedding table for the corresponding index. Optional; the default is a tensor of ones.
Outputs:
- 1: tensor of shape [batch, emb_dim1, emb_dim2, ...] and of type T containing the embeddings for each bag.
Types
- T: any numeric type.
- T_IND: int32 or int64.
Example
<layer ... type="EmbeddingBagPackedSum" ... >
<input>
<port id="0"> <!-- emb_table value is: [[-0.2, -0.6], [-0.1, -0.4], [-1.9, -1.8], [-1., 1.5], [ 0.8, -0.7]] -->
<dim>5</dim>
<dim>2</dim>
</port>
<port id="1"> <!-- indices value is: [[0, 2], [1, 2], [3, 4]] -->
<dim>3</dim>
<dim>2</dim>
</port>
        <port id="2"> <!-- per_sample_weights value is: [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]] -->
<dim>3</dim>
<dim>2</dim>
</port>
</input>
<output>
<port id="4"> <!-- output value is: [[-1.05, -1.2], [-1., -1.1], [-0.1, 0.4]] -->
<dim>3</dim>
<dim>2</dim>
</port>
</output>
</layer>