blockers

DeepBlocker

Bases: EmbeddingBlocker

Base class for DeepBlocker strategies.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| frame_encoder | HintOrType[DeepBlockerFrameEncoder] | DeepBlocker frame encoder strategy to use. | None |
| frame_encoder_kwargs | OptionalKwargs | Keyword arguments for initialising the encoder. | None |
| embedding_block_builder_kwargs | OptionalKwargs | Keyword arguments for initialising the block builder. | None |
| save | bool | If true, saves the embeddings before block building. | True |
| save_dir | Optional[Union[str, Path]] | Directory where the embeddings are saved. | None |
| force | bool | If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present. | False |

Attributes:

| Name | Description |
|------|-------------|
| frame_encoder | DeepBlocker encoder class to use for embedding the datasets. |
| embedding_block_builder | Block building class to create blocks from embeddings. |
| save | If true, saves the embeddings before block building. |
| save_dir | Directory where the embeddings are saved. |
| force | If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present. |

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import DeepBlocker
>>> blocker = DeepBlocker(frame_encoder="autoencoder")
>>> blocks = blocker.assign(left=ds.left, right=ds.right)
Reference

Thirumuruganathan et al. 'Deep Learning for Blocking in Entity Matching: A Design Space Exploration', VLDB 2021, http://vldb.org/pvldb/vol14/p2459-thirumuruganathan.pdf

Source code in klinker/blockers/embedding/deepblocker.py
class DeepBlocker(EmbeddingBlocker):
    """Base class for DeepBlocker strategies.

    Args:
        frame_encoder: DeepBlocker frame encoder strategy to use.
        frame_encoder_kwargs: Keyword arguments for initialising the encoder.
        embedding_block_builder_kwargs: Keyword arguments for initialising the block builder.
        save: If true, saves the embeddings before block building.
        save_dir: Directory where the embeddings are saved.
        force: If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present.

    Attributes:
        frame_encoder: DeepBlocker encoder class to use for embedding the datasets.
        embedding_block_builder: Block building class to create blocks from embeddings.
        save: If true, saves the embeddings before block building.
        save_dir: Directory where the embeddings are saved.
        force: If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present.


    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import DeepBlocker
        >>> blocker = DeepBlocker(frame_encoder="autoencoder")
        >>> blocks = blocker.assign(left=ds.left, right=ds.right)

    Quote: Reference
        Thirumuruganathan et al. 'Deep Learning for Blocking in Entity Matching: A Design Space Exploration', VLDB 2021, <http://vldb.org/pvldb/vol14/p2459-thirumuruganathan.pdf>
    """

    def __init__(
        self,
        frame_encoder: HintOrType[DeepBlockerFrameEncoder] = None,
        frame_encoder_kwargs: OptionalKwargs = None,
        embedding_block_builder: HintOrType[EmbeddingBlockBuilder] = None,
        embedding_block_builder_kwargs: OptionalKwargs = None,
        save: bool = True,
        save_dir: Optional[Union[str, pathlib.Path]] = None,
        force: bool = False,
    ):
        frame_encoder = deep_blocker_encoder_resolver.make(
            frame_encoder, frame_encoder_kwargs
        )
        super().__init__(
            frame_encoder=frame_encoder,
            embedding_block_builder=embedding_block_builder,
            embedding_block_builder_kwargs=embedding_block_builder_kwargs,
            save=save,
            save_dir=save_dir,
            force=force,
        )

EmbeddingBlocker

Bases: SchemaAgnosticBlocker

Base class for embedding-based blocking approaches.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| frame_encoder | HintOrType[FrameEncoder] | Encoder class to use for embedding the datasets. | None |
| frame_encoder_kwargs | OptionalKwargs | Keyword arguments for initialising the encoder class. | None |
| embedding_block_builder | HintOrType[EmbeddingBlockBuilder] | Block building class to create blocks from embeddings. | None |
| embedding_block_builder_kwargs | OptionalKwargs | Keyword arguments for initialising the block builder. | None |
| save | bool | If true, saves the embeddings before block building. | True |
| save_dir | Optional[Union[str, Path]] | Directory where the embeddings are saved. | None |
| force | bool | If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present. | False |

Attributes:

| Name | Description |
|------|-------------|
| frame_encoder | Encoder class to use for embedding the datasets. |
| embedding_block_builder | Block building class to create blocks from embeddings. |
| save | If true, saves the embeddings before block building. |
| save_dir | Directory where the embeddings are saved. |
| force | If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present. |
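
The following is a minimal save-and-reuse sketch (a hedged example: it assumes the library resolves a default frame encoder and block builder when none are given, and the directory name is made up):

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import EmbeddingBlocker
>>> blocker = EmbeddingBlocker(save_dir="my_run", force=False)
>>> blocks = blocker.assign(left=ds.left, right=ds.right)  # encodes and saves
>>> blocks = blocker.assign(left=ds.left, right=ds.right)  # loads the saved encodings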

Source code in klinker/blockers/embedding/blocker.py
class EmbeddingBlocker(SchemaAgnosticBlocker):
    """Base class for embedding-based blocking approaches.

    Args:
        frame_encoder: Encoder class to use for embedding the datasets.
        frame_encoder_kwargs: Keyword arguments for initialising the encoder class.
        embedding_block_builder: Block building class to create blocks from embeddings.
        embedding_block_builder_kwargs: Keyword arguments for initialising the block builder.
        save: If true, saves the embeddings before block building.
        save_dir: Directory where the embeddings are saved.
        force: If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present.

    Attributes:
        frame_encoder: Encoder class to use for embedding the datasets.
        embedding_block_builder: Block building class to create blocks from embeddings.
        save: If true, saves the embeddings before block building.
        save_dir: Directory where the embeddings are saved.
        force: If true, recalculates the embeddings and overwrites existing ones; otherwise uses precalculated embeddings if present.
    """

    def __init__(
        self,
        frame_encoder: HintOrType[FrameEncoder] = None,
        frame_encoder_kwargs: OptionalKwargs = None,
        embedding_block_builder: HintOrType[EmbeddingBlockBuilder] = None,
        embedding_block_builder_kwargs: OptionalKwargs = None,
        save: bool = True,
        save_dir: Optional[Union[str, pathlib.Path]] = None,
        force: bool = False,
    ):
        self.frame_encoder = frame_encoder_resolver.make(
            frame_encoder, frame_encoder_kwargs
        )
        self.embedding_block_builder = block_builder_resolver.make(
            embedding_block_builder, embedding_block_builder_kwargs
        )
        self.save = save
        self.save_dir = save_dir
        self.force = force

    def _assign(
        self,
        left: SeriesType,
        right: SeriesType,
        left_rel: Optional[KlinkerFrame] = None,
        right_rel: Optional[KlinkerFrame] = None,
    ) -> KlinkerBlockManager:
        """

        Args:
          left: SeriesType:
          right: SeriesType:
          left_rel: Optional[KlinkerFrame]:  (Default value = None)
          right_rel: Optional[KlinkerFrame]:  (Default value = None)

        Returns:

        """
        left = generic_upgrade_from_series(left, reset_index=False)
        right = generic_upgrade_from_series(right, reset_index=False)

        # handle save dir
        if self.save:
            if self.save_dir is None:
                save_dir = pathlib.Path(".").joinpath(
                    f"{left.table_name}_{right.table_name}_{self.frame_encoder.__class__.__name__}"
                )
                self.save_dir = save_dir
            if os.path.exists(self.save_dir):
                left_path, left_name = self._encoding_path_and_table_name_from_dir(
                    "left_", left.table_name
                )
                right_path, right_name = self._encoding_path_and_table_name_from_dir(
                    "right_", right.table_name
                )
                if left_path is not None and right_path is not None:
                    if self.force:
                        warnings.warn(
                            f"{self.save_dir} exists. Overwriting! This behaviour can be changed by setting `force=False`"
                        )
                        os.makedirs(self.save_dir, exist_ok=True)
                    else:
                        logger.info(
                            f"Loading existing encodings from {left_path} and {right_path}. To recalculate set `force=True`"
                        )
                        return self.from_encoded(
                            left_path=left_path,
                            left_name=left_name,
                            right_path=right_path,
                            right_name=right_name,
                        )
        left_emb, right_emb = self.frame_encoder.encode(
            left=left,
            right=right,
            left_rel=left_rel,
            right_rel=right_rel,
        )
        if self.save:
            assert self.save_dir  # for mypy
            assert left.table_name
            assert right.table_name
            EmbeddingBlocker.save_encoded(
                self.save_dir,
                (left_emb, right_emb),
                (left.table_name, right.table_name),
            )
        assert left.table_name
        assert right.table_name
        return self.embedding_block_builder.build_blocks(
            left=left_emb,
            right=right_emb,
            left_name=left.table_name,
            right_name=right.table_name,
        )

    @staticmethod
    def save_encoded(
        save_dir: Union[str, pathlib.Path],
        encodings: Tuple[NamedVector, NamedVector],
        table_names: Tuple[str, str],
    ):
        """Save embeddings.

        Args:
          save_dir: Union[str, pathlib.Path]: Directory to save into.
          encodings: Tuple[NamedVector, NamedVector]: Tuple of named embeddings.
          table_names: Tuple[str, str]: Name of left/right dataset.

        """
        if isinstance(save_dir, str):
            save_dir = pathlib.Path(save_dir)
        if not os.path.exists(save_dir):
            os.makedirs(save_dir)
        for enc, table_name, left_right in zip(
            encodings, table_names, get_args(ENC_PREFIX)
        ):
            path = save_dir.joinpath(f"{left_right}{table_name}{ENC_SUFFIX}")
            logger.info(f"Saved encoding in {path}")
            enc.to_pickle(path)

    def _encoding_path_and_table_name_from_dir(
        self, left_or_right: ENC_PREFIX, table_name: Optional[str] = None
    ) -> Tuple[Optional[pathlib.Path], Optional[str]]:
        assert self.save_dir  # for mypy
        if isinstance(self.save_dir, str):
            self.save_dir = pathlib.Path(self.save_dir)

        if table_name is not None:
            possible_path = self.save_dir.joinpath(
                f"{left_or_right}{table_name}{ENC_SUFFIX}"
            )
            if os.path.exists(possible_path):
                return possible_path, table_name
            return None, None

        enc_path_list = list(self.save_dir.glob(f"{left_or_right}*{ENC_SUFFIX}"))
        if len(enc_path_list) > 1:
            warnings.warn(
                f"Found multiple encodings {enc_path_list} will choose the first"
            )
        elif len(enc_path_list) == 0:
            raise FileNotFoundError(
                f"Expected to find encoding pickle in {self.save_dir} for {left_or_right} side!"
            )

        enc_path = enc_path_list[0]
        table_name = (
            str(enc_path.name).replace(f"{left_or_right}", "").replace(ENC_SUFFIX, "")
        )
        return enc_path, table_name

    def from_encoded(
        self,
        left_path=None,
        right_path=None,
        left_name=None,
        right_name=None,
    ) -> KlinkerBlockManager:
        """Apply blockbuilding strategy from precalculated embeddings.

        Args:
          left_path: path of left encoding.
          right_path: path of right encoding.
          left_name: Name of left dataset.
          right_name: Name of right dataset.

        Returns:
          Calculated blocks.
        """
        if self.save_dir is None:
            raise ValueError("Cannot run `from_encoded` if `self.save_dir` is None!")
        if left_path is None:
            left_path, left_name = self._encoding_path_and_table_name_from_dir("left_")
            right_path, right_name = self._encoding_path_and_table_name_from_dir(
                "right_"
            )

        left_enc = NamedVector.from_pickle(left_path)
        right_enc = NamedVector.from_pickle(right_path)
        return self.embedding_block_builder.build_blocks(
            left=left_enc,
            right=right_enc,
            left_name=left_name,
            right_name=right_name,
        )

from_encoded(left_path=None, right_path=None, left_name=None, right_name=None)

Apply blockbuilding strategy from precalculated embeddings.

Parameters:

| Name | Description | Default |
|------|-------------|---------|
| left_path | Path of left encoding. | None |
| right_path | Path of right encoding. | None |
| left_name | Name of left dataset. | None |
| right_name | Name of right dataset. | None |

Returns:

| Type | Description |
|------|-------------|
| KlinkerBlockManager | Calculated blocks. |
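
If no paths are given, encodings and table names are discovered in save_dir. A minimal sketch (the directory name is hypothetical and must contain previously saved encodings):

>>> # doctest: +SKIP
>>> blocker = EmbeddingBlocker(save_dir="my_run")
>>> blocks = blocker.from_encoded()  # paths and table names are inferred from save_dir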

Source code in klinker/blockers/embedding/blocker.py
def from_encoded(
    self,
    left_path=None,
    right_path=None,
    left_name=None,
    right_name=None,
) -> KlinkerBlockManager:
    """Apply blockbuilding strategy from precalculated embeddings.

    Args:
      left_path: path of left encoding.
      right_path: path of right encoding.
      left_name: Name of left dataset.
      right_name: Name of right dataset.

    Returns:
      Calculated blocks.
    """
    if self.save_dir is None:
        raise ValueError("Cannot run `from_encoded` if `self.save_dir` is None!")
    if left_path is None:
        left_path, left_name = self._encoding_path_and_table_name_from_dir("left_")
        right_path, right_name = self._encoding_path_and_table_name_from_dir(
            "right_"
        )

    left_enc = NamedVector.from_pickle(left_path)
    right_enc = NamedVector.from_pickle(right_path)
    return self.embedding_block_builder.build_blocks(
        left=left_enc,
        right=right_enc,
        left_name=left_name,
        right_name=right_name,
    )

save_encoded(save_dir, encodings, table_names) staticmethod

Save embeddings.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| save_dir | Union[str, Path] | Directory to save into. | required |
| encodings | Tuple[NamedVector, NamedVector] | Tuple of named embeddings. | required |
| table_names | Tuple[str, str] | Names of the left/right datasets. | required |
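
A minimal usage sketch (left_emb/right_emb stand for NamedVector embeddings returned by a frame encoder; all names are hypothetical):

>>> # doctest: +SKIP
>>> EmbeddingBlocker.save_encoded(
...     "my_run", (left_emb, right_emb), ("left_table", "right_table")
... )
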
Source code in klinker/blockers/embedding/blocker.py
@staticmethod
def save_encoded(
    save_dir: Union[str, pathlib.Path],
    encodings: Tuple[NamedVector, NamedVector],
    table_names: Tuple[str, str],
):
    """Save embeddings.

    Args:
      save_dir: Union[str, pathlib.Path]: Directory to save into.
      encodings: Tuple[NamedVector, NamedVector]: Tuple of named embeddings.
      table_names: Tuple[str, str]: Name of left/right dataset.

    """
    if isinstance(save_dir, str):
        save_dir = pathlib.Path(save_dir)
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    for enc, table_name, left_right in zip(
        encodings, table_names, get_args(ENC_PREFIX)
    ):
        path = save_dir.joinpath(f"{left_right}{table_name}{ENC_SUFFIX}")
        logger.info(f"Saved encoding in {path}")
        enc.to_pickle(path)

MinHashLSHBlocker

Bases: SchemaAgnosticBlocker

Blocker relying on MinHashLSH procedure.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| tokenize_fn | Callable | Function that tokenizes entity attribute values. | word_tokenize |
| threshold | float | Jaccard threshold to use in the underlying LSH procedure. | 0.5 |
| num_perm | int | Number of permutations used in the MinHash algorithm. | 128 |
| weights | Tuple[float, float] | False positive/false negative weighting (must sum to one). | (0.5, 0.5) |

Attributes:

| Name | Description |
|------|-------------|
| tokenize_fn | Function that tokenizes entity attribute values. |
| threshold | Jaccard threshold to use in the underlying LSH procedure. |
| num_perm | Number of permutations used in the MinHash algorithm. |
| weights | False positive/false negative weighting (must sum to one). |

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import MinHashLSHBlocker
>>> blocker = MinHashLSHBlocker(threshold=0.8, weights=(0.7,0.3))
>>> blocks = blocker.assign(left=ds.left, right=ds.right)
Source code in klinker/blockers/lsh.py
class MinHashLSHBlocker(SchemaAgnosticBlocker):
    """Blocker relying on MinHashLSH procedure.

    Args:
        tokenize_fn: Callable: Function that tokenizes entity attribute values.
        threshold: float: Jaccard threshold to use in the underlying LSH procedure.
        num_perm: int: Number of permutations used in the MinHash algorithm.
        weights: Tuple[float,float]: False positive/false negative weighting (must sum to one).

    Attributes:
        tokenize_fn: Callable: Function that tokenizes entity attribute values.
        threshold: float: Jaccard threshold to use in the underlying LSH procedure.
        num_perm: int: Number of permutations used in the MinHash algorithm.
        weights: Tuple[float,float]: False positive/false negative weighting (must sum to one).

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import MinHashLSHBlocker
        >>> blocker = MinHashLSHBlocker(threshold=0.8, weights=(0.7,0.3))
        >>> blocks = blocker.assign(left=ds.left, right=ds.right)

    """

    def __init__(
        self,
        tokenize_fn: Callable = word_tokenize,
        threshold: float = 0.5,
        num_perm: int = 128,
        weights: Tuple[float, float] = (0.5, 0.5),
    ):
        self.tokenize_fn = tokenize_fn
        self.threshold = threshold
        self.num_perm = num_perm
        self.weights = weights

    def _inner_encode(self, val: str):
        """Encodes string to list of bytes

        Args:
          val: str: input string.

        Returns:
            list of bytes.
        """
        return [tok.encode("utf-8") for tok in self.tokenize_fn(str(val))]

    def _assign(
        self,
        left: SeriesType,
        right: SeriesType,
        left_rel: Optional[KlinkerFrame] = None,
        right_rel: Optional[KlinkerFrame] = None,
    ) -> KlinkerBlockManager:
        """Assign entity ids to blocks.

        Uses the MinHash algorithm to encode entities via tokenized attributes.
        Fills an LSH index with the left hashes and queries it with the right hashes.

        Args:
          left: SeriesType: concatenated entity attribute values of left dataset as series.
          right: SeriesType: concatenated entity attribute values of right dataset as series.
          left_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of left dataset.
          right_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of right dataset.

        Returns:
            KlinkerBlockManager: instance holding the resulting blocks.
        """
        lsh = MinHashLSH(
            threshold=self.threshold,
            num_perm=self.num_perm,
            weights=self.weights,
        )
        if isinstance(left, dd.Series):
            left.map_partitions(
                _insert,
                lsh=lsh,
                encode_fn=self._inner_encode,
                meta=left._meta.index,
            ).compute()
            blocks = right.map_partitions(
                _query,
                lsh=lsh,
                encode_fn=self._inner_encode,
                left_name=left.name,
                right_name=right.name,
                meta=pd.DataFrame([], columns=[left.name, right.name], dtype="O"),
            )
            return KlinkerBlockManager(blocks)
        else:
            _insert(left, lsh=lsh, encode_fn=self._inner_encode)
            blocks = _query(
                right,
                lsh=lsh,
                encode_fn=self._inner_encode,
                left_name=left.name,
                right_name=right.name,
            )
            return KlinkerBlockManager.from_pandas(blocks)
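
For intuition, this is roughly how the underlying MinHashLSH index is filled and queried (a standalone sketch using datasketch directly; the entity key and tokens are made up):

>>> # doctest: +SKIP
>>> from datasketch import MinHash, MinHashLSH
>>> lsh = MinHashLSH(threshold=0.5, num_perm=128)
>>> m = MinHash(num_perm=128)
>>> for tok in "a movie title".split():
...     m.update(tok.encode("utf-8"))
>>> lsh.insert("left_entity_1", m)  # left hashes fill the index
>>> lsh.query(m)                    # right hashes query against it
['left_entity_1']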

QgramsBlocker

Bases: StandardBlocker

Blocker relying on the q-gram procedure.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| blocking_key | str | Attribute on which the blocking should be done. | required |
| q | int | Size of the q-grams. | 3 |

Attributes:

| Name | Description |
|------|-------------|
| blocking_key | Attribute on which the blocking should be done. |
| q | Size of the q-grams. |

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import QgramsBlocker
>>> blocker = QgramsBlocker(blocking_key="tail")
>>> blocks = blocker.assign(left=ds.left, right=ds.right)
Source code in klinker/blockers/qgrams.py
class QgramsBlocker(StandardBlocker):
    """Blocker relying on qgram procedure

    Args:
        blocking_key: str: Attribute on which the blocking should be done.
        q: int: Size of the q-grams.

    Attributes:
        blocking_key: str: Attribute on which the blocking should be done.
        q: int: Size of the q-grams.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import QgramsBlocker
        >>> blocker = QgramsBlocker(blocking_key="tail")
        >>> blocks = blocker.assign(left=ds.left, right=ds.right)
    """

    def __init__(self, blocking_key: str, q: int = 3):
        super().__init__(blocking_key=blocking_key)
        self.q = q

    def qgram_tokenize(self, x: str) -> Optional[List[str]]:
        """Tokenize into qgrams

        Args:
          x: str: input string

        Returns:
            list of qgrams
        """
        if x is None:
            return None
        else:
            return ["".join(tok) for tok in ngrams(x, self.q)]

    def assign(
        self,
        left: KlinkerFrame,
        right: KlinkerFrame,
        left_rel: Optional[KlinkerFrame] = None,
        right_rel: Optional[KlinkerFrame] = None,
    ) -> KlinkerBlockManager:
        """Assign entity ids to blocks.

        Args:
          left: KlinkerFrame: Contains entity attribute information of left dataset.
          right: KlinkerFrame: Contains entity attribute information of right dataset.
          left_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of left dataset.
          right_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of right dataset.

        Returns:
            KlinkerBlockManager: instance holding the resulting blocks.
        """
        assert isinstance(self.blocking_key, str)
        qgramed = []
        for tab in [left, right]:

            reduced = tab.set_index(tab.id_col)[self.blocking_key]
            if isinstance(left, dd.DataFrame):
                series = reduced.apply(
                    self.qgram_tokenize, meta=(self.blocking_key, "object")
                )
            else:
                series = reduced.apply(self.qgram_tokenize)
            series = series.explode()

            kf = tab.__class__._upgrade_from_series(
                series,
                table_name=tab.table_name,
                id_col=tab.id_col,
                columns=[tab.id_col, self.blocking_key],
            )
            qgramed.append(kf)
        return super().assign(left=qgramed[0], right=qgramed[1])

assign(left, right, left_rel=None, right_rel=None)

Assign entity ids to blocks.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| left | KlinkerFrame | Contains entity attribute information of left dataset. | required |
| right | KlinkerFrame | Contains entity attribute information of right dataset. | required |
| left_rel | Optional[KlinkerFrame] | Contains relational information of left dataset. | None |
| right_rel | Optional[KlinkerFrame] | Contains relational information of right dataset. | None |

Returns:

| Type | Description |
|------|-------------|
| KlinkerBlockManager | Instance holding the resulting blocks. |

Source code in klinker/blockers/qgrams.py
def assign(
    self,
    left: KlinkerFrame,
    right: KlinkerFrame,
    left_rel: Optional[KlinkerFrame] = None,
    right_rel: Optional[KlinkerFrame] = None,
) -> KlinkerBlockManager:
    """Assign entity ids to blocks.

    Args:
      left: KlinkerFrame: Contains entity attribute information of left dataset.
      right: KlinkerFrame: Contains entity attribute information of right dataset.
      left_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of left dataset.
      right_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of right dataset.

    Returns:
        KlinkerBlockManager: instance holding the resulting blocks.
    """
    assert isinstance(self.blocking_key, str)
    qgramed = []
    for tab in [left, right]:

        reduced = tab.set_index(tab.id_col)[self.blocking_key]
        if isinstance(left, dd.DataFrame):
            series = reduced.apply(
                self.qgram_tokenize, meta=(self.blocking_key, "object")
            )
        else:
            series = reduced.apply(self.qgram_tokenize)
        series = series.explode()

        kf = tab.__class__._upgrade_from_series(
            series,
            table_name=tab.table_name,
            id_col=tab.id_col,
            columns=[tab.id_col, self.blocking_key],
        )
        qgramed.append(kf)
    return super().assign(left=qgramed[0], right=qgramed[1])

qgram_tokenize(x)

Tokenize into qgrams

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| x | str | Input string. | required |

Returns:

| Type | Description |
|------|-------------|
| Optional[List[str]] | List of q-grams. |
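
A worked example with the default q=3 (a sketch assuming ngrams yields character n-grams, as in nltk):

>>> # doctest: +SKIP
>>> QgramsBlocker(blocking_key="tail").qgram_tokenize("klinker")
['kli', 'lin', 'ink', 'nke', 'ker']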

Source code in klinker/blockers/qgrams.py
def qgram_tokenize(self, x: str) -> Optional[List[str]]:
    """Tokenize into qgrams

    Args:
      x: str: input string

    Returns:
        list of qgrams
    """
    if x is None:
        return None
    else:
        return ["".join(tok) for tok in ngrams(x, self.q)]

RelationalDeepBlocker

Bases: RelationalBlocker

Applies separate DeepBlocker strategies to the concatenation of entity attribute values and to neighboring values.

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import RelationalDeepBlocker
>>> blocker = RelationalDeepBlocker(attr_frame_encoder="autoencoder", rel_frame_encoder="autoencoder")
>>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
Source code in klinker/blockers/relation_aware.py
class RelationalDeepBlocker(RelationalBlocker):
    """Seperate DeepBlocker strategy on concatenation of entity attribute values and neighboring values.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import RelationalDeepBlocker
        >>> blocker = RelationalDeepBlocker(attr_frame_encoder="autoencoder", rel_frame_encoder="autoencoder")
        >>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
    """

    _attribute_blocker: DeepBlocker
    _relation_blocker: DeepBlocker

    def __init__(
        self,
        attr_frame_encoder: HintOrType[DeepBlockerFrameEncoder] = None,
        attr_frame_encoder_kwargs: OptionalKwargs = None,
        attr_embedding_block_builder: HintOrType[EmbeddingBlockBuilder] = None,
        attr_embedding_block_builder_kwargs: OptionalKwargs = None,
        rel_frame_encoder: HintOrType[DeepBlockerFrameEncoder] = None,
        rel_frame_encoder_kwargs: OptionalKwargs = None,
        rel_embedding_block_builder: HintOrType[EmbeddingBlockBuilder] = None,
        rel_embedding_block_builder_kwargs: OptionalKwargs = None,
        save: bool = True,
        save_dir: Optional[Union[str, pathlib.Path]] = None,
        force: bool = False,
    ):
        self._attribute_blocker = DeepBlocker(
            frame_encoder=attr_frame_encoder,
            frame_encoder_kwargs=attr_frame_encoder_kwargs,
            embedding_block_builder=attr_embedding_block_builder,
            embedding_block_builder_kwargs=attr_embedding_block_builder_kwargs,
        )
        self._relation_blocker = DeepBlocker(
            frame_encoder=rel_frame_encoder,
            frame_encoder_kwargs=rel_frame_encoder_kwargs,
            embedding_block_builder=rel_embedding_block_builder,
            embedding_block_builder_kwargs=rel_embedding_block_builder_kwargs,
        )
        # set after instantiating the separate blockers to use the setters
        self.save = save
        self.force = force
        self.save_dir = save_dir

    @property
    def save(self) -> bool:
        return self._save

    @save.setter
    def save(self, value: bool):
        self._save = value
        self._attribute_blocker.save = value
        self._relation_blocker.save = value

    @property
    def force(self) -> bool:
        return self._force

    @force.setter
    def force(self, value: bool):
        self._force = value
        self._attribute_blocker.force = value
        self._relation_blocker.force = value

    @property
    def save_dir(self) -> Optional[Union[str, pathlib.Path]]:
        return self._save_dir

    @save_dir.setter
    def save_dir(self, value: Optional[Union[str, pathlib.Path]]):
        if value is None:
            self._save_dir = None
            self._attribute_blocker.save_dir = None
            self._relation_blocker.save_dir = None
        else:
            sd = pathlib.Path(value)
            self._save_dir = sd
            self._attribute_blocker.save_dir = sd.joinpath("attributes")
            self._relation_blocker.save_dir = sd.joinpath("relation")

RelationalMinHashLSHBlocker

Bases: RelationalBlocker

Applies separate MinHashLSH blocking to the concatenation of entity attribute values and to neighboring values.

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import RelationalMinHashLSHBlocker
>>> blocker = RelationalMinHashLSHBlocker(attr_threshold=0.7, rel_threshold=0.9)
>>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
Source code in klinker/blockers/relation_aware.py
class RelationalMinHashLSHBlocker(RelationalBlocker):
    """Seperate MinHashLSH blocking on concatenation of entity attribute values and neighboring values.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import RelationalMinHashLSHBlocker
        >>> blocker = RelationalMinHashLSHBlocker(attr_threshold=0.7, rel_threshold=0.9)
        >>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
    """

    def __init__(
        self,
        tokenize_fn: Callable = word_tokenize,
        attr_threshold: float = 0.5,
        attr_num_perm: int = 128,
        attr_weights: Tuple[float, float] = (0.5, 0.5),
        rel_threshold: float = 0.7,
        rel_num_perm: int = 128,
        rel_weights: Tuple[float, float] = (0.5, 0.5),
    ):
        self._attribute_blocker = MinHashLSHBlocker(
            tokenize_fn=tokenize_fn,
            threshold=attr_threshold,
            num_perm=attr_num_perm,
            weights=attr_weights,
        )
        self._relation_blocker = MinHashLSHBlocker(
            tokenize_fn=tokenize_fn,
            threshold=rel_threshold,
            num_perm=rel_num_perm,
            weights=rel_weights,
        )

RelationalTokenBlocker

Bases: RelationalBlocker

Applies separate token blocking to the concatenation of entity attribute values and to neighboring values.

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import RelationalTokenBlocker
>>> blocker = RelationalTokenBlocker(attr_min_token_length=3, rel_min_token_length=5)
>>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
Source code in klinker/blockers/relation_aware.py
class RelationalTokenBlocker(RelationalBlocker):
    """Seperate Tokenblocking on concatenation of entity attribute values and neighboring values.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import RelationalTokenBlocker
        >>> blocker = RelationalTokenBlocker(attr_min_token_length=3, rel_min_token_length=5)
        >>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)

    """

    def __init__(
        self,
        tokenize_fn: Callable[[str], List[str]] = word_tokenize,
        attr_min_token_length: int = 3,
        rel_min_token_length: int = 3,
    ):
        self._attribute_blocker = TokenBlocker(
            tokenize_fn=tokenize_fn,
            min_token_length=attr_min_token_length,
        )
        self._relation_blocker = TokenBlocker(
            tokenize_fn=tokenize_fn,
            min_token_length=rel_min_token_length,
        )

SimpleRelationalMinHashLSHBlocker

Bases: BaseSimpleRelationalBlocker

MinHashLSH blocking on the concatenation of entity attribute values and neighboring values.

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import SimpleRelationalMinHashLSHBlocker
>>> blocker = SimpleRelationalMinHashLSHBlocker()
>>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
Source code in klinker/blockers/relation_aware.py
class SimpleRelationalMinHashLSHBlocker(BaseSimpleRelationalBlocker):
    """MinHashLSH blocking on concatenation of entity attribute values and neighboring values.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import SimpleRelationalMinHashLSHBlocker
        >>> blocker = SimpleRelationalMinHashLSHBlocker()
        >>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
    """

    def __init__(
        self,
        tokenize_fn: Callable = word_tokenize,
        threshold: float = 0.5,
        num_perm: int = 128,
        weights: Tuple[float, float] = (0.5, 0.5),
    ):
        self._blocker = MinHashLSHBlocker(
            tokenize_fn=tokenize_fn,
            threshold=threshold,
            num_perm=num_perm,
            weights=weights,
        )

SimpleRelationalTokenBlocker

Bases: BaseSimpleRelationalBlocker

Token blocking on the concatenation of entity attribute values and neighboring values.

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import SimpleRelationalTokenBlocker
>>> blocker = SimpleRelationalTokenBlocker()
>>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
Source code in klinker/blockers/relation_aware.py
class SimpleRelationalTokenBlocker(BaseSimpleRelationalBlocker):
    """Token blocking on concatenation of entity attribute values and neighboring values.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import SimpleRelationalTokenBlocker
        >>> blocker = SimpleRelationalTokenBlocker()
        >>> blocks = blocker.assign(left=ds.left, right=ds.right, left_rel=ds.left_rel, right_rel=ds.right_rel)
    """

    def __init__(
        self,
        tokenize_fn: Callable[[str], List[str]] = word_tokenize,
        min_token_length: int = 3,
        intermediate_saving: bool = False,
    ):
        self._blocker = TokenBlocker(
            tokenize_fn=tokenize_fn,
            min_token_length=min_token_length,
        )

StandardBlocker

Bases: Blocker

Block on same values of a specific column.

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import StandardBlocker
>>> blocker = StandardBlocker(blocking_key="tail")
>>> blocks = blocker.assign(left=ds.left, right=ds.right)
Reference

Fellegi, Ivan P. and Alan B. Sunter. 'A Theory for Record Linkage.' Journal of the American Statistical Association 64 (1969): 1183-1210.

Source code in klinker/blockers/standard.py
class StandardBlocker(Blocker):
    """Block on same values of a specific column.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import StandardBlocker
        >>> blocker = StandardBlocker(blocking_key="tail")
        >>> blocks = blocker.assign(left=ds.left, right=ds.right)

    Quote: Reference
        Fellegi, Ivan P. and Alan B. Sunter. 'A Theory for Record Linkage.' Journal of the American Statistical Association 64 (1969): 1183-1210.
    """

    def __init__(self, blocking_key: str):
        self.blocking_key = blocking_key

    def _inner_assign(self, kf: KlinkerFrame) -> pd.DataFrame:
        id_col = kf.id_col
        table_name = kf.table_name
        assert table_name

        # TODO address code duplication
        if isinstance(kf, KlinkerDaskFrame):
            series = (
                kf[[id_col, self.blocking_key]]
                .groupby(self.blocking_key)
                .apply(
                    lambda x, id_col: list(set(x[id_col])),
                    id_col=kf.id_col,
                    meta=pd.Series(
                        [], dtype=object, index=pd.Index([], name=self.blocking_key)
                    ),
                )
            )
        else:
            series = (
                kf[[id_col, self.blocking_key]]
                .groupby(self.blocking_key)
                .apply(
                    lambda x, id_col: list(set(x[id_col])),
                    id_col=kf.id_col,
                )
            )
        blocked = kf.__class__._upgrade_from_series(
            series,
            columns=[table_name],
            table_name=table_name,
            id_col=id_col,
            reset_index=False,
        )
        return blocked

    def assign(
        self,
        left: KlinkerFrame,
        right: KlinkerFrame,
        left_rel: Optional[KlinkerFrame] = None,
        right_rel: Optional[KlinkerFrame] = None,
    ) -> KlinkerBlockManager:
        """Assign entity ids to blocks.

        Args:
          left: KlinkerFrame: Contains entity attribute information of left dataset.
          right: KlinkerFrame: Contains entity attribute information of right dataset.
          left_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of left dataset.
          right_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of right dataset.

        Returns:
            KlinkerBlockManager: instance holding the resulting blocks.
        """
        left_assign = self._inner_assign(left)
        right_assign = self._inner_assign(right)
        pd_blocks = left_assign.join(right_assign, how="inner")
        if isinstance(left_assign, dd.DataFrame):
            return KlinkerBlockManager(pd_blocks)
        return KlinkerBlockManager.from_pandas(pd_blocks)

assign(left, right, left_rel=None, right_rel=None)

Assign entity ids to blocks.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| left | KlinkerFrame | Contains entity attribute information of left dataset. | required |
| right | KlinkerFrame | Contains entity attribute information of right dataset. | required |
| left_rel | Optional[KlinkerFrame] | Contains relational information of left dataset. | None |
| right_rel | Optional[KlinkerFrame] | Contains relational information of right dataset. | None |

Returns:

| Type | Description |
|------|-------------|
| KlinkerBlockManager | Instance holding the resulting blocks. |

Source code in klinker/blockers/standard.py
def assign(
    self,
    left: KlinkerFrame,
    right: KlinkerFrame,
    left_rel: Optional[KlinkerFrame] = None,
    right_rel: Optional[KlinkerFrame] = None,
) -> KlinkerBlockManager:
    """Assign entity ids to blocks.

    Args:
      left: KlinkerFrame: Contains entity attribute information of left dataset.
      right: KlinkerFrame: Contains entity attribute information of right dataset.
      left_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of left dataset.
      right_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of right dataset.

    Returns:
        KlinkerBlockManager: instance holding the resulting blocks.
    """
    left_assign = self._inner_assign(left)
    right_assign = self._inner_assign(right)
    pd_blocks = left_assign.join(right_assign, how="inner")
    if isinstance(left_assign, dd.DataFrame):
        return KlinkerBlockManager(pd_blocks)
    return KlinkerBlockManager.from_pandas(pd_blocks)

TokenBlocker

Bases: SchemaAgnosticBlocker

Concatenates and tokenizes entity attribute values and blocks on tokens.

Examples:

>>> # doctest: +SKIP
>>> from sylloge import MovieGraphBenchmark
>>> from klinker.data import KlinkerDataset
>>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
>>> from klinker.blockers import TokenBlocker
>>> blocker = TokenBlocker()
>>> blocks = blocker.assign(left=ds.left, right=ds.right)
Source code in klinker/blockers/token_blocking.py
class TokenBlocker(SchemaAgnosticBlocker):
    """Concatenates and tokenizes entity attribute values and blocks on tokens.

    Examples:

        >>> # doctest: +SKIP
        >>> from sylloge import MovieGraphBenchmark
        >>> from klinker.data import KlinkerDataset
        >>> ds = KlinkerDataset.from_sylloge(MovieGraphBenchmark(),clean=True)
        >>> from klinker.blockers import TokenBlocker
        >>> blocker = TokenBlocker()
        >>> blocks = blocker.assign(left=ds.left, right=ds.right)

    """

    def __init__(
        self,
        tokenize_fn: Callable[[str], List[str]] = word_tokenize,
        min_token_length: int = 3,
    ):
        self.tokenize_fn = tokenize_fn
        self.min_token_length = min_token_length

    def _tok_block(self, tab: SeriesType) -> Frame:
        """Perform token blocking on this series.

        Args:
          tab: SeriesType: series on which token blocking should be done.

        Returns:
            token blocked series.
        """
        name = tab.name
        id_col_name = tab.index.name
        # TODO figure out why this hack is needed
        # i.e. why does dask assume later for the join, that this is named 0
        # no matter what it is actually named
        tok_name = "tok"
        tok_kwargs = dict(
            tokenize_fn=self.tokenize_fn, min_token_length=self.min_token_length
        )
        collect_ids_kwargs = dict(id_col=id_col_name)
        if isinstance(tab, dd.Series):
            tok_kwargs["meta"] = (tab.name, "O")
            collect_ids_kwargs["meta"] = pd.Series(
                [],
                name=tab.name,
                dtype="O",
                index=pd.Series([], dtype="O", name=tok_name),
            )
        return (
            tab.apply(tokenize_series, **tok_kwargs)
            .explode()
            .to_frame()
            .reset_index()
            .rename(columns={name: tok_name})  # avoid same name for col and index
            .groupby(tok_name)
            .apply(lambda x, id_col: list(set(x[id_col])), **collect_ids_kwargs)
            .to_frame(name=name)
        )

    def _assign(
        self,
        left: SeriesType,
        right: SeriesType,
        left_rel: Optional[KlinkerFrame] = None,
        right_rel: Optional[KlinkerFrame] = None,
    ) -> KlinkerBlockManager:
        """Assign entity ids to blocks.

        Args:
          left: SeriesType: concatenated entity attribute values of left dataset as series.
          right: SeriesType: concatenated entity attribute values of right dataset as series.
          left_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of left dataset.
          right_rel: Optional[KlinkerFrame]:  (Default value = None) Contains relational information of right dataset.

        Returns:
            KlinkerBlockManager: instance holding the resulting blocks.
        """
        left_tok = self._tok_block(left)
        right_tok = self._tok_block(right)
        pd_blocks = left_tok.join(right_tok, how="inner")
        if isinstance(pd_blocks, dd.DataFrame):
            return KlinkerBlockManager(pd_blocks)
        return KlinkerBlockManager.from_pandas(pd_blocks)
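
As a toy illustration of the explode-and-collect logic in _tok_block, here is a pandas-only sketch (made-up data and whitespace tokenization, not the klinker API):

>>> # doctest: +SKIP
>>> import pandas as pd
>>> s = pd.Series(["john doe", "jane doe"], index=pd.Index(["e1", "e2"], name="id"), name="A")
>>> exploded = s.str.split().explode().rename("tok").reset_index()
>>> exploded.groupby("tok")["id"].apply(lambda ids: sorted(set(ids)))
tok
doe     [e1, e2]
jane        [e2]
john        [e1]
Name: id, dtype: object

Entities sharing a token end up in the same block; joining the left and right token-to-ids frames on the token index then yields the candidate blocks.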