Last modified: Oct 28, 2024 By Alexander Williams

# Find Duplicate Subsets in a List in Python

Identifying duplicate subsets in a list can help in data analysis and cleaning. Python provides several techniques to efficiently find duplicate subsets in lists.

## Understanding Duplicate Subsets

A subset in a Python list can be another list. Duplicate subsets are subsets with identical elements, regardless of order.
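For example, `[1, 2]` and `[2, 1]` count as duplicates. Sorting each subset and converting it to a tuple gives a canonical, hashable form that makes such pairs comparable:

```
# Two subsets with the same elements in different order
a = [1, 2]
b = [2, 1]

# tuple(sorted(...)) produces the same canonical form for both
print(tuple(sorted(a)) == tuple(sorted(b)))  # True
```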

For beginners in Python, a quick overview on creating lists may be helpful. See Creating Lists in Python for more.

## Using a Dictionary to Track Duplicates

A common method to find duplicates is using a dictionary. By converting subsets to tuples (since lists are unhashable), you can count their occurrences.

```
def find_duplicate_subsets(lst):
    subset_count = {}
    duplicates = []
    for subset in lst:
        # Sort so that [1, 2] and [2, 1] map to the same hashable key
        subset_tuple = tuple(sorted(subset))
        if subset_tuple in subset_count:
            subset_count[subset_tuple] += 1
        else:
            subset_count[subset_tuple] = 1
    for subset, count in subset_count.items():
        if count > 1:
            duplicates.append(list(subset))
    return duplicates

# Example usage
list_of_lists = [[1, 2], [2, 1], [3, 4], [1, 2]]
print(find_duplicate_subsets(list_of_lists))
```

This code returns duplicate subsets by counting their occurrences.

```
Output:
[[1, 2]]
```
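The counting step can also be written with `collections.Counter` from the standard library. This sketch is equivalent to the dictionary approach above, just more compact:

```
from collections import Counter

def find_duplicate_subsets_counter(lst):
    # Count each subset by its sorted-tuple canonical form
    counts = Counter(tuple(sorted(subset)) for subset in lst)
    # Keep only the forms that occur more than once
    return [list(t) for t, c in counts.items() if c > 1]

list_of_lists = [[1, 2], [2, 1], [3, 4], [1, 2]]
print(find_duplicate_subsets_counter(list_of_lists))  # [[1, 2]]
```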

## Using Sets for Faster Comparison

Using a `set` is helpful when you only need to know whether a subset has already been seen, rather than how many times it occurs. A `set` stores each canonical form once and supports fast membership checks.

```
def find_duplicates_using_sets(lst):
    seen_subsets = set()
    duplicates = []
    for subset in lst:
        subset_tuple = tuple(sorted(subset))
        if subset_tuple in seen_subsets:
            duplicates.append(list(subset_tuple))
        else:
            seen_subsets.add(subset_tuple)
    return duplicates

list_of_lists = [[1, 2], [3, 4], [2, 1], [1, 3], [3, 4]]
print(find_duplicates_using_sets(list_of_lists))
```

This reports a subset each time it is seen again after its first occurrence; in this example, each duplicate appears twice, so each is reported once.

```
Output:
[[1, 2], [3, 4]]
```
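If repeated elements inside a subset should also be ignored (not just order), `frozenset` can serve as the hashable key instead of a sorted tuple. This is a sketch of that variation, not part of the approach above:

```
def find_duplicates_frozenset(lst):
    # frozenset ignores both order and repeated elements within a subset,
    # so [1, 2, 2] and [2, 1] map to the same key
    seen = set()
    duplicates = []
    for subset in lst:
        key = frozenset(subset)
        if key in seen:
            duplicates.append(sorted(key))
        else:
            seen.add(key)
    return duplicates

print(find_duplicates_frozenset([[1, 2, 2], [2, 1], [3, 4]]))  # [[1, 2]]
```

Use this only when multiplicity truly does not matter, since `frozenset` collapses repeated elements.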

## Removing Non-Unique Subsets

If you want to remove non-unique subsets from your list, filter based on counts in a dictionary.

```
def remove_non_unique(lst):
    subset_count = {}
    unique_subsets = []
    # First pass: count each subset's canonical (sorted-tuple) form
    for subset in lst:
        subset_tuple = tuple(sorted(subset))
        subset_count[subset_tuple] = subset_count.get(subset_tuple, 0) + 1
    # Second pass: keep only subsets whose form occurs exactly once
    for subset in lst:
        subset_tuple = tuple(sorted(subset))
        if subset_count[subset_tuple] == 1:
            unique_subsets.append(subset)
    return unique_subsets

list_of_lists = [[1, 2], [2, 1], [3, 4], [1, 3]]
print(remove_non_unique(list_of_lists))
```

This removes all subsets whose sorted form appears more than once. Here `[1, 2]` and `[2, 1]` count as the same subset, so both are dropped and only the subsets that occur exactly once remain.

```
Output:
[[3, 4], [1, 3]]
```
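The same filtering can be condensed with `collections.Counter` and a list comprehension; this sketch behaves the same as the two-pass loop above:

```
from collections import Counter

def remove_non_unique_compact(lst):
    # Count each subset's canonical form in one pass
    counts = Counter(tuple(sorted(s)) for s in lst)
    # Keep subsets whose canonical form appears exactly once
    return [s for s in lst if counts[tuple(sorted(s))] == 1]

print(remove_non_unique_compact([[1, 2], [2, 1], [3, 4], [1, 3]]))  # [[3, 4], [1, 3]]
```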

## Conclusion

Detecting duplicate subsets in Python can be done with dictionaries and sets for efficiency. These methods help keep lists clean and organized.

For more on sorting data in Python, see Python Sort List: A Complete Guide or visit the Python documentation on sorting.