Commit

chore: update urls
purarue committed Oct 24, 2024
1 parent 711109b commit 713787f
Showing 8 changed files with 15 additions and 16 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

-Copyright (c) 2020 Sean Breckenridge
+Copyright (c) 2020 purarue

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
12 changes: 6 additions & 6 deletions README.md
@@ -110,7 +110,7 @@ The `--pattern` argument can be used to change the resulting filename for the br

Feel free to create an issue/contribute a [browser](./browserexport/browsers/) file to locate the browser if this doesn't support some browser you use.

-Can pass the `--debug` flag to show [`sqlite_backup`](https://github.com/seanbreckenridge/sqlite_backup) logs
+Can pass the `--debug` flag to show [`sqlite_backup`](https://github.com/purarue/sqlite_backup) logs

```
$ browserexport --debug save -b firefox --to .
@@ -218,7 +218,7 @@ browserexport merge --stream --json ~/data/browsing/*.sqlite | gzip --best > ./h
browserexport --debug inspect ./history.jsonl.gz
```

-If you don't care about keeping the raw databases for any other auxiliary info like form, bookmark data, or [from_visit](https://github.com/seanbreckenridge/browserexport/issues/30) info and just want the URL, visit date and metadata, you could use `merge` to periodically merge the bulky `.sqlite` files into a gzipped JSONL dump to reduce storage space, and improve parsing speed:
+If you don't care about keeping the raw databases for any other auxiliary info like form, bookmark data, or [from_visit](https://github.com/purarue/browserexport/issues/30) info and just want the URL, visit date and metadata, you could use `merge` to periodically merge the bulky `.sqlite` files into a gzipped JSONL dump to reduce storage space, and improve parsing speed:

```bash
# backup databases
@@ -234,7 +234,7 @@ rm ~/data/browsing/*
mv /tmp/browsing.jsonl.gz ~/data/browsing
```

-I do this every couple months with a script [here](https://github.com/seanbreckenridge/bleanser/blob/master/bin/merge-browser-history), and then sync my old databases to a harddrive for more long-term storage
+I do this every couple months with a script [here](https://github.com/purarue/bleanser/blob/master/bin/merge-browser-history), and then sync my old databases to a harddrive for more long-term storage

## Shell Completion

@@ -305,7 +305,7 @@ from browserexport.merge import read_and_merge
read_and_merge(["/path/to/database", "/path/to/second/database", "..."])
```

-You can also use [`sqlite_backup`](https://github.com/seanbreckenridge/sqlite_backup) to copy your current browser history into a sqlite connection in memory, as a `sqlite3.Connection`
+You can also use [`sqlite_backup`](https://github.com/purarue/sqlite_backup) to copy your current browser history into a sqlite connection in memory, as a `sqlite3.Connection`

```python
from browserexport.browsers.all import Firefox
@@ -323,7 +323,7 @@ merged = list(merge_visits([
]))
```

-If this doesn't support a browser and you wish to quickly extend without maintaining a fork (or contributing back to this repo), you can pass a `Browser` implementation (see [browsers/all.py](./browserexport/browsers/all.py) and [browsers/common.py](./browserexport/browsers/common.py) for more info) to `browserexport.parse.read_visits` or programmatically override/add your own browsers as part of the [`browserexport.browsers` namespace package](https://github.com/seanbreckenridge/browserexport/blob/0705629e1dc87fe47d6f731018d26dc3720cf2fe/browserexport/browsers/all.py#L15-L24)
+If this doesn't support a browser and you wish to quickly extend without maintaining a fork (or contributing back to this repo), you can pass a `Browser` implementation (see [browsers/all.py](./browserexport/browsers/all.py) and [browsers/common.py](./browserexport/browsers/common.py) for more info) to `browserexport.parse.read_visits` or programmatically override/add your own browsers as part of the [`browserexport.browsers` namespace package](https://github.com/purarue/browserexport/blob/0705629e1dc87fe47d6f731018d26dc3720cf2fe/browserexport/browsers/all.py#L15-L24)

#### Comparisons with Promnesia

@@ -338,7 +338,7 @@ Since [promnesia #375](https://github.com/karlicoss/promnesia/pull/375), `browse
Clone the repository and [optionally] create a [virtual environment](https://docs.python.org/3/library/venv.html) to do your work in.

```bash
-git clone https://github.com/seanbreckenridge/browserexport
+git clone https://github.com/purarue/browserexport
cd ./browserexport
# create a virtual environment to prevent possible package dependency conflicts
python -m virtualenv .venv # python3 -m pip install virtualenv if missing
2 changes: 1 addition & 1 deletion browserexport/__main__.py
@@ -22,7 +22,7 @@
# target for python3 -m browserexport and console_script using click
@click.group(
context_settings=CONTEXT_SETTINGS,
epilog="For more info, see https://github.com/seanbreckenridge/browserexport",
epilog="For more info, see https://github.com/purarue/browserexport",
)
@click.option("--debug", is_flag=True, default=False, help="Increase log verbosity")
def cli(debug: bool) -> None:
2 changes: 1 addition & 1 deletion browserexport/browsers/all.py
@@ -26,7 +26,7 @@
# https://www.python.org/dev/peps/pep-0420/#dynamic-path-computation
# https://packaging.python.org/guides/creating-and-discovering-plugins/#using-namespace-packages
# https://packaging.python.org/guides/packaging-namespace-packages/
-# https://github.com/seanbreckenridge/reorder_editable
+# https://github.com/purarue/reorder_editable

DEFAULT_BROWSERS: List[Type[Browser]] = [
Chrome,
2 changes: 1 addition & 1 deletion browserexport/browsers/common.py
@@ -192,7 +192,7 @@ def handle_path(
Defaulting to {default_behaviour} behaviour...
If you're using a browser/platform this currently doesn't support, please make an issue
-at https://github.com/seanbreckenridge/browserexport/issues/new with information.
+at https://github.com/purarue/browserexport/issues/new with information.
In the meantime, you can point this directly at a history database using the --path flag""",
err=True,
)
2 changes: 1 addition & 1 deletion browserexport/browsers/firefox_mobile.py
@@ -17,7 +17,7 @@ class FirefoxMobile(Firefox):

# unclear how reliable it is
# but we prefer to set it anyway to tell apart whether visits came from desktop or mobile
-# see https://github.com/seanbreckenridge/browserexport/issues/14#issuecomment-1037891476
+# see https://github.com/purarue/browserexport/issues/14#issuecomment-1037891476
detector = "SELECT * FROM moz_meta, moz_tags"
has_save = False

5 changes: 2 additions & 3 deletions setup.cfg
@@ -4,9 +4,8 @@ version = 0.4.2
description = save and merge browser history and metadata from different browsers
long_description = file: README.md
long_description_content_type = text/markdown
-url = https://github.com/seanbreckenridge/browserexport
-author = Sean Breckenridge
-author_email = [email protected]
+url = https://github.com/purarue/browserexport
+author = purarue
license = MIT
license_files = LICENSE
classifiers =
4 changes: 2 additions & 2 deletions tests/test_browserexport.py
@@ -102,7 +102,7 @@ def test_read_safari(safari: Path) -> None:
assert v.metadata is not None
assert (
v.metadata.title
== "album amalgam - https://github.com/seanbreckenridge/albums - Google Sheets"
== "album amalgam - https://github.com/purarue/albums - Google Sheets"
)
expected = datetime(2021, 4, 18, 1, 3, 45, 293084, tzinfo=timezone.utc)
assert v.dt == expected
@@ -115,7 +115,7 @@ def test_read_vivaldi(vivaldi: Path) -> None:
assert v.metadata is not None
assert (
v.metadata.title
== "GitHub - seanbreckenridge/browserexport: backup and parse browser history databases"
== "GitHub - purarue/browserexport: backup and parse browser history databases"
)
expected = datetime(2021, 4, 19, 2, 26, 8, 29825, tzinfo=timezone.utc)
assert v.dt == expected
