Compare commits

166 Commits

Author SHA1 Message Date
5d014f50df fix: Remove single dollar sign math rendering due to false positives
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-05 17:24:40 +00:00
bcfdff1067 Fix dt dd tags margin 2025-12-05 00:59:02 +00:00
a888e38ae8 fix: Adjust comment metadata indentation in comments
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-05 00:49:13 +00:00
2bd51bb1cb fix: Refactor comments with DL/DD for text browser compatibility
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-05 00:45:00 +00:00
655346a7eb chore: Remove unused nojs div 2025-12-05 00:44:58 +00:00
125c1c5225 Fix buttons in color themes 2025-12-05 00:35:06 +00:00
5dd2069af5 Clear stories first on checkbox change 2025-12-04 23:12:30 +00:00
d68fc73af5 Don't setStories when existing list is empty 2025-12-04 22:57:26 +00:00
ff1297e507 Style checkbox 2025-12-04 22:55:23 +00:00
1d019f880b fix: Implement custom transparent checkbox for dark mode visibility
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 22:31:11 +00:00
23b56b26b1 style: Apply transparent background to checkboxes 2025-12-04 22:31:07 +00:00
b439199836 fix: Cancel pending story fetches on filter change to prevent UI jumps
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 22:24:28 +00:00
5736cde21a feat: Fetch smallweb stories iteratively until limit met
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 22:18:22 +00:00
ed8ad1b6f6 feat: Add domain exclusion to smallweb list loading 2025-12-04 22:18:19 +00:00
75779722c1 feat: Add smallweb filter checkbox and server-side filtering
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 22:09:11 +00:00
13df4a7831 Put the loading status down below 2025-12-04 21:10:20 +00:00
d511453418 fix: Detect and render inline math using single dollar delimiters
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 20:56:14 +00:00
5e7240e2d0 fix: Convert inline align environments to display math
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 20:50:48 +00:00
96719f9e6f chore: Adjust console.log placement in Article component 2025-12-04 20:45:21 +00:00
0d4e674f3d chore: Add debug log for math block detection 2025-12-04 20:42:55 +00:00
7ce94e80dd fix: Render LaTeX expressions that are entire element contents
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 20:35:51 +00:00
1729e5b2be Add latex packages 2025-12-04 20:31:40 +00:00
d04bc2fe05 feat: Add LaTeX math rendering support
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 20:29:13 +00:00
02d165b2b2 fix: Extend direct HTML rendering to math elements
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 20:24:08 +00:00
2d10abf9aa fix: Prevent React warnings for SVG attributes
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 20:23:08 +00:00
e8911dc1d1 Move logos into public directory 2025-12-04 19:54:56 +00:00
41c4d7619d Downgrade humanize 2025-12-04 19:53:13 +00:00
e36fe3a403 Freeze requirements 2025-12-04 19:51:42 +00:00
fbec869257 Don't locate css file on server 2025-12-04 19:49:19 +00:00
e9e3cb30a4 chore: Remove conditional CSS import and improve alt attributes 2025-12-04 19:29:04 +00:00
a5e762c36b feat: Display relative time on non-JS article info line
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 19:11:27 +00:00
bbcb01f8d1 style: Remove zero-width spaces from story info 2025-12-04 19:11:24 +00:00
df0e66ad08 feat: Render homepage feed server-side
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:42:14 +00:00
1fefc149e2 feat: Include QotNews header for non-JS users
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:38:19 +00:00
449cb13dbd feat: Add relative timestamps and permalinks to comments
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:35:43 +00:00
f206485124 fix: Widen comments container on story page
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:32:57 +00:00
b185ecfe81 refactor: Align non-JS comments page structure and style
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:29:41 +00:00
274b4065e2 style: Match non-JS article page styling and layout to JS version
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:26:06 +00:00
85b6fbabf3 feat: Link compiled CSS bundle for non-JS client
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:16:50 +00:00
32cbf47d95 feat: Add static rendering for article pages
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-04 18:01:00 +00:00
7c600dcfba Only wrap code in comments 2025-12-03 04:18:36 +00:00
92e70229fe fix: Refine code block detection to ignore inline <code>
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 03:57:08 +00:00
b749e58f62 fix: Refine code block detection to exclude inline code
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 03:55:18 +00:00
b1b2be6080 fix: Use textContent for code block conversion to prevent content loss
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 03:51:33 +00:00
5ebe87dbc2 refactor: Optimize nodes() calls and simplify function in Article
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 03:50:10 +00:00
a8a36b693e fix: Render void elements correctly and copy all attributes
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 03:12:51 +00:00
60eefb4b27 refactor: Implement recursive rendering to detect and convert code blocks
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 02:52:07 +00:00
8f5dae4bdc fix: Unwrap single-child wrapper elements in nodes function
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 02:46:20 +00:00
89a511efc0 chore: Add debug log to isCodeBlock function 2025-12-03 02:46:18 +00:00
504fe745ea fix: Relax isCodeBlock check for nested code elements
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 02:37:58 +00:00
762e8a9a2e refactor: Refactor nodes logic from useMemo to a regular function 2025-12-03 02:37:56 +00:00
6dc47f6672 refactor: Extract code block detection into isCodeBlock function
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 01:46:19 +00:00
da108f25d4 fix: Detect code blocks nested in pre tags for conversion
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 01:43:33 +00:00
a2303841ec fix: Show 'Convert Code to Paragraph' button for <code> elements
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 01:37:08 +00:00
0e7aedbc5e fix: Adjust spacing below comment text content
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 01:28:10 +00:00
ec7d395407 fix: Wrap text in <pre> blocks to prevent horizontal overflow
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 00:58:39 +00:00
fd5acd4861 refactor: Convert 'show more' div to semantic button
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 00:50:58 +00:00
b1d4fc2903 refactor: Convert collapser span to button for accessibility
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-03 00:48:22 +00:00
0f87d47536 refactor: Remove unnecessary useCallback from comment functions
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 23:53:40 +00:00
8472907730 Mark deleted / empty comments 2025-12-02 23:39:24 +00:00
482753e96a Add a copy button to the article title 2025-12-02 23:19:31 +00:00
169a84faa1 fix: Align article title and copy button, correct icon font
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 23:19:31 +00:00
6fa929fb1f style: Update copy link button font 2025-12-02 23:19:31 +00:00
5f02a95cf3 fix: Improve copy button icon display and alignment
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 23:19:31 +00:00
1789f88d4d style: Style copy button icon 2025-12-02 23:19:31 +00:00
f5eab47496 feat: Use icons for copy link button feedback
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 23:19:31 +00:00
985e596790 feat: Add button to copy article title and URL to clipboard
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 23:19:31 +00:00
30298928f3 Move static build directory to apiserver/ 2025-12-02 22:38:49 +00:00
8d7d692d9c refactor: Iterate through stories in order for prioritized updates
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 22:37:58 +00:00
bd85127613 fix: Unregister service worker
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 17:13:52 +00:00
4c9d5eede1 Revert ScrollToTop component back to class-based 2025-12-02 17:02:03 +00:00
bf3e6bbc28 Don't setStories every loop iteration 2025-12-02 16:52:32 +00:00
856c360d98 feat: Add loading progress indicator to Feed
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-12-02 01:20:27 +00:00
1ce55e6d1f feat: Add fetching stories placeholder 2025-12-02 01:20:25 +00:00
6a329e3ba9 Misc fixes 2025-12-01 21:07:01 +00:00
3acaf230c4 fix: Improve submit error handling on API and refactor client with async/await
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 23:02:29 +00:00
7b84573dd8 fix: Improve error handling for non-JSON server responses in Submit
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 22:59:15 +00:00
7523426f15 feat: Display detailed submission errors to user
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 22:56:48 +00:00
b2ec85cfa5 feat: Display detailed, expandable connection error in Comments component
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 22:51:14 +00:00
8c201d5c2e fix: Conditionally render error details to avoid layout gap
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 22:45:58 +00:00
a21c84efc6 refactor: Improve article loading error and cache messages 2025-11-21 22:45:54 +00:00
15aa413584 fix: Prevent layout shift when error message appears
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 22:39:34 +00:00
e9ee231954 feat: Persist new stories and improve layout consistency 2025-11-21 22:39:32 +00:00
62d5915133 feat: Add detailed, expandable error messages to Article component
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 22:34:24 +00:00
61ec583882 feat: Show preload progress on fetch failure
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 00:59:14 +00:00
1443fdcc32 style: Improve error messages and loading text, add spacing to error details 2025-11-21 00:59:12 +00:00
f2310b6925 fix: Provide detailed error for story fetch failures
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 00:50:58 +00:00
aa80570da4 fix: Display network error on API fetch failure
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 00:49:14 +00:00
7d0e60f5f0 fix: Provide detailed error messages for network failures
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 00:45:59 +00:00
21b5d67052 feat: Show detailed connection errors in collapsible section 2025-11-21 00:41:57 +00:00
53468c8ccd feat: Add 10s timeout and early exit for story preloading on error
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-21 00:34:17 +00:00
6cfb4b317f feat: Immediately display stories on first load
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-20 23:02:59 +00:00
f08202d592 fix: Always fetch full story and update existing in feed
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-20 22:58:44 +00:00
5a7f55184d Begin stats API route 2025-11-20 22:25:26 +00:00
e84062394b Ignore aider files 2025-11-20 22:25:20 +00:00
e867d5d868 Add debug logging, debug add manual submissions to feed 2025-11-20 21:55:45 +00:00
845d87ec55 Logging 2025-11-19 19:17:38 +00:00
e18aaad741 fix: Batch story list updates and limit length
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-19 19:17:38 +00:00
02e86efb4f chore: Add console log for stories 2025-11-19 19:17:38 +00:00
b85d879ae7 fix: Fix infinite loop in Feed by removing stories from useEffect deps
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-19 19:17:38 +00:00
55bf75742e refactor: Refactor Feed story fetching for improved network resilience
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-11-19 19:17:38 +00:00
83cb6fc0ae chore: Disable story updates and preloading logic 2025-11-19 19:17:38 +00:00
667c2c5eaf refactor: Refactor dot components to functional 2025-11-19 19:17:38 +00:00
1df1c59d61 refactor: Refactor Submit component to use hooks 2025-11-19 19:17:38 +00:00
c4f2e7d595 refactor: Refactor Search component to use hooks 2025-11-19 19:17:38 +00:00
f61cfc09b0 refactor: Convert ScrollToTop to functional component with hooks 2025-11-19 19:17:38 +00:00
366e76e25d refactor: refactor Results component to functional component 2025-11-19 19:17:38 +00:00
6f1811c564 Update webclient dependencies 2025-11-19 19:17:38 +00:00
443115ac0f refactor: Refactor Feed component to functional with hooks 2025-11-19 19:17:38 +00:00
034c440e46 refactor: Convert Comments class to functional using hooks 2025-11-19 19:17:38 +00:00
26a6353ca5 refactor: Rename Article component to Comments 2025-11-19 19:17:38 +00:00
7ac4dfa01c refactor: Refactor Article component to use hooks 2025-11-19 19:17:38 +00:00
633429c976 refactor: Convert App class component to functional component 2025-11-19 19:17:38 +00:00
5cdbf6ef54 Ignore blank hackernews titles 2025-11-19 19:17:38 +00:00
f1a30d0af2 Skip "Removed by moderator" stories 2025-09-27 17:38:50 +00:00
9ec61ea5bc Ignore dead and political stories 2025-05-27 18:47:17 +00:00
bdc7a6c10d Fix Better HN api content extraction 2025-02-01 22:39:13 +00:00
4858516b01 Add Better HN as an API backup 2025-02-01 21:42:06 +00:00
f10e6063fc Bug fixes 2025-02-01 20:31:35 +00:00
249a616531 Alert on story update error 2024-03-16 20:41:24 +00:00
ab92bd5441 Adjust score and comment thresholds 2024-03-08 03:08:18 +00:00
6b16a768a7 Fix deletion script 2024-03-08 03:08:03 +00:00
57de076fec Increase database timeout 2024-02-27 18:48:56 +00:00
074b898508 Fix lobsters comment parsing 2024-02-27 18:47:00 +00:00
f049d194ab Move scripts into own folder 2024-02-27 18:32:29 +00:00
c2b9a1cb7a Update readability 2024-02-27 18:32:19 +00:00
4435f49e17 Make "dark" theme grey, add "black" theme 2023-09-13 01:19:47 +00:00
494d89ac30 Disable lobsters 2023-09-13 01:02:15 +00:00
e79fca6ecc Replace "indent_level" with "depth" in lobsters API
See:
fe09e5aa31
2023-08-31 07:35:44 +00:00
c65fb69092 Handle Lobsters comment parsing TypeErrors
Too lazy to debug this:

2023-08-29 12:56:35,111 - root - INFO - Updating lobsters story: yktkwr, index: 55
Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/tanner/qotnews/apiserver/server.py", line 194, in feed_thread
    valid = feed.update_story(story)
  File "/home/tanner/qotnews/apiserver/feed.py", line 74, in update_story
    res = lobsters.story(story['ref'])
  File "/home/tanner/qotnews/apiserver/feeds/lobsters.py", line 103, in story
    s['comments'] = iter_comments(r['comments'])
  File "/home/tanner/qotnews/apiserver/feeds/lobsters.py", line 76, in iter_comments
    parent_stack = parent_stack[:indent-1]
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
2023-08-29T12:56:35Z <Greenlet at 0x7f92ad840ae0: feed_thread> failed with TypeError
2023-08-31 07:30:39 +00:00
632d028e4c Add Tildes group whitelist 2023-07-13 22:54:36 +00:00
ea8e9e5a23 Increase again 2023-06-13 17:11:50 +00:00
2838ea9b41 Increase Tildes story score requirement 2023-06-11 01:01:31 +00:00
f15d108971 Catch all possible Reddit API exceptions 2023-03-15 21:16:37 +00:00
f777348af8 Fix darkmode fullscreen button color 2022-08-11 19:36:36 +00:00
486404a413 Fix fix-stories bug 2022-08-10 04:06:39 +00:00
7c9c07a4cf Hide fullscreen button if it's not available 2022-08-10 04:05:25 +00:00
08d02f6013 Add fullscreen mode 2022-08-08 23:21:49 +00:00
1b54342702 Add red theme 2022-08-08 20:14:57 +00:00
9e9571a3c0 Write fixed stories to database 2022-07-05 00:57:56 +00:00
dc83a70887 Begin script to fix bad gzip text 2022-07-04 20:32:01 +00:00
2e2c9ae837 Move FEED_LENGTH to settings.py, use for search results 2022-07-04 19:08:24 +00:00
61021d8f91 Small UI changes 2022-07-04 19:08:24 +00:00
e65047fead Add accept gzip header to readability server 2022-07-04 19:07:31 +00:00
8e775c189f Add test file 2022-07-04 05:56:06 +00:00
3d9274309a Fix requests text encoding slowness 2022-07-04 05:55:52 +00:00
7bdbbf10b2 Return search results directly from the server 2022-07-04 04:33:01 +00:00
6aa0f78536 Remove Article / Comments, etc thing after name 2022-07-04 04:33:01 +00:00
bf3663bbec Remove hard-coded title 2022-06-30 00:12:22 +00:00
e6589dc61c Adjust title 2022-06-30 00:05:15 +00:00
307e8349f3 Change header based on page 2022-06-30 00:00:30 +00:00
04cd56daa8 Add index / noindex to client 2022-06-29 23:30:39 +00:00
c80769def6 Add noindex meta tag to stories 2022-06-29 23:20:53 +00:00
ebd1ad2140 Increase database timeout 2022-06-24 20:50:27 +00:00
2cc7dd0d6d Update software 2022-05-31 04:24:12 +00:00
6e7cb86d2e Explain no javascript 2022-05-31 04:23:52 +00:00
a25457254f Improve logging, sends tweets to nitter.net 2022-03-05 23:48:46 +00:00
a693ea5342 Remove outline API 2022-03-05 22:05:29 +00:00
7386e1d8b0 Include option to disable readerserver 2022-03-05 22:04:25 +00:00
f8e8597e3a Include option to disable search 2022-03-05 21:58:35 +00:00
55c282ee69 Fix search to work with low-RAM server 2022-03-05 21:33:07 +00:00
3f774a9e38 Improve logging 2021-09-06 00:21:05 +00:00
dcedd4caa1 Add script to reindex search, abstract search API 2021-09-06 00:20:21 +00:00
7a131ebd03 Change the order by which content-type is grabbed 2021-01-30 06:36:02 +00:00
6f64401785 Add optional skip and limit to API route 2021-01-18 03:59:33 +00:00
3ff917e806 Remove colons from date string so Python 3.5 can parse 2020-12-15 23:19:50 +00:00
128 changed files with 6483 additions and 10174 deletions

.gitignore (vendored, new file)

@@ -0,0 +1 @@
.aider*

.gitmodules (vendored, deleted)

@@ -1,3 +0,0 @@
[submodule "readerserver"]
path = readerserver
url = https://github.com/master5o1/declutter.git

README.md

@@ -20,7 +20,7 @@ $ sudo apt install yarn
Clone this repo:
```text
$ git clone --recurse-submodules https://git.1j.nz/jason/qotnews.git
$ git clone https://gogs.tannercollin.com/tanner/qotnews.git
$ cd qotnews
```
@@ -37,14 +37,14 @@ $ source env/bin/activate
Configure Praw for your Reddit account (optional):
- Go to https://www.reddit.com/prefs/apps
- Click "Create app"
- Name: whatever
- App type: script
- Description: blank
- About URL: blank
- Redirect URL: your GitHub profile
- Submit, copy the client ID and client secret into `settings.py` below
* Go to https://www.reddit.com/prefs/apps
* Click "Create app"
* Name: whatever
* App type: script
* Description: blank
* About URL: blank
* Redirect URL: your GitHub profile
* Submit, copy the client ID and client secret into `settings.py` below
```text
(env) $ vim settings.py.example
@@ -109,7 +109,7 @@ stdout_logfile_maxbytes=1MB
[program:qotnewsreader]
user=qotnews
directory=/home/qotnews/qotnews/readerserver
command=node index.js
command=node main.js
autostart=true
autorestart=true
stderr_logfile=/var/log/qotnewsreader.log

apiserver/build/.gitkeep (new empty file)

apiserver/database.py

@@ -1,11 +1,11 @@
from datetime import datetime, timedelta
import json
from sqlalchemy import create_engine, Column, String, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.exc import IntegrityError
from sqlalchemy.types import JSON
engine = create_engine('sqlite:///data/qotnews.sqlite', connect_args={'timeout': 120})
engine = create_engine('sqlite:///data/qotnews.sqlite', connect_args={'timeout': 360})
Session = sessionmaker(bind=engine)
Base = declarative_base()
@@ -15,8 +15,8 @@ class Story(Base):
sid = Column(String(16), primary_key=True)
ref = Column(String(16), unique=True)
meta = Column(JSON)
data = Column(JSON)
meta_json = Column(String)
full_json = Column(String)
title = Column(String)
class Reflist(Base):
@@ -24,7 +24,6 @@ class Reflist(Base):
rid = Column(Integer, primary_key=True)
ref = Column(String(16), unique=True)
urlref = Column(String)
sid = Column(String, ForeignKey('stories.sid'), unique=True)
source = Column(String(16))
@@ -37,21 +36,19 @@ def get_story(sid):
def put_story(story):
story = story.copy()
data = {}
data.update(story)
full_json = json.dumps(story)
meta = {}
meta.update(story)
meta.pop('text', None)
meta.pop('comments', None)
story.pop('text', None)
story.pop('comments', None)
meta_json = json.dumps(story)
try:
session = Session()
s = Story(
sid=story['id'],
ref=story['ref'],
data=data,
meta=meta,
full_json=full_json,
meta_json=meta_json,
title=story.get('title', None),
)
session.merge(s)
@@ -66,41 +63,25 @@ def get_story_by_ref(ref):
session = Session()
return session.query(Story).filter(Story.ref==ref).first()
def get_stories_by_url(url):
def get_reflist(amount):
session = Session()
return session.query(Story).\
filter(Story.title != None).\
filter(Story.meta['url'].as_string() == url).\
order_by(Story.meta['date'].desc())
q = session.query(Reflist).order_by(Reflist.rid.desc()).limit(amount)
return [dict(ref=x.ref, sid=x.sid, source=x.source) for x in q.all()]
def get_ref_by_sid(sid):
def get_stories(amount, skip=0):
session = Session()
x = session.query(Reflist).\
filter(Reflist.sid == sid).\
first()
return dict(ref=x.ref, sid=x.sid, source=x.source, urlref=x.urlref)
def get_reflist():
session = Session()
q = session.query(Reflist).order_by(Reflist.rid.desc())
return [dict(ref=x.ref, sid=x.sid, source=x.source, urlref=x.urlref) for x in q.all()]
def get_stories(maxage=0, skip=0, limit=20):
time = datetime.now().timestamp() - maxage
session = Session()
q = session.query(Reflist, Story.meta).\
q = session.query(Reflist, Story.meta_json).\
order_by(Reflist.rid.desc()).\
join(Story).\
filter(Story.title != None).\
filter(maxage == 0 or Story.meta['date'].as_integer() > time).\
order_by(Story.meta['date'].desc()).\
offset(skip).\
limit(limit)
limit(amount)
return [x[1] for x in q]
def put_ref(ref, sid, source, urlref):
def put_ref(ref, sid, source):
try:
session = Session()
r = Reflist(ref=ref, sid=sid, source=source, urlref=urlref)
r = Reflist(ref=ref, sid=sid, source=source)
session.add(r)
session.commit()
except:
@@ -120,7 +101,22 @@ def del_ref(ref):
finally:
session.close()
def count_stories():
try:
session = Session()
return session.query(Story).count()
finally:
session.close()
def get_story_list():
try:
session = Session()
return session.query(Story.sid).all()
finally:
session.close()
if __name__ == '__main__':
init()
print(get_story_by_ref('hgi3sy'))
#print(get_story_by_ref('hgi3sy'))
print(len(get_reflist(99999)))
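
The schema change above drops SQLAlchemy's JSON columns in favour of plain strings serialized by hand. A minimal sketch of the round-trip, assuming the API layer decodes `meta_json` with `json.loads` (the read side is outside this diff):

```python
import json

# Mirrors the new put_story(): serialize the whole story, then strip the
# bulky fields so the feed list carries only lightweight metadata.
story = {'id': 'abc123', 'ref': '12345678', 'title': 'Example story',
         'text': '<p>article body</p>', 'comments': []}

full_json = json.dumps(story)      # -> Story.full_json, used for story pages
story.pop('text', None)
story.pop('comments', None)
meta_json = json.dumps(story)      # -> Story.meta_json, returned by get_stories()

# get_stories() hands back raw meta_json strings; decoding is the caller's job.
metas = [json.loads(m) for m in (meta_json,)]
```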

apiserver/feed.py

@@ -6,126 +6,84 @@ logging.basicConfig(
import requests
import time
from bs4 import BeautifulSoup
import itertools
import settings
from feeds import hackernews, reddit, tildes, substack, manual, lobsters
from feeds.sitemap import Sitemap
from feeds.category import Category
from scrapers import outline
from scrapers.declutter import declutter, headless, simple
from feeds import hackernews, reddit, tildes, manual, lobsters
import utils
INVALID_DOMAINS = ['youtube.com', 'bloomberg.com', 'wsj.com', 'sec.gov']
TWO_DAYS = 60*60*24*2
substacks = {}
for key, value in settings.SUBSTACK.items():
substacks[key] = substack.Publication(value['url'])
categories = {}
for key, value in settings.CATEGORY.items():
categories[key] = Category(value)
sitemaps = {}
for key, value in settings.SITEMAP.items():
sitemaps[key] = Sitemap(value)
def get_list():
feeds = {}
def list():
feed = []
if settings.NUM_HACKERNEWS:
feeds['hackernews'] = [(x, 'hackernews', x) for x in hackernews.feed()[:settings.NUM_HACKERNEWS]]
feed += [(x, 'hackernews') for x in hackernews.feed()[:settings.NUM_HACKERNEWS]]
if settings.NUM_LOBSTERS:
feed += [(x, 'lobsters', x) for x in lobsters.feed()[:settings.NUM_LOBSTERS]]
feed += [(x, 'lobsters') for x in lobsters.feed()[:settings.NUM_LOBSTERS]]
if settings.NUM_REDDIT:
feeds['reddit'] = [(x, 'reddit', x) for x in reddit.feed()[:settings.NUM_REDDIT]]
feed += [(x, 'reddit') for x in reddit.feed()[:settings.NUM_REDDIT]]
if settings.NUM_TILDES:
feeds['tildes'] = [(x, 'tildes', x) for x in tildes.feed()[:settings.NUM_TILDES]]
feed += [(x, 'tildes') for x in tildes.feed()[:settings.NUM_TILDES]]
if settings.NUM_SUBSTACK:
feeds['substack'] = [(x, 'substack', x) for x in substack.top.feed()[:settings.NUM_SUBSTACK]]
for key, publication in substacks.items():
count = settings.SUBSTACK[key]['count']
feeds[key] = [(x, key, x) for x in publication.feed()[:count]]
for key, sites in categories.items():
count = settings.CATEGORY[key].get('count') or 0
excludes = settings.CATEGORY[key].get('excludes')
tz = settings.CATEGORY[key].get('tz')
feeds[key] = [(x, key, u) for x, u in sites.feed(excludes)[:count]]
for key, sites in sitemaps.items():
count = settings.SITEMAP[key].get('count') or 0
excludes = settings.SITEMAP[key].get('excludes')
feeds[key] = [(x, key, u) for x, u in sites.feed(excludes)[:count]]
values = feeds.values()
feed = itertools.chain.from_iterable(itertools.zip_longest(*values, fillvalue=None))
feed = list(filter(None, feed))
return feed
def get_article(url):
scrapers = {
'headless': headless,
'simple': simple,
'outline': outline,
'declutter': declutter,
}
available = settings.SCRAPERS or ['headless', 'simple']
if 'simple' not in available:
available += ['simple']
if not settings.READER_URL:
logging.info('Readerserver not configured, aborting.')
return ''
for scraper in available:
if scraper not in scrapers.keys():
continue
try:
details = scrapers[scraper].get_details(url)
if details and details.get('content'):
return details, scraper
except KeyboardInterrupt:
raise
except:
pass
return None, None
if url.startswith('https://twitter.com'):
logging.info('Replacing twitter.com url with nitter.net')
url = url.replace('twitter.com', 'nitter.net')
try:
r = requests.post(settings.READER_URL, data=dict(url=url), timeout=20)
if r.status_code != 200:
raise Exception('Bad response code ' + str(r.status_code))
return r.text
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem getting article: {}'.format(str(e)))
return ''
def get_content_type(url):
try:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'}
return requests.get(url, headers=headers, timeout=5).headers['content-type']
except:
return ''
try:
headers = {
'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
'X-Forwarded-For': '66.249.66.1',
}
return requests.get(url, headers=headers, timeout=5).headers['content-type']
return requests.get(url, headers=headers, timeout=10).headers['content-type']
except:
pass
try:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'}
return requests.get(url, headers=headers, timeout=10).headers['content-type']
except:
return ''
def update_story(story, is_manual=False, urlref=None):
def update_story(story, is_manual=False):
res = {}
if story['source'] == 'hackernews':
res = hackernews.story(story['ref'])
elif story['source'] == 'lobsters':
res = lobsters.story(story['ref'])
elif story['source'] == 'reddit':
res = reddit.story(story['ref'])
elif story['source'] == 'tildes':
res = tildes.story(story['ref'])
elif story['source'] == 'substack':
res = substack.top.story(story['ref'])
elif story['source'] in categories.keys():
res = categories[story['source']].story(story['ref'], urlref)
elif story['source'] in sitemaps.keys():
res = sitemaps[story['source']].story(story['ref'], urlref)
elif story['source'] in substacks.keys():
res = substacks[story['source']].story(story['ref'])
elif story['source'] == 'manual':
res = manual.story(story['ref'])
try:
if story['source'] == 'hackernews':
res = hackernews.story(story['ref'])
elif story['source'] == 'lobsters':
res = lobsters.story(story['ref'])
elif story['source'] == 'reddit':
res = reddit.story(story['ref'])
elif story['source'] == 'tildes':
res = tildes.story(story['ref'])
elif story['source'] == 'manual':
res = manual.story(story['ref'])
except BaseException as e:
utils.alert_tanner('Problem updating {} story, ref {}: {}'.format(story['source'], story['ref'], str(e)))
logging.exception(e)
return False
if res:
story.update(res) # join dicts
@@ -133,15 +91,11 @@ def update_story(story, is_manual=False, urlref=None):
logging.info('Story not ready yet')
return False
if story['date'] and not is_manual and story['date'] + settings.MAX_STORY_AGE < time.time():
logging.info('Story too old, removing')
if story['date'] and not is_manual and story['date'] + TWO_DAYS < time.time():
logging.info('Story too old, removing. Date: {}'.format(story['date']))
return False
has_url = story.get('url') or False
has_text = story.get('text') or False
#is_simple = story.get('scaper', '') == 'simple'
if has_url and not has_text:
if story.get('url', '') and not story.get('text', ''):
if not get_content_type(story['url']).startswith('text/'):
logging.info('URL invalid file type / content type:')
logging.info(story['url'])
@@ -152,21 +106,15 @@ def update_story(story, is_manual=False, urlref=None):
logging.info(story['url'])
return False
if 'trump' in story['title'].lower() or 'musk' in story['title'].lower() or 'Removed by moderator' in story['title']:
logging.info('Trump / Musk / removed story, skipping')
logging.info(story['url'])
return False
logging.info('Getting article ' + story['url'])
details, scraper = get_article(story['url'])
if not details: return False
story['scraper'] = scraper
story['text'] = details.get('content', '')
story['text'] = get_article(story['url'])
if not story['text']: return False
story['last_update'] = time.time()
story['excerpt'] = details.get('excerpt', '')
story['scraper_link'] = details.get('scraper_link', '')
meta = details.get('meta')
if meta:
og = meta.get('og')
story['image'] = meta.get('image', '')
if og:
story['image'] = og.get('og:image', meta.get('image', ''))
return True
@@ -181,7 +129,7 @@ if __name__ == '__main__':
#print(get_article('https://www.bloomberg.com/news/articles/2019-09-23/xi-s-communists-under-pressure-as-high-prices-hit-china-workers'))
a = get_article('https://blog.joinmastodon.org/2019/10/mastodon-3.0/')
a = get_content_type('https://tefkos.comminfo.rutgers.edu/Courses/e530/Readings/Beal%202008%20full%20text%20searching.pdf')
print(a)
print('done')
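
For orientation, the traceback quoted in commit c65fb69092 shows how `server.py` drives this module: `feed_thread` calls `feed.update_story(story)`. A rough sketch of that loop follows; the setup and the `id` scheme are assumptions, since `server.py` is not part of this diff:

```python
# Hypothetical driver loop, inferred from the feed_thread traceback in
# commit c65fb69092 -- not the actual server.py code.
import database
import feed

database.init()
for ref, source in feed.list():                   # e.g. ('12345678', 'hackernews')
    story = dict(id=ref, ref=ref, source=source)  # assumed sid scheme
    if feed.update_story(story):                  # fetch, filter, scrape article text
        database.put_story(story)                 # persists meta_json and full_json
```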

apiserver/feeds/category.py (deleted)

@@ -1,72 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.DEBUG)
if __name__ == '__main__':
import sys
sys.path.insert(0,'.')
from bs4 import BeautifulSoup
import settings
from utils import clean
from misc.api import xml
from misc.news import Base
def _filter_links(links, category_url, excludes=None):
links = list(filter(None, [link if link.startswith(category_url) else None for link in links]))
links = list(filter(None, [link if link != category_url else None for link in links]))
links = list(set(links))
if excludes:
links = list(filter(None, [None if any(e in link for e in excludes) else link for link in links]))
return links
def _get_category(category_url, excludes=None):
base_url = '/'.join(category_url.split('/')[:3])
markup = xml(lambda x: category_url)
if not markup: return []
soup = BeautifulSoup(markup, features='html.parser')
links = soup.find_all('a', href=True)
links = [link.get('href') for link in links]
links = [f"{base_url}{link}" if link.startswith('/') else link for link in links]
links = _filter_links(links, category_url, excludes)
return links
class Category(Base):
def __init__(self, config):
self.config = config
self.category_url = config.get('url')
self.tz = config.get('tz')
def feed(self, excludes=None):
links = []
if isinstance(self.category_url, str):
links += _get_category(self.category_url, excludes)
elif isinstance(self.category_url, list):
for url in self.category_url:
links += _get_category(url, excludes)
links = list(set(links))
return [(self.get_id(link), link) for link in links]
# scratchpad so I can quickly develop the parser
if __name__ == '__main__':
print("Category: RadioNZ")
site = Category({ 'url': "https://www.rnz.co.nz/news/" })
excludes = [
'rnz.co.nz/news/sport',
'rnz.co.nz/weather',
'rnz.co.nz/news/weather',
]
posts = site.feed(excludes)
print(posts[:5])
print(site.story(posts[0][0], posts[0][1]))
print("Category: Newsroom")
site = Category({ 'url': "https://www.newsroom.co.nz/news/", 'tz': 'Pacific/Auckland'})
posts = site.feed()
print(posts[:5])
print(site.story(posts[0][0], posts[0][1]))

apiserver/feeds/hackernews.py

@@ -12,7 +12,8 @@ import requests
from utils import clean
API_TOPSTORIES = lambda x: 'https://hacker-news.firebaseio.com/v0/topstories.json'
API_ITEM = lambda x : 'https://hn.algolia.com/api/v1/items/{}'.format(x)
ALG_API_ITEM = lambda x : 'https://hn.algolia.com/api/v1/items/{}'.format(x)
BHN_API_ITEM = lambda x : 'https://api.hnpwa.com/v0/item/{}.json'.format(x)
SITE_LINK = lambda x : 'https://news.ycombinator.com/item?id={}'.format(x)
SITE_AUTHOR_LINK = lambda x : 'https://news.ycombinator.com/user?id={}'.format(x)
@@ -40,9 +41,9 @@ def api(route, ref=None):
return False
def feed():
return ['hn:'+str(x) for x in api(API_TOPSTORIES) or []]
return [str(x) for x in api(API_TOPSTORIES) or []]
def comment(i):
def alg_comment(i):
if 'author' not in i:
return False
@@ -51,22 +52,25 @@ def comment(i):
c['score'] = i.get('points', 0)
c['date'] = i.get('created_at_i', 0)
c['text'] = clean(i.get('text', '') or '')
c['comments'] = [comment(j) for j in i['children']]
c['comments'] = [alg_comment(j) for j in i['children']]
c['comments'] = list(filter(bool, c['comments']))
return c
def comment_count(i):
def alg_comment_count(i):
alive = 1 if i['author'] else 0
return sum([comment_count(c) for c in i['comments']]) + alive
return sum([alg_comment_count(c) for c in i['comments']]) + alive
def story(ref):
ref = ref.replace('hn:', '')
r = api(API_ITEM, ref)
if not r: return False
def alg_story(ref):
r = api(ALG_API_ITEM, ref)
if not r:
logging.info('Bad Algolia Hackernews API response.')
return None
if 'deleted' in r:
logging.info('Story was deleted.')
return False
elif r.get('type', '') != 'story':
logging.info('Type "{}" is not "story".'.format(r.get('type', '')))
return False
s = {}
@@ -77,17 +81,88 @@ def story(ref):
s['title'] = r.get('title', '')
s['link'] = SITE_LINK(ref)
s['url'] = r.get('url', '')
s['comments'] = [comment(i) for i in r['children']]
s['comments'] = [alg_comment(i) for i in r['children']]
s['comments'] = list(filter(bool, s['comments']))
s['num_comments'] = comment_count(s) - 1
s['num_comments'] = alg_comment_count(s) - 1
if 'text' in r and r['text']:
s['text'] = clean(r['text'] or '')
return s
def bhn_comment(i):
if 'user' not in i:
return False
c = {}
c['author'] = i.get('user', '')
c['score'] = 0 # Not present?
c['date'] = i.get('time', 0)
c['text'] = clean(i.get('content', '') or '')
c['comments'] = [bhn_comment(j) for j in i['comments']]
c['comments'] = list(filter(bool, c['comments']))
return c
def bhn_story(ref):
r = api(BHN_API_ITEM, ref)
if not r:
logging.info('Bad BetterHN Hackernews API response.')
return None
if 'deleted' in r: # TODO: verify
logging.info('Story was deleted.')
return False
elif r.get('dead', False):
logging.info('Story was deleted.')
return False
elif r.get('type', '') != 'link':
logging.info('Type "{}" is not "link".'.format(r.get('type', '')))
return False
s = {}
s['author'] = r.get('user', '')
s['author_link'] = SITE_AUTHOR_LINK(r.get('user', ''))
s['score'] = r.get('points', 0)
s['date'] = r.get('time', 0)
s['title'] = r.get('title', '')
s['link'] = SITE_LINK(ref)
s['url'] = r.get('url', '')
if s['url'].startswith('item'):
s['url'] = SITE_LINK(ref)
s['comments'] = [bhn_comment(i) for i in r['comments']]
s['comments'] = list(filter(bool, s['comments']))
s['num_comments'] = r.get('comments_count', 0)
if 'content' in r and r['content']:
s['text'] = clean(r['content'] or '')
return s
def story(ref):
s = alg_story(ref)
if s is None:
s = bhn_story(ref)
if not s:
return False
if not s['title']:
return False
if s['score'] < 25 and s['num_comments'] < 10:
logging.info('Score ({}) or num comments ({}) below threshold.'.format(s['score'], s['num_comments']))
return False
return s
# scratchpad so I can quickly develop the parser
if __name__ == '__main__':
print(feed())
#print(story(20763961))
#print(story(20802050))
#print(story(42899834)) # type "job"
#print(story(42900076)) # Ask HN
#print(story(42898201)) # Show HN
#print(story(42899703)) # normal
print(story(42902678)) # bad title?
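
The fallback contract in the new `story()` rides on the return type: `alg_story()` returns `None` only when the Algolia call itself fails, which sends `story()` to the BetterHN mirror, while `False` means the item is deleted or the wrong type and is final. A hypothetical usage sketch, assuming the module is imported as `feeds.hackernews`:

```python
from feeds import hackernews

refs = hackernews.feed()         # plain numeric IDs now the 'hn:' prefix is gone
s = hackernews.story(refs[0])    # Algolia first; BetterHN only on API failure
if s:                            # False when deleted, wrong type, or below threshold
    print(s['title'], s['score'], s['num_comments'])
```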

apiserver/feeds/lobsters.py

@@ -44,12 +44,13 @@ def feed():
return [x['short_id'] for x in api(API_HOTTEST) or []]
def unix(date_str):
return int(datetime.strptime(date_str, '%Y-%m-%dT%H:%M:%S.%f%z').timestamp())
date_str = date_str.replace(':', '')
return int(datetime.strptime(date_str, '%Y-%m-%dT%H%M%S.%f%z').timestamp())
def make_comment(i):
c = {}
try:
c['author'] = i['commenting_user']['username']
c['author'] = i['commenting_user']
except KeyError:
c['author'] = ''
c['score'] = i.get('score', 0)
@@ -66,13 +67,13 @@ def iter_comments(flat_comments):
parent_stack = []
for comment in flat_comments:
c = make_comment(comment)
indent = comment['indent_level']
indent = comment['depth']
if indent == 1:
if indent == 0:
nested_comments.append(c)
parent_stack = [c]
else:
parent_stack = parent_stack[:indent-1]
parent_stack = parent_stack[:indent]
p = parent_stack[-1]
p['comments'].append(c)
parent_stack.append(c)
@@ -80,11 +81,13 @@ def iter_comments(flat_comments):
def story(ref):
r = api(API_ITEM, ref)
if not r: return False
if not r:
logging.info('Bad Lobsters API response.')
return False
s = {}
try:
s['author'] = r['submitter_user']['username']
s['author'] = r['submitter_user']
s['author_link'] = SITE_AUTHOR_LINK(s['author'])
except KeyError:
s['author'] = ''
@@ -100,6 +103,10 @@ def story(ref):
s['comments'] = iter_comments(r['comments'])
s['num_comments'] = r['comment_count']
if s['score'] < 15 and s['num_comments'] < 10:
logging.info('Score ({}) or num comments ({}) below threshold.'.format(s['score'], s['num_comments']))
return False
if 'description' in r and r['description']:
s['text'] = clean(r['description'] or '')
@@ -109,5 +116,5 @@ def story(ref):
if __name__ == '__main__':
#print(feed())
import json
print(json.dumps(story('fzvd1v')))
#print(story(20802050))
print(json.dumps(story('fzvd1v'), indent=4))
#print(json.dumps(story('ixyv5u'), indent=4))
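
The indent change above pairs with commit e79fca6ecc: Lobsters now reports a zero-based `depth`, so the parent stack is truncated with `parent_stack[:indent]` rather than `parent_stack[:indent-1]` (the old `indent-1` arithmetic is what raised the TypeError quoted in commit c65fb69092, once the API began returning a null `indent_level`). A self-contained illustration of the threading, using made-up comments:

```python
# Rebuild a nested comment tree from a flat, depth-annotated list,
# the same way iter_comments() does above.
def thread(flat):
    nested, stack = [], []
    for item in flat:
        c = {'text': item['text'], 'comments': []}
        if item['depth'] == 0:
            nested.append(c)
            stack = [c]
        else:
            stack = stack[:item['depth']]        # climb back to the parent level
            stack[-1]['comments'].append(c)
            stack.append(c)
    return nested

flat = [{'text': 'root', 'depth': 0},
        {'text': 'reply', 'depth': 1},
        {'text': 'reply to reply', 'depth': 2},
        {'text': 'second root', 'depth': 0}]
print(thread(flat))    # two roots; the first holds a two-level chain
```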

apiserver/feeds/manual.py

@@ -7,8 +7,6 @@ import requests
import time
from bs4 import BeautifulSoup
import settings
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'
def api(route):
@@ -29,13 +27,15 @@ def api(route):
def story(ref):
html = api(ref)
if not html: return False
if not html:
logging.info('Bad http GET response.')
return False
soup = BeautifulSoup(html, features='html.parser')
s = {}
s['author'] = 'manual submission'
s['author_link'] = 'https://{}'.format(settings.HOSTNAME)
s['author_link'] = 'https://news.t0.vc'
s['score'] = 0
s['date'] = int(time.time())
s['title'] = str(soup.title.string) if soup.title else ref

apiserver/feeds/reddit.py

@@ -32,11 +32,8 @@ def feed():
return [x.id for x in reddit.subreddit(subs).hot()]
except KeyboardInterrupt:
raise
except PRAWException as e:
logging.error('Problem hitting reddit API: {}'.format(str(e)))
return []
except PrawcoreException as e:
logging.error('Problem hitting reddit API: {}'.format(str(e)))
except BaseException as e:
logging.critical('Problem hitting reddit API: {}'.format(str(e)))
return []
def comment(i):
@@ -59,7 +56,9 @@ def comment(i):
def story(ref):
try:
r = reddit.submission(ref)
if not r: return False
if not r:
logging.info('Bad Reddit API response.')
return False
s = {}
s['author'] = r.author.name if r.author else '[Deleted]'
@@ -73,7 +72,8 @@ def story(ref):
s['comments'] = list(filter(bool, s['comments']))
s['num_comments'] = r.num_comments
if s['score'] < settings.REDDIT_SCORE_THRESHOLD and s['num_comments'] < settings.REDDIT_COMMENT_THRESHOLD:
if s['score'] < 25 and s['num_comments'] < 10:
logging.info('Score ({}) or num comments ({}) below threshold.'.format(s['score'], s['num_comments']))
return False
if r.selftext:
@@ -84,10 +84,10 @@ def story(ref):
except KeyboardInterrupt:
raise
except PRAWException as e:
logging.error('Problem hitting reddit API: {}'.format(str(e)))
logging.critical('Problem hitting reddit API: {}'.format(str(e)))
return False
except PrawcoreException as e:
logging.error('Problem hitting reddit API: {}'.format(str(e)))
logging.critical('Problem hitting reddit API: {}'.format(str(e)))
return False
# scratchpad so I can quickly develop the parser
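
The broadened handler above matches the pattern used throughout these feed modules: log and swallow anything the API raises so one bad request can't kill the feed greenlet, but always re-raise `KeyboardInterrupt`. In miniature:

```python
import logging

def safe_fetch(fetch):
    """Hypothetical wrapper showing the catch-all-but-Ctrl-C pattern."""
    try:
        return fetch()
    except KeyboardInterrupt:
        raise                           # never swallow Ctrl-C
    except BaseException as e:          # PRAW raises from several hierarchies
        logging.critical('Problem hitting reddit API: {}'.format(str(e)))
        return []
```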

apiserver/feeds/sitemap.py (deleted)

@@ -1,101 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.DEBUG)
if __name__ == '__main__':
import sys
sys.path.insert(0,'.')
from datetime import datetime
from bs4 import BeautifulSoup
import settings
from utils import clean
from misc.time import unix
from misc.api import xml
from misc.news import Base
def _get_sitemap_date(a):
if a.find('lastmod'):
return a.find('lastmod').text
if a.find('news:publication_date'):
return a.find('news:publication_date').text
if a.find('ns2:publication_date'):
return a.find('ns2:publication_date').text
return ''
def _filter_links(links, excludes=None):
too_old = datetime.now().timestamp() - settings.MAX_STORY_AGE
links = list(filter(None, [a if _get_sitemap_date(a) else None for a in links]))
links = list(filter(None, [a if unix(_get_sitemap_date(a)) > too_old else None for a in links]))
links.sort(key=lambda a: unix(_get_sitemap_date(a)), reverse=True)
links = [x.find('loc').text for x in links] or []
links = list(set(links))
if excludes:
links = list(filter(None, [None if any(e in link for e in excludes) else link for link in links]))
return links
def _get_sitemap(feed_url, excludes=None):
markup = xml(lambda x: feed_url)
if not markup: return []
soup = BeautifulSoup(markup, features='lxml')
links = []
feed_urls = []
if soup.find('sitemapindex'):
sitemap = soup.find('sitemapindex').findAll('sitemap')
feed_urls = list(filter(None, [a if a.find('loc') else None for a in sitemap]))
if soup.find('urlset'):
sitemap = soup.find('urlset').findAll('url')
links = list(filter(None, [a if a.find('loc') else None for a in sitemap]))
feed_urls = _filter_links(feed_urls, excludes)
links = _filter_links(links, excludes)
for url in feed_urls:
links += _get_sitemap(url, excludes)
return list(set(links))
class Sitemap(Base):
def __init__(self, config):
self.config = config
self.sitemap_url = config.get('url')
self.tz = config.get('tz')
def feed(self, excludes=None):
links = []
if isinstance(self.sitemap_url, str):
links += _get_sitemap(self.sitemap_url, excludes)
elif isinstance(self.sitemap_url, list):
for url in self.sitemap_url:
links += _get_sitemap(url, excludes)
links = list(set(links))
return [(self.get_id(link), link) for link in links]
# scratchpad so I can quickly develop the parser
if __name__ == '__main__':
print("Sitemap: The Spinoff")
site = Sitemap({ 'url': "https://thespinoff.co.nz/sitemap.xml" })
excludes = [
'thespinoff.co.nz/sitemap-misc.xml',
'thespinoff.co.nz/sitemap-authors.xml',
'thespinoff.co.nz/sitemap-tax-category.xml',
]
posts = site.feed(excludes)
print(posts[:5])
print(site.story(posts[0][0], posts[0][1]))
print("Sitemap: Newshub")
site = Sitemap({
'url': [
'https://www.newshub.co.nz/home/politics.gnewssitemap.xml',
'https://www.newshub.co.nz/home/new-zealand.gnewssitemap.xml',
'https://www.newshub.co.nz/home/world.gnewssitemap.xml',
'https://www.newshub.co.nz/home/money.gnewssitemap.xml',
],
})
posts = site.feed()
print(posts[:5])
print(site.story(posts[0][0], posts[0][1]))

apiserver/feeds/substack.py (deleted)

@@ -1,174 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.DEBUG)
if __name__ == '__main__':
import sys
sys.path.insert(0,'.')
import requests
from datetime import datetime
import settings
from misc.time import unix
from misc.metadata import get_icons
from misc.api import xml, json
from utils import clean
SUBSTACK_REFERER = 'https://substack.com'
SUBSTACK_API_TOP_POSTS = lambda x: "https://substack.com/api/v1/reader/top-posts"
def author_link(author_id, base_url):
return f"{base_url}/people/{author_id}"
def api_comments(post_id, base_url):
return f"{base_url}/api/v1/post/{post_id}/comments?all_comments=true&sort=best_first"
def api_stories(x, base_url):
return f"{base_url}/api/v1/archive?sort=new&search=&offset=0&limit=100"
def comment(i):
if 'body' not in i:
return False
c = {}
c['date'] = unix(i.get('date'))
c['author'] = i.get('name', '')
c['score'] = i.get('reactions').get('')
c['text'] = clean(i.get('body', '') or '')
c['comments'] = [comment(j) for j in i['children']]
c['comments'] = list(filter(bool, c['comments']))
return c
class Publication:
def __init__(self, domain):
self.BASE_DOMAIN = domain
def ref_prefix(self, ref):
return f"{self.BASE_DOMAIN}/#id:{ref}"
def strip_ref_prefix(self, ref):
return ref.replace(f"{self.BASE_DOMAIN}/#id:", '')
def feed(self):
too_old = datetime.now().timestamp() - settings.MAX_STORY_AGE
stories = json(lambda x: api_stories(x, self.BASE_DOMAIN), headers={'Referer': self.BASE_DOMAIN})
if not stories: return []
stories = list(filter(None, [i if i.get("audience") == "everyone" else None for i in stories]))
stories = list(filter(None, [i if unix(i.get('post_date')) > too_old else None for i in stories]))
stories.sort(key=lambda a: unix(a.get('post_date')), reverse=True)
return [self.ref_prefix(str(i.get("id"))) for i in stories or []]
def story(self, ref):
ref = self.strip_ref_prefix(ref)
stories = json(lambda x: api_stories(x, self.BASE_DOMAIN), headers={'Referer': self.BASE_DOMAIN})
if not stories: return False
stories = list(filter(None, [i if i.get("audience") == "everyone" else None for i in stories]))
stories = list(filter(None, [i if str(i.get('id')) == ref else None for i in stories]))
if len(stories) == 0:
return False
r = stories[0]
if not r:
return False
s = {}
s['author'] = ''
s['author_link'] = ''
s['date'] = unix(r.get('post_date'))
s['score'] = r.get('reactions').get('')
s['title'] = r.get('title', '')
s['link'] = r.get('canonical_url', '')
s['url'] = r.get('canonical_url', '')
comments = json(lambda x: api_comments(x, self.BASE_DOMAIN), r.get('id'), headers={'Referer': self.BASE_DOMAIN})
s['comments'] = [] if not comments else [comment(i) for i in comments.get('comments')]
s['comments'] = list(filter(bool, s['comments']))
s['num_comments'] = r.get('comment_count', 0)
authors = list(filter(None, [self._bylines(byline) for byline in r.get('publishedBylines')]))
if len(authors):
s['author'] = authors[0].get('name')
s['author_link'] = authors[0].get('link')
markup = xml(lambda x: s['link'])
if markup:
icons = get_icons(markup, url=s['link'])
if icons:
s['icon'] = icons[0]
return s
def _bylines(self, b):
if 'id' not in b:
return None
a = {}
a['name'] = b.get('name')
a['link'] = author_link(b.get('id'), self.BASE_DOMAIN)
return a
class Top:
def ref_prefix(self, base_url, ref):
return f"{base_url}/#id:{ref}"
def strip_ref_prefix(self, ref):
if '/#id:' in ref:
base_url, item = ref.split(f"/#id:")
return item
return ref
def feed(self):
too_old = datetime.now().timestamp() - settings.MAX_STORY_AGE
stories = json(SUBSTACK_API_TOP_POSTS, headers={'Referer': SUBSTACK_REFERER})
if not stories: return []
stories = list(filter(None, [i if i.get("audience") == "everyone" else None for i in stories]))
stories = list(filter(None, [i if unix(i.get('post_date')) > too_old else None for i in stories]))
stories.sort(key=lambda a: unix(a.get('post_date')), reverse=True)
stories = [self.ref_prefix(str(i.get("pub").get("base_url")), str(i.get("id"))) for i in stories]
return stories
def story(self, ref):
ref = self.strip_ref_prefix(ref)
stories = json(SUBSTACK_API_TOP_POSTS, headers={'Referer': SUBSTACK_REFERER})
if not stories: return False
stories = list(filter(None, [i if i.get("audience") == "everyone" else None for i in stories]))
stories = list(filter(None, [i if str(i.get('id')) == ref else None for i in stories]))
if len(stories) == 0:
return False
r = stories[0]
if not r:
return False
s = {}
pub = r.get('pub')
base_url = pub.get('base_url')
s['author'] = pub.get('author_name')
s['author_link'] = author_link(pub.get('author_id'), base_url)
s['date'] = unix(r.get('post_date'))
s['score'] = r.get('score')
s['title'] = r.get('title', '')
s['link'] = r.get('canonical_url', '')
s['url'] = r.get('canonical_url', '')
comments = json(lambda x: api_comments(x, base_url), r.get('id'), headers={'Referer': base_url})
s['comments'] = [] if not comments else [comment(i) for i in comments.get('comments')]
s['comments'] = list(filter(bool, s['comments']))
s['num_comments'] = r.get('comment_count', 0)
return s
top = Top()
# scratchpad so I can quickly develop the parser
if __name__ == '__main__':
top_posts = top.feed()
print(top.story(top_posts[0]))
webworm = Publication("https://www.webworm.co/")
posts = webworm.feed()
print(webworm.story(posts[0]))

apiserver/feeds/tildes.py

@@ -34,7 +34,7 @@ def api(route):
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem hitting tildes website: {}'.format(str(e)))
logging.critical('Problem hitting tildes website: {}'.format(str(e)))
return False
def feed():
@@ -71,11 +71,15 @@ def story(ref):
html = api(SITE_LINK(group_lookup[ref], ref))
else:
html = api(API_ITEM(ref))
if not html: return False
if not html:
logging.info('Bad Tildes API response.')
return False
soup = BeautifulSoup(html, features='html.parser')
a = soup.find('article', class_='topic-full')
if a is None: return False
if a is None:
logging.info('Tildes <article> element not found.')
return False
h = a.find('header')
lu = h.find('a', class_='link-user')
@@ -83,6 +87,7 @@ def story(ref):
error = a.find('div', class_='text-error')
if error:
if 'deleted' in error.string or 'removed' in error.string:
logging.info('Article was deleted or removed.')
return False
s = {}
@@ -102,7 +107,21 @@ def story(ref):
ch = a.find('header', class_='topic-comments-header')
s['num_comments'] = int(ch.h2.string.split(' ')[0]) if ch else 0
if s['score'] < 8 and s['num_comments'] < 6:
if s['group'].split('.')[0] not in [
'~arts',
'~comp',
'~creative',
'~design',
'~engineering',
'~finance',
'~science',
'~tech',
]:
logging.info('Group ({}) not in whitelist.'.format(s['group']))
return False
if s['score'] < 15 and s['num_comments'] < 10:
logging.info('Score ({}) or num comments ({}) below threshold.'.format(s['score'], s['num_comments']))
return False
td = a.find('div', class_='topic-full-text')
@@ -113,7 +132,7 @@ def story(ref):
# scratchpad so I can quickly develop the parser
if __name__ == '__main__':
#print(feed())
print(feed())
#normal = story('gxt')
#print(normal)
#no_comments = story('gxr')
@@ -122,8 +141,8 @@ if __name__ == '__main__':
#print(self_post)
#li_comment = story('gqx')
#print(li_comment)
broken = story('q4y')
print(broken)
#broken = story('q4y')
#print(broken)
# make sure there's no self-reference
#import copy
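
The new group gate keys on the top-level group only, so sub-groups inherit their parent's standing. A quick check of that behaviour; the dotted group name is invented for illustration:

```python
WHITELIST = ['~arts', '~comp', '~creative', '~design',
             '~engineering', '~finance', '~science', '~tech']

def allowed(group):
    return group.split('.')[0] in WHITELIST

print(allowed('~tech'))        # True
print(allowed('~comp.lang'))   # True -- hypothetical sub-group
print(allowed('~news'))        # False
```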

apiserver/misc/api.py (deleted)

@@ -1,40 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.DEBUG)
import requests
GOOGLEBOT_USER_AGENT = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
GOOGLEBOT_IP = '66.249.66.1'
TIMEOUT = 30
def xml(route, ref=None, headers=dict(), use_googlebot=True):
try:
if use_googlebot:
headers['User-Agent'] = GOOGLEBOT_USER_AGENT
headers['X-Forwarded-For'] = GOOGLEBOT_IP
r = requests.get(route(ref), headers=headers, timeout=TIMEOUT)
if r.status_code != 200:
raise Exception('Bad response code ' + str(r.status_code))
return r.text
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem hitting URL: {}'.format(str(e)))
return False
def json(route, ref=None, headers=dict(), use_googlebot=True):
try:
if use_googlebot:
headers['User-Agent'] = GOOGLEBOT_USER_AGENT
headers['X-Forwarded-For'] = GOOGLEBOT_IP
r = requests.get(route(ref), headers=headers, timeout=TIMEOUT)
if r.status_code != 200:
raise Exception('Bad response code ' + str(r.status_code))
return r.json()
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem hitting URL: {}'.format(str(e)))
return False

(deleted file: standalone get_icons helper)

@@ -1,14 +0,0 @@
from bs4 import BeautifulSoup
def get_icons(markup):
soup = BeautifulSoup(markup, features='html.parser')
icon32 = soup.find_all('link', rel="icon", href=True, sizes="32x32")
icon16 = soup.find_all('link', rel="icon", href=True, sizes="16x16")
favicon = soup.find_all('link', rel="shortcut icon", href=True)
others = soup.find_all('link', rel="icon", href=True)
icons = icon32 + icon16 + favicon + others
base_url = '/'.join(urlref.split('/')[:3])
icons = list(set([i.get('href') for i in icons]))
icons = [i if i.startswith('http') else base_url + i for i in icons]
return icons

apiserver/misc/metadata.py (deleted)

@@ -1,84 +0,0 @@
from bs4 import BeautifulSoup
def get_icons(markup, url):
soup = BeautifulSoup(markup, features='html.parser')
icon32 = soup.find_all('link', rel="icon", href=True, sizes="32x32")
icon16 = soup.find_all('link', rel="icon", href=True, sizes="16x16")
favicon = soup.find_all('link', rel="shortcut icon", href=True)
others = soup.find_all('link', rel="icon", href=True)
icons = icon32 + icon16 + favicon + others
base_url = '/'.join(url.split('/')[:3])
icons = list(set([i.get('href') for i in icons]))
icons = [i if i.startswith('http') else base_url + i for i in icons]
return icons
def parse_extruct(s, data):
rdfa_keys = {
'title': [
'http://ogp.me/ns#title',
'https://ogp.me/ns#title',
],
'date': [
'http://ogp.me/ns/article#modified_time',
'https://ogp.me/ns/article#modified_time',
'http://ogp.me/ns/article#published_time',
'https://ogp.me/ns/article#published_time',
]
}
for rdfa in data['rdfa']:
for key, props in rdfa.items():
for attribute, properties in rdfa_keys.items():
for prop in properties:
if prop in props:
for values in props[prop]:
s[attribute] = values['@value']
for og in data['opengraph']:
titles = list(filter(None, [value if 'og:title' in key else None for key, value in og['properties']]))
modified = list(filter(None, [value if 'article:modified_time' in key else None for key, value in og['properties']]))
published = list(filter(None, [value if 'article:published_time' in key else None for key, value in og['properties']]))
if len(modified):
s['date'] = modified[0]
if len(published):
s['date'] = published[0]
if len(titles):
s['title'] = titles[0]
for md in data['microdata']:
if md['type'] in ['https://schema.org/NewsArticle', 'http://schema.org/NewsArticle']:
props = md['properties']
s['title'] = props['headline']
if props['dateModified']:
s['date'] = props['dateModified']
if props['datePublished']:
s['date'] = props['datePublished']
if 'author' in props and props['author']:
if 'properties' in props['author']:
s['author'] = props['author']['properties']['name']
elif isinstance(props['author'], list):
s['author'] = props['author'][0]['properties']['name']
for ld in data['json-ld']:
if '@type' in ld and ld['@type'] in ['Article', 'NewsArticle']:
s['title'] = ld['headline']
if ld['dateModified']:
s['date'] = ld['dateModified']
if ld['datePublished']:
s['date'] = ld['datePublished']
if 'author' in ld and ld['author']:
if 'name' in ld['author']:
s['author'] = ld['author']['name']
elif isinstance(ld['author'], list):
s['author'] = ld['author'][0]['name']
if '@graph' in ld:
for gld in ld['@graph']:
if '@type' in gld and gld['@type'] in ['Article', 'NewsArticle']:
s['title'] = gld['headline']
if gld['dateModified']:
s['date'] = gld['dateModified']
if gld['datePublished']:
s['date'] = gld['datePublished']
return s

apiserver/misc/news.py (deleted)

@@ -1,94 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.DEBUG)
import re
import requests
from bs4 import BeautifulSoup
from scrapers.declutter import declutter, headless
import extruct
import settings
from utils import clean
from misc.metadata import parse_extruct, get_icons
from misc.time import unix
from misc.api import xml
import misc.stuff as stuff
def clean_comments(comment):
comment['text'] = clean(comment['text'])
comment['comments'] = [clean_comments(c) for c in comment['comments']]
return comment
def comment_count(i):
alive = 1 if i['author'] else 0
return sum([comment_count(c) for c in i['comments']]) + alive
class Base:
def __init__(self, config):
self.config = config
self.url = config.get('url')
self.tz = config.get('tz')
def get_id(self, link):
patterns = self.config.get('patterns')
if not patterns:
return link
patterns = [re.compile(p) for p in patterns]
patterns = list(filter(None, [p.match(link) for p in patterns]))
patterns = list(set([':'.join(p.groups()) for p in patterns]))
if not patterns:
return link
return patterns[0]
def feed(self, excludes=None):
return []
def story(self, ref, urlref):
if urlref is None:
return False
markup = xml(lambda x: urlref)
if not markup:
return False
s = {}
s['author_link'] = ''
s['score'] = 0
s['comments'] = []
s['num_comments'] = 0
s['link'] = urlref
s['url'] = urlref
s['date'] = 0
s['title'] = ''
icons = get_icons(markup, url=urlref)
if icons:
s['icon'] = icons[0]
data = extruct.extract(markup)
s = parse_extruct(s, data)
if s['title']:
s['title'] = clean(s['title'])
if s['date']:
s['date'] = unix(s['date'], tz=self.tz)
if 'disqus' in markup:
try:
s['comments'] = declutter.get_comments(urlref)
s['comments'] = [clean_comments(c) for c in s['comments']]
s['comments'] = list(filter(bool, s['comments']))
s['num_comments'] = sum(comment_count(c) for c in s['comments'])
except KeyboardInterrupt:
raise
except:
pass
if urlref.startswith('https://www.stuff.co.nz'):
s['comments'] = stuff.get_comments(urlref)
s['comments'] = list(filter(bool, s['comments']))
s['num_comments'] = len(s['comments'])
if not s['date']:
return False
return s
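For clarity on the counting above: comment_count treats a comment as alive when it has an author, and a deleted comment still contributes its children. A made-up thread:
thread = {
    'author': 'alice',
    'comments': [
        {'author': 'bob', 'comments': []},
        {'author': '', 'comments': [  # deleted, but its reply still counts
            {'author': 'carol', 'comments': []},
        ]},
    ],
}
print(comment_count(thread))  # 3 (alice, bob, carol)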

View File

@@ -1,65 +0,0 @@
import re
import bs4
from bs4 import BeautifulSoup
if __name__ == '__main__':
import sys
sys.path.insert(0,'.')
from misc.time import unix
from misc.api import xml
from utils import clean
def _soup_get_text(soup):
if not soup: return None
if soup.text: return soup.text
s = soup.find(text=lambda tag: isinstance(tag, bs4.CData))
if s and s.string: return s.string.strip()
return None
def _parse_comment(soup):
c = {
'author': '',
'authorLink': '',
'score': 0,
'date': 0,
'text': '',
'comments': [],
}
if soup.find('link'):
title = _soup_get_text(soup.find('link'))
if title and 'By:' in title:
c['author'] = title.strip('By:').strip()
if soup.find('dc:creator'):
c['author'] = _soup_get_text(soup.find('dc:creator'))
if soup.find('link'):
c['authorLink'] = _soup_get_text(soup.find('link'))
if soup.find('description'):
c['text'] = clean(_soup_get_text(soup.find('description')))
if soup.find('pubdate'):
c['date'] = unix(soup.find('pubdate').text)
elif soup.find('pubDate'):
c['date'] = unix(soup.find('pubDate').text)
return c
def get_comments(url):
regex = r"https:\/\/www\.stuff\.co\.nz\/(.*\/\d+)/[^\/]+"
p = re.compile(regex).match(url)
path = p.groups()[0]
comment_url = f'https://comments.us1.gigya.com/comments/rss/6201101/Stuff/stuff/{path}'
markup = xml(lambda x: comment_url)
if not markup: return []
soup = BeautifulSoup(markup, features='html.parser')
comments = soup.find_all('item')
if not comments: return []
comments = [_parse_comment(c) for c in comments]
return comments
# scratchpad so I can quickly develop the parser
if __name__ == '__main__':
comments = get_comments('https://www.stuff.co.nz/life-style/homed/houses/123418468/dear-jacinda-we-need-to-talk-about-housing')
print(len(comments))
print(comments[:5])

View File

@@ -1,24 +0,0 @@
import pytz
from datetime import timedelta
import dateutil.parser
TZINFOS = {
'NZDT': pytz.timezone('Pacific/Auckland'),
'NZST': pytz.timezone('Pacific/Auckland'),
}
TZINFOS = {
'NZDT': 13*60*60,
'NZST': 12*60*60,
}
def unix(date_str, tz=None, tzinfos=TZINFOS):
try:
dt = dateutil.parser.parse(date_str, tzinfos=tzinfos)
if tz:
dt = pytz.timezone(tz).localize(dt)
return int(dt.timestamp())
except:
pass
return 0
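A quick sketch of the fixed-offset tzinfos in action (dates are arbitrary):
print(unix('2020-11-02 08:00:00 NZDT'))   # resolved with the +13h offset
print(unix('2020-11-02T08:00:00+00:00'))  # explicit offset; tzinfos unused
print(unix('not a date'))                 # 0 -- parse failures are swallowed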

View File

@@ -4,21 +4,19 @@ certifi==2020.6.20
chardet==3.0.4
click==7.1.2
commonmark==0.9.1
extruct==0.10.0
Flask==1.1.2
Flask-Cors==3.0.8
gevent==20.6.2
greenlet==0.4.16
humanize==4.10.0
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
lxml==4.6.1
MarkupSafe==1.1.1
packaging==20.4
praw==6.4.0
prawcore==1.4.0
pyparsing==2.4.7
pytz==2020.4
requests==2.24.0
six==1.15.0
soupsieve==2.0.1
@@ -30,4 +28,3 @@ websocket-client==0.57.0
Werkzeug==1.0.1
zope.event==4.4
zope.interface==5.1.0
python-dateutil==2.8.1

View File

@@ -1,64 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.DEBUG)
import requests
from settings import HEADLESS_READER_PORT, SIMPLE_READER_PORT
class Simple:
def __init__(self, host, name, internal=True, timeout=90):
self.host = host
self.name = name
self.internal = internal
self.timeout = timeout
self.variant = 'simple'
def as_readable(self, details):
if not self.internal:
details['scraper_link'] = self.host
return details
def get_html(self, url):
details = self.get_details(url)
if not details:
return ''
return details['content']
def get_details(self, url):
logging.info(f"{self.name} Scraper: {url}")
details = self._json(f"{self.host}/{self.variant}/details", dict(url=url), "article")
if not details: return None
return self.as_readable(details)
def _json(self, url, data, adjective):
try:
r = requests.post(url, data=data, timeout=self.timeout)
if r.status_code != 200:
raise Exception('Bad response code ' + str(r.status_code))
return r.json()
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('{}: Problem scraping {}: {}'.format(self.name, adjective, str(e)))
return None
class Headless(Simple):
def __init__(self, host, name, internal=True, timeout=90):
self.host = host
self.name = name
self.internal = internal
self.timeout = timeout
self.variant = 'headless'
def get_comments(self, url):
logging.info(f"{self.name} Scraper: {url}")
comments = self._json(f"{self.host}/{self.variant}/comments", dict(url=url), "comments")
if not comments: return None
return comments
declutter = Headless('https://declutter.1j.nz', 'Declutter scraper', internal=False)
headless = Headless(f"http://127.0.0.1:{HEADLESS_READER_PORT or 33843}", 'Headless scraper')
simple = Simple(f"http://127.0.0.1:{SIMPLE_READER_PORT or 33843}", 'Simple scraper')
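A sketch of how these wrappers are driven (the URL is made up; the returned dict's fields depend on what the reader server emits, 'content' being the one relied on above):
html = simple.get_html('https://example.com/article')        # extracted HTML, or '' on failure
details = headless.get_details('https://example.com/article')
if details:
    print(len(details['content']))
comments = declutter.get_comments('https://example.com/article')  # list of threads, or None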

View File

@@ -1,64 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.DEBUG)
import requests
OUTLINE_REFERER = 'https://outline.com/'
OUTLINE_API = 'https://api.outline.com/v3/parse_article'
TIMEOUT = 20
def get_html(url):
details = get_details(url)
if not details:
return ''
return details['content']
def get_details(url):
outline = _get_outline(url)
if not outline:
return None
return as_readable(outline)
def as_readable(details):
readable = {
'title': details['title'],
'byline': details['author'],
'content': details['html'],
'excerpt': _excerpt(details),
'siteName': details['site_name'],
'url': details['article_url'],
'publisher': details['site_name'],
'scraper_link': 'https://outline.com/' + details['short_code'],
'meta': {}
}
readable['meta'].update(details['meta'])
return readable
def _get_outline(url):
try:
logging.info(f"Outline Scraper: {url}")
params = {'source_url': url}
headers = {'Referer': OUTLINE_REFERER}
r = requests.get(OUTLINE_API, params=params, headers=headers, timeout=TIMEOUT)
if r.status_code == 429:
logging.info('Rate limited by outline, skipping...')
return None
if r.status_code != 200:
raise Exception('Bad response code ' + str(r.status_code))
data = r.json()['data']
if 'URL is not supported by Outline' in data['html']:
raise Exception('URL not supported by Outline')
return data
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem outlining article: {}'.format(str(e)))
return None
def _excerpt(details):
meta = details.get('meta')
if not meta: return ''
if meta.get('description'): return meta.get('description', '')
if not meta.get('og'): return ''
return meta.get('og').get('og:description', '')
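_excerpt walks a small fallback chain over the parsed metadata; illustrated with made-up inputs:
print(_excerpt({'meta': {'description': 'from meta description'}}))      # from meta description
print(_excerpt({'meta': {'og': {'og:description': 'from OpenGraph'}}}))  # from OpenGraph
print(_excerpt({'meta': {}}))                                            # '' -- nothing usable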

View File

@@ -1,6 +1,8 @@
import database
import search
import sys
import settings
import logging
import json
import requests
@@ -21,7 +23,7 @@ def database_del_story(sid):
def search_del_story(sid):
try:
r = requests.delete(search.MEILI_URL + 'indexes/qotnews/documents/'+sid, timeout=2)
r = requests.delete(settings.MEILI_URL + 'indexes/qotnews/documents/'+sid, timeout=2)
if r.status_code != 202:
raise Exception('Bad response code ' + str(r.status_code))
return r.json()

View File

@@ -0,0 +1,58 @@
import time
import json
import logging
import feed
import database
import search
database.init()
def fix_gzip_bug(story_list):
FIX_THRESHOLD = 150
count = 1
for sid in story_list:
try:
sid = sid[0]
story = database.get_story(sid)
full_json = json.loads(story.full_json)
meta_json = json.loads(story.meta_json)
text = full_json.get('text', '')
count = text.count('\ufffd')
if not count: continue
ratio = count / len(text) * 1000
print('Bad story:', sid, 'Num ?:', count, 'Ratio:', ratio)
if ratio < FIX_THRESHOLD: continue
print('Attempting to fix...')
valid = feed.update_story(meta_json, is_manual=True)
if valid:
database.put_story(meta_json)
search.put_story(meta_json)
print('Success')
else:
print('Story was not valid')
time.sleep(3)
except KeyboardInterrupt:
raise
except BaseException as e:
logging.exception(e)
breakpoint()
if __name__ == '__main__':
num_stories = database.count_stories()
print('Fix {} stories?'.format(num_stories))
print('Press ENTER to continue, ctrl-c to cancel')
input()
story_list = database.get_story_list()
fix_gzip_bug(story_list)
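The heuristic above counts U+FFFD replacement characters per thousand characters of text and only refetches dense cases; in isolation (sample text made up):
text = 'mostly fine text' + '\ufffd' * 4
ratio = text.count('\ufffd') / len(text) * 1000
print(ratio, ratio >= 150)  # 200.0 True -- past FIX_THRESHOLD, so worth a refetch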

View File

@@ -0,0 +1,62 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO)
import database
from sqlalchemy import select
import search
import sys
import time
import json
import requests
database.init()
search.init()
BATCH_SIZE = 5000
def put_stories(stories):
return search.meili_api(requests.post, 'indexes/qotnews/documents', stories)
def get_update(update_id):
return search.meili_api(requests.get, 'tasks/{}'.format(update_id))
if __name__ == '__main__':
num_stories = database.count_stories()
print('Reindex {} stories?'.format(num_stories))
print('Press ENTER to continue, ctrl-c to cancel')
input()
story_list = database.get_story_list()
count = 1
while len(story_list):
stories = []
for _ in range(BATCH_SIZE):
try:
sid = story_list.pop()
except IndexError:
break
story = database.get_story(sid)
print('Indexing {}/{} id: {} title: {}'.format(count, num_stories, sid[0], story.title))
story_obj = json.loads(story.meta_json)
stories.append(story_obj)
count += 1
res = put_stories(stories)
update_id = res['uid']
print('Waiting for processing', end='')
while get_update(update_id)['status'] != 'succeeded':
time.sleep(0.5)
print('.', end='', flush=True)
print()
print('Done.')

View File

@@ -0,0 +1,23 @@
import time
import requests
def test_search_api():
num_tests = 100
total_time = 0
for i in range(num_tests):
start = time.time()
res = requests.get('http://127.0.0.1:33842/api/search?q=iphone')
res.raise_for_status()
duration = time.time() - start
total_time += duration
avg_time = total_time / num_tests
print('Average search time:', avg_time)
if __name__ == '__main__':
test_search_api()

View File

@@ -4,83 +4,62 @@ logging.basicConfig(
level=logging.DEBUG)
import requests
import settings
MEILI_URL = 'http://127.0.0.1:7700/'
SEARCH_ENABLED = bool(settings.MEILI_URL)
def meili_api(method, route, json=None, params=None, parse_json=True):
try:
r = method(settings.MEILI_URL + route, json=json, params=params, timeout=4)
if r.status_code > 299:
raise Exception('Bad response code ' + str(r.status_code))
if parse_json:
return r.json()
else:
r.encoding = 'utf-8'
return r.text
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem with MeiliSearch api route: %s: %s', route, str(e))
return False
def create_index():
try:
json = dict(name='qotnews', uid='qotnews')
r = requests.post(MEILI_URL + 'indexes', json=json, timeout=2)
if r.status_code != 201:
raise Exception('Bad response code ' + str(r.status_code))
return r.json()
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem creating MeiliSearch index: {}'.format(str(e)))
return False
json = dict(uid='qotnews', primaryKey='id')
return meili_api(requests.post, 'indexes', json=json)
def update_rankings():
try:
json = ['typo', 'words', 'proximity', 'attribute', 'desc(date)', 'wordsPosition', 'exactness']
r = requests.post(MEILI_URL + 'indexes/qotnews/settings/ranking-rules', json=json, timeout=2)
if r.status_code != 202:
raise Exception('Bad response code ' + str(r.status_code))
return r.json()
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem setting MeiliSearch ranking rules: {}'.format(str(e)))
return False
json = ['typo', 'words', 'proximity', 'date:desc', 'exactness']
return meili_api(requests.post, 'indexes/qotnews/settings/ranking-rules', json=json)
def update_attributes():
try:
json = ['title', 'url', 'author', 'link', 'id', 'source']
r = requests.post(MEILI_URL + 'indexes/qotnews/settings/searchable-attributes', json=json, timeout=2)
if r.status_code != 202:
raise Exception('Bad response code ' + str(r.status_code))
requests.delete(MEILI_URL + 'indexes/qotnews/settings/displayed-attributes', timeout=2)
return r.json()
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem setting MeiliSearch searchable attributes: {}'.format(str(e)))
return False
json = ['title', 'url', 'author']
r = meili_api(requests.post, 'indexes/qotnews/settings/searchable-attributes', json=json)
json = ['id', 'ref', 'source', 'author', 'author_link', 'score', 'date', 'title', 'link', 'url', 'num_comments']
r = meili_api(requests.post, 'indexes/qotnews/settings/displayed-attributes', json=json)
return r
def init():
create_index()
if not SEARCH_ENABLED:
logging.info('Search is not enabled, skipping init.')
return
print(create_index())
update_rankings()
update_attributes()
def put_story(story):
story = story.copy()
story.pop('text', None)
story.pop('comments', None)
try:
r = requests.post(MEILI_URL + 'indexes/qotnews/documents', json=[story], timeout=2)
if r.status_code != 202:
raise Exception('Bad response code ' + str(r.status_code))
return r.json()
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem putting MeiliSearch story: {}'.format(str(e)))
return False
if not SEARCH_ENABLED: return
return meili_api(requests.post, 'indexes/qotnews/documents', [story])
def search(q, skip=0, limit=250):
try:
params = dict(q=q, offset=skip, limit=limit)
r = requests.get(MEILI_URL + 'indexes/qotnews/search', params=params, timeout=2)
if r.status_code != 200:
raise Exception('Bad response code ' + str(r.status_code))
return r.json()['hits']
except KeyboardInterrupt:
raise
except BaseException as e:
logging.error('Problem searching MeiliSearch: {}'.format(str(e)))
return False
def search(q):
if not SEARCH_ENABLED: return []
params = dict(q=q, limit=settings.FEED_LENGTH)
r = meili_api(requests.get, 'indexes/qotnews/search', params=params, parse_json=False)
return r
if __name__ == '__main__':
create_index()
init()
print(search('the'))
print(update_rankings())
print(search('facebook'))
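Since meili_api centralizes timeouts and error handling, one-off calls stay short; a sketch (these are standard MeiliSearch routes):
import requests
print(meili_api(requests.get, 'health'))  # e.g. {'status': 'available'}, or False if the server is down
hits = meili_api(requests.get, 'indexes/qotnews/search',
                 params=dict(q='linux', limit=5), parse_json=False)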

View File

@@ -1,7 +1,8 @@
import logging
import os, logging
DEBUG = os.environ.get('DEBUG')
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO)
level=logging.DEBUG if DEBUG else logging.INFO)
import gevent
from gevent import monkey
@@ -13,58 +14,146 @@ import json
import threading
import traceback
import time
from datetime import datetime, timedelta
import datetime
import humanize
import urllib.request
from urllib.parse import urlparse, parse_qs
import settings
import database
import search
import feed
from utils import gen_rand_id
from utils import gen_rand_id, NUM_ID_CHARS
from flask import abort, Flask, request, render_template, stream_with_context, Response
from werkzeug.exceptions import NotFound
from flask_cors import CORS
smallweb_set = set()
def load_smallweb_list():
EXCLUDED = [
'github.com',
]
global smallweb_set
try:
url = 'https://raw.githubusercontent.com/kagisearch/smallweb/refs/heads/main/smallweb.txt'
with urllib.request.urlopen(url, timeout=10) as response:
urls = response.read().decode('utf-8').splitlines()
hosts = {urlparse(u).hostname for u in urls if u and urlparse(u).hostname}
smallweb_set = {h.replace('www.', '') for h in hosts if h not in EXCLUDED}
logging.info('Loaded {} smallweb domains.'.format(len(smallweb_set)))
except Exception as e:
logging.error('Failed to load smallweb list: {}'.format(e))
load_smallweb_list()
database.init()
search.init()
news_index = 0
ref_list = []
current_item = {}
def new_id():
nid = gen_rand_id()
while database.get_story(nid):
nid = gen_rand_id()
return nid
build_folder = '../webclient/build'
def fromnow(ts):
return humanize.naturaltime(datetime.datetime.fromtimestamp(ts))
build_folder = './build'
flask_app = Flask(__name__, template_folder=build_folder, static_folder=build_folder, static_url_path='')
flask_app.jinja_env.filters['fromnow'] = fromnow
cors = CORS(flask_app)
@flask_app.route('/api')
def api():
skip = request.args.get('skip', 0)
limit = request.args.get('limit', 20)
stories = database.get_stories(skip=skip, limit=limit)
res = Response(json.dumps({"stories": stories}))
limit = request.args.get('limit', settings.FEED_LENGTH)
if request.args.get('smallweb') == 'true' and smallweb_set:
limit = int(limit)
skip = int(skip)
filtered_stories = []
current_skip = skip
while len(filtered_stories) < limit:
stories_batch = database.get_stories(limit, current_skip)
if not stories_batch:
break
for story_str in stories_batch:
story = json.loads(story_str)
story_url = story.get('url') or story.get('link') or ''
if not story_url:
continue
hostname = urlparse(story_url).hostname
if hostname:
hostname = hostname.replace('www.', '')
if hostname in smallweb_set:
filtered_stories.append(story_str)
if len(filtered_stories) == limit:
break
if len(filtered_stories) == limit:
break
current_skip += limit
stories = filtered_stories
else:
stories = database.get_stories(limit, skip)
# hacky nested json
res = Response('{"stories":[' + ','.join(stories) + ']}')
res.headers['content-type'] = 'application/json'
return res
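The smallweb check itself reduces to normalizing hostnames on both sides; in isolation (set contents made up):
from urllib.parse import urlparse
smallweb = {'example.org'}
host = (urlparse('https://www.example.org/post/1').hostname or '').replace('www.', '')
print(host in smallweb)  # True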
@flask_app.route('/api/stats', strict_slashes=False)
def apistats():
stats = {
'news_index': news_index,
'ref_list': ref_list,
'len_ref_list': len(ref_list),
'current_item': current_item,
'total_stories': database.count_stories(),
'id_space': 26**NUM_ID_CHARS,
}
return stats
@flask_app.route('/api/search', strict_slashes=False)
def apisearch():
q = request.args.get('q', '')
skip = request.args.get('skip', 0)
limit = request.args.get('limit', 20)
if len(q) >= 3:
results = search.search(q, skip=skip, limit=limit)
results = search.search(q)
else:
results = []
return dict(results=results)
results = '[]'
res = Response(results)
res.headers['content-type'] = 'application/json'
return res
@flask_app.route('/api/submit', methods=['POST'], strict_slashes=False)
def submit():
try:
url = request.form['url']
for prefix in ['http://', 'https://']:
if url.lower().startswith(prefix):
break
else: # for
url = 'http://' + url
nid = new_id()
logging.info('Manual submission: ' + url)
parse = urlparse(url)
if 'news.ycombinator.com' in parse.hostname:
source = 'hackernews'
@@ -78,13 +167,18 @@ def submit():
elif 'reddit.com' in parse.hostname and 'comments' in url:
source = 'reddit'
ref = parse.path.split('/')[4]
elif settings.HOSTNAME in parse.hostname:
elif 'news.t0.vc' in parse.hostname:
raise Exception('Invalid article')
else:
source = 'manual'
ref = url
existing = database.get_story_by_ref(ref)
if existing and DEBUG:
ref = ref + '#' + str(time.time())
existing = False
if existing:
return {'nid': existing.sid}
else:
@@ -93,25 +187,28 @@ def submit():
if valid:
database.put_story(story)
search.put_story(story)
if DEBUG:
logging.info('Adding manual ref: {}, id: {}, source: {}'.format(ref, nid, source))
database.put_ref(ref, nid, source)
return {'nid': nid}
else:
raise Exception('Invalid article')
except BaseException as e:
logging.error('Problem with article submission: {} - {}'.format(e.__class__.__name__, str(e)))
except Exception as e:
msg = 'Problem with article submission: {} - {}'.format(e.__class__.__name__, str(e))
logging.error(msg)
print(traceback.format_exc())
abort(400)
return {'error': msg.split('\n')[0]}, 400
@flask_app.route('/api/<sid>')
def story(sid):
story = database.get_story(sid)
if story:
related = []
if story.meta['url']:
related = database.get_stories_by_url(story.meta['url'])
related = [r.meta for r in related]
res = Response(json.dumps({"story": story.data, "related": related}))
# hacky nested json
res = Response('{"story":' + story.full_json + '}')
res.headers['content-type'] = 'application/json'
return res
else:
@@ -120,10 +217,19 @@ def story(sid):
@flask_app.route('/')
@flask_app.route('/search')
def index():
stories_json = database.get_stories(settings.FEED_LENGTH, 0)
stories = [json.loads(s) for s in stories_json]
for s in stories:
url = urlparse(s.get('url') or s.get('link') or '').hostname or ''
s['hostname'] = url.replace('www.', '')
return render_template('index.html',
title='Feed',
url=settings.HOSTNAME,
description='Hacker News, Reddit, Lobsters, and Tildes articles rendered in reader mode')
title='QotNews',
url='news.t0.vc',
description='Hacker News, Reddit, Lobsters, and Tildes articles rendered in reader mode',
robots='index',
stories=stories,
)
@flask_app.route('/<sid>', strict_slashes=False)
@flask_app.route('/<sid>/c', strict_slashes=False)
@@ -133,9 +239,9 @@ def static_story(sid):
except NotFound:
pass
story = database.get_story(sid)
if not story: return abort(404)
story = story.data
story_obj = database.get_story(sid)
if not story_obj: return abort(404)
story = json.loads(story_obj.full_json)
score = story['score']
num_comments = story['num_comments']
@@ -144,104 +250,78 @@ def static_story(sid):
score, 's' if score != 1 else '',
num_comments, 's' if num_comments != 1 else '',
source)
url = urlparse(story['url']).hostname or urlparse(story['link']).hostname or ''
url = urlparse(story.get('url') or story.get('link') or '').hostname or ''
url = url.replace('www.', '')
return render_template('index.html',
title=story['title'],
url=url,
description=description)
http_server = WSGIServer(('', settings.API_PORT or 33842), flask_app)
def _add_new_refs():
added = []
for ref, source, urlref in feed.get_list():
if database.get_story_by_ref(ref):
continue
try:
nid = new_id()
database.put_ref(ref, nid, source, urlref)
logging.info('Added ref ' + ref)
added.append(ref)
except database.IntegrityError:
#logging.info('Unable to add ref ' + ref)
continue
return added
def _update_current_story(item):
try:
story = database.get_story(item['sid']).data
except AttributeError:
story = dict(id=item['sid'], ref=item['ref'], source=item['source'])
logging.info('Updating story: {}'.format(str(story['ref'])))
valid = feed.update_story(story, urlref=item['urlref'])
if valid:
try:
database.put_story(story)
search.put_story(story)
except database.IntegrityError:
logging.info('Unable to add story with ref ' + item['ref'])
else:
database.del_ref(item['ref'])
logging.info('Removed ref {}'.format(item['ref']))
title=story['title'] + ' | QotNews',
url=url,
description=description,
robots='noindex',
story=story,
show_comments=request.path.endswith('/c'),
)
http_server = WSGIServer(('', 33842), flask_app)
def feed_thread():
new_refs = []
update_refs = []
last_check = datetime.now() - timedelta(minutes=20)
global news_index, ref_list, current_item
try:
while True:
# onboard new stories
time_since_check = datetime.now() - last_check
if not len(new_refs) and time_since_check > timedelta(minutes=15):
added = _add_new_refs()
ref_list = database.get_reflist()
new_refs = list(filter(None, [i if i['ref'] in added else None for i in ref_list]))
update_queue = list(filter(None, [i if i['ref'] not in added else None for i in ref_list]))
current_queue_refs = [i['ref'] for i in update_refs]
update_queue = list(filter(None, [i if i['ref'] not in current_queue_refs else None for i in update_queue]))
update_refs += update_queue
logging.info('Added {} new refs'.format(len(added)))
logging.info('Have {} refs in update queue'.format(len(current_queue_refs)))
logging.info('Fetched {} refs for update queue'.format(len(update_queue)))
last_check = datetime.now()
gevent.sleep(1)
# update new stories
if len(new_refs):
item = new_refs.pop(0)
logging.info('Processing new story ref {}'.format(item['ref']))
_update_current_story(item)
gevent.sleep(1)
if news_index == 0:
for ref, source in feed.list():
if database.get_story_by_ref(ref):
continue
try:
nid = new_id()
logging.info('Adding ref: {}, id: {}, source: {}'.format(ref, nid, source))
database.put_ref(ref, nid, source)
except database.IntegrityError:
logging.info('Already have ID / ref, skipping.')
continue
ref_list = database.get_reflist(settings.FEED_LENGTH)
# update current stories
if len(update_refs):
item = update_refs.pop(0)
logging.info('Processing existing story ref {}'.format(item['ref']))
_update_current_story(item)
gevent.sleep(1)
if news_index < len(ref_list):
current_item = ref_list[news_index]
gevent.sleep(1)
try:
story_json = database.get_story(current_item['sid']).full_json
story = json.loads(story_json)
except AttributeError:
story = dict(id=current_item['sid'], ref=current_item['ref'], source=current_item['source'])
logging.info('Updating {} story: {}, index: {}'.format(story['source'], story['ref'], news_index))
valid = feed.update_story(story)
if valid:
database.put_story(story)
search.put_story(story)
else:
database.del_ref(current_item['ref'])
logging.info('Removed ref {}'.format(current_item['ref']))
else:
logging.info('Skipping index: ' + str(news_index))
gevent.sleep(6)
news_index += 1
if news_index == settings.FEED_LENGTH: news_index = 0
except KeyboardInterrupt:
logging.info('Ending feed thread...')
except ValueError as e:
logging.error('feed_thread error: {} {}'.format(e.__class__.__name__, e))
logging.critical('feed_thread error: {} {}'.format(e.__class__.__name__, e))
http_server.stop()
http_server.stop()
gevent.kill(feed_thread_ref)
logging.info('Starting Feed thread...')
gevent.spawn(feed_thread)
print('Starting Feed thread...')
feed_thread_ref = gevent.spawn(feed_thread)
print('Starting HTTP thread...')
logging.info('Starting HTTP thread...')
try:
http_server.serve_forever()
except KeyboardInterrupt:
gevent.kill(feed_thread_ref)
logging.info('Exiting...')

View File

@@ -1,61 +1,23 @@
# QotNews settings
# edit this file and save it as settings.py
HOSTNAME = 'news.t0.vc'
MAX_STORY_AGE = 3*24*60*60
SCRAPERS = ['headless', 'outline', 'declutter', 'simple']
API_PORT = 33842
SIMPLE_READER_PORT = 33843
HEADLESS_READER_PORT = 33843
# Feed Lengths
# Number of top items from each site to pull
# set to 0 to disable that site
FEED_LENGTH = 75
NUM_HACKERNEWS = 15
NUM_LOBSTERS = 10
NUM_REDDIT = 10
NUM_REDDIT = 15
NUM_TILDES = 5
NUM_SUBSTACK = 10
SITEMAP = {}
# SITEMAP['nzherald'] = {
# 'url': "https://www.nzherald.co.nz/arcio/news-sitemap/",
# 'count': 20,
# 'patterns': [
# r'^https:\/\/www\.(nzherald\.co\.nz)\/.*\/([^/]+)\/?$',
# ],
# 'excludes': [
# 'driven.co.nz',
# 'oneroof.co.nz',
# 'nzherald.co.nz/sponsored-stories',
# 'nzherald.co.nz/entertainment/',
# 'nzherald.co.nz/lifestyle/',
# 'nzherald.co.nz/travel/',
# 'nzherald.co.nz/sport/',
# 'nzherald.co.nz/promotions/',
# 'nzherald.co.nzhttp',
# 'herald-afternoon-quiz',
# 'herald-morning-quiz'
# ],
# }
# Meilisearch server URL
# Leave blank if not using search
#MEILI_URL = 'http://127.0.0.1:7700/'
MEILI_URL = ''
SUBSTACK = {}
# SUBSTACK['webworm'] = { 'url': "https://www.webworm.co", 'count': 10},
# SUBSTACK['the bulletin'] = { 'url': "https://thespinoff.substack.com", 'count': 10},
CATEGORY = {}
# CATEGORY['radionz'] = {
# 'url': "https://www.rnz.co.nz/news/",
# 'count': 20,
# 'patterns': [
# r'https:\/\/www\.(rnz\.co\.nz)\/news\/[^\/]+\/(\d+)\/[^\/]+\/?'
# ],
# 'excludes': [
# 'rnz.co.nz/news/sport',
# 'rnz.co.nz/weather',
# ],
# }
# Readerserver URL
# Leave blank if not using, but that defeats the whole point
READER_URL = 'http://127.0.0.1:33843/'
# Reddit account info
# leave blank if not using Reddit
@@ -63,10 +25,6 @@ REDDIT_CLIENT_ID = ''
REDDIT_CLIENT_SECRET = ''
REDDIT_USER_AGENT = ''
# Minimum points or number of comments before including a thread:
REDDIT_COMMENT_THRESHOLD = 10
REDDIT_SCORE_THRESHOLD = 25
SUBREDDITS = [
'Economics',
'AcademicPhilosophy',
@@ -77,9 +35,7 @@ SUBREDDITS = [
'PhilosophyofScience',
'StateOfTheUnion',
'TheAgora',
'TrueFilm',
'TrueReddit',
'UniversityofReddit',
'culturalstudies',
'hardscience',
'indepthsports',
@@ -89,6 +45,6 @@ SUBREDDITS = [
'resilientcommunities',
'worldevents',
'StallmanWasRight',
'DarkFuturology',
'EverythingScience',
'longevity',
]

View File

@@ -1,48 +0,0 @@
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO)
import sys
import json
import requests
import database
import feed
import search
database.init()
search.init()
def _update_current_story(story, item):
logging.info('Updating story: {}'.format(str(story['ref'])))
if story.get('url', ''):
story['text'] = ''
valid = feed.update_story(story, urlref=item['urlref'])
if valid:
database.put_story(story)
search.put_story(story)
else:
database.del_ref(item['ref'])
logging.info('Removed ref {}'.format(item['ref']))
if __name__ == '__main__':
if len(sys.argv) == 2:
sid = sys.argv[1]
else:
print('Usage: python delete-story.py [story id]')
exit(1)
item = database.get_ref_by_sid(sid)
if item:
story = database.get_story(item['sid']).data
if story:
print('Updating story:')
_update_current_story(story, item)
else:
print('Story not found. Exiting.')
else:
print('Story not found. Exiting.')

View File

@@ -8,8 +8,17 @@ import string
from bleach.sanitizer import Cleaner
def alert_tanner(message):
try:
logging.info('Alerting Tanner: ' + message)
params = dict(qotnews=message)
requests.get('https://tbot.tannercollin.com/message', params=params, timeout=4)
except BaseException as e:
logging.error('Problem alerting Tanner: ' + str(e))
NUM_ID_CHARS = 4
def gen_rand_id():
return ''.join(random.choice(string.ascii_uppercase) for _ in range(5))
return ''.join(random.choice(string.ascii_uppercase) for _ in range(NUM_ID_CHARS))
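With NUM_ID_CHARS = 4, the uppercase-only id space is easy to reason about directly:
print(26 ** 4)  # 456976 -- the 'id_space' figure reported by /api/stats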
def render_md(md):
if md:

Submodule readerserver deleted from 507ac40695

readerserver/.gitignore vendored Normal file (+92)
View File

@@ -0,0 +1,92 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
.env.test
# parcel-bundler cache (https://parceljs.org/)
.cache
# next.js build output
.next
# nuxt.js build output
.nuxt
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# Editor
*.swp
*.swo

readerserver/main.js Normal file (+54)
View File

@@ -0,0 +1,54 @@
const express = require('express');
const app = express();
const port = 33843;
const request = require('request');
const JSDOM = require('jsdom').JSDOM;
const { Readability } = require('readability');
app.use(express.urlencoded({ extended: true }));
app.get('/', (req, res) => {
res.send('<form method="POST" accept-charset="UTF-8"><input name="url"><button type="submit">SUBMIT</button></form>');
});
const requestCallback = (url, res) => (error, response, body) => {
if (!error && response.statusCode == 200) {
console.log('Response OK.');
const doc = new JSDOM(body, {url: url});
const reader = new Readability(doc.window.document);
const article = reader.parse();
if (article && article.content) {
res.send(article.content);
} else {
res.sendStatus(404);
}
} else {
console.log('Response error:', error ? error.toString() : response.statusCode);
res.sendStatus(response ? response.statusCode : 404);
}
};
app.post('/', (req, res) => {
const url = req.body.url;
const requestOptions = {
url: url,
gzip: true,
//headers: {'User-Agent': 'Googlebot/2.1 (+http://www.google.com/bot.html)'},
//headers: {'User-Agent': 'Twitterbot/1.0'},
headers: {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0',
'X-Forwarded-For': '66.249.66.1',
},
};
console.log('Parse request for:', url);
request(requestOptions, requestCallback(url, res));
});
app.listen(port, () => {
console.log(`Example app listening on port ${port}!`);
});
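The server takes a form-encoded POST and replies with the extracted article HTML; a client-side sketch in Python (port as hard-coded above):
import requests
r = requests.post('http://127.0.0.1:33843/',
                  data={'url': 'https://example.com/article'}, timeout=90)
print(r.text[:200] if r.ok else 'extraction failed: {}'.format(r.status_code))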

readerserver/package.json Normal file (+13)
View File

@@ -0,0 +1,13 @@
{
"name": "readerserver",
"version": "1.0.0",
"main": "main.js",
"license": "MIT",
"dependencies": {
"dompurify": "^1.0.11",
"express": "^4.17.1",
"jsdom": "^15.1.1",
"readability": "https://github.com/mozilla/readability",
"request": "^2.88.0"
}
}

readerserver/yarn.lock Normal file (+994)
View File

@@ -0,0 +1,994 @@
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
abab@^2.0.0:
version "2.0.5"
resolved "https://registry.yarnpkg.com/abab/-/abab-2.0.5.tgz#c0b678fb32d60fc1219c784d6a826fe385aeb79a"
integrity sha512-9IK9EadsbHo6jLWIpxpR6pL0sazTXV6+SQv25ZB+F7Bj9mJNaOc4nCRabwd5M/JwmUa8idz6Eci6eKfJryPs6Q==
accepts@~1.3.8:
version "1.3.8"
resolved "https://registry.yarnpkg.com/accepts/-/accepts-1.3.8.tgz#0bf0be125b67014adcb0b0921e62db7bffe16b2e"
integrity sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==
dependencies:
mime-types "~2.1.34"
negotiator "0.6.3"
acorn-globals@^4.3.2:
version "4.3.4"
resolved "https://registry.yarnpkg.com/acorn-globals/-/acorn-globals-4.3.4.tgz#9fa1926addc11c97308c4e66d7add0d40c3272e7"
integrity sha512-clfQEh21R+D0leSbUdWf3OcfqyaCSAQ8Ryq00bofSekfr9W8u1jyYZo6ir0xu9Gtcf7BjcHJpnbZH7JOCpP60A==
dependencies:
acorn "^6.0.1"
acorn-walk "^6.0.1"
acorn-walk@^6.0.1:
version "6.2.0"
resolved "https://registry.yarnpkg.com/acorn-walk/-/acorn-walk-6.2.0.tgz#123cb8f3b84c2171f1f7fb252615b1c78a6b1a8c"
integrity sha512-7evsyfH1cLOCdAzZAd43Cic04yKydNx0cF+7tiA19p1XnLLPU4dpCQOqpjqwokFe//vS0QqfqqjCS2JkiIs0cA==
acorn@^6.0.1:
version "6.4.2"
resolved "https://registry.yarnpkg.com/acorn/-/acorn-6.4.2.tgz#35866fd710528e92de10cf06016498e47e39e1e6"
integrity sha512-XtGIhXwF8YM8bJhGxG5kXgjkEuNGLTkoYqVE+KMR+aspr4KGYmKYg7yUe3KghyQ9yheNwLnjmzh/7+gfDBmHCQ==
acorn@^7.1.0:
version "7.4.1"
resolved "https://registry.yarnpkg.com/acorn/-/acorn-7.4.1.tgz#feaed255973d2e77555b83dbc08851a6c63520fa"
integrity sha512-nQyp0o1/mNdbTO1PO6kHkwSrmgZ0MT/jCCpNiwbUjGoRN4dlBhqJtoQuCnEOKzgTVwg0ZWiCoQy6SxMebQVh8A==
ajv@^6.12.3:
version "6.12.6"
resolved "https://registry.yarnpkg.com/ajv/-/ajv-6.12.6.tgz#baf5a62e802b07d977034586f8c3baf5adf26df4"
integrity sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==
dependencies:
fast-deep-equal "^3.1.1"
fast-json-stable-stringify "^2.0.0"
json-schema-traverse "^0.4.1"
uri-js "^4.2.2"
array-equal@^1.0.0:
version "1.0.0"
resolved "https://registry.yarnpkg.com/array-equal/-/array-equal-1.0.0.tgz#8c2a5ef2472fd9ea742b04c77a75093ba2757c93"
integrity sha1-jCpe8kcv2ep0KwTHenUJO6J1fJM=
array-flatten@1.1.1:
version "1.1.1"
resolved "https://registry.yarnpkg.com/array-flatten/-/array-flatten-1.1.1.tgz#9a5f699051b1e7073328f2a008968b64ea2955d2"
integrity sha1-ml9pkFGx5wczKPKgCJaLZOopVdI=
asn1@~0.2.3:
version "0.2.6"
resolved "https://registry.yarnpkg.com/asn1/-/asn1-0.2.6.tgz#0d3a7bb6e64e02a90c0303b31f292868ea09a08d"
integrity sha512-ix/FxPn0MDjeyJ7i/yoHGFt/EX6LyNbxSEhPPXODPL+KB0VPk86UYfL0lMdy+KCnv+fmvIzySwaK5COwqVbWTQ==
dependencies:
safer-buffer "~2.1.0"
assert-plus@1.0.0, assert-plus@^1.0.0:
version "1.0.0"
resolved "https://registry.yarnpkg.com/assert-plus/-/assert-plus-1.0.0.tgz#f12e0f3c5d77b0b1cdd9146942e4e96c1e4dd525"
integrity sha512-NfJ4UzBCcQGLDlQq7nHxH+tv3kyZ0hHQqF5BO6J7tNJeP5do1llPr8dZ8zHonfhAu0PHAdMkSo+8o0wxg9lZWw==
asynckit@^0.4.0:
version "0.4.0"
resolved "https://registry.yarnpkg.com/asynckit/-/asynckit-0.4.0.tgz#c79ed97f7f34cb8f2ba1bc9790bcc366474b4b79"
integrity sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==
aws-sign2@~0.7.0:
version "0.7.0"
resolved "https://registry.yarnpkg.com/aws-sign2/-/aws-sign2-0.7.0.tgz#b46e890934a9591f2d2f6f86d7e6a9f1b3fe76a8"
integrity sha512-08kcGqnYf/YmjoRhfxyu+CLxBjUtHLXLXX/vUfx9l2LYzG3c1m61nrpyFUZI6zeS+Li/wWMMidD9KgrqtGq3mA==
aws4@^1.8.0:
version "1.11.0"
resolved "https://registry.yarnpkg.com/aws4/-/aws4-1.11.0.tgz#d61f46d83b2519250e2784daf5b09479a8b41c59"
integrity sha512-xh1Rl34h6Fi1DC2WWKfxUTVqRsNnr6LsKz2+hfwDxQJWmrx8+c7ylaqBMcHfl1U1r2dsifOvKX3LQuLNZ+XSvA==
bcrypt-pbkdf@^1.0.0:
version "1.0.2"
resolved "https://registry.yarnpkg.com/bcrypt-pbkdf/-/bcrypt-pbkdf-1.0.2.tgz#a4301d389b6a43f9b67ff3ca11a3f6637e360e9e"
integrity sha512-qeFIXtP4MSoi6NLqO12WfqARWWuCKi2Rn/9hJLEmtB5yTNr9DqFWkJRCf2qShWzPeAMRnOgCrq0sg/KLv5ES9w==
dependencies:
tweetnacl "^0.14.3"
body-parser@1.19.2:
version "1.19.2"
resolved "https://registry.yarnpkg.com/body-parser/-/body-parser-1.19.2.tgz#4714ccd9c157d44797b8b5607d72c0b89952f26e"
integrity sha512-SAAwOxgoCKMGs9uUAUFHygfLAyaniaoun6I8mFY9pRAJL9+Kec34aU+oIjDhTycub1jozEfEwx1W1IuOYxVSFw==
dependencies:
bytes "3.1.2"
content-type "~1.0.4"
debug "2.6.9"
depd "~1.1.2"
http-errors "1.8.1"
iconv-lite "0.4.24"
on-finished "~2.3.0"
qs "6.9.7"
raw-body "2.4.3"
type-is "~1.6.18"
browser-process-hrtime@^1.0.0:
version "1.0.0"
resolved "https://registry.yarnpkg.com/browser-process-hrtime/-/browser-process-hrtime-1.0.0.tgz#3c9b4b7d782c8121e56f10106d84c0d0ffc94626"
integrity sha512-9o5UecI3GhkpM6DrXr69PblIuWxPKk9Y0jHBRhdocZ2y7YECBFCsHm79Pr3OyR2AvjhDkabFJaDJMYRazHgsow==
bytes@3.1.2:
version "3.1.2"
resolved "https://registry.yarnpkg.com/bytes/-/bytes-3.1.2.tgz#8b0beeb98605adf1b128fa4386403c009e0221a5"
integrity sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==
caseless@~0.12.0:
version "0.12.0"
resolved "https://registry.yarnpkg.com/caseless/-/caseless-0.12.0.tgz#1b681c21ff84033c826543090689420d187151dc"
integrity sha512-4tYFyifaFfGacoiObjJegolkwSU4xQNGbVgUiNYVUxbQ2x2lUsFvY4hVgVzGiIe6WLOPqycWXA40l+PWsxthUw==
combined-stream@^1.0.6, combined-stream@~1.0.6:
version "1.0.8"
resolved "https://registry.yarnpkg.com/combined-stream/-/combined-stream-1.0.8.tgz#c3d45a8b34fd730631a110a8a2520682b31d5a7f"
integrity sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==
dependencies:
delayed-stream "~1.0.0"
content-disposition@0.5.4:
version "0.5.4"
resolved "https://registry.yarnpkg.com/content-disposition/-/content-disposition-0.5.4.tgz#8b82b4efac82512a02bb0b1dcec9d2c5e8eb5bfe"
integrity sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==
dependencies:
safe-buffer "5.2.1"
content-type@~1.0.4:
version "1.0.4"
resolved "https://registry.yarnpkg.com/content-type/-/content-type-1.0.4.tgz#e138cc75e040c727b1966fe5e5f8c9aee256fe3b"
integrity sha512-hIP3EEPs8tB9AT1L+NUqtwOAps4mk2Zob89MWXMHjHWg9milF/j4osnnQLXBCBFBk/tvIG/tUc9mOUJiPBhPXA==
cookie-signature@1.0.6:
version "1.0.6"
resolved "https://registry.yarnpkg.com/cookie-signature/-/cookie-signature-1.0.6.tgz#e303a882b342cc3ee8ca513a79999734dab3ae2c"
integrity sha1-4wOogrNCzD7oylE6eZmXNNqzriw=
cookie@0.4.2:
version "0.4.2"
resolved "https://registry.yarnpkg.com/cookie/-/cookie-0.4.2.tgz#0e41f24de5ecf317947c82fc789e06a884824432"
integrity sha512-aSWTXFzaKWkvHO1Ny/s+ePFpvKsPnjc551iI41v3ny/ow6tBG5Vd+FuqGNhh1LxOmVzOlGUriIlOaokOvhaStA==
core-util-is@1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/core-util-is/-/core-util-is-1.0.2.tgz#b5fd54220aa2bc5ab57aab7140c940754503c1a7"
integrity sha512-3lqz5YjWTYnW6dlDa5TLaTCcShfar1e40rmcJVwCBJC6mWlFuj0eCHIElmG1g5kyuJ/GD+8Wn4FFCcz4gJPfaQ==
cssom@^0.4.1:
version "0.4.4"
resolved "https://registry.yarnpkg.com/cssom/-/cssom-0.4.4.tgz#5a66cf93d2d0b661d80bf6a44fb65f5c2e4e0a10"
integrity sha512-p3pvU7r1MyyqbTk+WbNJIgJjG2VmTIaB10rI93LzVPrmDJKkzKYMtxxyAvQXR/NS6otuzveI7+7BBq3SjBS2mw==
cssom@~0.3.6:
version "0.3.8"
resolved "https://registry.yarnpkg.com/cssom/-/cssom-0.3.8.tgz#9f1276f5b2b463f2114d3f2c75250af8c1a36f4a"
integrity sha512-b0tGHbfegbhPJpxpiBPU2sCkigAqtM9O121le6bbOlgyV+NyGyCmVfJ6QW9eRjz8CpNfWEOYBIMIGRYkLwsIYg==
cssstyle@^2.0.0:
version "2.3.0"
resolved "https://registry.yarnpkg.com/cssstyle/-/cssstyle-2.3.0.tgz#ff665a0ddbdc31864b09647f34163443d90b0852"
integrity sha512-AZL67abkUzIuvcHqk7c09cezpGNcxUxU4Ioi/05xHk4DQeTkWmGYftIE6ctU6AEt+Gn4n1lDStOtj7FKycP71A==
dependencies:
cssom "~0.3.6"
dashdash@^1.12.0:
version "1.14.1"
resolved "https://registry.yarnpkg.com/dashdash/-/dashdash-1.14.1.tgz#853cfa0f7cbe2fed5de20326b8dd581035f6e2f0"
integrity sha512-jRFi8UDGo6j+odZiEpjazZaWqEal3w/basFjQHQEwVtZJGDpxbH1MeYluwCS8Xq5wmLJooDlMgvVarmWfGM44g==
dependencies:
assert-plus "^1.0.0"
data-urls@^1.1.0:
version "1.1.0"
resolved "https://registry.yarnpkg.com/data-urls/-/data-urls-1.1.0.tgz#15ee0582baa5e22bb59c77140da8f9c76963bbfe"
integrity sha512-YTWYI9se1P55u58gL5GkQHW4P6VJBJ5iBT+B5a7i2Tjadhv52paJG0qHX4A0OR6/t52odI64KP2YvFpkDOi3eQ==
dependencies:
abab "^2.0.0"
whatwg-mimetype "^2.2.0"
whatwg-url "^7.0.0"
debug@2.6.9:
version "2.6.9"
resolved "https://registry.yarnpkg.com/debug/-/debug-2.6.9.tgz#5d128515df134ff327e90a4c93f4e077a536341f"
integrity sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==
dependencies:
ms "2.0.0"
deep-is@~0.1.3:
version "0.1.4"
resolved "https://registry.yarnpkg.com/deep-is/-/deep-is-0.1.4.tgz#a6f2dce612fadd2ef1f519b73551f17e85199831"
integrity sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==
delayed-stream@~1.0.0:
version "1.0.0"
resolved "https://registry.yarnpkg.com/delayed-stream/-/delayed-stream-1.0.0.tgz#df3ae199acadfb7d440aaae0b29e2272b24ec619"
integrity sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==
depd@~1.1.2:
version "1.1.2"
resolved "https://registry.yarnpkg.com/depd/-/depd-1.1.2.tgz#9bcd52e14c097763e749b274c4346ed2e560b5a9"
integrity sha1-m81S4UwJd2PnSbJ0xDRu0uVgtak=
destroy@~1.0.4:
version "1.0.4"
resolved "https://registry.yarnpkg.com/destroy/-/destroy-1.0.4.tgz#978857442c44749e4206613e37946205826abd80"
integrity sha1-l4hXRCxEdJ5CBmE+N5RiBYJqvYA=
domexception@^1.0.1:
version "1.0.1"
resolved "https://registry.yarnpkg.com/domexception/-/domexception-1.0.1.tgz#937442644ca6a31261ef36e3ec677fe805582c90"
integrity sha512-raigMkn7CJNNo6Ihro1fzG7wr3fHuYVytzquZKX5n0yizGsTcYgzdIUwj1X9pK0VvjeihV+XiclP+DjwbsSKug==
dependencies:
webidl-conversions "^4.0.2"
dompurify@^1.0.11:
version "1.0.11"
resolved "https://registry.yarnpkg.com/dompurify/-/dompurify-1.0.11.tgz#fe0f4a40d147f7cebbe31a50a1357539cfc1eb4d"
integrity sha512-XywCTXZtc/qCX3iprD1pIklRVk/uhl8BKpkTxr+ZyMVUzSUg7wkQXRBp/euJ5J5moa1QvfpvaPQVP71z1O59dQ==
ecc-jsbn@~0.1.1:
version "0.1.2"
resolved "https://registry.yarnpkg.com/ecc-jsbn/-/ecc-jsbn-0.1.2.tgz#3a83a904e54353287874c564b7549386849a98c9"
integrity sha512-eh9O+hwRHNbG4BLTjEl3nw044CkGm5X6LoaCf7LPp7UU8Qrt47JYNi6nPX8xjW97TKGKm1ouctg0QSpZe9qrnw==
dependencies:
jsbn "~0.1.0"
safer-buffer "^2.1.0"
ee-first@1.1.1:
version "1.1.1"
resolved "https://registry.yarnpkg.com/ee-first/-/ee-first-1.1.1.tgz#590c61156b0ae2f4f0255732a158b266bc56b21d"
integrity sha1-WQxhFWsK4vTwJVcyoViyZrxWsh0=
encodeurl@~1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/encodeurl/-/encodeurl-1.0.2.tgz#ad3ff4c86ec2d029322f5a02c3a9a606c95b3f59"
integrity sha1-rT/0yG7C0CkyL1oCw6mmBslbP1k=
escape-html@~1.0.3:
version "1.0.3"
resolved "https://registry.yarnpkg.com/escape-html/-/escape-html-1.0.3.tgz#0258eae4d3d0c0974de1c169188ef0051d1d1988"
integrity sha1-Aljq5NPQwJdN4cFpGI7wBR0dGYg=
escodegen@^1.11.1:
version "1.14.3"
resolved "https://registry.yarnpkg.com/escodegen/-/escodegen-1.14.3.tgz#4e7b81fba61581dc97582ed78cab7f0e8d63f503"
integrity sha512-qFcX0XJkdg+PB3xjZZG/wKSuT1PnQWx57+TVSjIMmILd2yC/6ByYElPwJnslDsuWuSAp4AwJGumarAAmJch5Kw==
dependencies:
esprima "^4.0.1"
estraverse "^4.2.0"
esutils "^2.0.2"
optionator "^0.8.1"
optionalDependencies:
source-map "~0.6.1"
esprima@^4.0.1:
version "4.0.1"
resolved "https://registry.yarnpkg.com/esprima/-/esprima-4.0.1.tgz#13b04cdb3e6c5d19df91ab6987a8695619b0aa71"
integrity sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==
estraverse@^4.2.0:
version "4.3.0"
resolved "https://registry.yarnpkg.com/estraverse/-/estraverse-4.3.0.tgz#398ad3f3c5a24948be7725e83d11a7de28cdbd1d"
integrity sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==
esutils@^2.0.2:
version "2.0.3"
resolved "https://registry.yarnpkg.com/esutils/-/esutils-2.0.3.tgz#74d2eb4de0b8da1293711910d50775b9b710ef64"
integrity sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==
etag@~1.8.1:
version "1.8.1"
resolved "https://registry.yarnpkg.com/etag/-/etag-1.8.1.tgz#41ae2eeb65efa62268aebfea83ac7d79299b0887"
integrity sha1-Qa4u62XvpiJorr/qg6x9eSmbCIc=
express@^4.17.1:
version "4.17.3"
resolved "https://registry.yarnpkg.com/express/-/express-4.17.3.tgz#f6c7302194a4fb54271b73a1fe7a06478c8f85a1"
integrity sha512-yuSQpz5I+Ch7gFrPCk4/c+dIBKlQUxtgwqzph132bsT6qhuzss6I8cLJQz7B3rFblzd6wtcI0ZbGltH/C4LjUg==
dependencies:
accepts "~1.3.8"
array-flatten "1.1.1"
body-parser "1.19.2"
content-disposition "0.5.4"
content-type "~1.0.4"
cookie "0.4.2"
cookie-signature "1.0.6"
debug "2.6.9"
depd "~1.1.2"
encodeurl "~1.0.2"
escape-html "~1.0.3"
etag "~1.8.1"
finalhandler "~1.1.2"
fresh "0.5.2"
merge-descriptors "1.0.1"
methods "~1.1.2"
on-finished "~2.3.0"
parseurl "~1.3.3"
path-to-regexp "0.1.7"
proxy-addr "~2.0.7"
qs "6.9.7"
range-parser "~1.2.1"
safe-buffer "5.2.1"
send "0.17.2"
serve-static "1.14.2"
setprototypeof "1.2.0"
statuses "~1.5.0"
type-is "~1.6.18"
utils-merge "1.0.1"
vary "~1.1.2"
extend@~3.0.2:
version "3.0.2"
resolved "https://registry.yarnpkg.com/extend/-/extend-3.0.2.tgz#f8b1136b4071fbd8eb140aff858b1019ec2915fa"
integrity sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==
extsprintf@1.3.0:
version "1.3.0"
resolved "https://registry.yarnpkg.com/extsprintf/-/extsprintf-1.3.0.tgz#96918440e3041a7a414f8c52e3c574eb3c3e1e05"
integrity sha512-11Ndz7Nv+mvAC1j0ktTa7fAb0vLyGGX+rMHNBYQviQDGU0Hw7lhctJANqbPhu9nV9/izT/IntTgZ7Im/9LJs9g==
extsprintf@^1.2.0:
version "1.4.1"
resolved "https://registry.yarnpkg.com/extsprintf/-/extsprintf-1.4.1.tgz#8d172c064867f235c0c84a596806d279bf4bcc07"
integrity sha512-Wrk35e8ydCKDj/ArClo1VrPVmN8zph5V4AtHwIuHhvMXsKf73UT3BOD+azBIW+3wOJ4FhEH7zyaJCFvChjYvMA==
fast-deep-equal@^3.1.1:
version "3.1.3"
resolved "https://registry.yarnpkg.com/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz#3a7d56b559d6cbc3eb512325244e619a65c6c525"
integrity sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==
fast-json-stable-stringify@^2.0.0:
version "2.1.0"
resolved "https://registry.yarnpkg.com/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz#874bf69c6f404c2b5d99c481341399fd55892633"
integrity sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==
fast-levenshtein@~2.0.6:
version "2.0.6"
resolved "https://registry.yarnpkg.com/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz#3d8a5c66883a16a30ca8643e851f19baa7797917"
integrity sha1-PYpcZog6FqMMqGQ+hR8Zuqd5eRc=
finalhandler@~1.1.2:
version "1.1.2"
resolved "https://registry.yarnpkg.com/finalhandler/-/finalhandler-1.1.2.tgz#b7e7d000ffd11938d0fdb053506f6ebabe9f587d"
integrity sha512-aAWcW57uxVNrQZqFXjITpW3sIUQmHGG3qSb9mUah9MgMC4NeWhNOlNjXEYq3HjRAvL6arUviZGGJsBg6z0zsWA==
dependencies:
debug "2.6.9"
encodeurl "~1.0.2"
escape-html "~1.0.3"
on-finished "~2.3.0"
parseurl "~1.3.3"
statuses "~1.5.0"
unpipe "~1.0.0"
forever-agent@~0.6.1:
version "0.6.1"
resolved "https://registry.yarnpkg.com/forever-agent/-/forever-agent-0.6.1.tgz#fbc71f0c41adeb37f96c577ad1ed42d8fdacca91"
integrity sha512-j0KLYPhm6zeac4lz3oJ3o65qvgQCcPubiyotZrXqEaG4hNagNYO8qdlUrX5vwqv9ohqeT/Z3j6+yW067yWWdUw==
form-data@~2.3.2:
version "2.3.3"
resolved "https://registry.yarnpkg.com/form-data/-/form-data-2.3.3.tgz#dcce52c05f644f298c6a7ab936bd724ceffbf3a6"
integrity sha512-1lLKB2Mu3aGP1Q/2eCOx0fNbRMe7XdwktwOruhfqqd0rIJWwN4Dh+E3hrPSlDCXnSR7UtZ1N38rVXm+6+MEhJQ==
dependencies:
asynckit "^0.4.0"
combined-stream "^1.0.6"
mime-types "^2.1.12"
forwarded@0.2.0:
version "0.2.0"
resolved "https://registry.yarnpkg.com/forwarded/-/forwarded-0.2.0.tgz#2269936428aad4c15c7ebe9779a84bf0b2a81811"
integrity sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==
fresh@0.5.2:
version "0.5.2"
resolved "https://registry.yarnpkg.com/fresh/-/fresh-0.5.2.tgz#3d8cadd90d976569fa835ab1f8e4b23a105605a7"
integrity sha1-PYyt2Q2XZWn6g1qx+OSyOhBWBac=
getpass@^0.1.1:
version "0.1.7"
resolved "https://registry.yarnpkg.com/getpass/-/getpass-0.1.7.tgz#5eff8e3e684d569ae4cb2b1282604e8ba62149fa"
integrity sha512-0fzj9JxOLfJ+XGLhR8ze3unN0KZCgZwiSSDz168VERjK8Wl8kVSdcu2kspd4s4wtAa1y/qrVRiAA0WclVsu0ng==
dependencies:
assert-plus "^1.0.0"
har-schema@^2.0.0:
version "2.0.0"
resolved "https://registry.yarnpkg.com/har-schema/-/har-schema-2.0.0.tgz#a94c2224ebcac04782a0d9035521f24735b7ec92"
integrity sha512-Oqluz6zhGX8cyRaTQlFMPw80bSJVG2x/cFb8ZPhUILGgHka9SsokCCOQgpveePerqidZOrT14ipqfJb7ILcW5Q==
har-validator@~5.1.3:
version "5.1.5"
resolved "https://registry.yarnpkg.com/har-validator/-/har-validator-5.1.5.tgz#1f0803b9f8cb20c0fa13822df1ecddb36bde1efd"
integrity sha512-nmT2T0lljbxdQZfspsno9hgrG3Uir6Ks5afism62poxqBM6sDnMEuPmzTq8XN0OEwqKLLdh1jQI3qyE66Nzb3w==
dependencies:
ajv "^6.12.3"
har-schema "^2.0.0"
html-encoding-sniffer@^1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/html-encoding-sniffer/-/html-encoding-sniffer-1.0.2.tgz#e70d84b94da53aa375e11fe3a351be6642ca46f8"
integrity sha512-71lZziiDnsuabfdYiUeWdCVyKuqwWi23L8YeIgV9jSSZHCtb6wB1BKWooH7L3tn4/FuZJMVWyNaIDr4RGmaSYw==
dependencies:
whatwg-encoding "^1.0.1"
http-errors@1.8.1:
version "1.8.1"
resolved "https://registry.yarnpkg.com/http-errors/-/http-errors-1.8.1.tgz#7c3f28577cbc8a207388455dbd62295ed07bd68c"
integrity sha512-Kpk9Sm7NmI+RHhnj6OIWDI1d6fIoFAtFt9RLaTMRlg/8w49juAStsrBgp0Dp4OdxdVbRIeKhtCUvoi/RuAhO4g==
dependencies:
depd "~1.1.2"
inherits "2.0.4"
setprototypeof "1.2.0"
statuses ">= 1.5.0 < 2"
toidentifier "1.0.1"
http-signature@~1.2.0:
version "1.2.0"
resolved "https://registry.yarnpkg.com/http-signature/-/http-signature-1.2.0.tgz#9aecd925114772f3d95b65a60abb8f7c18fbace1"
integrity sha512-CAbnr6Rz4CYQkLYUtSNXxQPUH2gK8f3iWexVlsnMeD+GjlsQ0Xsy1cOX+mN3dtxYomRy21CiOzU8Uhw6OwncEQ==
dependencies:
assert-plus "^1.0.0"
jsprim "^1.2.2"
sshpk "^1.7.0"
iconv-lite@0.4.24:
version "0.4.24"
resolved "https://registry.yarnpkg.com/iconv-lite/-/iconv-lite-0.4.24.tgz#2022b4b25fbddc21d2f524974a474aafe733908b"
integrity sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==
dependencies:
safer-buffer ">= 2.1.2 < 3"
inherits@2.0.4:
version "2.0.4"
resolved "https://registry.yarnpkg.com/inherits/-/inherits-2.0.4.tgz#0fa2c64f932917c3433a0ded55363aae37416b7c"
integrity sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==
ip-regex@^2.1.0:
version "2.1.0"
resolved "https://registry.yarnpkg.com/ip-regex/-/ip-regex-2.1.0.tgz#fa78bf5d2e6913c911ce9f819ee5146bb6d844e9"
integrity sha1-+ni/XS5pE8kRzp+BnuUUa7bYROk=
ipaddr.js@1.9.1:
version "1.9.1"
resolved "https://registry.yarnpkg.com/ipaddr.js/-/ipaddr.js-1.9.1.tgz#bff38543eeb8984825079ff3a2a8e6cbd46781b3"
integrity sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==
is-typedarray@~1.0.0:
version "1.0.0"
resolved "https://registry.yarnpkg.com/is-typedarray/-/is-typedarray-1.0.0.tgz#e479c80858df0c1b11ddda6940f96011fcda4a9a"
integrity sha512-cyA56iCMHAh5CdzjJIa4aohJyeO1YbwLi3Jc35MmRU6poroFjIGZzUzupGiRPOjgHg9TLu43xbpwXk523fMxKA==
isstream@~0.1.2:
version "0.1.2"
resolved "https://registry.yarnpkg.com/isstream/-/isstream-0.1.2.tgz#47e63f7af55afa6f92e1500e690eb8b8529c099a"
integrity sha512-Yljz7ffyPbrLpLngrMtZ7NduUgVvi6wG9RJ9IUcyCd59YQ911PBJphODUcbOVbqYfxe1wuYf/LJ8PauMRwsM/g==
jsbn@~0.1.0:
version "0.1.1"
resolved "https://registry.yarnpkg.com/jsbn/-/jsbn-0.1.1.tgz#a5e654c2e5a2deb5f201d96cefbca80c0ef2f513"
integrity sha512-UVU9dibq2JcFWxQPA6KCqj5O42VOmAY3zQUfEKxU0KpTGXwNoCjkX1e13eHNvw/xPynt6pU0rZ1htjWTNTSXsg==
jsdom@^15.1.1:
version "15.2.1"
resolved "https://registry.yarnpkg.com/jsdom/-/jsdom-15.2.1.tgz#d2feb1aef7183f86be521b8c6833ff5296d07ec5"
integrity sha512-fAl1W0/7T2G5vURSyxBzrJ1LSdQn6Tr5UX/xD4PXDx/PDgwygedfW6El/KIj3xJ7FU61TTYnc/l/B7P49Eqt6g==
dependencies:
abab "^2.0.0"
acorn "^7.1.0"
acorn-globals "^4.3.2"
array-equal "^1.0.0"
cssom "^0.4.1"
cssstyle "^2.0.0"
data-urls "^1.1.0"
domexception "^1.0.1"
escodegen "^1.11.1"
html-encoding-sniffer "^1.0.2"
nwsapi "^2.2.0"
parse5 "5.1.0"
pn "^1.1.0"
request "^2.88.0"
request-promise-native "^1.0.7"
saxes "^3.1.9"
symbol-tree "^3.2.2"
tough-cookie "^3.0.1"
w3c-hr-time "^1.0.1"
w3c-xmlserializer "^1.1.2"
webidl-conversions "^4.0.2"
whatwg-encoding "^1.0.5"
whatwg-mimetype "^2.3.0"
whatwg-url "^7.0.0"
ws "^7.0.0"
xml-name-validator "^3.0.0"
json-schema-traverse@^0.4.1:
version "0.4.1"
resolved "https://registry.yarnpkg.com/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz#69f6a87d9513ab8bb8fe63bdb0979c448e684660"
integrity sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==
json-schema@0.4.0:
version "0.4.0"
resolved "https://registry.yarnpkg.com/json-schema/-/json-schema-0.4.0.tgz#f7de4cf6efab838ebaeb3236474cbba5a1930ab5"
integrity sha512-es94M3nTIfsEPisRafak+HDLfHXnKBhV3vU5eqPcS3flIWqcxJWgXHXiey3YrpaNsanY5ei1VoYEbOzijuq9BA==
json-stringify-safe@~5.0.1:
version "5.0.1"
resolved "https://registry.yarnpkg.com/json-stringify-safe/-/json-stringify-safe-5.0.1.tgz#1296a2d58fd45f19a0f6ce01d65701e2c735b6eb"
integrity sha512-ZClg6AaYvamvYEE82d3Iyd3vSSIjQ+odgjaTzRuO3s7toCdFKczob2i0zCh7JE8kWn17yvAWhUVxvqGwUalsRA==
jsprim@^1.2.2:
version "1.4.2"
resolved "https://registry.yarnpkg.com/jsprim/-/jsprim-1.4.2.tgz#712c65533a15c878ba59e9ed5f0e26d5b77c5feb"
integrity sha512-P2bSOMAc/ciLz6DzgjVlGJP9+BrJWu5UDGK70C2iweC5QBIeFf0ZXRvGjEj2uYgrY2MkAAhsSWHDWlFtEroZWw==
dependencies:
assert-plus "1.0.0"
extsprintf "1.3.0"
json-schema "0.4.0"
verror "1.10.0"
levn@~0.3.0:
version "0.3.0"
resolved "https://registry.yarnpkg.com/levn/-/levn-0.3.0.tgz#3b09924edf9f083c0490fdd4c0bc4421e04764ee"
integrity sha1-OwmSTt+fCDwEkP3UwLxEIeBHZO4=
dependencies:
prelude-ls "~1.1.2"
type-check "~0.3.2"
lodash.sortby@^4.7.0:
version "4.7.0"
resolved "https://registry.yarnpkg.com/lodash.sortby/-/lodash.sortby-4.7.0.tgz#edd14c824e2cc9c1e0b0a1b42bb5210516a42438"
integrity sha1-7dFMgk4sycHgsKG0K7UhBRakJDg=
lodash@^4.17.19:
version "4.17.21"
resolved "https://registry.yarnpkg.com/lodash/-/lodash-4.17.21.tgz#679591c564c3bffaae8454cf0b3df370c3d6911c"
integrity sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==
media-typer@0.3.0:
version "0.3.0"
resolved "https://registry.yarnpkg.com/media-typer/-/media-typer-0.3.0.tgz#8710d7af0aa626f8fffa1ce00168545263255748"
integrity sha1-hxDXrwqmJvj/+hzgAWhUUmMlV0g=
merge-descriptors@1.0.1:
version "1.0.1"
resolved "https://registry.yarnpkg.com/merge-descriptors/-/merge-descriptors-1.0.1.tgz#b00aaa556dd8b44568150ec9d1b953f3f90cbb61"
integrity sha1-sAqqVW3YtEVoFQ7J0blT8/kMu2E=
methods@~1.1.2:
version "1.1.2"
resolved "https://registry.yarnpkg.com/methods/-/methods-1.1.2.tgz#5529a4d67654134edcc5266656835b0f851afcee"
integrity sha1-VSmk1nZUE07cxSZmVoNbD4Ua/O4=
mime-db@1.51.0:
version "1.51.0"
resolved "https://registry.yarnpkg.com/mime-db/-/mime-db-1.51.0.tgz#d9ff62451859b18342d960850dc3cfb77e63fb0c"
integrity sha512-5y8A56jg7XVQx2mbv1lu49NR4dokRnhZYTtL+KGfaa27uq4pSTXkwQkFJl4pkRMyNFz/EtYDSkiiEHx3F7UN6g==
mime-db@1.52.0:
version "1.52.0"
resolved "https://registry.yarnpkg.com/mime-db/-/mime-db-1.52.0.tgz#bbabcdc02859f4987301c856e3387ce5ec43bf70"
integrity sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==
mime-types@^2.1.12, mime-types@~2.1.19:
version "2.1.35"
resolved "https://registry.yarnpkg.com/mime-types/-/mime-types-2.1.35.tgz#381a871b62a734450660ae3deee44813f70d959a"
integrity sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==
dependencies:
mime-db "1.52.0"
mime-types@~2.1.24, mime-types@~2.1.34:
version "2.1.34"
resolved "https://registry.yarnpkg.com/mime-types/-/mime-types-2.1.34.tgz#5a712f9ec1503511a945803640fafe09d3793c24"
integrity sha512-6cP692WwGIs9XXdOO4++N+7qjqv0rqxxVvJ3VHPh/Sc9mVZcQP+ZGhkKiTvWMQRr2tbHkJP/Yn7Y0npb3ZBs4A==
dependencies:
mime-db "1.51.0"
mime@1.6.0:
version "1.6.0"
resolved "https://registry.yarnpkg.com/mime/-/mime-1.6.0.tgz#32cd9e5c64553bd58d19a568af452acff04981b1"
integrity sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==
ms@2.0.0:
version "2.0.0"
resolved "https://registry.yarnpkg.com/ms/-/ms-2.0.0.tgz#5608aeadfc00be6c2901df5f9861788de0d597c8"
integrity sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=
ms@2.1.3:
version "2.1.3"
resolved "https://registry.yarnpkg.com/ms/-/ms-2.1.3.tgz#574c8138ce1d2b5861f0b44579dbadd60c6615b2"
integrity sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==
negotiator@0.6.3:
version "0.6.3"
resolved "https://registry.yarnpkg.com/negotiator/-/negotiator-0.6.3.tgz#58e323a72fedc0d6f9cd4d31fe49f51479590ccd"
integrity sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==
nwsapi@^2.2.0:
version "2.2.0"
resolved "https://registry.yarnpkg.com/nwsapi/-/nwsapi-2.2.0.tgz#204879a9e3d068ff2a55139c2c772780681a38b7"
integrity sha512-h2AatdwYH+JHiZpv7pt/gSX1XoRGb7L/qSIeuqA6GwYoF9w1vP1cw42TO0aI2pNyshRK5893hNSl+1//vHK7hQ==
oauth-sign@~0.9.0:
version "0.9.0"
resolved "https://registry.yarnpkg.com/oauth-sign/-/oauth-sign-0.9.0.tgz#47a7b016baa68b5fa0ecf3dee08a85c679ac6455"
integrity sha512-fexhUFFPTGV8ybAtSIGbV6gOkSv8UtRbDBnAyLQw4QPKkgNlsH2ByPGtMUqdWkos6YCRmAqViwgZrJc/mRDzZQ==
on-finished@~2.3.0:
version "2.3.0"
resolved "https://registry.yarnpkg.com/on-finished/-/on-finished-2.3.0.tgz#20f1336481b083cd75337992a16971aa2d906947"
integrity sha1-IPEzZIGwg811M3mSoWlxqi2QaUc=
dependencies:
ee-first "1.1.1"
optionator@^0.8.1:
version "0.8.3"
resolved "https://registry.yarnpkg.com/optionator/-/optionator-0.8.3.tgz#84fa1d036fe9d3c7e21d99884b601167ec8fb495"
integrity sha512-+IW9pACdk3XWmmTXG8m3upGUJst5XRGzxMRjXzAuJ1XnIFNvfhjjIuYkDvysnPQ7qzqVzLt78BCruntqRhWQbA==
dependencies:
deep-is "~0.1.3"
fast-levenshtein "~2.0.6"
levn "~0.3.0"
prelude-ls "~1.1.2"
type-check "~0.3.2"
word-wrap "~1.2.3"
parse5@5.1.0:
version "5.1.0"
resolved "https://registry.yarnpkg.com/parse5/-/parse5-5.1.0.tgz#c59341c9723f414c452975564c7c00a68d58acd2"
integrity sha512-fxNG2sQjHvlVAYmzBZS9YlDp6PTSSDwa98vkD4QgVDDCAo84z5X1t5XyJQ62ImdLXx5NdIIfihey6xpum9/gRQ==
parseurl@~1.3.3:
version "1.3.3"
resolved "https://registry.yarnpkg.com/parseurl/-/parseurl-1.3.3.tgz#9da19e7bee8d12dff0513ed5b76957793bc2e8d4"
integrity sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==
path-to-regexp@0.1.7:
version "0.1.7"
resolved "https://registry.yarnpkg.com/path-to-regexp/-/path-to-regexp-0.1.7.tgz#df604178005f522f15eb4490e7247a1bfaa67f8c"
integrity sha1-32BBeABfUi8V60SQ5yR6G/qmf4w=
performance-now@^2.1.0:
version "2.1.0"
resolved "https://registry.yarnpkg.com/performance-now/-/performance-now-2.1.0.tgz#6309f4e0e5fa913ec1c69307ae364b4b377c9e7b"
integrity sha512-7EAHlyLHI56VEIdK57uwHdHKIaAGbnXPiw0yWbarQZOKaKpvUIgW0jWRVLiatnM+XXlSwsanIBH/hzGMJulMow==
pn@^1.1.0:
version "1.1.0"
resolved "https://registry.yarnpkg.com/pn/-/pn-1.1.0.tgz#e2f4cef0e219f463c179ab37463e4e1ecdccbafb"
integrity sha512-2qHaIQr2VLRFoxe2nASzsV6ef4yOOH+Fi9FBOVH6cqeSgUnoyySPZkxzLuzd+RYOQTRpROA0ztTMqxROKSb/nA==
prelude-ls@~1.1.2:
version "1.1.2"
resolved "https://registry.yarnpkg.com/prelude-ls/-/prelude-ls-1.1.2.tgz#21932a549f5e52ffd9a827f570e04be62a97da54"
integrity sha1-IZMqVJ9eUv/ZqCf1cOBL5iqX2lQ=
proxy-addr@~2.0.7:
version "2.0.7"
resolved "https://registry.yarnpkg.com/proxy-addr/-/proxy-addr-2.0.7.tgz#f19fe69ceab311eeb94b42e70e8c2070f9ba1025"
integrity sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==
dependencies:
forwarded "0.2.0"
ipaddr.js "1.9.1"
psl@^1.1.28:
version "1.9.0"
resolved "https://registry.yarnpkg.com/psl/-/psl-1.9.0.tgz#d0df2a137f00794565fcaf3b2c00cd09f8d5a5a7"
integrity sha512-E/ZsdU4HLs/68gYzgGTkMicWTLPdAftJLfJFlLUAAKZGkStNU72sZjT66SnMDVOfOWY/YAoiD7Jxa9iHvngcag==
punycode@^2.1.0, punycode@^2.1.1:
version "2.1.1"
resolved "https://registry.yarnpkg.com/punycode/-/punycode-2.1.1.tgz#b58b010ac40c22c5657616c8d2c2c02c7bf479ec"
integrity sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==
qs@6.9.7:
version "6.9.7"
resolved "https://registry.yarnpkg.com/qs/-/qs-6.9.7.tgz#4610846871485e1e048f44ae3b94033f0e675afe"
integrity sha512-IhMFgUmuNpyRfxA90umL7ByLlgRXu6tIfKPpF5TmcfRLlLCckfP/g3IQmju6jjpu+Hh8rA+2p6A27ZSPOOHdKw==
qs@~6.5.2:
version "6.5.3"
resolved "https://registry.yarnpkg.com/qs/-/qs-6.5.3.tgz#3aeeffc91967ef6e35c0e488ef46fb296ab76aad"
integrity sha512-qxXIEh4pCGfHICj1mAJQ2/2XVZkjCDTcEgfoSQxc/fYivUZxTkk7L3bDBJSoNrEzXI17oUO5Dp07ktqE5KzczA==
range-parser@~1.2.1:
version "1.2.1"
resolved "https://registry.yarnpkg.com/range-parser/-/range-parser-1.2.1.tgz#3cf37023d199e1c24d1a55b84800c2f3e6468031"
integrity sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==
raw-body@2.4.3:
version "2.4.3"
resolved "https://registry.yarnpkg.com/raw-body/-/raw-body-2.4.3.tgz#8f80305d11c2a0a545c2d9d89d7a0286fcead43c"
integrity sha512-UlTNLIcu0uzb4D2f4WltY6cVjLi+/jEN4lgEUj3E04tpMDpUlkBo/eSn6zou9hum2VMNpCCUone0O0WeJim07g==
dependencies:
bytes "3.1.2"
http-errors "1.8.1"
iconv-lite "0.4.24"
unpipe "1.0.0"
"readability@https://github.com/mozilla/readability":
version "0.5.0"
resolved "https://github.com/mozilla/readability#39a5c5409fb653858b1832141895b882b9092b47"
request-promise-core@1.1.4:
version "1.1.4"
resolved "https://registry.yarnpkg.com/request-promise-core/-/request-promise-core-1.1.4.tgz#3eedd4223208d419867b78ce815167d10593a22f"
integrity sha512-TTbAfBBRdWD7aNNOoVOBH4pN/KigV6LyapYNNlAPA8JwbovRti1E88m3sYAwsLi5ryhPKsE9APwnjFTgdUjTpw==
dependencies:
lodash "^4.17.19"
request-promise-native@^1.0.7:
version "1.0.9"
resolved "https://registry.yarnpkg.com/request-promise-native/-/request-promise-native-1.0.9.tgz#e407120526a5efdc9a39b28a5679bf47b9d9dc28"
integrity sha512-wcW+sIUiWnKgNY0dqCpOZkUbF/I+YPi+f09JZIDa39Ec+q82CpSYniDp+ISgTTbKmnpJWASeJBPZmoxH84wt3g==
dependencies:
request-promise-core "1.1.4"
stealthy-require "^1.1.1"
tough-cookie "^2.3.3"
request@^2.88.0:
version "2.88.2"
resolved "https://registry.yarnpkg.com/request/-/request-2.88.2.tgz#d73c918731cb5a87da047e207234146f664d12b3"
integrity sha512-MsvtOrfG9ZcrOwAW+Qi+F6HbD0CWXEh9ou77uOb7FM2WPhwT7smM833PzanhJLsgXjN89Ir6V2PczXNnMpwKhw==
dependencies:
aws-sign2 "~0.7.0"
aws4 "^1.8.0"
caseless "~0.12.0"
combined-stream "~1.0.6"
extend "~3.0.2"
forever-agent "~0.6.1"
form-data "~2.3.2"
har-validator "~5.1.3"
http-signature "~1.2.0"
is-typedarray "~1.0.0"
isstream "~0.1.2"
json-stringify-safe "~5.0.1"
mime-types "~2.1.19"
oauth-sign "~0.9.0"
performance-now "^2.1.0"
qs "~6.5.2"
safe-buffer "^5.1.2"
tough-cookie "~2.5.0"
tunnel-agent "^0.6.0"
uuid "^3.3.2"
safe-buffer@5.2.1, safe-buffer@^5.0.1, safe-buffer@^5.1.2:
version "5.2.1"
resolved "https://registry.yarnpkg.com/safe-buffer/-/safe-buffer-5.2.1.tgz#1eaf9fa9bdb1fdd4ec75f58f9cdb4e6b7827eec6"
integrity sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==
"safer-buffer@>= 2.1.2 < 3", safer-buffer@^2.0.2, safer-buffer@^2.1.0, safer-buffer@~2.1.0:
version "2.1.2"
resolved "https://registry.yarnpkg.com/safer-buffer/-/safer-buffer-2.1.2.tgz#44fa161b0187b9549dd84bb91802f9bd8385cd6a"
integrity sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==
saxes@^3.1.9:
version "3.1.11"
resolved "https://registry.yarnpkg.com/saxes/-/saxes-3.1.11.tgz#d59d1fd332ec92ad98a2e0b2ee644702384b1c5b"
integrity sha512-Ydydq3zC+WYDJK1+gRxRapLIED9PWeSuuS41wqyoRmzvhhh9nc+QQrVMKJYzJFULazeGhzSV0QleN2wD3boh2g==
dependencies:
xmlchars "^2.1.1"
send@0.17.2:
version "0.17.2"
resolved "https://registry.yarnpkg.com/send/-/send-0.17.2.tgz#926622f76601c41808012c8bf1688fe3906f7820"
integrity sha512-UJYB6wFSJE3G00nEivR5rgWp8c2xXvJ3OPWPhmuteU0IKj8nKbG3DrjiOmLwpnHGYWAVwA69zmTm++YG0Hmwww==
dependencies:
debug "2.6.9"
depd "~1.1.2"
destroy "~1.0.4"
encodeurl "~1.0.2"
escape-html "~1.0.3"
etag "~1.8.1"
fresh "0.5.2"
http-errors "1.8.1"
mime "1.6.0"
ms "2.1.3"
on-finished "~2.3.0"
range-parser "~1.2.1"
statuses "~1.5.0"
serve-static@1.14.2:
version "1.14.2"
resolved "https://registry.yarnpkg.com/serve-static/-/serve-static-1.14.2.tgz#722d6294b1d62626d41b43a013ece4598d292bfa"
integrity sha512-+TMNA9AFxUEGuC0z2mevogSnn9MXKb4fa7ngeRMJaaGv8vTwnIEkKi+QGvPt33HSnf8pRS+WGM0EbMtCJLKMBQ==
dependencies:
encodeurl "~1.0.2"
escape-html "~1.0.3"
parseurl "~1.3.3"
send "0.17.2"
setprototypeof@1.2.0:
version "1.2.0"
resolved "https://registry.yarnpkg.com/setprototypeof/-/setprototypeof-1.2.0.tgz#66c9a24a73f9fc28cbe66b09fed3d33dcaf1b424"
integrity sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==
source-map@~0.6.1:
version "0.6.1"
resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.6.1.tgz#74722af32e9614e9c287a8d0bbde48b5e2f1a263"
integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==
sshpk@^1.7.0:
version "1.17.0"
resolved "https://registry.yarnpkg.com/sshpk/-/sshpk-1.17.0.tgz#578082d92d4fe612b13007496e543fa0fbcbe4c5"
integrity sha512-/9HIEs1ZXGhSPE8X6Ccm7Nam1z8KcoCqPdI7ecm1N33EzAetWahvQWVqLZtaZQ+IDKX4IyA2o0gBzqIMkAagHQ==
dependencies:
asn1 "~0.2.3"
assert-plus "^1.0.0"
bcrypt-pbkdf "^1.0.0"
dashdash "^1.12.0"
ecc-jsbn "~0.1.1"
getpass "^0.1.1"
jsbn "~0.1.0"
safer-buffer "^2.0.2"
tweetnacl "~0.14.0"
"statuses@>= 1.5.0 < 2", statuses@~1.5.0:
version "1.5.0"
resolved "https://registry.yarnpkg.com/statuses/-/statuses-1.5.0.tgz#161c7dac177659fd9811f43771fa99381478628c"
integrity sha1-Fhx9rBd2Wf2YEfQ3cfqZOBR4Yow=
stealthy-require@^1.1.1:
version "1.1.1"
resolved "https://registry.yarnpkg.com/stealthy-require/-/stealthy-require-1.1.1.tgz#35b09875b4ff49f26a777e509b3090a3226bf24b"
integrity sha1-NbCYdbT/SfJqd35QmzCQoyJr8ks=
symbol-tree@^3.2.2:
version "3.2.4"
resolved "https://registry.yarnpkg.com/symbol-tree/-/symbol-tree-3.2.4.tgz#430637d248ba77e078883951fb9aa0eed7c63fa2"
integrity sha512-9QNk5KwDF+Bvz+PyObkmSYjI5ksVUYtjW7AU22r2NKcfLJcXp96hkDWU3+XndOsUb+AQ9QhfzfCT2O+CNWT5Tw==
toidentifier@1.0.1:
version "1.0.1"
resolved "https://registry.yarnpkg.com/toidentifier/-/toidentifier-1.0.1.tgz#3be34321a88a820ed1bd80dfaa33e479fbb8dd35"
integrity sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==
tough-cookie@^2.3.3, tough-cookie@~2.5.0:
version "2.5.0"
resolved "https://registry.yarnpkg.com/tough-cookie/-/tough-cookie-2.5.0.tgz#cd9fb2a0aa1d5a12b473bd9fb96fa3dcff65ade2"
integrity sha512-nlLsUzgm1kfLXSXfRZMc1KLAugd4hqJHDTvc2hDIwS3mZAfMEuMbc03SujMF+GEcpaX/qboeycw6iO8JwVv2+g==
dependencies:
psl "^1.1.28"
punycode "^2.1.1"
tough-cookie@^3.0.1:
version "3.0.1"
resolved "https://registry.yarnpkg.com/tough-cookie/-/tough-cookie-3.0.1.tgz#9df4f57e739c26930a018184887f4adb7dca73b2"
integrity sha512-yQyJ0u4pZsv9D4clxO69OEjLWYw+jbgspjTue4lTQZLfV0c5l1VmK2y1JK8E9ahdpltPOaAThPcp5nKPUgSnsg==
dependencies:
ip-regex "^2.1.0"
psl "^1.1.28"
punycode "^2.1.1"
tr46@^1.0.1:
version "1.0.1"
resolved "https://registry.yarnpkg.com/tr46/-/tr46-1.0.1.tgz#a8b13fd6bfd2489519674ccde55ba3693b706d09"
integrity sha1-qLE/1r/SSJUZZ0zN5VujaTtwbQk=
dependencies:
punycode "^2.1.0"
tunnel-agent@^0.6.0:
version "0.6.0"
resolved "https://registry.yarnpkg.com/tunnel-agent/-/tunnel-agent-0.6.0.tgz#27a5dea06b36b04a0a9966774b290868f0fc40fd"
integrity sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w==
dependencies:
safe-buffer "^5.0.1"
tweetnacl@^0.14.3, tweetnacl@~0.14.0:
version "0.14.5"
resolved "https://registry.yarnpkg.com/tweetnacl/-/tweetnacl-0.14.5.tgz#5ae68177f192d4456269d108afa93ff8743f4f64"
integrity sha512-KXXFFdAbFXY4geFIwoyNK+f5Z1b7swfXABfL7HXCmoIWMKU3dmS26672A4EeQtDzLKy7SXmfBu51JolvEKwtGA==
type-check@~0.3.2:
version "0.3.2"
resolved "https://registry.yarnpkg.com/type-check/-/type-check-0.3.2.tgz#5884cab512cf1d355e3fb784f30804b2b520db72"
integrity sha1-WITKtRLPHTVeP7eE8wgEsrUg23I=
dependencies:
prelude-ls "~1.1.2"
type-is@~1.6.18:
version "1.6.18"
resolved "https://registry.yarnpkg.com/type-is/-/type-is-1.6.18.tgz#4e552cd05df09467dcbc4ef739de89f2cf37c131"
integrity sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==
dependencies:
media-typer "0.3.0"
mime-types "~2.1.24"
unpipe@1.0.0, unpipe@~1.0.0:
version "1.0.0"
resolved "https://registry.yarnpkg.com/unpipe/-/unpipe-1.0.0.tgz#b2bf4ee8514aae6165b4817829d21b2ef49904ec"
integrity sha1-sr9O6FFKrmFltIF4KdIbLvSZBOw=
uri-js@^4.2.2:
version "4.4.1"
resolved "https://registry.yarnpkg.com/uri-js/-/uri-js-4.4.1.tgz#9b1a52595225859e55f669d928f88c6c57f2a77e"
integrity sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==
dependencies:
punycode "^2.1.0"
utils-merge@1.0.1:
version "1.0.1"
resolved "https://registry.yarnpkg.com/utils-merge/-/utils-merge-1.0.1.tgz#9f95710f50a267947b2ccc124741c1028427e713"
integrity sha1-n5VxD1CiZ5R7LMwSR0HBAoQn5xM=
uuid@^3.3.2:
version "3.4.0"
resolved "https://registry.yarnpkg.com/uuid/-/uuid-3.4.0.tgz#b23e4358afa8a202fe7a100af1f5f883f02007ee"
integrity sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==
vary@~1.1.2:
version "1.1.2"
resolved "https://registry.yarnpkg.com/vary/-/vary-1.1.2.tgz#2299f02c6ded30d4a5961b0b9f74524a18f634fc"
integrity sha1-IpnwLG3tMNSllhsLn3RSShj2NPw=
verror@1.10.0:
version "1.10.0"
resolved "https://registry.yarnpkg.com/verror/-/verror-1.10.0.tgz#3a105ca17053af55d6e270c1f8288682e18da400"
integrity sha512-ZZKSmDAEFOijERBLkmYfJ+vmk3w+7hOLYDNkRCuRuMJGEmqYNCNLyBBFwWKVMhfwaEF3WOd0Zlw86U/WC/+nYw==
dependencies:
assert-plus "^1.0.0"
core-util-is "1.0.2"
extsprintf "^1.2.0"
w3c-hr-time@^1.0.1:
version "1.0.2"
resolved "https://registry.yarnpkg.com/w3c-hr-time/-/w3c-hr-time-1.0.2.tgz#0a89cdf5cc15822df9c360543676963e0cc308cd"
integrity sha512-z8P5DvDNjKDoFIHK7q8r8lackT6l+jo/Ye3HOle7l9nICP9lf1Ci25fy9vHd0JOWewkIFzXIEig3TdKT7JQ5fQ==
dependencies:
browser-process-hrtime "^1.0.0"
w3c-xmlserializer@^1.1.2:
version "1.1.2"
resolved "https://registry.yarnpkg.com/w3c-xmlserializer/-/w3c-xmlserializer-1.1.2.tgz#30485ca7d70a6fd052420a3d12fd90e6339ce794"
integrity sha512-p10l/ayESzrBMYWRID6xbuCKh2Fp77+sA0doRuGn4tTIMrrZVeqfpKjXHY+oDh3K4nLdPgNwMTVP6Vp4pvqbNg==
dependencies:
domexception "^1.0.1"
webidl-conversions "^4.0.2"
xml-name-validator "^3.0.0"
webidl-conversions@^4.0.2:
version "4.0.2"
resolved "https://registry.yarnpkg.com/webidl-conversions/-/webidl-conversions-4.0.2.tgz#a855980b1f0b6b359ba1d5d9fb39ae941faa63ad"
integrity sha512-YQ+BmxuTgd6UXZW3+ICGfyqRyHXVlD5GtQr5+qjiNW7bF0cqrzX500HVXPBOvgXb5YnzDd+h0zqyv61KUD7+Sg==
whatwg-encoding@^1.0.1, whatwg-encoding@^1.0.5:
version "1.0.5"
resolved "https://registry.yarnpkg.com/whatwg-encoding/-/whatwg-encoding-1.0.5.tgz#5abacf777c32166a51d085d6b4f3e7d27113ddb0"
integrity sha512-b5lim54JOPN9HtzvK9HFXvBma/rnfFeqsic0hSpjtDbVxR3dJKLc+KB4V6GgiGOvl7CY/KNh8rxSo9DKQrnUEw==
dependencies:
iconv-lite "0.4.24"
whatwg-mimetype@^2.2.0, whatwg-mimetype@^2.3.0:
version "2.3.0"
resolved "https://registry.yarnpkg.com/whatwg-mimetype/-/whatwg-mimetype-2.3.0.tgz#3d4b1e0312d2079879f826aff18dbeeca5960fbf"
integrity sha512-M4yMwr6mAnQz76TbJm914+gPpB/nCwvZbJU28cUD6dR004SAxDLOOSUaB1JDRqLtaOV/vi0IC5lEAGFgrjGv/g==
whatwg-url@^7.0.0:
version "7.1.0"
resolved "https://registry.yarnpkg.com/whatwg-url/-/whatwg-url-7.1.0.tgz#c2c492f1eca612988efd3d2266be1b9fc6170d06"
integrity sha512-WUu7Rg1DroM7oQvGWfOiAK21n74Gg+T4elXEQYkOhtyLeWiJFoOGLXPKI/9gzIie9CtwVLm8wtw6YJdKyxSjeg==
dependencies:
lodash.sortby "^4.7.0"
tr46 "^1.0.1"
webidl-conversions "^4.0.2"
word-wrap@~1.2.3:
version "1.2.3"
resolved "https://registry.yarnpkg.com/word-wrap/-/word-wrap-1.2.3.tgz#610636f6b1f703891bd34771ccb17fb93b47079c"
integrity sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ==
ws@^7.0.0:
version "7.5.7"
resolved "https://registry.yarnpkg.com/ws/-/ws-7.5.7.tgz#9e0ac77ee50af70d58326ecff7e85eb3fa375e67"
integrity sha512-KMvVuFzpKBuiIXW3E4u3mySRO2/mCHSyZDJQM5NQ9Q9KHWHWh0NHgfbRMLLrceUK5qAL4ytALJbpRMjixFZh8A==
xml-name-validator@^3.0.0:
version "3.0.0"
resolved "https://registry.yarnpkg.com/xml-name-validator/-/xml-name-validator-3.0.0.tgz#6ae73e06de4d8c6e47f9fb181f78d648ad457c6a"
integrity sha512-A5CUptxDsvxKJEU3yO6DuWBSJz/qizqzJKOMIfUJHETbBw/sFaDxgd6fxm1ewUaM0jZ444Fc5vC5ROYurg/4Pw==
xmlchars@^2.1.1:
version "2.2.0"
resolved "https://registry.yarnpkg.com/xmlchars/-/xmlchars-2.2.0.tgz#060fe1bcb7f9c76fe2a17db86a9bc3ab894210cb"
integrity sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw==


@@ -3,7 +3,7 @@
Download MeiliSearch with:
```
-wget https://github.com/meilisearch/MeiliSearch/releases/download/v0.11.1/meilisearch-linux-amd64
+wget https://github.com/meilisearch/meilisearch/releases/download/v0.27.0/meilisearch-linux-amd64
chmod +x meilisearch-linux-amd64
```

webapp/.gitignore (vendored): 5 lines changed

@@ -1,5 +0,0 @@
.DS_Store
/node_modules/
/src/node_modules/@sapper/
yarn-error.log
/__sapper__/


@@ -1,152 +0,0 @@
# sapper-template
The default template for setting up a [Sapper](https://github.com/sveltejs/sapper) project. It can use either Rollup or webpack as its bundler.
## Getting started
### Using `degit`
To create a new Sapper project based on Rollup locally, run
```bash
npx degit "sveltejs/sapper-template#rollup" my-app
```
For a webpack-based project, instead run
```bash
npx degit "sveltejs/sapper-template#webpack" my-app
```
[`degit`](https://github.com/Rich-Harris/degit) is a scaffolding tool that lets you create a directory from a branch in a repository.
Replace `my-app` with the path where you wish to create the project.
### Using GitHub templates
Alternatively, you can create the new project as a GitHub repository using GitHub's template feature.
Go to either [sapper-template-rollup](https://github.com/sveltejs/sapper-template-rollup) or [sapper-template-webpack](https://github.com/sveltejs/sapper-template-webpack) and click on "Use this template" to create a new project repository initialized by the template.
### Running the project
Once you have created the project, install dependencies and run the project in development mode:
```bash
cd my-app
npm install # or yarn
npm run dev
```
This will start the development server on [localhost:3000](http://localhost:3000). Open it and click around.
You now have a fully functional Sapper project! To get started developing, consult [sapper.svelte.dev](https://sapper.svelte.dev).
### Using TypeScript
By default, the template uses plain JavaScript. If you wish to use TypeScript instead, you need to make some changes to the project:
* Add `typescript` as well as typings as dependencies in `package.json`
* Configure the bundler to use [`svelte-preprocess`](https://github.com/sveltejs/svelte-preprocess) and transpile the TypeScript code.
* Add a `tsconfig.json` file
* Update the project code to TypeScript
The template comes with a script that will perform these changes for you by running
```bash
node scripts/setupTypeScript.js
```
`@sapper` dependencies are resolved through `src/node_modules/@sapper`, which is created during the build. You therefore need to run or build the project once to avoid warnings about missing dependencies.
The script does not support webpack at the moment.
## Directory structure
Sapper expects to find two directories in the root of your project — `src` and `static`.
### src
The [src](src) directory contains the entry points for your app — `client.js`, `server.js` and (optionally) a `service-worker.js` — along with a `template.html` file and a `routes` directory.
#### src/routes
This is the heart of your Sapper app. There are two kinds of routes — *pages*, and *server routes*.
**Pages** are Svelte components written in `.svelte` files. When a user first visits the application, they will be served a server-rendered version of the route in question, plus some JavaScript that 'hydrates' the page and initialises a client-side router. From that point forward, navigating to other pages is handled entirely on the client for a fast, app-like feel. (Sapper will preload and cache the code for these subsequent pages, so that navigation is instantaneous.)
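A page can be as small as a single component file, e.g. this hypothetical `src/routes/hello.svelte` (not part of the template):
```
<script>
  // Hypothetical src/routes/hello.svelte: served at /hello
  export let name = 'world';
</script>
<h1>Hello {name}!</h1>
```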
**Server routes** are modules written in `.js` files that export functions corresponding to HTTP methods. Each function receives Express `request` and `response` objects as arguments, plus a `next` function. This is useful for creating a JSON API, for example.
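For example, a minimal server route might look like this sketch (the `src/routes/ping.js` path and the JSON payload are illustrative, not part of the template):
```js
// src/routes/ping.js: a hypothetical server route; GET /ping returns JSON
export function get(req, res, next) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ pong: true }));
}
```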
There are three simple rules for naming the files that define your routes (a worked example follows the list):
* A file called `src/routes/about.svelte` corresponds to the `/about` route. A file called `src/routes/blog/[slug].svelte` corresponds to the `/blog/:slug` route, in which case `params.slug` is available to the route
* The file `src/routes/index.svelte` (or `src/routes/index.js`) corresponds to the root of your app. `src/routes/about/index.svelte` is treated the same as `src/routes/about.svelte`.
* Files and directories with a leading underscore do *not* create routes. This allows you to colocate helper modules and components with the routes that depend on them — for example you could have a file called `src/routes/_helpers/datetime.js` and it would *not* create a `/_helpers/datetime` route.
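As a worked example, a hypothetical `src/routes/blog/[slug].svelte` can read the parameter that the `[slug]` part of its file name captures:
```
<script context="module">
  // Hypothetical src/routes/blog/[slug].svelte: visiting /blog/hello-world
  // calls preload with params.slug === 'hello-world'
  export async function preload({ params }) {
    const res = await this.fetch(`blog/${params.slug}.json`);
    const post = await res.json();
    return { post };
  }
</script>
```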
#### src/node_modules/images
Images added to `src/node_modules/images` can be imported into your code using `import 'images/<filename>'`. They will be given a dynamically generated filename containing a hash, allowing them to be cached efficiently and served from a CDN.
See [`index.svelte`](src/routes/index.svelte) for an example.
#### src/node_modules/@sapper
This directory is managed by Sapper and generated when building. It contains all the code you import from `@sapper` modules.
### static
The [static](static) directory contains static assets that should be served publicly. Files in this directory will be available directly under the root URL, e.g. an `image.jpg` will be available as `/image.jpg`.
The default [service-worker.js](src/service-worker.js) will preload and cache these files by retrieving a list of `files` from the generated manifest:
```js
import { files } from '@sapper/service-worker';
```
If you have static files you do not want to cache, you should exclude them from this list after importing it (and before passing it to `cache.addAll`).
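A minimal sketch of that exclusion, assuming a hypothetical `/stats.csv` that should never be pre-cached:
```js
import { files } from '@sapper/service-worker';
// Hypothetical filter: keep every generated file except /stats.csv
const to_cache = files.filter((file) => file !== '/stats.csv');
// ...then hand to_cache to cache.addAll(to_cache) as usual
```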
Static files are served using [sirv](https://github.com/lukeed/sirv).
## Bundler configuration
Sapper uses Rollup or webpack to provide code-splitting and dynamic imports, as well as compiling your Svelte components. With webpack, it also provides hot module reloading. As long as you don't do anything daft, you can edit the configuration files to add whatever plugins you'd like.
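For instance, registering one extra Rollup plugin for the client bundle could look like this sketch (the JSON plugin is purely illustrative; the template does not ship with it):
```js
// rollup.config.js (excerpt): hypothetical extra plugin for the client bundle
import json from '@rollup/plugin-json';
export default {
  client: {
    // ...the template's existing input and output settings
    plugins: [
      json(), // lets client code import .json files directly
      // ...the template's existing plugins
    ],
  },
  // server and serviceworker sections stay unchanged
};
```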
## Production mode and deployment
To start a production version of your app, run `npm run build && npm start`. This will disable live reloading, and activate the appropriate bundler plugins.
You can deploy your application to any environment that supports Node 10 or above. As an example, to deploy to [Vercel Now](https://vercel.com) when using `sapper export`, run these commands:
```bash
npm install -g vercel
vercel
```
If your app can't be exported to a static site, you can use the [now-sapper](https://github.com/thgh/now-sapper) builder. You can find instructions on how to do so in its [README](https://github.com/thgh/now-sapper#basic-usage).
## Using external components
When using Svelte components installed from npm, such as [@sveltejs/svelte-virtual-list](https://github.com/sveltejs/svelte-virtual-list), Svelte needs the original component source (rather than any precompiled JavaScript that ships with the component). This allows the component to be rendered server-side, and also keeps your client-side app smaller.
Because of that, it's essential that the bundler doesn't treat the package as an *external dependency*. You can either modify the `external` option under `server` in [rollup.config.js](rollup.config.js) or the `externals` option in [webpack.config.js](webpack.config.js), or simply install the package to `devDependencies` rather than `dependencies`, which will cause it to get bundled (and therefore compiled) with your app:
```bash
npm install -D @sveltejs/svelte-virtual-list
```
## Bugs and feedback
Sapper is in early development, and may have the odd rough edge here and there. Please be vocal over on the [Sapper issue tracker](https://github.com/sveltejs/sapper/issues).


@@ -1,33 +0,0 @@
{
"name": "TODO",
"description": "TODO",
"version": "0.0.1",
"scripts": {
"dev": "sapper dev",
"build": "sapper build",
"export": "sapper export",
"start": "node __sapper__/build"
},
"dependencies": {
"@polka/redirect": "^1.0.0-next.0",
"body-parser": "^1.19.0",
"compression": "^1.7.1",
"date-fns": "^2.16.1",
"dompurify": "^2.2.2",
"form-data": "^3.0.0",
"isomorphic-fetch": "^3.0.0",
"jsdom": "^16.4.0",
"lodash": "^4.17.20",
"node-fetch": "^2.6.1",
"polka": "next",
"sirv": "^1.0.0"
},
"devDependencies": {
"file-loader": "^6.0.0",
"sapper": "^0.28.0",
"svelte": "^3.17.3",
"svelte-loader": "^2.9.0",
"webpack": "^4.7.0",
"webpack-modules": "^1.0.0"
}
}


@@ -1,307 +0,0 @@
/**
* Run this script to convert the project to TypeScript. This is only guaranteed to work
* on the unmodified default template; if you have made code changes, you will likely need
* to touch up the generated project manually.
*/
// @ts-check
const fs = require('fs');
const path = require('path');
const { argv } = require('process');
const projectRoot = argv[2] || path.join(__dirname, '..');
const isRollup = fs.existsSync(path.join(projectRoot, "rollup.config.js"));
function warn(message) {
console.warn('Warning: ' + message);
}
function replaceInFile(fileName, replacements) {
if (fs.existsSync(fileName)) {
let contents = fs.readFileSync(fileName, 'utf8');
let hadUpdates = false;
replacements.forEach(([from, to]) => {
const newContents = contents.replace(from, to);
const isAlreadyApplied = typeof to !== 'string' || contents.includes(to);
if (newContents !== contents) {
contents = newContents;
hadUpdates = true;
} else if (!isAlreadyApplied) {
warn(`Wanted to update "${from}" in ${fileName}, but did not find it.`);
}
});
if (hadUpdates) {
fs.writeFileSync(fileName, contents);
} else {
console.log(`${fileName} had already been updated.`);
}
} else {
warn(`Wanted to update ${fileName} but the file did not exist.`);
}
}
function createFile(fileName, contents) {
if (fs.existsSync(fileName)) {
warn(`Wanted to create ${fileName}, but it already existed. Leaving existing file.`);
} else {
fs.writeFileSync(fileName, contents);
}
}
function addDepsToPackageJson() {
const pkgJSONPath = path.join(projectRoot, 'package.json');
const packageJSON = JSON.parse(fs.readFileSync(pkgJSONPath, 'utf8'));
packageJSON.devDependencies = Object.assign(packageJSON.devDependencies, {
...(isRollup ? { '@rollup/plugin-typescript': '^6.0.0' } : { 'ts-loader': '^8.0.4' }),
'@tsconfig/svelte': '^1.0.10',
'@types/compression': '^1.7.0',
'@types/node': '^14.11.1',
'@types/polka': '^0.5.1',
'svelte-check': '^1.0.46',
'svelte-preprocess': '^4.3.0',
tslib: '^2.0.1',
typescript: '^4.0.3'
});
// Add script for checking
packageJSON.scripts = Object.assign(packageJSON.scripts, {
validate: 'svelte-check --ignore src/node_modules/@sapper'
});
// Write the package JSON
fs.writeFileSync(pkgJSONPath, JSON.stringify(packageJSON, null, ' '));
}
function changeJsExtensionToTs(dir) {
const elements = fs.readdirSync(dir, { withFileTypes: true });
for (let i = 0; i < elements.length; i++) {
if (elements[i].isDirectory()) {
changeJsExtensionToTs(path.join(dir, elements[i].name));
} else if (elements[i].name.match(/^[^_]((?!json).)*js$/)) {
fs.renameSync(path.join(dir, elements[i].name), path.join(dir, elements[i].name).replace('.js', '.ts'));
}
}
}
function updateSingleSvelteFile({ view, vars, contextModule }) {
replaceInFile(path.join(projectRoot, 'src', `${view}.svelte`), [
[/(?:<script)(( .*?)*?)>/gm, (m, attrs) => `<script${attrs}${!attrs.includes('lang="ts"') ? ' lang="ts"' : ''}>`],
...(vars ? vars.map(({ name, type }) => [`export let ${name};`, `export let ${name}: ${type};`]) : []),
...(contextModule ? contextModule.map(({ js, ts }) => [js, ts]) : [])
]);
}
// Switch the *.svelte file to use TS
function updateSvelteFiles() {
[
{
view: 'components/Nav',
vars: [{ name: 'segment', type: 'string' }]
},
{
view: 'routes/_layout',
vars: [{ name: 'segment', type: 'string' }]
},
{
view: 'routes/_error',
vars: [
{ name: 'status', type: 'number' },
{ name: 'error', type: 'Error' }
]
},
{
view: 'routes/blog/index',
vars: [{ name: 'posts', type: '{ slug: string; title: string, html: any }[]' }],
contextModule: [
{
js: '.then(r => r.json())',
ts: '.then((r: { json: () => any; }) => r.json())'
},
{
js: '.then(posts => {',
ts: '.then((posts: { slug: string; title: string, html: any }[]) => {'
}
]
},
{
view: 'routes/blog/[slug]',
vars: [{ name: 'post', type: '{ slug: string; title: string, html: any }' }]
}
].forEach(updateSingleSvelteFile);
}
function updateRollupConfig() {
// Edit rollup config
replaceInFile(path.join(projectRoot, 'rollup.config.js'), [
// Edit imports
[
/'rollup-plugin-terser';\n(?!import sveltePreprocess)/,
`'rollup-plugin-terser';
import sveltePreprocess from 'svelte-preprocess';
import typescript from '@rollup/plugin-typescript';
`
],
// Edit inputs
[
/(?<!THIS_IS_UNDEFINED[^\n]*\n\s*)onwarn\(warning\);/,
`(warning.code === 'THIS_IS_UNDEFINED') ||\n\tonwarn(warning);`
],
[/input: config.client.input\(\)(?!\.replace)/, `input: config.client.input().replace(/\\.js$/, '.ts')`],
[
/input: config.server.input\(\)(?!\.replace)/,
`input: { server: config.server.input().server.replace(/\\.js$/, ".ts") }`
],
[
/input: config.serviceworker.input\(\)(?!\.replace)/,
`input: config.serviceworker.input().replace(/\\.js$/, '.ts')`
],
// Add preprocess to the svelte config; this is tricky because there's no easy signifier.
// Instead we look for 'hydratable: true,'
[/hydratable: true(?!,\n\s*preprocess)/g, 'hydratable: true,\n\t\t\t\tpreprocess: sveltePreprocess()'],
// Add TypeScript
[/commonjs\(\)(?!,\n\s*typescript)/g, 'commonjs(),\n\t\t\ttypescript({ sourceMap: dev })']
]);
}
function updateWebpackConfig() {
// Edit webpack config
replaceInFile(path.join(projectRoot, 'webpack.config.js'), [
// Edit imports
[
/require\('webpack-modules'\);\n(?!const sveltePreprocess)/,
`require('webpack-modules');\nconst sveltePreprocess = require('svelte-preprocess');\n`
],
// Edit extensions
[
/\['\.mjs', '\.js', '\.json', '\.svelte', '\.html'\]/,
`['.mjs', '.js', '.ts', '.json', '.svelte', '.html']`
],
// Edit entries
[
/entry: config\.client\.entry\(\)/,
`entry: { main: config.client.entry().main.replace(/\\.js$/, '.ts') }`
],
[
/entry: config\.server\.entry\(\)/,
`entry: { server: config.server.entry().server.replace(/\\.js$/, '.ts') }`
],
[
/entry: config\.serviceworker\.entry\(\)/,
`entry: { 'service-worker': config.serviceworker.entry()['service-worker'].replace(/\\.js$/, '.ts') }`
],
// Add preprocess to the svelte config; this is tricky because there's no easy signifier.
// Instead we look for 'hydratable: true,'
[
/hydratable: true(?!,\n\s*preprocess)/g,
'hydratable: true,\n\t\t\t\t\t\t\tpreprocess: sveltePreprocess()'
],
// Add TypeScript rules for client and server
[
/module: {\n\s*rules: \[\n\s*(?!{\n\s*test: \/\\\.ts\$\/)/g,
`module: {\n\t\t\trules: [\n\t\t\t\t{\n\t\t\t\t\ttest: /\\.ts$/,\n\t\t\t\t\tloader: 'ts-loader'\n\t\t\t\t},\n\t\t\t\t`
],
// Add TypeScript rules for serviceworker
[
/output: config\.serviceworker\.output\(\),\n\s*(?!module)/,
`output: config.serviceworker.output(),\n\t\tmodule: {\n\t\t\trules: [\n\t\t\t\t{\n\t\t\t\t\ttest: /\\.ts$/,\n\t\t\t\t\tloader: 'ts-loader'\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t`
],
// Edit outputs
[
/output: config\.serviceworker\.output\(\),\n\s*(?!resolve)/,
`output: config.serviceworker.output(),\n\t\tresolve: { extensions: ['.mjs', '.js', '.ts', '.json'] },\n\t\t`
]
]);
}
function updateServiceWorker() {
replaceInFile(path.join(projectRoot, 'src', 'service-worker.ts'), [
[`shell.concat(files);`, `(shell as string[]).concat(files as string[]);`],
[`self.skipWaiting();`, `((self as any) as ServiceWorkerGlobalScope).skipWaiting();`],
[`self.clients.claim();`, `((self as any) as ServiceWorkerGlobalScope).clients.claim();`],
[`fetchAndCache(request)`, `fetchAndCache(request: Request)`],
[`self.addEventListener('activate', event =>`, `self.addEventListener('activate', (event: ExtendableEvent) =>`],
[`self.addEventListener('install', event =>`, `self.addEventListener('install', (event: ExtendableEvent) =>`],
[`addEventListener('fetch', event =>`, `addEventListener('fetch', (event: FetchEvent) =>`],
]);
}
function createTsConfig() {
const tsconfig = `{
"extends": "@tsconfig/svelte/tsconfig.json",
"compilerOptions": {
"lib": ["DOM", "ES2017", "WebWorker"]
},
"include": ["src/**/*", "src/node_modules/**/*"],
"exclude": ["node_modules/*", "__sapper__/*", "static/*"]
}`;
createFile(path.join(projectRoot, 'tsconfig.json'), tsconfig);
}
// Adds the extension recommendation
function configureVsCode() {
const dir = path.join(projectRoot, '.vscode');
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir);
}
createFile(path.join(projectRoot, '.vscode', 'extensions.json'), `{"recommendations": ["svelte.svelte-vscode"]}`);
}
function deleteThisScript() {
fs.unlinkSync(path.join(__filename));
// Check for Mac's DS_store file, and if it's the only one left remove it
const remainingFiles = fs.readdirSync(path.join(__dirname));
if (remainingFiles.length === 1 && remainingFiles[0] === '.DS_store') {
fs.unlinkSync(path.join(__dirname, '.DS_store'));
}
// Check if the scripts folder is empty
if (fs.readdirSync(path.join(__dirname)).length === 0) {
// Remove the scripts folder
fs.rmdirSync(path.join(__dirname));
}
}
console.log(`Adding TypeScript with ${isRollup ? "Rollup" : "webpack" }...`);
addDepsToPackageJson();
changeJsExtensionToTs(path.join(projectRoot, 'src'));
updateSvelteFiles();
if (isRollup) {
updateRollupConfig();
} else {
updateWebpackConfig();
}
updateServiceWorker();
createTsConfig();
configureVsCode();
// Delete this script, but not during testing
if (!argv[2]) {
deleteThisScript();
}
console.log('Converted to TypeScript.');
if (fs.existsSync(path.join(projectRoot, 'node_modules'))) {
console.log(`
Next:
1. run 'npm install' again to install TypeScript dependencies
2. run 'npm run build' for the @sapper imports in your project to work
`);
}


@@ -1,39 +0,0 @@
/**
* These declarations tell TypeScript that we allow import of images, e.g.
* ```
<script lang='ts'>
import successkid from 'images/successkid.jpg';
</script>
<img src="{successkid}">
```
*/
declare module "*.gif" {
const value: string;
export = value;
}
declare module "*.jpg" {
const value: string;
export = value;
}
declare module "*.jpeg" {
const value: string;
export = value;
}
declare module "*.png" {
const value: string;
export = value;
}
declare module "*.svg" {
const value: string;
export = value;
}
declare module "*.webp" {
const value: string;
export = value;
}


@@ -1,5 +0,0 @@
import * as sapper from '@sapper/app';
sapper.start({
target: document.querySelector('#sapper')
});


@@ -1,81 +0,0 @@
<script>
import StoryInfo from "../components/StoryInfo.svelte";
import StoryMeta from "../components/StoryMeta.svelte";
export let story;
</script>
<style>
@import url(/fonts/Fonts.css);
.article :global(h1),
.article :global(h2),
.article :global(h3),
.article :global(h4),
.article :global(h5),
.article :global(h6) {
margin: 0 0 0.5em 0;
font-weight: 400;
line-height: 1.2;
}
.article :global(h1) {
font-size: 2rem;
}
@media only screen and (min-device-width: 320px) and (max-device-width: 480px) {
.article :global(h1) {
font-size: 1.5rem;
}
}
.article-title {
text-align: left;
}
.article-header {
padding: 0 0 1rem;
}
.article-body {
max-width: 45rem;
margin: 0 auto;
font: 1.2rem/1.5 "Apparatus SIL", sans-serif;
text-rendering: optimizeLegibility;
}
.article-body :global(figure) {
margin: 0;
}
.article-body :global(figcaption p),
.article-body :global(figcaption) {
padding: 0;
margin: 0;
}
.article-body :global(figcaption) {
font-style: italic;
margin: 0 1rem;
font-size: 0.9em;
text-align: justify;
}
.article-body :global(figure),
.article-body :global(video),
.article-body :global(img) {
max-width: 100%;
height: auto;
}
</style>
<article class="article">
<header class="article-header">
<h1 class="article-title">
{@html story.title}
</h1>
<section class="article-info">
<StoryInfo {story} />
</section>
<aside class="article-info">
<StoryMeta {story} />
</aside>
</header>
<section class="article-body">
{@html story.text}
</section>
</article>


@@ -1,106 +0,0 @@
<script>
import Time from "../components/Time.svelte";
export let story;
export let comment;
export let showComments = true;
let author = (comment.author || "").replace(/ /g, ""); // strip all spaces so the generated id stays valid
let id = `${author}-${comment.date}`;
function toggleComments() {
showComments = !showComments;
}
</script>
<style>
.comment {
margin: 0.5rem 0;
}
.comment:not(:first-of-type) {
margin: 0.5rem 0;
border-top: solid 1px #ddd;
padding: 0.5rem 0 0;
}
.comment-info {
color: #222;
}
.comment-author {
font-weight: 600;
padding: 0 0.4em 0.2em;
border-radius: 0.5em;
background: #f1f1f1;
color: #000;
}
.comment-author.is-op {
background: #333;
color: #fff;
}
.comment-text {
padding: 0 0.5rem;
color: #000;
}
.comment-text.is-collapsed {
height: 3rem;
overflow: hidden;
color: #888;
}
.comment-children {
margin-left: 0.5rem;
padding-left: 0.5rem;
border-left: solid 1px #000;
}
.toggle-children {
background: none;
border: none;
padding: 0 0.25rem;
color: inherit;
cursor: pointer;
}
.time-link {
text-decoration: none;
}
.time-link:hover {
text-decoration: underline;
}
.is-lighter {
color: #888;
}
</style>
<article class="comment" id="comment-{id}">
<header class="comment-info">
<span
class={comment.author === story.author ? 'comment-author is-op' : 'comment-author'}>{comment.author || '[Deleted]'}</span>
<a class="time-link" href="{story.id}#comment-{id}">
<Time date={comment.date} />
</a>
{#if comment.comments.length}
<button
class="toggle-children"
on:click={toggleComments}>{#if showComments}
[&ndash;]
{:else}[+]{/if}</button>
{/if}
</header>
<section class={showComments ? 'comment-text' : 'comment-text is-collapsed'}>
{@html comment.text}
</section>
{#if !showComments}
<div class="comment-children">
<button
class="toggle-children is-lighter"
on:click={toggleComments}>[expand]</button>
</div>
{/if}
{#if showComments && comment.comments.length}
<footer class="comment-children">
{#each comment.comments as child}
<svelte:self {story} comment={child} />
{/each}
</footer>
{/if}
</article>


@@ -1,16 +0,0 @@
<script>
import DOMPurify from "dompurify";
import { onMount } from "svelte";
export let html;
export let text;
let purify;
onMount(() => {
purify = (html) => DOMPurify.sanitize(html);
});
</script>
{#if purify}
{@html html}
{:else if text}{text}{/if}


@@ -1,156 +0,0 @@
<script>
import debounce from "lodash/debounce";
import { goto, prefetch, stores } from "@sapper/app";
export let segment;
const { page } = stores();
let search;
let isSearching;
let __handleSearch = debounce(_handleSearch, 300, {
trailing: true,
leading: false,
});
let handleSearch = (e) => {
isSearching = true;
__handleSearch(e);
};
page.subscribe((page) => {
setTimeout(() => {
if (segment === "search") {
search && search.focus();
}
}, 0);
});
async function _handleSearch(event) {
const url = `/search?q=${encodeURIComponent(event.target.value)}`;
await prefetch(url);
await goto(url);
isSearching = false;
}
</script>
<style>
[aria-current] {
position: relative;
display: inline-block;
}
[aria-current]::after {
position: absolute;
content: "";
width: calc(100% - 1em);
height: 2px;
background-color: rgb(255, 62, 0);
display: block;
bottom: -1px;
}
.navigation {
border-bottom: 1px solid rgba(255, 62, 0, 0.1);
font-weight: 300;
padding: 0;
}
.navigation-container {
margin: 0 auto;
padding: 0;
max-width: 64rem;
display: flex;
flex-direction: row;
justify-content: space-between;
}
/* @media (max-device-width: 480px) {
.navigation-container {
justify-content: space-evenly;
}
} */
.navigation-container > * {
vertical-align: middle;
}
.navigation-list {
margin: 0;
padding: 0;
display: flex;
flex-direction: row;
}
.navigation-item {
list-style: none;
}
.navigation-link {
text-decoration: none;
padding: 1em 0.5em;
display: block;
}
.navigation-input {
line-height: 2;
vertical-align: middle;
width: 30rem;
max-width: 45vw;
font-size: 1.1rem;
padding: 0.25em 0.5em;
margin: 0.25em 0.5em;
border-radius: 5px;
border: solid 1px #aaa;
}
input:focus {
box-shadow: 0 0 0.25rem rgba(0, 0, 0, 0.25);
}
.is-searching {
padding-right: 0.5rem;
background-image: url(/svg-loaders/black/grid.svg);
background-size: 1.2em 1.2em;
background-position: right 0.5em center;
background-repeat: no-repeat;
}
</style>
<svelte:head>
<link rel="preload" href="/svg-loaders/black/grid.svg" as="image" />
</svelte:head>
<nav class="navigation">
<div class="navigation-container">
<ul class="navigation-list" role="menu">
<li class="navigation-item">
<a
class="navigation-link"
aria-current={segment === undefined ? 'page' : undefined}
rel="prefetch"
href=".">
{#if [undefined, 'submit'].includes(segment)}
Qot. news
{:else}&larr; News feed{/if}
</a>
</li>
{#if [undefined, 'submit'].includes(segment)}
<li class="navigation-item">
<a
class="navigation-link"
aria-current={segment === 'submit' ? 'page' : undefined}
rel="prefetch"
href="/submit">
Submit
</a>
</li>
{/if}
</ul>
<form action="/search" method="GET" rel="prefetch" role="search">
<input
class="navigation-input {(isSearching && 'is-searching') || ''}"
id="search"
bind:this={search}
type="text"
name="q"
value={$page.query.q || ''}
placeholder="Search..."
on:keypress={handleSearch} />
</form>
</div>
</nav>


@@ -1,62 +0,0 @@
<script>
import { stores } from "@sapper/app";
export let href;
export let search;
export let count;
const { page } = stores();
let skip = 0;
let limit = 20;
let prevLink = "";
let nextLink = "";
page.subscribe((p) => {
count = Number(count);
skip = Number(p.query.skip) || 0;
limit = Number(p.query.limit) || 20;
let previous = new URLSearchParams(search || "");
let next = new URLSearchParams(search || "");
previous.append("skip", skip - Math.min(skip, limit));
previous.append("limit", limit);
next.append("skip", skip + limit);
next.append("limit", limit);
prevLink = href + "?" + previous.toString();
nextLink = href + "?" + next.toString();
});
</script>
<style>
.pagination {
margin: 3rem 0;
display: flex;
flex-direction: row;
justify-content: space-between;
}
.pagination-link {
font-size: 1.5rem;
text-decoration: none;
}
.pagination-link:hover {
text-decoration: underline;
}
.pagination-link.is-next {
margin-left: auto;
}
</style>
<div class="pagination">
{#if skip > 0}
<a class="pagination-link is-prev" href={prevLink} rel="prefetch">&larr;
Previous</a>
{/if}
{#if count >= limit}
<a class="pagination-link is-next" href={nextLink} rel="prefetch">Next
&rarr;</a>
{/if}
</div>


@@ -1,18 +0,0 @@
<script>
import Time from "../components/Time.svelte";
export let story;
</script>
<Time date={story.date} />
{#if story.author && story.author_link}
by
<a class="author" href={story.author_link}>{story.author}</a>
{:else if story.author}by <span class="author">{story.author}</span>{/if}
on
<a class="source" href={story.link || story.url}>{story.source}</a>
{#if story.score}&bull; {story.score} points{/if}
{#if Number(story.num_comments)}
&bull;
<a rel="prefetch" href="/{story.id}#comments">{story.num_comments}
comments</a>
{/if}


@@ -1,57 +0,0 @@
<script>
import { getLogoUrl } from "../utils/logos.js";
import StoryInfo from "../components/StoryInfo.svelte";
export let stories;
const host = (url) => new URL(url).hostname.replace(/^www\./, "");
</script>
<style>
.story-item {
margin: 0.5rem 0 0;
padding-left: 1.2em;
}
.story-icon,
.story-title {
font-size: 1.2rem;
}
.story-icon {
margin-left: -1.2rem;
}
.story-source::before {
content: "(";
}
.story-source::after {
content: ")";
}
.story-item :global(a) {
text-decoration: none;
}
.story-item :global(a:hover) {
text-decoration: underline;
}
</style>
{#each stories as story}
<article class="story-item">
<header class="story-header">
<img
src={getLogoUrl(story)}
alt="logo"
class="story-icon"
style="height: 1rem; width: 1rem;" />
<a class="story-title" rel="prefetch" href="/{story.id}">
{@html story.title}
</a>
<a
class="story-source"
href={story.url || story.link}>{host(story.url || story.link)}</a>
</header>
<aside class="story-info">
<StoryInfo {story} />
</aside>
</article>
{/each}
<slot />


@@ -1,30 +0,0 @@
<script>
export let story;
let host = new URL(story.url || story.link).hostname.replace(/^www\./, "");
</script>
<style>
ul {
margin: 0;
padding: 0;
}
li {
display: inline-block;
list-style-type: circle;
}
li:not(:first-of-type)::before {
content: " | ";
}
</style>
<ul>
{#if story.url}
<li>source: <a class="article-source" href={story.url}>{host}</a></li>
{/if}
{#if story.scraper && story.scraper_link}
<li>scraper: <a href={story.scraper_link}>{story.scraper}</a></li>
{:else if story.scraper}
<li>scraper: {story.scraper}</li>
{/if}
</ul>


@@ -1,11 +0,0 @@
<script>
import fromUnixTime from "date-fns/fromUnixTime";
import formatDistanceToNow from "date-fns/formatDistanceToNow";
export let date;
let d = fromUnixTime(date);
let datetime = d.toISOString();
let title = d.toLocaleString();
let dateString = formatDistanceToNow(d, { addSuffix: true });
</script>
<time {datetime} {title}>{dateString}</time>

Binary file not shown.

[Deleted image: 77 KiB]


@@ -1,17 +0,0 @@
import fetch from 'isomorphic-fetch';
import { purify, purifyArray } from './_purify';
const API_URL = process.env.API_URL || 'http://localhost:33842';
export async function get(req, res) {
const response = await fetch(`${API_URL}/api/${req.params.id}`);
res.writeHead(response.status, { 'Content-Type': response.headers.get('Content-Type') });
if (!response.ok) {
return res.end(await response.text());
}
const data = await response.json();
data.story = purify(data.story);
data.related = purifyArray(data.related);
res.end(JSON.stringify(data));
}


@@ -1,84 +0,0 @@
<script context="module">
export async function preload({ params }) {
const res = await this.fetch(`${params.id}.json`);
const data = await res.json();
if (res.status === 200) {
return { story: data.story, related: data.related };
} else {
this.error(res.status, data.message);
}
}
</script>
<script>
import fromUnixTime from "date-fns/fromUnixTime";
import Comment from "../components/Comment.svelte";
import Article from "../components/Article.svelte";
export let story;
export let related;
let others = related.filter(
(r) => r.id !== story.id && Number(r.num_comments)
);
let hasComments = related.some((r) => Number(r.num_comments));
</script>
<style>
.spacer {
margin: 3rem 0;
}
.single {
max-width: 56rem;
margin: 0 auto;
}
</style>
<svelte:head>
<title>{story.title}</title>
<meta property="og:title" content={story.title} />
<meta property="og:type" content="article" />
<meta
property="article:published_time"
content={fromUnixTime(story.date).toISOString()} />
<meta property="article:author" content={story.author || story.source} />
<meta property="og:description" content={story.excerpt || story.title} />
{#if story.image}
<meta property="og:image" content={story.image} />
{/if}
</svelte:head>
<section class="single">
<Article {story} />
{#if hasComments}
<hr class="spacer" />
<section id="comments">
<header>
<h2>Comments</h2>
{#if others.length}
<h3>
Other discussions:
{#each others as r}
{#if r.num_comments}
<a href="/{r.id}#comments" rel="prefetch">
{r.source}
({r.num_comments})
</a>
{/if}
{/each}
</h3>
{/if}
</header>
{#if story.comments.length}
<div class="comments">
{#each story.comments as comment}
<Comment {story} {comment} />
{/each}
</div>
{/if}
</section>
{/if}
</section>


@@ -1,40 +0,0 @@
<script>
export let status;
export let error;
const dev = process.env.NODE_ENV === 'development';
</script>
<style>
h1, p {
margin: 0 auto;
}
h1 {
font-size: 2.8em;
font-weight: 700;
margin: 0 0 0.5em 0;
}
p {
margin: 1em auto;
}
@media (min-width: 480px) {
h1 {
font-size: 4em;
}
}
</style>
<svelte:head>
<title>{status}</title>
</svelte:head>
<h1>{status}</h1>
<p>{error.message}</p>
{#if dev && error.stack}
<pre>{error.stack}</pre>
{/if}


@@ -1,21 +0,0 @@
<script>
import Nav from "../components/Nav.svelte";
export let segment;
</script>
<style>
main {
position: relative;
max-width: 64rem;
background-color: white;
padding: 0.5rem;
margin: 0 auto;
box-sizing: border-box;
}
</style>
<Nav {segment} />
<main>
<slot {segment} />
</main>


@@ -1,25 +0,0 @@
import createDOMPurify from 'dompurify';
import { JSDOM } from 'jsdom';
export const purify = (story, DOMPurify) => {
if (!DOMPurify) {
DOMPurify = createDOMPurify(new JSDOM('').window);
}
if (story.title) {
story.title = DOMPurify.sanitize(story.title);
}
if (story.text) {
story.text = DOMPurify.sanitize(story.text);
}
return story;
};
export const purifyArray = (array, DOMPurify) => {
if (array instanceof Array) {
if (!DOMPurify) {
DOMPurify = createDOMPurify(new JSDOM('').window);
}
return array.map(story => purify(story, DOMPurify));
}
return array;
};


@@ -1,20 +0,0 @@
import fetch from 'isomorphic-fetch';
import { purifyArray } from './_purify';
const API_URL = process.env.API_URL || 'http://localhost:33842';
export async function get(req, res) {
const { skip, limit } = {
skip: req.query.skip || 0,
limit: req.query.limit || 20,
};
const response = await fetch(`${API_URL}/api?skip=${skip}&limit=${limit}`);
res.writeHead(response.status, { 'Content-Type': response.headers.get('Content-Type') });
if (!response.ok) {
return res.end(await response.text());
}
const data = await response.json();
data.stories = purifyArray(data.stories);
res.end(JSON.stringify(data));
}


@@ -1,33 +0,0 @@
<script context="module">
export async function preload(page) {
const { skip, limit } = {
skip: page.query.skip || 0,
limit: page.query.limit || 20,
};
const res = await this.fetch(`index.json?skip=${skip}&limit=${limit}`);
const data = await res.json();
if (res.status === 200) {
return { stories: data.stories, skip, limit };
} else {
this.error(res.status, data.message);
}
}
</script>
<script>
import StoryList from "../components/StoryList.svelte";
import Pagination from "../components/Pagination.svelte";
export let stories;
</script>
<svelte:head>
<title>QotNews</title>
<meta property="og:title" content="QotNews" />
<meta property="og:type" content="website" />
</svelte:head>
<StoryList {stories}>
<Pagination href="/" count={stories.length} />
</StoryList>


@@ -1,20 +0,0 @@
import fetch from 'isomorphic-fetch';
import { purifyArray } from './_purify';
const API_URL = process.env.API_URL || 'http://localhost:33842';
export async function get(req, res) {
const { skip, limit } = {
skip: req.query.skip || 0,
limit: req.query.limit || 20,
};
const response = await fetch(`${API_URL}/api/search?q=${encodeURIComponent(req.query.q)}&skip=${skip}&limit=${limit}`);
res.writeHead(response.status, { 'Content-Type': response.headers.get('Content-Type') });
if (!response.ok) {
return res.end(await response.text());
}
const data = await response.json();
data.results = purifyArray(data.results);
res.end(JSON.stringify(data));
}


@@ -1,42 +0,0 @@
<script context="module">
export async function preload(page) {
const { skip, limit, q } = {
skip: page.query.skip || 0,
limit: page.query.limit || 20,
q: page.query.q || "",
};
const res = await this.fetch(
`search.json?q=${encodeURIComponent(q)}&skip=${skip}&limit=${limit}`
);
const data = await res.json();
if (res.status === 200) {
return { stories: data.results, skip, limit };
} else {
this.error(res.status, data.message);
}
}
</script>
<script>
import { stores } from "@sapper/app";
import StoryList from "../components/StoryList.svelte";
import Pagination from "../components/Pagination.svelte";
export let stories;
const { page } = stores();
</script>
<svelte:head>
<title>QotNews</title>
<meta property="og:title" content="QotNews" />
<meta property="og:type" content="website" />
</svelte:head>
<StoryList {stories}>
<Pagination
href="/search"
search="q={$page.query.q}"
count={stories.length} />
</StoryList>


@@ -1,17 +0,0 @@
import FormData from 'form-data';
import fetch from 'isomorphic-fetch';
import redirect from '@polka/redirect';
const API_URL = process.env.API_URL || 'http://localhost:33842';
export async function post(req, res) {
const body = new FormData();
body.append('url', req.body.url);
const response = await fetch(`${API_URL}/api/submit`, { method: "POST", body });
if (req.body.redirect) {
const { nid } = await response.json();
return redirect(res, 302, `/${nid}`);
}
res.writeHead(response.status, { 'Content-Type': response.headers.get('Content-Type') });
res.end(await response.text());
}


@@ -1,147 +0,0 @@
<script>
import { onMount } from "svelte";
import { goto, prefetch } from "@sapper/app";
let input;
let handleSubmit;
let hasError;
let isLoading;
onMount(() => {
setTimeout(() => {
input && input.focus();
}, 0);
handleSubmit = async () => {
isLoading = true;
hasError = false;
const url = input.value;
const response = await fetch(`submit.json`, {
headers: { "Content-Type": "application/json" },
method: "POST",
body: JSON.stringify({ url }),
});
if (!response.ok) {
hasError = true;
isLoading = false;
return;
}
const { nid } = await response.json();
await prefetch(`/${nid}`);
await goto(`/${nid}`);
};
});
</script>
<style>
section {
max-width: 45rem;
margin: 5rem auto 0;
}
form {
text-align: center;
width: 95%;
border: solid 1px #aaa;
margin: 3.5rem auto;
border-radius: 5px;
overflow: hidden;
display: flex;
flex-direction: row;
}
form:focus-within {
box-shadow: 0 0 0.25rem rgba(0, 0, 0, 0.25);
}
input {
width: 85%;
box-sizing: border-box;
padding: 0.5rem;
margin: 0;
font-size: 1.25rem;
line-height: 1.5;
border: none;
border-radius: 0;
background: #fff;
vertical-align: middle;
}
form:has(input:focus) {
box-shadow: inset 0 0 0.2rem rgba(0, 0, 0, 0.2);
}
button {
width: 15%;
box-sizing: border-box;
padding: 0.5rem;
margin: 0;
font-size: 1.25rem;
line-height: 1.5;
border: none;
border-left: solid 1px #aaa;
border-radius: 0;
background: #f1f1f1;
vertical-align: middle;
}
.loading,
.is-loading form,
.is-loading .error {
display: none;
}
.is-loading .loading {
display: block;
margin: 3.5rem auto 0;
}
.error {
display: none;
}
.has-error .error {
box-sizing: border-box;
height: 3rem;
padding: 0;
margin: 0;
color: darkred;
display: block;
}
.has-error form {
margin-top: 5rem;
}
</style>
<svelte:head>
<title>QotNews</title>
<meta property="og:title" content="QotNews" />
<meta property="og:type" content="website" />
<link rel="preload" href="/loading.svg" as="image" />
</svelte:head>
<section class="{isLoading ? 'is-loading' : ''} {hasError ? 'has-error' : ''}">
<img
class="loading"
src="/loading.svg"
alt="loading..."
width="200"
height="200" />
<form
action="submit.json"
method="POST"
on:submit|preventDefault={handleSubmit}
autocomplete="off">
<input
type="text"
name="url"
placeholder="Enter article link"
pattern="^https?:\/\/(www\.)?.*"
value=""
bind:this={input}
required />
<button value="true" name="redirect" type="submit">Go</button>
</form>
<p class="error">Something went wrong.</p>
</section>


@@ -1,20 +0,0 @@
import sirv from 'sirv';
import polka from 'polka';
import compression from 'compression';
import * as sapper from '@sapper/server';
import { json, urlencoded } from 'body-parser';

const { PORT, NODE_ENV } = process.env;
const dev = NODE_ENV === 'development';

polka()
  .use(
    json(),
    urlencoded(),
    compression({ threshold: 0 }),
    sirv('static', { dev }),
    sapper.middleware(),
  )
  .listen(PORT, err => {
    if (err) console.log('error', err);
  });
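
Worth noting: json() and urlencoded() are registered ahead of sapper.middleware(), which is what lets server routes such as the submit.json handler above read req.body. A minimal sketch of that ordering (port illustrative):

import polka from 'polka';
import * as sapper from '@sapper/server';
import { json, urlencoded } from 'body-parser';

polka()
  .use(json(), urlencoded()) // parse request bodies first…
  .use(sapper.middleware())  // …so routes/*.json handlers see req.body
  .listen(3000);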

@@ -1,86 +0,0 @@
import { timestamp, files, shell } from '@sapper/service-worker';

const ASSETS = `cache${timestamp}`;

// `shell` is an array of all the files generated by the bundler,
// `files` is an array of everything in the `static` directory
const to_cache = shell.concat(files);
const staticAssets = new Set(to_cache);

self.addEventListener('install', event => {
  event.waitUntil(
    caches
      .open(ASSETS)
      .then(cache => cache.addAll(to_cache))
      .then(() => {
        self.skipWaiting();
      })
  );
});

self.addEventListener('activate', event => {
  event.waitUntil(
    caches.keys().then(async keys => {
      // delete old caches
      for (const key of keys) {
        if (key !== ASSETS) await caches.delete(key);
      }
      self.clients.claim();
    })
  );
});

/**
 * Fetch the asset from the network and store it in the cache.
 * Fall back to the cache if the user is offline.
 */
async function fetchAndCache(request) {
  const cache = await caches.open(`offline${timestamp}`);
  try {
    const response = await fetch(request);
    cache.put(request, response.clone());
    return response;
  } catch (err) {
    const response = await cache.match(request);
    if (response) return response;
    throw err;
  }
}

self.addEventListener('fetch', event => {
  if (event.request.method !== 'GET' || event.request.headers.has('range')) return;

  const url = new URL(event.request.url);

  // don't try to handle e.g. data: URIs
  const isHttp = url.protocol.startsWith('http');
  const isDevServerRequest = url.hostname === self.location.hostname && url.port !== self.location.port;
  const isStaticAsset = url.host === self.location.host && staticAssets.has(url.pathname);
  const skipBecauseUncached = event.request.cache === 'only-if-cached' && !isStaticAsset;

  if (isHttp && !isDevServerRequest && !skipBecauseUncached) {
    event.respondWith(
      (async () => {
        // always serve static files and bundler-generated assets from cache.
        // if your application has other URLs with data that will never change,
        // set this variable to true for them and they will only be fetched once.
        const cachedAsset = isStaticAsset && await caches.match(event.request);

        // for pages, you might want to serve a shell `service-worker-index.html` file,
        // which Sapper has generated for you. It's not right for every
        // app, but if it's right for yours then uncomment this section
        /*
        if (!cachedAsset && url.origin === self.origin && routes.find(route => route.pattern.test(url.pathname))) {
          return caches.match('/service-worker-index.html');
        }
        */

        return cachedAsset || fetchAndCache(event.request);
      })()
    );
  }
});
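
The fetchAndCache helper above is network-first: fresh content when online, cached content when offline. For assets that never change, the usual counterpart is cache-first; a minimal sketch using the same Cache API and the timestamp import above (not part of the original file):

async function cacheFirst(request) {
  const cached = await caches.match(request);
  if (cached) return cached;                  // cache hit: skip the network
  const response = await fetch(request);      // cache miss: go to the network…
  const cache = await caches.open(`offline${timestamp}`);
  await cache.put(request, response.clone()); // …and remember it for next time
  return response;
}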

@@ -1,21 +0,0 @@
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,initial-scale=1.0">
  <meta name="theme-color" content="#333333">

  %sapper.base%

  <link rel="stylesheet" href="global.css">
  <link rel="manifest" href="manifest.json" crossorigin="use-credentials">
  <link rel="icon" type="image/png" href="favicon.png">

  %sapper.scripts%
  %sapper.styles%
  %sapper.head%
</head>
<body>
  <div id="sapper">%sapper.html%</div>
</body>
</html>

File diff suppressed because one or more lines are too long

Binary file not shown.

Before: image, 6.5 KiB

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

@@ -1,28 +0,0 @@
@font-face {
  font-family: 'Apparatus SIL';
  src: url('AppSILR.ttf') format('truetype');
}

@font-face {
  font-family: 'Apparatus SIL';
  font-style: italic;
  src: url('AppSILI.ttf') format('truetype');
}

@font-face {
  font-family: 'Apparatus SIL';
  font-weight: bold;
  src: url('AppSILB.ttf') format('truetype');
}

@font-face {
  font-family: 'Apparatus SIL';
  font-weight: bold;
  font-style: italic;
  src: url('AppSILBI.ttf') format('truetype');
}

@font-face {
  font-family: 'Icomoon';
  src: url('icomoon.ttf') format('truetype');
}

Binary file not shown.

@@ -1,29 +0,0 @@
body {
  margin: 0;
  font-family: Roboto, -apple-system, BlinkMacSystemFont, Segoe UI, Oxygen,
    Ubuntu, Cantarell, Fira Sans, Droid Sans, Helvetica Neue, sans-serif;
  font-size: 16px;
  line-height: 1.5;
  color: #333;
  margin-bottom: 50vh;
}

a {
  color: inherit;
}

pre,
code {
  font-family: menlo, inconsolata, monospace;
  font-size: calc(1em - 2px);
  color: #555;
  background-color: #f0f0f0;
  padding: 0.2em 0.4em;
  border-radius: 2px;
}

pre {
  max-width: 100%;
  overflow: auto;
}

@@ -1,9 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg class="lds-double-ring" width="200px" height="200px" style="background:rgba(0, 0, 0, 0) none repeat scroll 0% 0%" preserveAspectRatio="xMidYMid" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
<circle cx="50" cy="50" r="45" fill="none" stroke="#000" stroke-dasharray="70.68583470577035 70.68583470577035" stroke-linecap="round" stroke-width="3" ng-attr-r="{{config.radius}}" ng-attr-stroke="{{config.c1}}" ng-attr-stroke-dasharray="{{config.dasharray}}" ng-attr-stroke-width="{{config.width}}">
<animateTransform attributeName="transform" begin="0s" calcMode="linear" dur="3.6s" keyTimes="0;1" repeatCount="indefinite" type="rotate" values="0 50 50;360 50 50"/>
</circle>
<circle cx="50" cy="50" r="41" fill="none" stroke="#000" stroke-dasharray="64.40264939859075 64.40264939859075" stroke-dashoffset="64.403" stroke-linecap="round" stroke-width="3" ng-attr-r="{{config.radius2}}" ng-attr-stroke="{{config.c2}}" ng-attr-stroke-dasharray="{{config.dasharray2}}" ng-attr-stroke-dashoffset="{{config.dashoffset2}}" ng-attr-stroke-width="{{config.width}}">
<animateTransform attributeName="transform" begin="0s" calcMode="linear" dur="3.6s" keyTimes="0;1" repeatCount="indefinite" type="rotate" values="0 50 50;-360 50 50"/>
</circle>
</svg>

Before: image, 1.3 KiB

Binary file not shown.

Before: image, 4.6 KiB

Binary file not shown.

Before: image, 14 KiB

@@ -1,20 +0,0 @@
{
  "background_color": "#ffffff",
  "theme_color": "#333333",
  "name": "Qot. news",
  "short_name": "Qot. news",
  "display": "minimal-ui",
  "start_url": "/",
  "icons": [
    {
      "src": "logo-192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "logo-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}
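
Because the manifest asks for display: minimal-ui, client code can check whether it is running as an installed app with the standard display-mode media query; a small sketch (the log line is illustrative):

if (window.matchMedia('(display-mode: minimal-ui)').matches) {
  // launched from the home-screen icon rather than a plain browser tab
  console.log('running as an installed PWA');
}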

@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2014 Sam Herbert
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -1,29 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="55" height="80" viewBox="0 0 55 80" xmlns="http://www.w3.org/2000/svg" fill="#FFF">
<g transform="matrix(1 0 0 -1 0 80)">
<rect width="10" height="20" rx="3">
<animate attributeName="height"
begin="0s" dur="4.3s"
values="20;45;57;80;64;32;66;45;64;23;66;13;64;56;34;34;2;23;76;79;20" calcMode="linear"
repeatCount="indefinite" />
</rect>
<rect x="15" width="10" height="80" rx="3">
<animate attributeName="height"
begin="0s" dur="2s"
values="80;55;33;5;75;23;73;33;12;14;60;80" calcMode="linear"
repeatCount="indefinite" />
</rect>
<rect x="30" width="10" height="50" rx="3">
<animate attributeName="height"
begin="0s" dur="1.4s"
values="50;34;78;23;56;23;34;76;80;54;21;50" calcMode="linear"
repeatCount="indefinite" />
</rect>
<rect x="45" width="10" height="30" rx="3">
<animate attributeName="height"
begin="0s" dur="2s"
values="30;45;13;80;56;72;45;76;34;23;67;30" calcMode="linear"
repeatCount="indefinite" />
</rect>
</g>
</svg>

Before: image, 1.3 KiB

@@ -1,47 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<!-- Todo: add easing -->
<svg width="57" height="57" viewBox="0 0 57 57" xmlns="http://www.w3.org/2000/svg" stroke="#fff">
<g fill="none" fill-rule="evenodd">
<g transform="translate(1 1)" stroke-width="2">
<circle cx="5" cy="50" r="5">
<animate attributeName="cy"
begin="0s" dur="2.2s"
values="50;5;50;50"
calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="cx"
begin="0s" dur="2.2s"
values="5;27;49;5"
calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="27" cy="5" r="5">
<animate attributeName="cy"
begin="0s" dur="2.2s"
from="5" to="5"
values="5;50;50;5"
calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="cx"
begin="0s" dur="2.2s"
from="27" to="27"
values="27;49;5;27"
calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="49" cy="50" r="5">
<animate attributeName="cy"
begin="0s" dur="2.2s"
values="50;50;5;50"
calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="cx"
from="49" to="49"
begin="0s" dur="2.2s"
values="49;5;27;49"
calcMode="linear"
repeatCount="indefinite" />
</circle>
</g>
</g>
</svg>

Before: image, 1.9 KiB

@@ -1,52 +0,0 @@
<svg width="135" height="140" viewBox="0 0 135 140" xmlns="http://www.w3.org/2000/svg" fill="#fff">
<rect y="10" width="15" height="120" rx="6">
<animate attributeName="height"
begin="0.5s" dur="1s"
values="120;110;100;90;80;70;60;50;40;140;120" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="y"
begin="0.5s" dur="1s"
values="10;15;20;25;30;35;40;45;50;0;10" calcMode="linear"
repeatCount="indefinite" />
</rect>
<rect x="30" y="10" width="15" height="120" rx="6">
<animate attributeName="height"
begin="0.25s" dur="1s"
values="120;110;100;90;80;70;60;50;40;140;120" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="y"
begin="0.25s" dur="1s"
values="10;15;20;25;30;35;40;45;50;0;10" calcMode="linear"
repeatCount="indefinite" />
</rect>
<rect x="60" width="15" height="140" rx="6">
<animate attributeName="height"
begin="0s" dur="1s"
values="120;110;100;90;80;70;60;50;40;140;120" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="y"
begin="0s" dur="1s"
values="10;15;20;25;30;35;40;45;50;0;10" calcMode="linear"
repeatCount="indefinite" />
</rect>
<rect x="90" y="10" width="15" height="120" rx="6">
<animate attributeName="height"
begin="0.25s" dur="1s"
values="120;110;100;90;80;70;60;50;40;140;120" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="y"
begin="0.25s" dur="1s"
values="10;15;20;25;30;35;40;45;50;0;10" calcMode="linear"
repeatCount="indefinite" />
</rect>
<rect x="120" y="10" width="15" height="120" rx="6">
<animate attributeName="height"
begin="0.5s" dur="1s"
values="120;110;100;90;80;70;60;50;40;140;120" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="y"
begin="0.5s" dur="1s"
values="10;15;20;25;30;35;40;45;50;0;10" calcMode="linear"
repeatCount="indefinite" />
</rect>
</svg>

Before: image, 2.3 KiB

@@ -1,56 +0,0 @@
<svg width="105" height="105" viewBox="0 0 105 105" xmlns="http://www.w3.org/2000/svg" fill="#000">
<circle cx="12.5" cy="12.5" r="12.5">
<animate attributeName="fill-opacity"
begin="0s" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="12.5" cy="52.5" r="12.5" fill-opacity=".5">
<animate attributeName="fill-opacity"
begin="100ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="52.5" cy="12.5" r="12.5">
<animate attributeName="fill-opacity"
begin="300ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="52.5" cy="52.5" r="12.5">
<animate attributeName="fill-opacity"
begin="600ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="92.5" cy="12.5" r="12.5">
<animate attributeName="fill-opacity"
begin="800ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="92.5" cy="52.5" r="12.5">
<animate attributeName="fill-opacity"
begin="400ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="12.5" cy="92.5" r="12.5">
<animate attributeName="fill-opacity"
begin="700ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="52.5" cy="92.5" r="12.5">
<animate attributeName="fill-opacity"
begin="500ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="92.5" cy="92.5" r="12.5">
<animate attributeName="fill-opacity"
begin="200ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
</svg>

Before: image, 2.0 KiB

@@ -1,20 +0,0 @@
<svg width="135" height="135" viewBox="0 0 135 135" xmlns="http://www.w3.org/2000/svg" fill="#fff">
<path d="M67.447 58c5.523 0 10-4.477 10-10s-4.477-10-10-10-10 4.477-10 10 4.477 10 10 10zm9.448 9.447c0 5.523 4.477 10 10 10 5.522 0 10-4.477 10-10s-4.478-10-10-10c-5.523 0-10 4.477-10 10zm-9.448 9.448c-5.523 0-10 4.477-10 10 0 5.522 4.477 10 10 10s10-4.478 10-10c0-5.523-4.477-10-10-10zM58 67.447c0-5.523-4.477-10-10-10s-10 4.477-10 10 4.477 10 10 10 10-4.477 10-10z">
<animateTransform
attributeName="transform"
type="rotate"
from="0 67 67"
to="-360 67 67"
dur="2.5s"
repeatCount="indefinite"/>
</path>
<path d="M28.19 40.31c6.627 0 12-5.374 12-12 0-6.628-5.373-12-12-12-6.628 0-12 5.372-12 12 0 6.626 5.372 12 12 12zm30.72-19.825c4.686 4.687 12.284 4.687 16.97 0 4.686-4.686 4.686-12.284 0-16.97-4.686-4.687-12.284-4.687-16.97 0-4.687 4.686-4.687 12.284 0 16.97zm35.74 7.705c0 6.627 5.37 12 12 12 6.626 0 12-5.373 12-12 0-6.628-5.374-12-12-12-6.63 0-12 5.372-12 12zm19.822 30.72c-4.686 4.686-4.686 12.284 0 16.97 4.687 4.686 12.285 4.686 16.97 0 4.687-4.686 4.687-12.284 0-16.97-4.685-4.687-12.283-4.687-16.97 0zm-7.704 35.74c-6.627 0-12 5.37-12 12 0 6.626 5.373 12 12 12s12-5.374 12-12c0-6.63-5.373-12-12-12zm-30.72 19.822c-4.686-4.686-12.284-4.686-16.97 0-4.686 4.687-4.686 12.285 0 16.97 4.686 4.687 12.284 4.687 16.97 0 4.687-4.685 4.687-12.283 0-16.97zm-35.74-7.704c0-6.627-5.372-12-12-12-6.626 0-12 5.373-12 12s5.374 12 12 12c6.628 0 12-5.373 12-12zm-19.823-30.72c4.687-4.686 4.687-12.284 0-16.97-4.686-4.686-12.284-4.686-16.97 0-4.687 4.686-4.687 12.284 0 16.97 4.686 4.687 12.284 4.687 16.97 0z">
<animateTransform
attributeName="transform"
type="rotate"
from="0 67 67"
to="360 67 67"
dur="8s"
repeatCount="indefinite"/>
</path>
</svg>

Before: image, 1.9 KiB

@@ -1,56 +0,0 @@
<svg width="105" height="105" viewBox="0 0 105 105" xmlns="http://www.w3.org/2000/svg" fill="#fff">
<circle cx="12.5" cy="12.5" r="12.5">
<animate attributeName="fill-opacity"
begin="0s" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="12.5" cy="52.5" r="12.5" fill-opacity=".5">
<animate attributeName="fill-opacity"
begin="100ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="52.5" cy="12.5" r="12.5">
<animate attributeName="fill-opacity"
begin="300ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="52.5" cy="52.5" r="12.5">
<animate attributeName="fill-opacity"
begin="600ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="92.5" cy="12.5" r="12.5">
<animate attributeName="fill-opacity"
begin="800ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="92.5" cy="52.5" r="12.5">
<animate attributeName="fill-opacity"
begin="400ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="12.5" cy="92.5" r="12.5">
<animate attributeName="fill-opacity"
begin="700ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="52.5" cy="92.5" r="12.5">
<animate attributeName="fill-opacity"
begin="500ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="92.5" cy="92.5" r="12.5">
<animate attributeName="fill-opacity"
begin="200ms" dur="1s"
values="1;.2;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
</svg>

Before: image, 2.0 KiB

@@ -1,18 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="140" height="64" viewBox="0 0 140 64" xmlns="http://www.w3.org/2000/svg" fill="#fff">
<path d="M30.262 57.02L7.195 40.723c-5.84-3.976-7.56-12.06-3.842-18.063 3.715-6 11.467-7.65 17.306-3.68l4.52 3.76 2.6-5.274c3.717-6.002 11.47-7.65 17.305-3.68 5.84 3.97 7.56 12.054 3.842 18.062L34.49 56.118c-.897 1.512-2.793 1.915-4.228.9z" fill-opacity=".5">
<animate attributeName="fill-opacity"
begin="0s" dur="1.4s"
values="0.5;1;0.5"
calcMode="linear"
repeatCount="indefinite" />
</path>
<path d="M105.512 56.12l-14.44-24.272c-3.716-6.008-1.996-14.093 3.843-18.062 5.835-3.97 13.588-2.322 17.306 3.68l2.6 5.274 4.52-3.76c5.84-3.97 13.592-2.32 17.307 3.68 3.718 6.003 1.998 14.088-3.842 18.064L109.74 57.02c-1.434 1.014-3.33.61-4.228-.9z" fill-opacity=".5">
<animate attributeName="fill-opacity"
begin="0.7s" dur="1.4s"
values="0.5;1;0.5"
calcMode="linear"
repeatCount="indefinite" />
</path>
<path d="M67.408 57.834l-23.01-24.98c-5.864-6.15-5.864-16.108 0-22.248 5.86-6.14 15.37-6.14 21.234 0L70 16.168l4.368-5.562c5.863-6.14 15.375-6.14 21.235 0 5.863 6.14 5.863 16.098 0 22.247l-23.007 24.98c-1.43 1.556-3.757 1.556-5.188 0z" />
</svg>

Before: image, 1.3 KiB

@@ -1,17 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="38" height="38" viewBox="0 0 38 38" xmlns="http://www.w3.org/2000/svg" stroke="#fff">
<g fill="none" fill-rule="evenodd">
<g transform="translate(1 1)" stroke-width="2">
<circle stroke-opacity=".5" cx="18" cy="18" r="18"/>
<path d="M36 18c0-9.94-8.06-18-18-18">
<animateTransform
attributeName="transform"
type="rotate"
from="0 18 18"
to="360 18 18"
dur="1s"
repeatCount="indefinite"/>
</path>
</g>
</g>
</svg>

Before: image, 694 B

@@ -1,37 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="44" height="44" viewBox="0 0 44 44" xmlns="http://www.w3.org/2000/svg" stroke="#fff">
<g fill="none" fill-rule="evenodd" stroke-width="2">
<circle cx="22" cy="22" r="1">
<animate attributeName="r"
begin="0s" dur="1.8s"
values="1; 20"
calcMode="spline"
keyTimes="0; 1"
keySplines="0.165, 0.84, 0.44, 1"
repeatCount="indefinite" />
<animate attributeName="stroke-opacity"
begin="0s" dur="1.8s"
values="1; 0"
calcMode="spline"
keyTimes="0; 1"
keySplines="0.3, 0.61, 0.355, 1"
repeatCount="indefinite" />
</circle>
<circle cx="22" cy="22" r="1">
<animate attributeName="r"
begin="-0.9s" dur="1.8s"
values="1; 20"
calcMode="spline"
keyTimes="0; 1"
keySplines="0.165, 0.84, 0.44, 1"
repeatCount="indefinite" />
<animate attributeName="stroke-opacity"
begin="-0.9s" dur="1.8s"
values="1; 0"
calcMode="spline"
keyTimes="0; 1"
keySplines="0.3, 0.61, 0.355, 1"
repeatCount="indefinite" />
</circle>
</g>
</svg>

Before: image, 1.4 KiB

@@ -1,42 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="45" height="45" viewBox="0 0 45 45" xmlns="http://www.w3.org/2000/svg" stroke="#fff">
<g fill="none" fill-rule="evenodd" transform="translate(1 1)" stroke-width="2">
<circle cx="22" cy="22" r="6" stroke-opacity="0">
<animate attributeName="r"
begin="1.5s" dur="3s"
values="6;22"
calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="stroke-opacity"
begin="1.5s" dur="3s"
values="1;0" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="stroke-width"
begin="1.5s" dur="3s"
values="2;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="22" cy="22" r="6" stroke-opacity="0">
<animate attributeName="r"
begin="3s" dur="3s"
values="6;22"
calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="stroke-opacity"
begin="3s" dur="3s"
values="1;0" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="stroke-width"
begin="3s" dur="3s"
values="2;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="22" cy="22" r="8">
<animate attributeName="r"
begin="0s" dur="1.5s"
values="6;1;2;3;4;5;6"
calcMode="linear"
repeatCount="indefinite" />
</circle>
</g>
</svg>

Before: image, 1.7 KiB

@@ -1,55 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="58" height="58" viewBox="0 0 58 58" xmlns="http://www.w3.org/2000/svg">
<g fill="none" fill-rule="evenodd">
<g transform="translate(2 1)" stroke="#FFF" stroke-width="1.5">
<circle cx="42.601" cy="11.462" r="5" fill-opacity="1" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="1;0;0;0;0;0;0;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="49.063" cy="27.063" r="5" fill-opacity="0" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="0;1;0;0;0;0;0;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="42.601" cy="42.663" r="5" fill-opacity="0" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="0;0;1;0;0;0;0;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="27" cy="49.125" r="5" fill-opacity="0" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="0;0;0;1;0;0;0;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="11.399" cy="42.663" r="5" fill-opacity="0" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="0;0;0;0;1;0;0;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="4.938" cy="27.063" r="5" fill-opacity="0" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="0;0;0;0;0;1;0;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="11.399" cy="11.462" r="5" fill-opacity="0" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="0;0;0;0;0;0;1;0" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="27" cy="5" r="5" fill-opacity="0" fill="#fff">
<animate attributeName="fill-opacity"
begin="0s" dur="1.3s"
values="0;0;0;0;0;0;0;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
</g>
</g>
</svg>

Before: image, 2.7 KiB

@@ -1,32 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="38" height="38" viewBox="0 0 38 38" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient x1="8.042%" y1="0%" x2="65.682%" y2="23.865%" id="a">
<stop stop-color="#fff" stop-opacity="0" offset="0%"/>
<stop stop-color="#fff" stop-opacity=".631" offset="63.146%"/>
<stop stop-color="#fff" offset="100%"/>
</linearGradient>
</defs>
<g fill="none" fill-rule="evenodd">
<g transform="translate(1 1)">
<path d="M36 18c0-9.94-8.06-18-18-18" id="Oval-2" stroke="url(#a)" stroke-width="2">
<animateTransform
attributeName="transform"
type="rotate"
from="0 18 18"
to="360 18 18"
dur="0.9s"
repeatCount="indefinite" />
</path>
<circle fill="#fff" cx="36" cy="18" r="1">
<animateTransform
attributeName="transform"
type="rotate"
from="0 18 18"
to="360 18 18"
dur="0.9s"
repeatCount="indefinite" />
</circle>
</g>
</g>
</svg>

Before: image, 1.3 KiB

@@ -1,33 +0,0 @@
<!-- By Sam Herbert (@sherb), for everyone. More @ http://goo.gl/7AJzbL -->
<svg width="120" height="30" viewBox="0 0 120 30" xmlns="http://www.w3.org/2000/svg" fill="#fff">
<circle cx="15" cy="15" r="15">
<animate attributeName="r" from="15" to="15"
begin="0s" dur="0.8s"
values="15;9;15" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="fill-opacity" from="1" to="1"
begin="0s" dur="0.8s"
values="1;.5;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="60" cy="15" r="9" fill-opacity="0.3">
<animate attributeName="r" from="9" to="9"
begin="0s" dur="0.8s"
values="9;15;9" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="fill-opacity" from="0.5" to="0.5"
begin="0s" dur="0.8s"
values=".5;1;.5" calcMode="linear"
repeatCount="indefinite" />
</circle>
<circle cx="105" cy="15" r="15">
<animate attributeName="r" from="15" to="15"
begin="0s" dur="0.8s"
values="15;9;15" calcMode="linear"
repeatCount="indefinite" />
<animate attributeName="fill-opacity" from="1" to="1"
begin="0s" dur="0.8s"
values="1;.5;1" calcMode="linear"
repeatCount="indefinite" />
</circle>
</svg>

Before: image, 1.5 KiB

@@ -1,5 +0,0 @@
#!/bin/bash
#yarn run install
#yarn run build
yarn run start

@@ -1,90 +0,0 @@
const webpack = require('webpack');
const WebpackModules = require('webpack-modules');
const path = require('path');
const config = require('sapper/config/webpack.js');
const pkg = require('./package.json');

const mode = process.env.NODE_ENV;
const dev = mode === 'development';

const alias = { svelte: path.resolve('node_modules', 'svelte') };
const extensions = ['.mjs', '.js', '.json', '.svelte', '.html'];
const mainFields = ['svelte', 'module', 'browser', 'main'];
const fileLoaderRule = {
  test: /\.(png|jpe?g|gif)$/i,
  use: [
    'file-loader',
  ]
};

module.exports = {
  client: {
    entry: config.client.entry(),
    output: config.client.output(),
    resolve: { alias, extensions, mainFields },
    module: {
      rules: [
        {
          test: /\.(svelte|html)$/,
          use: {
            loader: 'svelte-loader',
            options: {
              dev,
              hydratable: true,
              hotReload: false // pending https://github.com/sveltejs/svelte/issues/2377
            }
          }
        },
        fileLoaderRule
      ]
    },
    mode,
    plugins: [
      // pending https://github.com/sveltejs/svelte/issues/2377
      // dev && new webpack.HotModuleReplacementPlugin(),
      new webpack.DefinePlugin({
        'process.browser': true,
        'process.env.NODE_ENV': JSON.stringify(mode)
      }),
    ].filter(Boolean),
    devtool: dev && 'inline-source-map'
  },

  server: {
    entry: config.server.entry(),
    output: config.server.output(),
    target: 'node',
    resolve: { alias, extensions, mainFields },
    externals: Object.keys(pkg.dependencies).concat('encoding'),
    module: {
      rules: [
        {
          test: /\.(svelte|html)$/,
          use: {
            loader: 'svelte-loader',
            options: {
              css: false,
              generate: 'ssr',
              hydratable: true,
              dev
            }
          }
        },
        fileLoaderRule
      ]
    },
    mode,
    plugins: [
      new WebpackModules()
    ],
    performance: {
      hints: false // it doesn't matter if server.js is large
    }
  },

  serviceworker: {
    entry: config.serviceworker.entry(),
    output: config.serviceworker.output(),
    mode
  }
};
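
The DefinePlugin entry above substitutes process.browser with true in the client bundle at build time, so shared modules can guard browser-only work. A sketch of the pattern (the helper name is hypothetical; localforage is a real dependency of the app):

import localforage from 'localforage';

export async function cacheStory(story) {
  if (process.browser) {
    // only runs in the client bundle; during SSR this branch is skipped,
    // so the IndexedDB-backed localforage is never touched on the server
    await localforage.setItem(story.id, story);
  }
}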

File diff suppressed because it is too large

@@ -4,12 +4,14 @@
"private": true,
"dependencies": {
"abort-controller": "^3.0.0",
"katex": "^0.16.25",
"localforage": "^1.7.3",
"moment": "^2.24.0",
"query-string": "^6.8.3",
"react": "^16.9.0",
"react-dom": "^16.9.0",
"react-helmet": "^5.2.1",
"react-latex-next": "^3.0.0",
"react-router-dom": "^5.0.1",
"react-router-hash-link": "^1.2.2",
"react-scripts": "3.1.1"

@@ -8,6 +8,8 @@
content="{{ description }}"
/>
<meta content="{{ url }}" name="og:site_name">
<meta name="robots" content="{{ robots }}">
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
@@ -26,26 +28,112 @@
      work correctly both with client-side routing and a non-root public URL.
      Learn how to configure a non-root public URL by running `npm run build`.
    -->
    <title>{{ title }} - QotNews</title>
    <title>{{ title }}</title>
    <style>
      html {
        overflow-y: scroll;
      }
      body {
        background: #000;
      }
      .nojs {
        color: white;
        background: #eeeeee;
      }
    </style>
  </head>
  <body>
    <div class="nojs">
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root">
      <div class="container menu">
        <p>
          <a href="/">QotNews</a>
          <br />
          <span class="slogan">Hacker News, Reddit, Lobsters, and Tildes articles rendered in reader mode.</span>
        </p>
      </div>
      {% if story %}
      <div class="{% if show_comments %}container{% else %}article-container{% endif %}">
        <div class="article">
          <h1>{{ story.title }}</h1>
          {% if show_comments %}
          <div class="info">
            <a href="/{{ story.id }}">View article</a>
          </div>
          {% else %}
          <div class="info">
            Source: <a class="source" href="{{ story.url or story.link }}">{{ url }}</a>
          </div>
          {% endif %}
          <div class="info">
            {{ story.score }} points
            by <a href="{{ story.author_link }}">{{ story.author }}</a>
            {{ story.date | fromnow }}
            on <a href="{{ story.link }}">{{ story.source }}</a> |
            <a href="/{{ story.id }}/c">
              {{ story.num_comments }} comment{{ 's' if story.num_comments != 1 }}
            </a>
          </div>
          {% if not show_comments and story.text %}
          <div class="story-text">{{ story.text | safe }}</div>
          {% elif show_comments %}
          {% macro render_comment(comment, level) %}
            <dt></dt>
            <dd class="comment{% if level > 0 %} lined{% endif %}">
              <div class="info">
                <p>
                  {% if comment.author == story.author %}[OP] {% endif %}{{ comment.author or '[Deleted]' }} | <a href="#{{ comment.author }}{{ comment.date }}" id="{{ comment.author }}{{ comment.date }}">{{ comment.date | fromnow }}</a>
                </p>
              </div>
              <div class="text">{{ (comment.text | safe) if comment.text else '<p>[Empty / deleted comment]</p>' }}</div>
              {% if comment.comments %}
              <dl>
                {% for reply in comment.comments %}
                {{ render_comment(reply, level + 1) }}
                {% endfor %}
              </dl>
              {% endif %}
            </dd>
          {% endmacro %}
          <dl class="comments">
            {% for comment in story.comments %}{{ render_comment(comment, 0) }}{% endfor %}
          </dl>
          {% endif %}
        </div>
        <div class='dot toggleDot'>
          <div class='button'>
            <a href="/{{ story.id }}{{ '/c' if not show_comments else '' }}">
              {{ '' if not show_comments else '' }}
            </a>
          </div>
        </div>
      </div>
      {% elif stories %}
      <div class="container">
        {% for story in stories %}
        <div class='item'>
          <div class='title'>
            <a class='link' href='/{{ story.id }}'>
              <img class='source-logo' src='/logos/{{ story.source }}.png' alt='{{ story.source }}:' /> {{ story.title }}
            </a>
            <span class='source'>
              (<a class='source' href='{{ story.url or story.link }}'>{{ story.hostname }}</a>)
            </span>
          </div>
          <div class='info'>
            {{ story.score }} points
            by <a href="{{ story.author_link }}">{{ story.author }}</a>
            {{ story.date | fromnow }}
            on <a href="{{ story.link }}">{{ story.source }}</a> |
            <a class="{{ 'hot' if story.num_comments > 99 else '' }}" href="/{{ story.id }}/c">
              {{ story.num_comments }} comment{{ 's' if story.num_comments != 1 }}
            </a>
          </div>
        </div>
        {% endfor %}
      </div>
      {% endif %}
    </div>
    <div id="root"></div>
    <!--
      This HTML file is a template.
      If you open it directly in the browser, you will see an empty page.
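
The render_comment macro above is the no-JavaScript mirror of the client-side comment tree: recurse through comment.comments, marking replies with the lined class for indentation. A hedged sketch of the equivalent React component (illustrative, not the app's actual code):

import React from 'react';

function Comment({ comment, level = 0 }) {
  return (
    <dd className={'comment' + (level > 0 ? ' lined' : '')}>
      <div className="info">{comment.author || '[Deleted]'}</div>
      <div className="text" dangerouslySetInnerHTML={{ __html: comment.text }} />
      {comment.comments && comment.comments.length > 0 && (
        <dl>
          {comment.comments.map((reply, i) => (
            <Comment key={i} comment={reply} level={level + 1} />
          ))}
        </dl>
      )}
    </dd>
  );
}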

Some files were not shown because too many files have changed in this diff