ShareGPT


To share AI output.

1
 
 

The history of breakfast is a complex one, with different foods and customs evolving over time. Here are some key points from the search results:

  • In Ancient Egypt, peasants ate a daily meal, most likely in the morning, consisting of soup, beer, bread, and onions before they left for work in the fields or work commanded by the pharaohs[1].
  • Prior to the 16th century, breakfast in Great Britain typically included bread, cold meat or fish, and ale. By the 1550s, multiple sources claimed that breakfast was an important meal; tea, chocolate, and coffee were introduced to Great Britain in the mid-17th century[1].
  • Historian Ian Mortimer suggests the Tudors invented modern breakfasts in the 16th century as a side-effect of inventing the concept of employment. As people increasingly came to work for an employer, rather than working for themselves on their own land, they lost control of their time, and had to work long, uninterrupted days without sustenance. A big breakfast allowed them to work longer days[2].
  • The Industrial Revolution and the move from farms to factories formalized the idea of breakfast further, and now it's normal for everyone to eat breakfast before going to work[2].
  • Modern breakfast cereal was invented by John Harvey Kellogg in the 1890s as a healthy solution to complaints of weight gain and indigestion. But this is not the only food that has been associated with breakfast over time[6].
  • Historians tend to agree that breakfast became a daily, first thing in the morning institution once workers moved to cities and became employees who worked set schedules. In Europe, this first began in the 1600s, and breakfast achieved near ubiquity during the Industrial Revolution[3].
  • The term "breakfast" originated in 15th century Europe, but eating in the morning didn't become a daily ritual in Europe until the 1600s as this is when many people began going to work each day[5].
  • The modern era of breakfast begins with cereal. Before its invention, breakfast was not as standard or routine. "The Romans believed it was healthier to eat only one meal a day," food historian Caroline Yeldham wrote[3].
  • People of the Middle Ages shunned breakfast on roughly the same grounds that people of the current age glorify it; later, those navigating the collision of industrialization and the needs of the human body came to blame hearty breakfasts for indigestion and other ailments[4].

Overall, it seems that the concept of breakfast has evolved over time due to a variety of factors, including changes in work patterns, the introduction of new foods, and shifting cultural attitudes towards eating in the morning. While it's difficult to pinpoint one person or group who "popularized" breakfast, it's clear that the meal has a long and fascinating history.

Citations: [1] https://en.wikipedia.org/wiki/Breakfast [2] https://www.mashed.com/66838/secret-history-breakfast/ [3] https://priceonomics.com/how-breakfast-became-a-thing/ [4] https://www.theatlantic.com/entertainment/archive/2016/06/breakfast-the-most-contentious-meal-of-the-day/487220/ [5] https://www.fridaysocks.com/blogs/news/the-history-of-bacon-and-eggs [6] https://www.indiatimes.com/lifestyle/a-brief-history-of-breakfast-why-you-should-skip-it-370307.html

2
 
 
  1. Privacy is about protecting information: Privacy is not just about hiding information, but about actively protecting it. Even if you think you have nothing to hide, there may still be information you would like to keep private, such as personal conversations, financial transactions, or medical records[1].

  2. Data can be used against you: Even seemingly insignificant pieces of personal data can be used in ways you may not anticipate. Companies can use your data to build detailed profiles about you, which can be used for targeted advertising, price discrimination, or even manipulation[1].

  3. Potential for abuse: When your data is collected and stored by companies or governments, there is always the risk of it being misused or falling into the wrong hands. Data breaches and leaks are common, and your personal information could be exposed, leading to identity theft or other forms of harm[1].

  4. Chilling effect on freedom: Mass surveillance and the erosion of privacy can have a chilling effect on freedom of expression and individual autonomy. When people feel constantly monitored, they may self-censor or refrain from engaging in certain activities, limiting their freedom[3].

  5. Power imbalances: The collection and analysis of vast amounts of personal data can lead to power imbalances between individuals and those who control the data. This can result in discrimination, manipulation, and the concentration of power in the hands of a few[3].

  6. Government overreach: Mass surveillance and the collection of personal data by governments can lead to abuses of power and violations of civil liberties. It is important to hold governments accountable and ensure that they respect individuals' right to privacy[3].

In summary, the "I have nothing to hide" argument overlooks the broader implications of data privacy and the potential risks and harms associated with the collection and use of personal data. Privacy is a fundamental right that should be protected, regardless of whether you have something to hide or not.

Citations: [1] https://spreadprivacy.com/three-reasons-why-the-nothing-to-hide-argument-is-flawed/ [2] https://www.reddit.com/r/privacy/comments/3hynvp/how_do_you_counter_the_i_have_nothing_to_hide/ [3] https://www.amnesty.org/en/latest/campaigns/2015/04/7-reasons-why-ive-got-nothing-to-hide-is-the-wrong-response-to-mass-surveillance/ [4] https://teachprivacy.com/is-there-a-good-response-to-the-nothing-to-hide-argument/ [5] https://www.techdirt.com/2011/05/26/privacy-is-not-secrecy-debunking-if-youve-got-nothing-to-hide-argument/ [6] https://www.wired.com/2013/06/why-i-have-nothing-to-hide-is-the-wrong-way-to-think-about-surveillance/

3
 
 

To enable Python type checking in Visual Studio Code, you can use the Microsoft extension Pylance[1]. Pylance provides a set of useful features powered by Pyright, Microsoft's static type checker. With the extension installed and enabled, you get better IntelliSense with typing information, for example when calling a function from a package. Type checking analysis is not enabled by default; you need to configure it yourself. In your settings.json file, add the following setting:

{ "python.analysis.typeCheckingMode": "basic" }

The default value is "off", meaning type checking is disabled; you can also set it to "strict" for stricter analysis, depending on your needs[1].
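
With "basic" checking enabled, Pylance flags straightforward type errors as you edit. Here is a minimal illustration (the snippet and the wording of the diagnostic are ours, not from the cited article):

def greet(name: str) -> str:
    return "Hello, " + name

greet(42)  # flagged by Pylance: an int is not assignable to parameter "name" of type "str"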

Alternatively, you can use the Python Type Hint extension from the Visual Studio Marketplace[2]. This extension provides type hint auto-completion for Python, with completion items for built-in types, classes, and the typing module. It also estimates the correct type to provide as a completion item and can search Python files in the workspace for type estimation purposes[2].

Another option is to use Pyright itself, a full-featured, standards-based static type checker for Python designed for high performance on large source bases. Pyright includes both a command-line tool and an extension for Visual Studio Code[6].

To configure real-time static type checking with mypy in VS Code, you can make Python behave more like a statically typed language. Set up a virtual environment for your project, then add these settings to your local settings.json:

{
  "python.linting.mypyEnabled": true,
  "python.linting.mypyArgs": [
    "--ignore-missing-imports",
    "--follow-imports=silent",
    "--show-column-numbers",
    "--allow-untyped-defs",
    "--allow-subclassing-any"
  ]
}

This setup only reports errors where you have explicitly annotated a variable or function parameter, while still allowing untyped code elsewhere[4].

Citations: [1] https://www.emmanuelgautier.com/blog/enable-vscode-python-type-checking [2] https://marketplace.visualstudio.com/items?itemName=njqdev.vscode-python-typehint [3] https://youtube.com/watch?v=hHBp0r4w86g [4] https://dev.to/jodaut/python-type-checking-with-visual-studio-code-46a7 [5] https://stackoverflow.com/questions/45829353/python-type-checking-in-vs-code [6] https://github.com/microsoft/pyright

4
 
 

Tingling and palpitations that spread throughout the body can be caused by several conditions. Here are some possibilities based on the search results:

  • Episodic Neurologic Symptoms: Patients with this condition usually complain of severe apprehension, dizziness, chest discomfort, palpitations, or fear of "losing control." The onset is usually under age 40[1].

  • Heart Attack or Panic Attack: Both conditions can cause chest pain, shortness of breath, dizziness, sweating, passing out, tingling, or a sensation of impending doom. These shared symptoms of heart and panic attack may also be caused by other serious conditions such as blood clots, lung infection or collapse, or tear in the large vessels of the chest for patients with certain pre-existing risk factors[2].

  • Atrial Fibrillation: The most obvious symptom of this condition is heart palpitations, where the heart feels like it's pounding, fluttering, or beating irregularly, often for a few seconds or possibly a few minutes. Other symptoms include tiredness, feeling lethargic, chest pain, dizziness, and shortness of breath[3][6].

  • Peripheral Neuropathy: Symptoms of this condition include numbness and tingling in the feet or hands, burning, stabbing or shooting pain in affected areas, loss of balance and coordination, and muscle weakness, especially in the feet. These symptoms are usually constant but may come and go[4].

It's important to see a doctor if you experience any of these symptoms to determine the underlying cause and receive appropriate treatment.

Citations: [1] https://www.ncbi.nlm.nih.gov/books/NBK374/ [2] https://www.cedars-sinai.org/blog/is-it-a-heart-attack-or-a-panic-attack.html [3] https://www.nhs.uk/conditions/atrial-fibrillation/symptoms/ [4] https://www.nhsinform.scot/illnesses-and-conditions/brain-nerves-and-spinal-cord/peripheral-neuropathy [5] https://www.webmd.com/heart-disease/heart-palpitations-emergency [6] https://www.hopkinsmedicine.org/health/conditions-and-diseases/atrial-fibrillation/afib-symptoms

5
 
 

To download a website for offline browsing using wget, you can use the following command:

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com --no-parent http://www.example.com

Replace example.com with the website you want to download. This command will download the entire website, including all the elements that compose the page (images, CSS, etc.), and convert the links so that they work locally and offline[1].

Here's a brief explanation of the options used in the command:

  • --recursive: Download the entire website recursively.
  • --no-clobber: Don't overwrite existing files.
  • --page-requisites: Download all the elements required to display the page properly (images, CSS, etc.).
  • --html-extension: Save files with the .html extension (newer wget versions call this option --adjust-extension).
  • --convert-links: Convert links so that they work locally and offline.
  • --restrict-file-names=windows: Modify filenames so that they work in Windows as well.
  • --domains example.com: Don't follow links outside the specified domain.
  • --no-parent: Don't ascend to directories above the starting directory.

If the website uses external resources like jQuery, you can try the --span-hosts option to download resources from other hosts as well. Note that when --span-hosts is combined with --domains, wget only follows hosts in the --domains list, so the external hosts must be added there too, for example:

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com,code.jquery.com --span-hosts --no-parent http://www.example.com

However, this may also download additional content from other domains that you might not need. To avoid downloading unnecessary content, you can manually download the required resources (like jQuery) and update the HTML files to use the local copies instead.

Keep in mind that some websites may have measures in place to prevent downloading their content using tools like wget. In such cases, you may need to adjust the command options or use alternative methods to download the website for offline browsing[6].
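
For instance, adding a delay between requests and a browser-like user agent (both standard wget options) can help with sites that throttle automated downloads:

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com --no-parent --wait=1 --random-wait --user-agent="Mozilla/5.0" http://www.example.com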

Citations: [1] https://www.linuxjournal.com/content/downloading-entire-web-site-wget [2] https://winaero.com/make-offline-copy-of-a-site-with-wget-on-windows-and-linux/amp/ [3] https://stackoverflow.com/questions/10842263/wget-download-for-offline-viewing-including-absolute-references [4] https://askubuntu.com/questions/391622/download-a-whole-website-with-wget-or-other-including-all-its-downloadable-con [5] https://superuser.com/questions/970323/using-wget-to-copy-website-with-proper-layout-for-offline-browsing [6] https://www.computerhope.com/unix/wget.htm [7] https://superuser.com/questions/1672776/download-whole-website-wget [8] https://gist.github.com/stvhwrd/985dedbe1d3329e68d70 [9] https://simpleit.rocks/linux/how-to-download-a-website-with-wget-the-right-way/ [10] https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-of-a-site-using-wget/ [11] https://linuxreviews.org/Wget:_download_whole_or_parts_of_websites_with_ease [12] https://brain-dump.space/articles/how-to-get-full-offline-website-copy-using-wget-on-mac-os/ [13] https://dev.to/jjokah/how-to-download-an-entire-website-for-offline-usage-using-wget-2lli [14] https://alvinalexander.com/linux-unix/how-to-make-offline-mirror-copy-website-with-wget [15] https://askubuntu.com/questions/979655/using-wget-and-having-websites-working-properly-offline

6
 
 

To limit your Python requests to 1.39 requests per second, you can use the time.sleep() function from the time module to introduce a delay between requests. You can also use third-party libraries like pyrate-limiter, requests-ratelimiter, or ratelimiter to achieve rate limiting.

Here's an example using the time module:

import requests
import time

urls = [...]  # List of URLs to send requests to

for url in urls:
    response = requests.get(url)
    print(response.status_code)
    time.sleep(1 / 1.39)  # Sleep ~0.72 s between requests (at most 1.39 requests per second)
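
Note that time.sleep() adds its delay on top of however long each request takes, so this loop actually runs slightly slower than 1.39 requests per second. A sketch that compensates by timing each request:

import requests
import time

INTERVAL = 1 / 1.39  # target: start one request every ~0.72 seconds

urls = [...]  # List of URLs to send requests to

for url in urls:
    start = time.monotonic()
    response = requests.get(url)
    print(response.status_code)
    elapsed = time.monotonic() - start
    if elapsed < INTERVAL:
        time.sleep(INTERVAL - elapsed)  # sleep only for the remainder of the interval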

Alternatively, you can use the pyrate-limiter library:

from pyrate_limiter import Duration, RequestRate, Limiter
import requests

rate_limit = RequestRate(1, Duration.SECOND / 1.39)  # one request every 1/1.39 ≈ 0.72 s, i.e. 1.39 requests per second
limiter = Limiter(rate_limit)

urls = [...]  # List of URLs to send requests to

for url in urls:
    with limiter.ratelimit('requests', delay=True):  # 'requests' is an arbitrary bucket name; delay=True waits instead of raising
        response = requests.get(url)
        print(response.status_code)

Another option is to use the requests-ratelimiter library:

from requests import Session
from requests_ratelimiter import LimiterAdapter

session = Session()

# Apply a rate limit of 1.39 requests per second to all requests
adapter = LimiterAdapter(per_second=1.39)
session.mount('http://', adapter)
session.mount('https://', adapter)

urls = [...]  # List of URLs to send requests to

for url in urls:
    response = session.get(url)
    print(response.status_code)

These examples demonstrate different ways to limit your Python requests to 1.39 requests per second. Choose the method that best suits your needs and project requirements.

Citations: [1] https://stackoverflow.com/questions/26098711/limiting-number-of-http-requests-per-second-on-python [2] https://pypi.org/project/requests-ratelimiter/ [3] https://github.com/vutran1710/PyrateLimiter [4] https://akshayranganath.github.io/Rate-Limiting-With-Python/ [5] https://stackoverflow.com/questions/40748687/python-api-rate-limiting-how-to-limit-api-calls-globally [6] https://pypi.org/project/limiter/ [7] https://github.com/JWCook/requests-ratelimiter [8] https://levelup.gitconnected.com/implement-rate-limiting-in-python-d4f86b09259f [9] https://limits.readthedocs.io [10] https://github.com/encode/httpx/issues/815 [11] https://365datascience.com/tutorials/python-tutorials/limit-rate-requests-web-scraping/ [12] https://www.seelk.co/blog/efficient-client-side-handling-of-api-throttling-in-python-with-tenacity [13] https://www.cisco.com/c/en/us/support/docs/security/firepower-ngfw/217900-troubleshoot-firepower-threat-defense-an.html [14] https://scrapfly.io/blog/how-to-rate-limit-asynchronous-python-requests/ [15] https://dev.to/paymon123/the-easiest-way-to-rate-limit-a-python-api-3njc [16] https://cloud.google.com/python/docs/reference/storage/1.39.0/retry_timeout [17] https://medium.com/clover-platform-blog/conquering-api-rate-limiting-dcac5552714d [18] https://subscription.packtpub.com/book/web_development/9781838983994/9/ch09lvl1sec72/api-rate-limiting [19] https://towardsdatascience.com/speeding-up-python-code-fast-filtering-and-slow-loops-8e11a09a9c2f [20] https://katiekodes.com/python-wrap-requests-functions/ [21] https://www.reddit.com/r/Python/comments/12xahnb/i_built_a_simple_and_efficient_rate_limiter_for/ [22] https://docs.alcf.anl.gov/theta/performance-tools/craypat/ [23] https://coderpad.io/blog/development/a-guide-to-api-rate-limiting-in-django/

7
 
 

To get a list of all issues for https://github.com/LemmyNet/lemmy from the GitHub API, you can use the following steps:

  1. First, you need to obtain a personal access token from GitHub. You can follow the instructions given in this link to create a personal access token: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token

  2. Once you have obtained the personal access token, you can use it to authenticate your requests to the GitHub API. You can use any HTTP client library to make requests to the GitHub API. In this example, we will use the requests library in Python.

  3. To get a list of all issues for https://github.com/LemmyNet/lemmy, you can make a GET request to the following URL:

https://api.github.com/repos/LemmyNet/lemmy/issues

  4. You need to pass the personal access token as a header in the request. The header should be of the form:

Authorization: Bearer <personal-access-token>

Here's an example Python code snippet that demonstrates how to get a list of all issues for https://github.com/LemmyNet/lemmy using the GitHub API:

import requests

# Replace <personal-access-token> with your personal access token
headers = {
    'Authorization': 'Bearer <personal-access-token>',
}

url = 'https://api.github.com/repos/LemmyNet/lemmy/issues'

response = requests.get(url, headers=headers)

# Print the response content
print(response.content)

This code snippet makes a GET request to the GitHub API to get a list of all issues for https://github.com/LemmyNet/lemmy. The personal access token is passed as a header in the request. The response content is printed to the console. You can modify this code snippet to suit your needs.
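
Note that this endpoint is paginated (30 items per page by default), returns only open issues unless told otherwise, and its results also include pull requests. Here is a sketch extending the snippet above to fetch every page and skip pull requests:

import requests

# Replace <personal-access-token> with your personal access token
headers = {
    'Authorization': 'Bearer <personal-access-token>',
}

url = 'https://api.github.com/repos/LemmyNet/lemmy/issues'
params = {'state': 'all', 'per_page': 100, 'page': 1}

all_issues = []
while True:
    response = requests.get(url, headers=headers, params=params)
    page_items = response.json()
    if not page_items:
        break
    # Items carrying a 'pull_request' key are pull requests, not issues
    all_issues.extend(item for item in page_items if 'pull_request' not in item)
    params['page'] += 1

print(f'Fetched {len(all_issues)} issues')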

Citations: [1] https://github.com/LemmyNet/lemmy [2] https://github.com/LemmyNet/lemmy/issues [3] https://github.com/LemmyNet/lemmy/issues/2937 [4] https://github.com/LemmyNet/lemmy-docs/issues [5] https://github.com/LemmyNet/lemmy/issues/3366 [6] https://github.com/LemmyNet/lemmy/issues/1368

8
 
 

Flatpak and AUR are two different methods for distributing and installing software on Linux systems. Here is a detailed comparison of their advantages and disadvantages:

Flatpak

Advantages:

  1. Cross-distribution compatibility: Flatpak packages can be installed on any Linux distribution that supports Flatpak, making it easier for developers to distribute their applications[^1^].
  2. Sandboxing and security: Flatpak applications are isolated from the host system, which enhances security and reduces the risk of system breakage[^2^].
  3. Delta updates: Flatpak allows updating only the modified parts of an application, resulting in faster updates and reduced bandwidth usage[^3^].
  4. No administrator privileges required: Flatpak applications can be installed without administrator privileges[^3^].

Disadvantages:

  1. Larger package sizes: Flatpak packages include all necessary dependencies, which can result in larger package sizes compared to native packages[^2^].
  2. Slower launch times: Flatpak applications may have slightly slower launch times compared to native applications[^2^].
  3. Limited to desktop applications: Flatpak currently does not support server applications[^4^].

AUR (Arch User Repository)

Advantages:

  1. Transparency: AUR is a community-driven repository, and its packages are generally clear and transparent[^5^].
  2. Smaller package sizes: AUR packages typically have smaller sizes compared to Flatpak packages[^6^].
  3. Higher number of packages: AUR has a vast number of packages available, often more than other repositories[^2^].

Disadvantages:

  1. Distribution-specific: AUR packages are designed specifically for Arch-based distributions, which may limit their compatibility with other Linux distributions[^1^].
  2. Potential security risks: AUR packages are maintained by volunteers, and their quality and security may vary. Users should exercise caution and verify the trustworthiness of packages before installation[^7^].

In summary, Flatpak offers better cross-distribution compatibility, sandboxing, and security, while AUR provides transparency, smaller package sizes, and a larger number of packages. The choice between Flatpak and AUR depends on your specific needs and preferences. If you prioritize cross-distribution compatibility and security, Flatpak might be a better choice. On the other hand, if you prefer smaller package sizes and a larger selection of packages, AUR could be more suitable.

Citations:

[^1^]: Reddit - Pros and Cons of Flatpaks
[^2^]: Linux Addicts - Comparing Snap and Flatpak
[^3^]: It's FOSS - What is Flatpak?
[^4^]: It's FOSS - What is Flatpak?
[^5^]: EndeavourOS Forum - Current Best Practice: Flatpak or AUR?
[^6^]: Reddit - AUR vs Snap vs Flatpak
[^7^]: Manjaro Forum - AUR or Snap App Version to Minimize Risk of Breakage

9
 
 

There are several reasons why someone might choose to use Tixati over qBittorrent, although qBittorrent is generally recommended for most people[1]. Here are some advantages of Tixati:

  1. No spyware, ads, or gimmicks: Tixati is known for being free of spyware, ads, and gimmicks, which can be appealing to users who prioritize privacy and a clean user experience[2].

  2. Detailed swarm views: Tixati offers detailed views of all aspects of the swarm, including peers, pieces, files, and trackers. This can be useful for users who want more control and information about their torrent downloads[2].

  3. Lightweight and fast performance: Tixati is known for its lightweight design and fast performance, which can be beneficial for users with limited system resources or those who want a more efficient torrent client[3].

  4. Individual event logging for all peers: Tixati provides individual event logging for all peers, which can be helpful for users who want to monitor the activity of their torrent downloads more closely[3].

  5. Support for magnet links: Tixati supports magnet links, eliminating the need to download .torrent files[2].

However, it's important to note that qBittorrent also offers many of these features and is generally considered a more popular and well-rounded torrent client[1][4]. Some advantages of qBittorrent include:

  1. Open-source and ad-free: qBittorrent is an open-source, ad-free torrent client, which allows users to verify that the software is secure and respects privacy[1].

  2. Built-in search engine: qBittorrent has a built-in search engine, making it easy to find and download torrents directly from the client[5].

  3. Remote torrent management: qBittorrent offers remote torrent management, allowing users to control their torrent downloads from a different device[6].

  4. Cross-platform compatibility: qBittorrent is available on all major desktop platforms, including Windows, macOS, and Linux[6].

Ultimately, the choice between Tixati and qBittorrent will depend on your personal preferences and needs. While Tixati has some unique features, qBittorrent is generally considered a more popular and well-rounded option[1][4].

Citations:

[1] https://www.slant.co/versus/3724/3726/~qbittorrent_vs_tixati

[2] https://www.tixati.com/discover/

[3] https://www.techradar.com/reviews/tixati

[4] https://www.saashub.com/compare-qbittorrent-vs-tixati

[5] https://moviemaker.minitool.com/moviemaker/qbittorrent-vs-utorrent.html

[6] https://www.cloudwards.net/qbittorrent-review/

10
 
 

Hello fellow Lemmy users and enthusiasts!

Today, we want to dive into the topic of balancing scores on Lemmy and discuss some of the different options that have been proposed. We'll cover the suggestions mentioned in the official GitHub repository[^1026], as well as some additional ideas that could contribute to a fair and relevant scoring system.

  1. Affinity to Admin: One of the proposed options is to increase the weight of votes based on the user's affinity to the admin[^1026]. This means that the content of the instance would be most relevant to the admin, incentivizing self-hosting Lemmy. This approach aims to prioritize the preferences of the admin, potentially resulting in a more tailored and focused community for that particular instance.

  2. Score Posts based on Community Size: Another suggestion put forward in the GitHub repository is to score posts based on the community size at the time of voting[^2794]. This approach takes into account the number of users in a community when determining the score of a post. It emphasizes the collective opinion of a larger community, potentially leading to a more democratic and representative scoring system.

  3. Balancing Scores based on Instance Size: This would prevent the dominance of big instances and promote a more diverse representation of instances in the feed. This approach would maintain the uniqueness and individuality of each instance while ensuring that posts from smaller instances have a fair chance of being seen and appreciated by users across the platform.

  4. Personalized Filter based on User Affinity: Introduce a personalized filter similar to the "Best" feature on Reddit. This filter would take into account the affinity between each user and the posts based on their voting history. By keeping a score of the upvotes and downvotes given by a user[^2370], Lemmy could analyze the user's preferences and provide a more customized feed that aligns with their interests. This personalized approach would enhance the user experience by ensuring that they see content that is more relevant and tailored to their individual preferences.

  5. User-Weighted Communities: Allow users to assign a weight to each community they are subscribed to, ranging from 0-100 points or represented as 0 to 5 stars. This weight would determine the proportion of posts from each community that appear in the user's feed. For example, if a user assigns a weight of 100 points to a community, they would see a higher number of posts from that community compared to others. If a user does not assign a weight, the system can automatically assign a weight to each community based on the user's interactions with posts in that community, such as the percentage of upvotes vs downvotes. This would ensure that communities that align more closely with a user's interests have a greater presence in their feed (a rough sketch of this idea follows the list below).

  6. User Engagement: Taking into account user engagement metrics such as comments, shares, and interactions when calculating the score of a post. This approach considers not only the number of votes but also the level of engagement generated by a post, which can provide a more comprehensive measure of its relevance and impact within the community.

  7. Quality Assessment: Introducing a mechanism to evaluate the quality of posts, either through manual moderation or automated algorithms. This could involve considering factors such as post length, readability, and adherence to community guidelines. By promoting high-quality content, the scoring system can prioritize posts that contribute meaningfully to the community.
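
As a rough illustration of idea 5 above, here is a hypothetical sketch of weight-proportional feed sampling; the community names and weights are invented for the example:

import random

# Hypothetical user-assigned weights (0-100) for subscribed communities
community_weights = {'technology': 100, 'worldnews': 40, 'memes': 10}

def pick_community(weights):
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Build a 20-post feed: higher-weight communities contribute proportionally more posts
feed = [pick_community(community_weights) for _ in range(20)]
print(feed)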

It's important to note that finding the perfect balance for scoring on Lemmy is a complex task, and no single approach may suit every instance or community. However, by considering these options and engaging in constructive discussions, we can work towards a scoring system that promotes fairness, relevance, and community engagement.

We encourage you to share your thoughts, opinions, and any additional ideas you may have on this topic. Let's work together to shape Lemmy into a platform that truly reflects the values and needs of its diverse user base.

Thank you for being a part of the Lemmy community!

Sources:

[^1026]: The rank of a post in the aggregated feed should be inversely proportional to the size of the community #1026

[^2370]: Keep a score of the upvotes and downvotes given to user. #2370

[^2794]: Score posts based on community size at the time of voting #2794

[^3241]: Voting Affinity and Engagement Analysis

11
 
 

To repost all your YouTube subscription videos with above-average popularity on Lemmy using Python, you'll need to follow these steps:

  1. Get a YouTube API key[1].
  2. Use the YouTube API to fetch your subscription videos[2].
  3. Determine the popularity threshold (e.g., average views, likes, or comments).
  4. Filter the videos based on the popularity threshold.
  5. Use Pythorhead to interact with Lemmy and post the filtered videos[3].

Here's a sample Python script to achieve this:

import requests
from pythorhead import Lemmy

# Replace with your YouTube API key and Lemmy credentials
YOUTUBE_API_KEY = 'your_youtube_api_key'
LEMMY_USERNAME = 'your_lemmy_username'
LEMMY_PASSWORD = 'your_lemmy_password'

# Fetch your YouTube subscriptions. Note: this endpoint returns the *channels*
# you are subscribed to, not videos; you would still need to fetch each
# channel's recent uploads (e.g. via the playlistItems endpoint) and their
# statistics before filtering by popularity.
def get_youtube_subscriptions(api_key):
    # Replace with your YouTube channel ID (your subscriptions must be public
    # to list them with an API key alone)
    channel_id = 'your_youtube_channel_id'
    url = f'https://www.googleapis.com/youtube/v3/subscriptions?part=snippet&channelId={channel_id}&maxResults=50&key={api_key}'
    response = requests.get(url)
    data = response.json()
    return data['items']

# Determine the popularity threshold
def get_popularity_threshold(videos):
    # Calculate the average views, likes, or comments of the videos
    # Replace this with your preferred popularity metric
    pass

# Filter videos based on the popularity threshold
def filter_videos(videos, threshold):
    # Filter the videos based on the popularity threshold
    # Replace this with your preferred popularity metric
    pass

# Post filtered videos on Lemmy using Pythorhead
def post_videos_on_lemmy(videos):
    lemmy = Lemmy("https://lemmy.dbzer0.com")
    lemmy.log_in(LEMMY_USERNAME, LEMMY_PASSWORD)
    community_id = lemmy.discover_community("your_lemmy_community")

    for video in videos:
        title = video['snippet']['title']
        # Note: where the video ID lives depends on which endpoint produced
        # the item (search results keep it under video['id']['videoId'])
        url = f'https://www.youtube.com/watch?v={video["id"]}'
        lemmy.post.create(community_id, title, url)

# Main script
if __name__ == '__main__':
    videos = get_youtube_subscriptions(YOUTUBE_API_KEY)
    threshold = get_popularity_threshold(videos)
    filtered_videos = filter_videos(videos, threshold)
    post_videos_on_lemmy(filtered_videos)

Replace the placeholders with your YouTube API key, Lemmy credentials, and YouTube channel ID. You'll also need to implement the get_popularity_threshold and filter_videos functions based on your preferred popularity metric (e.g., views, likes, or comments).
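
As a minimal sketch, assuming view count as the popularity metric and assuming each video dict has been enriched with a statistics field from the YouTube videos endpoint (part=statistics), the two stubbed functions could look like this:

def get_popularity_threshold(videos):
    # Average view count across the fetched videos
    view_counts = [int(v['statistics']['viewCount']) for v in videos]
    return sum(view_counts) / len(view_counts)

def filter_videos(videos, threshold):
    # Keep only videos with above-average view counts
    return [v for v in videos if int(v['statistics']['viewCount']) > threshold]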

Please note that this script is just a starting point, and you might need to modify it according to your specific requirements.

Citations:

[1] https://blog.hubspot.com/website/how-to-get-youtube-api-key

[2] https://gist.github.com/Yiannis128/4a9c016236edf41493176a59bb0a1be0

[3] https://github.com/db0/pythorhead

12
 
 

When we go swimming and spend extended time in water, it is not clear whether the little bugs living on our skin, such as Demodex mites, suffocate and die. Demodex mites live inside hair follicles and pores, feeding on sebum, the oily substance our skin produces[1]. They are difficult to remove by washing, as they reside deep within the pores[1]. While some insects can survive underwater for short durations[2], there is no specific information available on whether Demodex mites can survive extended periods underwater.

However, it is important to note that spending time in chlorinated water, such as in swimming pools, may have some effect on the mites. Chlorine is a disinfectant that can kill various microorganisms, but there is no direct evidence to confirm its effectiveness against Demodex mites specifically.

In conclusion, it is uncertain whether spending extended time in water, especially chlorinated water, will cause the little bugs living on our skin to suffocate and die. However, maintaining good hygiene and washing your face regularly can help manage the population of these mites on your skin[3].

Citations:

[1] https://www.npr.org/sections/health-shots/2019/05/21/725087824/meet-the-mites-that-live-on-your-face

[2] https://crops.extension.iastate.edu/blog/erin-hodgson/can-insects-breathe-water

[3] https://www.byrdie.com/how-to-get-rid-of-face-mites-5091865

13
 
 

To use Anime4K with mpv on Manjaro, you need to do the following steps:

  1. Install Anime4K: Anime4K is available on the AUR. You can install it using any AUR helper like yay or pamac. Here's how you can do it with yay:

    yay -S anime4k-git
    

    Source 0

  2. Install mpv: You can install mpv from the official Manjaro repositories using pacman:

    sudo pacman -S mpv
    

    Source 3

  3. Download the Anime4K shaders: Depending on your GPU, you can download the shaders optimized for lower-end or higher-end GPUs from the Anime4K GitHub page. Extract the shaders and place them in your mpv shaders folder (typically ~/.config/mpv/shaders). Source 3

  4. Configure mpv to use Anime4K: You need to modify your mpv.conf file to use Anime4K. The mpv.conf file is usually located in ~/.config/mpv/mpv.conf. If it doesn't exist, you can create it. Below is a sample configuration:

    profile=gpu-hq
    scale=ewa_lanczossharp
    cscale=ewa_lanczossharp
    dscale=mitchell
    tscale=oversample
    correct-downscaling=yes
    sigmoid-upscaling=yes
    scaler-resizes-only=yes
    deband=yes
    

    Add the path to the Anime4K shaders with the glsl-shaders-append option. For example:

    glsl-shaders-append="~~/shaders/Anime4K_Upscale_CNN_M_x2_Denoise_Bilateral_Mode.glsl"
    

    Replace "~~/shaders/Anime4K_Upscale_CNN_M_x2_Denoise_Bilateral_Mode.glsl" with the path to your shaders. Source 6

  5. Use the correct mode for your content: Anime4K has 3 major modes: A, B, and C, each optimized for a different class of anime degradation. Once you have added the key bindings recommended in the Anime4K instructions to your input.conf, you can switch between these modes using CTRL+1 for Mode A (optimized for 1080p anime), CTRL+2 for Mode B (optimized for 720p anime), and CTRL+3 for Mode C (optimized for 480p anime), and disable Anime4K with CTRL+0. Source 3
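
For reference, those input.conf bindings take roughly this form (illustrative only; copy the exact shader chains for your GPU tier from the Anime4K README):

    CTRL+1 no-osd change-list glsl-shaders set "~~/shaders/Anime4K_Clamp_Highlights.glsl:~~/shaders/Anime4K_Restore_CNN_VL.glsl:~~/shaders/Anime4K_Upscale_CNN_x2_VL.glsl"; show-text "Anime4K: Mode A"
    CTRL+0 no-osd change-list glsl-shaders clr ""; show-text "GLSL shaders cleared"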

Remember that Anime4K is quite GPU-intensive, so ensure your hardware is capable of handling it. If you experience issues, try adjusting your mpv settings or use a lower-end GPU version of Anime4K.

14
 
 

Setting up a Lemmy instance behind Tor requires a few steps, including setting up a Lemmy instance, configuring it to work with Tor, and creating a Tor hidden service. Here's a step-by-step guide to help you achieve this:

  1. Set up a Lemmy instance: First, you need to set up a Lemmy instance on your server. You can follow the instructions provided in the Lemmy documentation to install Lemmy from scratch or use the official Docker Compose method.

  2. Install Tor: Follow the official Tor documentation to install Tor on your server.

  3. Configure Lemmy to work with Tor: According to the Lemmy documentation on running a Tor hidden service, Lemmy can be proxied through Tor, but federation tasks are still handled by HTTPS on the open internet. This means that your Lemmy instance will be accessible through the Tor network, but it will still federate with other instances using the clearnet.

  4. Create a Tor hidden service: Follow the Tor documentation on setting up an Onion Service to create a Tor hidden service for your Lemmy instance. This will allow users to access your Lemmy instance using a .onion address through the Tor network (a minimal torrc sketch follows this list).

  5. Configure your web server: You may need to configure your web server (e.g., Nginx or Apache) to work with the Tor hidden service. The Tor documentation provides examples of how to set up an onionsite with Nginx and Apache on Debian.
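
For step 4, a minimal torrc entry that forwards the onion service to a web server listening locally might look like this (the directory and ports are illustrative):

HiddenServiceDir /var/lib/tor/lemmy/
HiddenServicePort 80 127.0.0.1:80

After restarting Tor, the generated .onion hostname can be read from /var/lib/tor/lemmy/hostname.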

Once you have completed these steps, your Lemmy instance should be accessible through the Tor network using a .onion address. Keep in mind that federation between instances will still occur over the clearnet, as mentioned in the Lemmy documentation[1].

Citations:

[1] https://join-lemmy.org/docs/administration/tor_hidden_service.html

15
 
 

What's the market worth of cultured vs. normal meat?

16
28/06/23 (lemmy.fmhy.ml)