Below is a script that can be used to convert all MKV files in a directory tree to AV1
#!/bin/bash
set -e
# Find every .mkv that hasn't already been converted
find * -type f -iname '*.mkv' | grep -v av1 | while read -r mkv; do
    file=${mkv%.*}
    av1="${file}-av1.mkv"
    if [ -f "${av1}" ]; then
        echo "${av1} already there"
        # echo "${file} -> ${av1}"
    else
        echo "converting ${file} --> ${av1}"
        # completed.txt is a log of finished conversions (filename assumed; truncated in the original)
        ffmpeg -i "${mkv}" -c:v libsvtav1 -crf 35 "${av1}" < /dev/null &&
            echo "${av1}" >> completed.txt
    fi
done
Make required directories:
mkdir -p /userdata/mergerfs/VERSION_NUMBER && cd /userdata/mergerfs/VERSION_NUMBER
Download the newest static mergerfs build; the current builds can be found HERE
Extract it:
tar -xvzf FILE_NAME.tgz
cd ../ && ln -s VERSION_NUMBER current
Create disk labels so we know what to add to the script. Replace LABEL_NAME and /dev/sda1 with your disk info:
e2label LABEL_NAME /dev/sda1
Thankfully Batocera mounts disks by label in the /media/ folder.
Create a script to do the mount: nano mergerfs.
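A minimal sketch of what the mount script can look like; the DISK1/DISK2 labels, the /userdata/pool mount point, and the binary path are assumptions to adjust for your setup:

#!/bin/bash
# Minimal sketch: pool the labeled disks Batocera mounted under /media/
# DISK1/DISK2, /userdata/pool, and the binary location are assumptions;
# adjust the path to wherever the tarball unpacked.
/userdata/mergerfs/current/mergerfs \
    -o allow_other,use_ino,category.create=mfs \
    /media/DISK1:/media/DISK2 \
    /userdata/pool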
By default mergerfs appears to try to mount before ZFS is mounted, which causes the mergerfs filesystem to fail. To fix this we just need to add x-systemd.requires=zfs-mount.service to the /etc/fstab entry
For example my /etc/fstab entry is below:
/hdd*/mergerfs /data fuse.mergerfs splice_read,threads=4,allow_other,cache.readdir=true,cache.files=off,fsname=mergerfs,use_ino,dropcacheonclose=true,link_cow=true,category.create=mfs,cache.entry=120,cache.attr=120,x-systemd.requires=zfs-mount.service 0 0
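After editing /etc/fstab you can test the entry without rebooting (mount point per the example above):
systemctl daemon-reload && mount /data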
How to share more than one calendar via link.
First share the calendar like you usually would to get a public link. Then you can combine the keys like below:
These two are the individual calendars
https://next.my.domain/apps/calendar/p/hjfu37fhcneydyxh
https://next.my.domain/apps/calendar/p/2u487fiuwf22fe98
You just take the part after the last / (slash) from each link and join them with a - (dash) between each one, like below
https://next.my.domain/apps/calendar/p/hjfu37fhcneydyxh-2u487fiuwf22fe98
This link will show all calendars. Just separate any calendars you want to share with the - (dash)
Only copy specific file extensions in a folder
rsync -a --include '*/' --include '*.mp3' --exclude '*' source/ target/
Speed up rsync over SSH without needing to change any configs. arcfour is faster, but it is no longer enabled by default, whereas aes128-ctr is
rsync -avhP -e "ssh -c aes128-ctr" /src/ user@ip:/dst/
rsync over SSH with a non-standard port
rsync -avhP -e "ssh -p number" /src/ user@ip:/dst/
rsync over SSH with a non-standard port, showing full progress
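A likely form of the command, assuming rsync >= 3.1 where --info=progress2 reports whole-transfer progress instead of per-file:
rsync -avh --info=progress2 -e "ssh -p number" /src/ user@ip:/dst/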
I based this post on HERE
Just for reference, the things I did to make it work:
git clone https://github.com/tailscale/tailscale-android.git
nano tailscale-android/cmd/tailscale/backend.go
change:
func (b *backend) Start(notify func(n ipn.Notify)) error {
	b.backend.SetNotifyCallback(notify)
	return b.backend.Start(ipn.Options{
		StateKey: "ipn-android",
	})
}
to:
func (b *backend) Start(notify func(n ipn.Notify)) error {
	b.backend.SetNotifyCallback(notify)
	prefs := ipn.NewPrefs()
	prefs.ControlURL = "https://myheadscale.domain.com"
	opts := ipn.Options{
		StateKey:    "ipn-android",
		UpdatePrefs: prefs,
	}
	return b.backend.Start(opts)
}
nano Dockerfile
Add the below to the file:
Install resolvconf
sudo apt install resolvconf
Edit the base file with what you want to always be in the file
sudo nano /etc/resolvconf/resolv.conf.d/base
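For example, to always have a fallback nameserver listed (1.1.1.1 is just an example here):
nameserver 1.1.1.1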
Have resolvconf rebuild the base
sudo resolvconf -u
Show logs from when a systemd service last restarted. (This needs systemd > v232)
journalctl _SYSTEMD_INVOCATION_ID=$(systemctl show -p InvocationID --value SERVICE_NAME.service) | head -n15
Mount NFS with /etc/fstab
From all of my reading over the years it's always been said to add _netdev to the /etc/fstab mount, but that never worked for me. After more reading it appears that was for SysV init, which is dead. I figured it out after much Googling.
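A minimal sketch of the systemd-style fix (server, export, and mount point are placeholders; x-systemd.automount defers the mount until first access, once the network is up):
server:/export /mnt/nfs nfs defaults,x-systemd.automount,x-systemd.mount-timeout=30 0 0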
Connect via SSH client with a different user by default:
nano ~/.ssh/config
Add the following to the file above:
Host *
    User DEFAULT_USER

Force password auth:
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no user@host
For UFW and Docker I use a program called UFW-Docker
To use it do the following:
wget -O /usr/local/bin/ufw-docker https://github.com/chaifeng/ufw-docker/raw/master/ufw-docker
chmod +x /usr/local/bin/ufw-docker
ufw-docker install
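You can then open ports for individual containers, for example (container name and port are examples):
ufw-docker allow nginx 80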
Allow the Tailscale VPN to reach all Docker containers
This is based on the issue HERE
ufw route allow from 100.64.0.0/10 to any
You should now be good to accept anything from the tailscale network
This is how to get the Windows client working with Headscale. I'm happy to finally get it working.
Headscale's docs are HERE, but I'm adding some more info since I wasn't able to get it to work the first time
If you’ve already installed tailscale on the machine make sure to delete the C:\Users\<USERNAME>\AppData\Local\Tailscale directory
Download the Official Windows Client HERE and install it.
You can either do option A or B.
Option A: Manually edit the registry
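A sketch of the registry route, using the value names the Tailscale Windows client reads (substitute your own Headscale URL; these names may change between client versions):
reg add "HKLM\Software\Tailscale IPN" /v UnattendedMode /t REG_SZ /d always
reg add "HKLM\Software\Tailscale IPN" /v LoginURL /t REG_SZ /d https://myheadscale.domain.com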
If you're wanting to stream an HDHomeRun channel to your own OwnCast server
I’m using Debian like everything else I do
apt install -y ffmpeg
ffmpeg -i "http://IP_OF_HDHR:5004/auto/vCH.N" -c:v libx264 -c:a aac -b:v 512K -maxrate 512K -bufsize 1M -f flv rtmps://OWNCAST_URL:PORT/live/STREAM_KEY
You can now go to your owncast URL and it should be streaming
SystemD Service
nano /etc/systemd/system/hdhomerun-stream.service
[Unit]
Description=HDHR Daemon
After=network.target

[Service]
User=plex
Group=plex
EnvironmentFile=-/etc/default/hdhomerun
Type=simple
ExecStart=/usr/bin/ffmpeg -i "${CHANNEL}" -c:v libx264 -c:a aac -b:v 512K -maxrate 512K -bufsize 1M -f flv "${URL}:${PORT}/live/${KEY}"
Restart=on-failure

[Install]
WantedBy=multi-user.target
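The unit reads its settings from the EnvironmentFile; a sketch of /etc/default/hdhomerun using the placeholders from the ffmpeg command above:
CHANNEL=http://IP_OF_HDHR:5004/auto/vCH.N
URL=rtmps://OWNCAST_URL
PORT=PORT
KEY=STREAM_KEY
Then reload systemd and start it:
systemctl daemon-reload && systemctl enable --now hdhomerun-stream.service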
By default the prefix is Ctrl+B for tmux
How to save a pane to a file
Use prefix + :
We need to put those lines into a buffer by typing in capture-pane -S -150. Replace -150 with however many lines you'd like to save, or use - for all lines.
Hit return (enter)
Now we have to save the buffer to a file by doing the following prefix + :
Type in save-buffer filename.
Here is a bash script I use to update DDNS with Cloudflare. I could use ddclient, but I like this; it works for me.
apt -y install dnsutils jq curl
#!/usr/bin/env bash
# A bash script to update a Cloudflare DNS A record with the external IP of the source machine
# Used to provide DDNS service for my home
# Needs the DNS record pre-creating on Cloudflare
## Based on https://gist.
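A minimal sketch of how the rest of such a script can look, using the Cloudflare v4 API and the tools installed above (dig, jq, curl); the token, zone ID, record ID, and hostname are all placeholders:

#!/usr/bin/env bash
set -euo pipefail

# Placeholders -- substitute your own values
auth_token="CLOUDFLARE_API_TOKEN"
zone_id="ZONE_ID"
record_id="RECORD_ID"
record_name="home.example.com"

# Current external IP (dig is provided by dnsutils)
ip=$(dig +short myip.opendns.com @resolver1.opendns.com)

# Update the pre-created A record
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${zone_id}/dns_records/${record_id}" \
    -H "Authorization: Bearer ${auth_token}" \
    -H "Content-Type: application/json" \
    --data "{\"type\":\"A\",\"name\":\"${record_name}\",\"content\":\"${ip}\",\"ttl\":120,\"proxied\":false}" \
    | jq -r '.success'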
RIGHT NOW GITEA KEEPS YOU LOGGED IN AS THE FIRST USER, SO IT'S NOT PERFECT; THERE'S A KNOWN ISSUE.
We need to update the logout button to the authentik logout URL:
wget -O /var/lib/gitea/custom/templates/base/head_navbar.tmpl https://raw.githubusercontent.com/go-gitea/gitea/main/templates/base/head_navbar.tmpl
Replace the old logout URL with the new:
sed -i 's#/user/logout#/akprox/sign_out#g' /var/lib/gitea/custom/templates/base/head_navbar.tmpl
I did notice that replacing the logout URL doesn't directly log you out, but you will be logged out the next time you try to do anything.
Now it's time to configure Gitea:
nano /etc/gitea/app.ini
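For reverse-proxy auth the relevant app.ini options look roughly like this (the header name assumes authentik's X-authentik-username, matching the Grafana config below):

[service]
ENABLE_REVERSE_PROXY_AUTHENTICATION = true

[security]
REVERSE_PROXY_AUTHENTICATION_USER = X-authentik-username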
nano /etc/grafana/grafana.ini
[auth.proxy]
# Defaults to false, but set to true to enable this feature
enabled = true
# HTTP Header name that will contain the username or email
header_name = X-authentik-username
# HTTP Header property, defaults to `username` but can also be `email`
header_property = username
# Set to `true` to enable auto sign up of users who do not exist in Grafana DB. Defaults to `true`.
auto_sign_up = false
# Define cache time to live in minutes
# If combined with Grafana LDAP integration it is also the sync interval
sync_ttl = 60
# Limit where auth proxy requests come from by configuring a list of IP addresses.
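To actually limit the sources, the option is whitelist (the addresses here are just examples):
whitelist = 192.168.1.1, 192.168.2.1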
First you can download the installer for rpiboot for Windows from GitHub HERE
Then I always prefer Debian, which can be found HERE
I’m using the DF Robot Router Board from HERE
Huge shoutout and thanks to Jeff Geerling for the board.
To get the CM4 into rpiboot mode, you have to flip the little switch labeled RPIBOOT on the DF Robot board to 1
Now you have to install the program, then open up rpiboot and let it do its thing; it'll then be mounted
Here’s a quick rundown of how usenet works:
The three things required are a server, an indexer, and a downloader.
Server: Where you download the articles from. (Eweka, SuperNews)
Indexer: A search engine for the usenet servers. (NZBGeek, NZBCat, DogNZB)
Downloader: This is used to download and extract the files since they are put into RAR files. (NZBGet, SABnzbd)
Arr software searches via the indexer, which then sends the .nzb file to the downloader.