When I was setting up Deluge to run headless on my Linux server, deluge-web wasn't saving any settings and nothing was working. It turned out to be an easy fix if you know how.
The problem was that the web-ui wasn't auto-connecting to the deluged backend, which caused the connection manager to always pop up.
Anyhow, assuming your web-ui and deluged are running on the same machine, edit the web.conf file and make sure that default_daemon is populated.
```
# /var/lib/deluge/config/web.conf -- your path will likely be different
...
"default_daemon": "localhost:58846",
...
```
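If you'd rather script the change, a small sed helper works. This is just a sketch: the path above is an assumption about where your config lives, and you should stop deluge-web before editing, since it saves its config back out when it exits and can clobber your change.

```shell
# sketch: point default_daemon at the local deluged instance.
# stop deluge-web before editing or your change may be overwritten on exit.
set_default_daemon() {
    # $1 = path to web.conf, $2 = host:port of the deluged daemon
    sed -i "s/\"default_daemon\": *\"[^\"]*\"/\"default_daemon\": \"$2\"/" "$1"
}
```

Then something like `set_default_daemon /var/lib/deluge/config/web.conf localhost:58846` and restart deluge-web.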
Just a quick note before I forget what little I understand. Something happened during my install of Proxmox (or maybe because I installed Proxmox on top of an existing Debian install; I honestly can't remember, since it was about four weeks ago). Long story short: the NFS server doesn't work after a reboot.
```
$ showmount -e
clnt_create: RPC: Program not registered
```
But if you manually run rpc.mountd, the exports come back:

```
$ rpc.mountd
$ showmount -e
Export list for proxmox-1
/tank/etc 192.168.1.0/24
```
But then you can't actually mount anything until you run
```
$ rpc.nfsd
$ mount proxmox-1:/tank/etc /tmp/etc   # totally works
$ ps auxw | grep rpc.nfsd              # no results
```
So I'm not sure what's going on. I do know that the half-sysvinit, half-systemd init scripts are somehow buggy.
The really crappy thing is that until I figure out the real solution I can't safely reboot my boxes.
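Until the real fix turns up, a one-shot systemd unit could re-kick the daemons after boot. This is an untested sketch: the unit name is made up, and the daemon paths and dependency names may differ on your install.

```
# /etc/systemd/system/nfs-rekick.service -- hypothetical workaround unit
[Unit]
Description=restart the NFS daemons that don't survive a reboot
After=network-online.target rpcbind.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/rpc.mountd
ExecStart=/usr/sbin/rpc.nfsd

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable nfs-rekick.service`; Type=oneshot is what lets a unit have multiple ExecStart lines.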
I've been making LXC containers in Proxmox like a fiend. I'm totally loving Proxmox; if you want to run several virtual machines I highly recommend it.
Anyhow, trying to run avahi-daemon in the containers often fails. I'm not the first to notice this, but the answers were unsatisfying until I found a suggestion to try running with --no-rlimits. That seems to do the trick!
But how do you get systemd to run it that way? Very simply, as it turns out:
```
systemctl edit avahi-daemon.service
```
And then, in the text editor that opens up, enter the following:
```
[Service]
ExecStart=
ExecStart=/usr/sbin/avahi-daemon -s --no-rlimits
```
see comment #2 for a script-friendly way to do this
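In the same spirit, here's a sketch that writes the drop-in file directly instead of going through the interactive editor. The drop-in directory argument follows the standard systemd override layout; adjust to taste.

```shell
# write the same override that `systemctl edit` would create interactively
write_override() {
    # $1 = drop-in directory, e.g. /etc/systemd/system/avahi-daemon.service.d
    mkdir -p "$1"
    cat > "$1/override.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/sbin/avahi-daemon -s --no-rlimits
EOF
}
```

Then `write_override /etc/systemd/system/avahi-daemon.service.d && systemctl daemon-reload` and restart the service.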
libnss-mdns sometimes doesn't install properly, though. If you can't ping or look up other .local hosts, then edit /etc/nsswitch.conf and change

```
hosts: files dns
```

to

```
hosts: files mdns4_minimal [NOTFOUND=return] dns
```
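That edit is easy enough to automate with sed; this sketch assumes your file still has the stock `hosts: files dns` line and will silently do nothing otherwise.

```shell
# enable mDNS lookups by rewriting the hosts line in nsswitch.conf
enable_mdns() {
    # $1 = path to nsswitch.conf (normally /etc/nsswitch.conf)
    sed -i 's/^hosts:[[:space:]]*files dns$/hosts: files mdns4_minimal [NOTFOUND=return] dns/' "$1"
}
```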
If anybody wants to write me an Ansible script to do that, I would totally buy you a beer.
Too late, I had to write it myself, container.yml.
At work we have a large project that is composed of several nested git repos, so having your bash prompt updated with some vital information such as repo, branch, etc. makes life much easier.
Here's an example of my prompt and how it shows the current git repo:
```
[kurt@machine-1 ~/src/foo/bar/baz venv:foo git:bug_branch repo:bar] $
```
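My actual setup is more involved (it also shows the venv and repo name), but a minimal sketch of just the git-branch part could look something like this:

```shell
# minimal git-aware prompt: append " git:<branch>" when inside a repo
git_ps1() {
    local branch
    branch=$(git symbolic-ref --short HEAD 2>/dev/null) || return 0
    printf ' git:%s' "$branch"
}
PS1='[\u@\h \w$(git_ps1)] \$ '
```

The `2>/dev/null || return 0` keeps the prompt quiet and unchanged when you're not in a repo.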
I've recently been wasting time on Empire of Code instead of doing productive and paying work.
Anyhow, the description of the Speed Boost puzzle Landing Holes was the most confusing thing I've ever read.
I'm pretty sure they're describing a Gatling gun. Mentally apply s/canon/breech/ and the description starts to make sense.
As for the function, the first list is a list of barrels and the second is a list of working breeches. The puzzle is asking you for a list of how many times you can rotate the barrels so they align with working breeches.
Well, I finally released alkali into the wild.
Alkali is a simple database that makes it very easy to specify the on-disk format of your data. This makes it easy to use your existing data files as tables in a database. Plus, the API is based on Django models.
This is my first real project that I've released and it's a surprising amount of work. I have a new level of appreciation for all the libraries that I just blithely download and use without a second's thought.
For instance, it took about the same amount of time to write the docs as it did to write the actual code. Plus there are a lot of moving pieces to release open source software the right way.
- use git as your source code control
- write documentation; learn Sphinx and reStructuredText
- when you push to GitHub, triggers are fired
- push a release to PyPI; writing setup.py is very non-trivial, so learn how that works
- try to do some marketing on Reddit. Given my zero karma, I suck at marketing and/or programming.
So yeah… please go check out alkali!
While trying to proxy my main nginx instance to a GitLab Docker container, I wasted hours and hours and hours trying to fix the following error:
```
fatal: unable to access 'https://gitlab.burgundywall.com/kneufeld/myproject.git/': \
  SSL read: error:00000000:lib(0):func(0):reason(0), errno 54
```
It turns out that the nginx config option ssl_session_cache is super f'n important to not screw up. I'm not totally sure what the problem is, but my main server clause had an ssl_session_cache set and my gitlab server stanza didn't have any such option. So, something something something, I could not do any git commands over https.
And even with logging everything looked okay
```
GIT_CURL_VERBOSE=1 git clone https://gitlab.burgundywall.com/kneufeld/myproject.git
Cloning into 'myproject'...
* Couldn't find host gitlab.burgundywall.com in the .netrc file; using defaults
*   Trying 192.168.5.6...
* Connected to gitlab.burgundywall.com (192.168.5.6) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /usr/local/etc/openssl/cert.pem
    CApath: none
* NPN, negotiated HTTP1.1
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=gitlab.burgundywall.com
*  start date: Sep  3 16:53:00 2016 GMT
*  expire date: Dec  2 16:53:00 2016 GMT
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
> GET /kneufeld/myproject.git/info/refs?service=git-upload-pack HTTP/1.1
Host: gitlab.burgundywall.com
User-Agent: git/2.9.3
Accept: */*
Accept-Encoding: gzip
Pragma: no-cache

* SSL read: error:00000000:lib(0):func(0):reason(0), errno 54
* Closing connection 0
fatal: unable to access 'https://gitlab.burgundywall.com/kneufeld/myproject.git/': SSL read: error:00000000:lib(0):func(0):reason(0), errno 54
```
except it didn't work.
Anyhow, when I finally figured out that ssl_session_cache was the issue and did some reading, I just made sure that each SSL server has its own cache.
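For illustration, the shape of the fix looks like this; the server names and cache sizes here are hypothetical, not my real config. The point is that each TLS server block gets its own named cache.

```
# hypothetical sketch -- each ssl server gets its own named session cache
server {
    server_name example.com;
    ssl_session_cache shared:main:10m;
    # ...
}

server {
    server_name gitlab.example.com;
    ssl_session_cache shared:gitlab:10m;
    # ...
}
```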
I was trying to get Plex to run in a container on CoreOS and for the life of me I couldn't get it to start. I kept getting the following error:
```
Error: Unable to set up server: bind: Cannot assign requested address
(N5boost16exception_detail10clone_implINS0_19error_info_injectorINS_6system12system_errorEEEEE)
```
It turns out that at some point I had enabled IPv6 and that caused the problem.
So edit your Preferences.xml and disable IPv6. Here's my plex.service for completeness.
```
[Unit]
Description=plex media server
After=docker.service
#After=docker-registry.service

[Service]
TimeoutStartSec=0
Restart=always
KillMode=none
EnvironmentFile=/media/metadata/plex/environment
ExecStop=-/usr/bin/docker stop plex
ExecStartPre=-/usr/bin/docker kill plex
ExecStartPre=-/usr/bin/docker rm plex
ExecStartPre=/usr/bin/docker pull timhaak/plex:latest
ExecStart=/usr/bin/docker run --name plex --rm \
  --net=host \
  --env-file /media/metadata/plex/environment \
  -v /home/plex:/config \
  -v /home/media:/media \
  timhaak/plex:latest
```