<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Mensah David Assigbi's personal IT blog]]></title><description><![CDATA[Mensah David Assigbi's personal IT blog]]></description><link>https://blog.davidassigbi.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 11:28:27 GMT</lastBuildDate><atom:link href="https://blog.davidassigbi.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[VirtualBox VM lagging when adding more CPUs]]></title><description><![CDATA[Initial brain dump
I noticed something a little counter-intuitive for me…
My home computer has the following specs:

CPU: Ryzen 9 7900X : 12 Cores / 24 threads (2 CCDs) / Special NUMA architecture

So there are basically 6 cores per silicon die.


The...]]></description><link>https://blog.davidassigbi.com/virtualbox-vm-lagging-when-adding-more-cpus</link><guid isPermaLink="true">https://blog.davidassigbi.com/virtualbox-vm-lagging-when-adding-more-cpus</guid><category><![CDATA[virtualization]]></category><category><![CDATA[virtual machine]]></category><category><![CDATA[VirtualBox ]]></category><category><![CDATA[hypervisor]]></category><category><![CDATA[AMD processor]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 25 Jan 2026 13:03:34 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-initial-brain-dump">Initial brain dump</h2>
<p>I noticed something a little counter-intuitive for me…</p>
<p>My home computer has the following specs:</p>
<ul>
<li><p>CPU: Ryzen 9 7900X : 12 Cores / 24 threads (2 CCDs) / Special NUMA architecture</p>
</li>
<li><p>So there are basically 6 cores per silicon die.</p>
</li>
</ul>
<p>The computer is running Windows 11.</p>
<p>I created a Kubuntu 24.04 VM in VBox, initially with 4 CPUs assigned.</p>
<p>As I have 24 threads/vCPUs available overall, I bumped the CPU count to 8 for the VM.</p>
<p>All of a sudden, when logged into the VM, the UI felt sluggish/laggy, as if there was not enough RAM…</p>
<p>Turns out that because all the cores are spread across 2 dies/chips/sockets-ish, there is some cost when a program runs threads across the 2 dies. So the OS needs to know not to do that…</p>
<p>Turns out VBox is not NUMA aware, so when assigning 8 vCPUs, they might spill over to the other die, which again is very costly when a single program has to make threads talk to each other across the dies.</p>
<p>VMware or KVM/QEMU (on Linux) are NUMA aware, so they know the inherent architecture of the CPU and prefer scheduling the vCPUs on a single die to avoid the overhead of cross-die communication.</p>
<p>Also, there are actually only 12 real cores, and there is a hyperthreading cost too: if one of the vCPUs is actually a hyperthread, it's going to be a little slower (and if a hyperthread vCPU lands on another CPU die, it's even worse).</p>
<p>So it's best to assign at most half of the real cores one has.</p>
<p>So for my processor, with only 6 real cores per die, I shouldn't do anything that has a chance of spilling onto the other CPU die.</p>
<p>But I still have 12 threads. However, again, VBox is not CCD/NUMA aware, and to VBox all vCPUs are equal, which is not true at all. So even though VBox shows 24 vCPUs:</p>
<p>Because 12 of those vCPUs are on the other CCD, I shouldn't go above 12.</p>
<p>Because 6 of the 12 vCPUs per CCD are hyperthreads, I really shouldn't go above 6, as I would otherwise certainly be using vCPUs that are hyperthreads or on another die, which can also cause latency.</p>
<p>But because only 6 real cores are usable per CCD, even using 6 vCPUs will likely result in some of them being hyperthreads, which can also cause latency / co-scheduling issues.</p>
<p>I thought I should just go with 3 vCPUs, but nope: Linux guests work best when the number of CPUs is a power of 2. So the best is 4 vCPUs, which leaves some room to keep all the vCPUs on the same CCD without running into any hyperthread-vCPU issues…</p>
<p>However, all this BS is just because VBox is not NUMA/CCD aware. If I were on Linux using QEMU/KVM or VMware, I think I wouldn't have these issues.</p>
<h2 id="heading-chatgpt-rewrite">ChatGPT rewrite</h2>
<h3 id="heading-title-a-counter-intuitive-virtualbox-performance-lesson-on-ryzen">Title: A Counter-Intuitive VirtualBox Performance Lesson on Ryzen</h3>
<h3 id="heading-why-my-linux-vm-got-slower-when-i-gave-it-more-cpus">Why My Linux VM Got Slower When I Gave It <em>More</em> CPUs</h3>
<p>I recently ran into a counter-intuitive performance issue while running a Linux VM.</p>
<p>My system uses a <strong>Ryzen 9 7900X</strong>:</p>
<ul>
<li><p>12 physical cores / 24 threads</p>
</li>
<li><p>2 CCDs (chiplets)</p>
</li>
<li><p>NUMA-like architecture (6 cores per CCD)</p>
</li>
</ul>
<p>The host OS is <strong>Windows 11</strong>.</p>
<hr />
<h3 id="heading-the-setup">The Setup</h3>
<p>I created a <strong>Kubuntu 24.04</strong> VM in <strong>VirtualBox</strong>, initially assigning <strong>4 vCPUs</strong>.<br />The VM felt smooth and responsive.</p>
<p>Since the CPU exposes <strong>24 logical threads</strong>, I increased the VM to <strong>8 vCPUs</strong>, expecting better performance.</p>
<p>Instead, the desktop became <strong>noticeably laggy</strong> — window movement stuttered and the UI felt unresponsive, despite normal RAM and CPU usage.</p>
<hr />
<h3 id="heading-the-cause">The Cause</h3>
<p>The Ryzen 7900X is not a single monolithic chip. Each CCD has its own L3 cache, and communication between CCDs has a real latency cost.</p>
<p>The problem is that <strong>VirtualBox is not NUMA or CCD aware</strong>. When assigning 8 vCPUs, VirtualBox may schedule them across <strong>both CCDs</strong>, even for a single workload like a desktop environment. This forces threads to communicate across CCDs, hurting latency-sensitive tasks such as GUI rendering.</p>
<p>Hypervisors like <strong>KVM/QEMU</strong> or <strong>VMware</strong> <em>are</em> topology-aware and usually try to keep vCPUs within the same CCD, avoiding this issue.</p>
<hr />
<h3 id="heading-smt-makes-it-worse">SMT Makes It Worse</h3>
<p>Out of the 24 logical CPUs:</p>
<ul>
<li><p>Only 12 are real physical cores</p>
</li>
<li><p>The rest are SMT (hyper-threaded) siblings</p>
</li>
</ul>
<p>VirtualBox treats all vCPUs as equal, even though SMT threads are slower and more prone to contention — especially when combined with cross-CCD scheduling.</p>
<hr />
<h3 id="heading-the-practical-limit">The Practical Limit</h3>
<p>To avoid cross-CCD traffic, vCPUs should stay well below the <strong>6 physical cores per CCD</strong>.<br />However, assigning all 6 risks SMT contention and scheduling latency.</p>
<p>Although 3 vCPUs seemed like the safest option, Linux guests tend to behave better with <strong>power-of-two CPU counts</strong>.</p>
<hr />
<h3 id="heading-the-sweet-spot">The Sweet Spot</h3>
<p><strong>4 vCPUs</strong> turned out to be ideal:</p>
<ul>
<li><p>Fits comfortably within one CCD</p>
</li>
<li><p>Leaves scheduling headroom</p>
</li>
<li><p>Minimizes SMT contention</p>
</li>
<li><p>Eliminates cross-CCD latency</p>
</li>
<li><p>Results in a smooth, responsive desktop</p>
</li>
</ul>
<p>Switching back to 4 vCPUs immediately fixed the lag.</p>
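<p>For reference, the change can also be made from the host command line with VirtualBox's <code>VBoxManage</code> tool (a sketch; the VM name below is a placeholder, check yours with <code>VBoxManage list vms</code>):</p>
<pre><code class="lang-bash"># Hypothetical VM name, replace with your own
VM="Kubuntu-24.04"

# The VM must be powered off before changing its CPU count
VBoxManage modifyvm "$VM" --cpus 4

# Confirm the new setting
VBoxManage showvminfo "$VM" | grep -i "Number of CPUs"
</code></pre>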
<hr />
<h3 id="heading-takeaway">Takeaway</h3>
<p>This issue exists <strong>because VirtualBox is not NUMA/CCD aware</strong>.</p>
<p>On the same hardware, <strong>KVM/QEMU or VMware</strong> would likely handle 8 vCPUs without issue.</p>
<p>The lesson is simple:</p>
<blockquote>
<p><strong>More vCPUs does not always mean better performance — especially on modern multi-CCD CPUs when using VirtualBox.</strong></p>
</blockquote>
<p>Sometimes, fewer CPUs really are faster.</p>
<h2 id="heading-resources">Resources:</h2>
<ul>
<li><p><a target="_blank" href="https://forums.virtualbox.org/viewtopic.php?t=110399&amp;start=15">https://forums.virtualbox.org/viewtopic.php?t=110399&amp;start=15</a></p>
</li>
<li><p><a target="_blank" href="https://automatestack.dev/the-ultimate-guide-to-right-sizing-cpu-and-memory-for-virtual-machines">https://automatestack.dev/the-ultimate-guide-to-right-sizing-cpu-and-memory-for-virtual-machines</a></p>
</li>
<li><p><a target="_blank" href="https://forums.virtualbox.org/viewtopic.php?t=94076">https://forums.virtualbox.org/viewtopic.php?t=94076</a></p>
</li>
<li><p><a target="_blank" href="https://www.xda-developers.com/how-many-cpus-vm/">https://www.xda-developers.com/how-many-cpus-vm/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Installing Kubernetes (k3s) on Proxmox LXC container]]></title><description><![CDATA[Create a privileged container in Proxmox

Make sure container doesn’t start right after creation

Append extra config to the LXC container conf file
On the proxmox host edit the file /etc/pve/lxc/<container id>.conf and add the following content
lxc....]]></description><link>https://blog.davidassigbi.com/installing-kubernetes-k3s-on-proxmox-lxc-container</link><guid isPermaLink="true">https://blog.davidassigbi.com/installing-kubernetes-k3s-on-proxmox-lxc-container</guid><category><![CDATA[proxmox]]></category><category><![CDATA[k8s]]></category><category><![CDATA[LXC]]></category><category><![CDATA[k3s]]></category><category><![CDATA[metallb]]></category><category><![CDATA[Helm]]></category><category><![CDATA[nginx ingress]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sat, 05 Oct 2024 20:06:43 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-create-a-privileged-container-in-proxmox">Create a privileged container in Proxmox</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728214716729/18e877df-308f-4701-af57-daae112c3b38.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-make-sure-container-doesnt-start-right-after-creation">Make sure container doesn’t start right after creation</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728214729962/3895236e-b577-4267-b80e-59faa1e34b57.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-append-extra-config-to-the-lxc-container-conf-file">Append extra config to the LXC container conf file</h2>
<p>On the Proxmox host, edit the file <code>/etc/pve/lxc/&lt;container id&gt;.conf</code> and add the following content:</p>
<pre><code class="lang-plaintext">lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
</code></pre>
<h2 id="heading-create-missing-devkmsg-file">Create missing /dev/kmsg file</h2>
<p>Start the container, and inside the container create the file <code>/etc/rc.local</code> if it doesn't exist. Add the following content to the file, then make it executable and run it:</p>
<pre><code class="lang-bash">cat &lt;&lt;EOF &gt; /etc/rc.local
<span class="hljs-comment">#!/bin/sh -e</span>
<span class="hljs-keyword">if</span> [ ! -e /dev/kmsg ]; <span class="hljs-keyword">then</span>
    ln -s /dev/console /dev/kmsg
<span class="hljs-keyword">fi</span>
mount --make-rshared /
EOF

chmod +x /etc/rc.local
/etc/rc.local
</code></pre>
<h2 id="heading-install-k3s">Install K3s</h2>
<pre><code class="lang-bash">curl -sfL https://get.k3s.io | sh -s - --<span class="hljs-built_in">disable</span>=traefik --<span class="hljs-built_in">disable</span>=servicelb --node-name control.k8s

<span class="hljs-comment"># Setup kubectl for non-root user access</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">'export KUBECONFIG=~/.kube/config'</span> &gt;&gt; ~/.bashrc
<span class="hljs-built_in">echo</span> <span class="hljs-string">'source &lt;(kubectl completion bash)'</span> &gt;&gt;~/.bashrc
<span class="hljs-built_in">echo</span> <span class="hljs-string">'alias k=kubectl'</span> &gt;&gt;~/.bashrc
<span class="hljs-built_in">echo</span> <span class="hljs-string">'complete -o default -F __start_kubectl k'</span> &gt;&gt;~/.bashrc
<span class="hljs-built_in">source</span> ~/.bashrc
mkdir ~/.kube 2&gt; /dev/null
sudo k3s kubectl config view --raw &gt; <span class="hljs-string">"<span class="hljs-variable">$KUBECONFIG</span>"</span>
chmod 600 <span class="hljs-string">"<span class="hljs-variable">$KUBECONFIG</span>"</span>
<span class="hljs-comment"># Test to make sure non-root kubectl is working</span>
kubectl get nodes
</code></pre>
<h2 id="heading-install-helm">Install Helm</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># Install Helm</span>
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg &gt; /dev/null
sudo apt-get install apt-transport-https --yes
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main"</span> | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
helm version
</code></pre>
<h2 id="heading-install-metallb">Install MetalLB</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># Install MetalLB</span>
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system --create-namespace

cat &lt;&lt;EOF &gt; metallb_config.yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: nginx-ingress-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.10.10.100/32
  autoAssign: <span class="hljs-literal">false</span>
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: regular-lb-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.10.10.110-10.10.10.120
  autoAssign: <span class="hljs-literal">false</span>
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - nginx-ingress-pool
  - regular-lb-pool
EOF

kubectl apply -f metallb_config.yaml
<span class="hljs-comment"># Ensure the address pools are well created</span>
kubectl describe ipaddresspools.metallb.io -n metallb-system
</code></pre>
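<p>Since both pools are created with <code>autoAssign: false</code>, a regular <code>LoadBalancer</code> Service will stay <code>&lt;pending&gt;</code> until it explicitly requests a pool via annotation. A minimal sketch (the Service name and selector are made up for illustration):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Service
metadata:
  name: demo-lb                # hypothetical name
  annotations:
    metallb.universe.tf/address-pool: regular-lb-pool
spec:
  type: LoadBalancer
  selector:
    app: demo                  # hypothetical selector
  ports:
  - port: 80
</code></pre>
<p>Apply it with <code>kubectl apply -f</code> as above, then check that <code>EXTERNAL-IP</code> comes from the 10.10.10.110-120 range.</p>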
<h2 id="heading-install-nginx-ingress-controller">Install Nginx Ingress Controller</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># Install NGINX ingress controller</span>
cat &lt;&lt;EOF &gt; nginx-ingress_values.yaml
controller:
    service:
        annotations:
            metallb.universe.tf/address-pool: nginx-ingress-pool
EOF

helm install nginx-ingress-release --values nginx-ingress_values.yaml --create-namespace --namespace nginx-ingress oci://ghcr.io/nginxinc/charts/nginx-ingress --version 1.4.0 <span class="hljs-comment"># --set-json 'controller.service.annotations={"test1": "de"}'</span>
helm upgrade nginx-ingress-release --values nginx-ingress_values.yaml -n nginx-ingress oci://ghcr.io/nginxinc/charts/nginx-ingress

<span class="hljs-comment"># Ensure that the supplied values are taken into account</span>
helm get values -n nginx-ingress nginx-ingress-release
<span class="hljs-comment"># Uninstall to reinstall again if needed</span>
helm -n nginx-ingress uninstall nginx-ingress-release
<span class="hljs-comment"># Check the nginx-ingress controller service is provided an IP address by MetalLB</span>
kubectl describe svc -n nginx-ingress nginx-ingress-release-controller
</code></pre>
<h2 id="heading-install-cert-manager">Install cert-manager</h2>
<pre><code class="lang-bash">helm repo add jetstack https://charts.jetstack.io --force-update

helm install cert-manager jetstack/cert-manager --create-namespace --namespace cert-manager --version v1.16.0 --<span class="hljs-built_in">set</span> crds.enabled=<span class="hljs-literal">true</span> --<span class="hljs-built_in">set</span> <span class="hljs-string">'extraArgs={--dns01-recursive-nameservers-only,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}'</span>
</code></pre>
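<p>cert-manager won't issue anything until an <code>Issuer</code>/<code>ClusterIssuer</code> exists. A minimal HTTP-01 sketch against the Let's Encrypt staging endpoint (the issuer name and email are placeholders, and the ingress class assumes the NGINX controller installed above):</p>
<pre><code class="lang-yaml">apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging          # placeholder name
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com           # placeholder email
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
</code></pre>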
<p><strong>Sources:</strong></p>
<ul>
<li><p><a target="_blank" href="https://kevingoos.medium.com/kubernetes-inside-proxmox-lxc-cce5c9927942">https://kevingoos.medium.com/kubernetes-inside-proxmox-lxc-cce5c9927942</a></p>
</li>
<li><p><a target="_blank" href="https://bobcares.com/blog/rancher-lxc-proxmox/">https://bobcares.com/blog/rancher-lxc-proxmox/</a></p>
</li>
<li><p><a target="_blank" href="https://docs.k3s.io/quick-start#install-script">https://docs.k3s.io/quick-start#install-script</a></p>
</li>
<li><p><a target="_blank" href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#bash">https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#bash</a></p>
</li>
<li><p><a target="_blank" href="https://devops.stackexchange.com/questions/16043/error-error-loading-config-file-etc-rancher-k3s-k3s-yaml-open-etc-rancher">https://devops.stackexchange.com/questions/16043/error-error-loading-config-file-etc-rancher-k3s-k3s-yaml-open-etc-rancher</a></p>
</li>
<li><p><a target="_blank" href="https://helm.sh/docs/intro/install/">https://helm.sh/docs/intro/install/</a></p>
</li>
<li><p><a target="_blank" href="https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/">https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/</a></p>
</li>
<li><p><a target="_blank" href="https://blog.mossroy.fr/2024/04/16/passage-sur-metallb-au-lieu-de-servicelb-sur-un-cluster-k3s/">https://blog.mossroy.fr/2024/04/16/passage-sur-metallb-au-lieu-de-servicelb-sur-un-cluster-k3s/</a></p>
</li>
<li><p><a target="_blank" href="https://metallb.universe.tf/installation/">https://metallb.universe.tf/installation/</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/metallb/metallb/issues/1964">https://github.com/metallb/metallb/issues/1964</a></p>
</li>
<li><p><a target="_blank" href="https://cert-manager.io/docs/installation/helm/">https://cert-manager.io/docs/installation/helm/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Fix proxmox error: Reset adapter unexpectedly/Detected Hardware Unit Hang]]></title><description><![CDATA[iface eno1 inet manual
    # ... existing lines
    post-up /sbin/ethtool -K $IFACE tso off

Resources:

https://forum.proxmox.com/threads/e1000e-reset-adapter-unexpectedly.87769/

https://serverfault.com/questions/616485/e1000e-reset-adapter-unexpec...]]></description><link>https://blog.davidassigbi.com/fix-proxmox-error-reset-adapter-unexpectedlydetected-hardware-unit-hang</link><guid isPermaLink="true">https://blog.davidassigbi.com/fix-proxmox-error-reset-adapter-unexpectedlydetected-hardware-unit-hang</guid><category><![CDATA[proxmox]]></category><category><![CDATA[error]]></category><category><![CDATA[reset]]></category><category><![CDATA[adapter]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sat, 02 Sep 2023 04:10:40 GMT</pubDate><content:encoded><![CDATA[<pre><code class="lang-bash">iface eno1 inet manual
    <span class="hljs-comment"># ... existing lines</span>
    post-up /sbin/ethtool -K <span class="hljs-variable">$IFACE</span> tso off
</code></pre>
<p>Resources:</p>
<ul>
<li><p><a target="_blank" href="https://forum.proxmox.com/threads/e1000e-reset-adapter-unexpectedly.87769/">https://forum.proxmox.com/threads/e1000e-reset-adapter-unexpectedly.87769/</a></p>
</li>
<li><p><a target="_blank" href="https://serverfault.com/questions/616485/e1000e-reset-adapter-unexpectedly-detected-hardware-unit-hang">https://serverfault.com/questions/616485/e1000e-reset-adapter-unexpectedly-detected-hardware-unit-hang</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Kasm Workspaces setup]]></title><description><![CDATA[Installation (Ubuntu 22.04)

https://kasmweb.com/docs/latest/install/single_server_install.html

apt update && apt upgrade -y && apt install curl -y
cd /tmp
curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.13.1.421524.tar.gz
tar -x...]]></description><link>https://blog.davidassigbi.com/kasm-workspaces-setup</link><guid isPermaLink="true">https://blog.davidassigbi.com/kasm-workspaces-setup</guid><category><![CDATA[kasm]]></category><category><![CDATA[workspaces]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sat, 22 Jul 2023 12:15:05 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-installation-ubuntu-2204">Installation (Ubuntu 22.04)</h2>
<ul>
<li><a target="_blank" href="https://kasmweb.com/docs/latest/install/single_server_install.html">https://kasmweb.com/docs/latest/install/single_server_install.html</a></li>
</ul>
<pre><code class="lang-bash">apt update &amp;&amp; apt upgrade -y &amp;&amp; apt install curl -y
<span class="hljs-built_in">cd</span> /tmp
curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.13.1.421524.tar.gz
tar -xf kasm_release_1.13.1.421524.tar.gz
bash kasm_release/install.sh --accept-eula --swap-size 8192 --admin-password admin --user-password user
</code></pre>
<h2 id="heading-startstop">Start/Stop</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># restart all services on a server</span>
<span class="hljs-built_in">cd</span> /opt/kasm/bin
./stop
./start

<span class="hljs-comment"># restart individual components</span>
sudo docker restart kasm_agent
sudo docker restart kasm_api
sudo docker restart kasm_manager
sudo docker restart kasm_db
sudo docker restart kasm_proxy
</code></pre>
<h2 id="heading-uninstallation">Uninstallation</h2>
<ul>
<li><a target="_blank" href="https://kasmweb.com/docs/latest/install/uninstall.html">https://kasmweb.com/docs/latest/install/uninstall.html</a></li>
</ul>
<pre><code class="lang-bash">sudo /opt/kasm/current/bin/stop
sudo docker rm -f $(sudo docker container ls -qa --filter=<span class="hljs-string">"label=kasm.kasmid"</span>)
<span class="hljs-built_in">export</span> KASM_UID=$(id kasm -u)
<span class="hljs-built_in">export</span> KASM_GID=$(id kasm -g)
sudo -E docker compose -f /opt/kasm/current/docker/docker-compose.yaml rm
sudo docker network rm kasm_default_network
sudo docker volume rm kasm_db_1.13.1

sudo rm -rf /opt/kasm/
sudo deluser kasm_db
sudo deluser kasm
</code></pre>
<h2 id="heading-tuning">Tuning</h2>
<ul>
<li><a target="_blank" href="https://kasmweb.com/docs/latest/guide/kasm_performance.html">https://kasmweb.com/docs/latest/guide/kasm_performance.html</a></li>
</ul>
<p>Resources:</p>
<ul>
<li><p><a target="_blank" href="https://kasmweb.com/docs/latest/guide/kasm_performance.html">https://kasmweb.com/docs/latest/guide/kasm_performance.html</a></p>
</li>
<li><p><a target="_blank" href="https://kasmweb.com/docs/latest/guide/settings.html">https://kasmweb.com/docs/latest/guide/settings.html</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Kasm web: pass through Intel iGPU to workspaces]]></title><description><![CDATA[Follow this article to pass through the iGPU to the LXC container first.
In the Kasm admin panel go to Workspaces / <workspace : edit>

By default the iGPU is used with DRI3
Under “Docker Run Config Overide (JSON)” set:
{
  "environment": {
    "HW3D...]]></description><link>https://blog.davidassigbi.com/kasm-web-pass-through-intel-igpu-to-workspaces</link><guid isPermaLink="true">https://blog.davidassigbi.com/kasm-web-pass-through-intel-igpu-to-workspaces</guid><category><![CDATA[passthrough]]></category><category><![CDATA[GPU]]></category><category><![CDATA[workspaces]]></category><category><![CDATA[kasm]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sat, 22 Jul 2023 11:56:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690027468576/1795bba4-dd78-4476-9614-16a148cf6902.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Follow this <a target="_blank" href="https://hashnode.com/post/clkdwe9xj000j0amo9ng6cnbp">article</a> to pass through the iGPU to the LXC container first.</p>
<p>In the Kasm admin panel go to <em>Workspaces / &lt;workspace : edit&gt;</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690026440497/941ac212-0d6c-4afc-8a45-35161a1e0823.png" alt class="image--center mx-auto" /></p>
<p>By default the iGPU is used with <strong>DRI3</strong></p>
<p>Under “Docker Run Config Override (JSON)” set:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"environment"</span>: {
    <span class="hljs-attr">"HW3D"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"DRINODE"</span>: <span class="hljs-string">"/dev/dri/renderD128"</span>
  },
  <span class="hljs-attr">"devices"</span>: [
    <span class="hljs-string">"/dev/dri/card0:/dev/dri/card0:rwm"</span>,
    <span class="hljs-string">"/dev/dri/renderD128:/dev/dri/renderD128:rwm"</span>
  ]
}
</code></pre>
<p>Under “Docker Exec Config (JSON)” set:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"first_launch"</span>: {
    <span class="hljs-attr">"user"</span>: <span class="hljs-string">"root"</span>,
    <span class="hljs-attr">"cmd"</span>: <span class="hljs-string">"bash -c 'chown -R kasm-user:kasm-user /dev/dri/*'"</span>
  }
}
</code></pre>
<p>If the DRI3 method does not work, use the VirtualGL method:</p>
<p>Under “Docker Run Config Override (JSON)” set:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"environment"</span>: {
    <span class="hljs-attr">"KASM_EGL_CARD"</span>: <span class="hljs-string">"/dev/dri/card0"</span>,
    <span class="hljs-attr">"KASM_RENDERD"</span>: <span class="hljs-string">"/dev/dri/renderD128"</span>
  },
  <span class="hljs-attr">"devices"</span>: [
    <span class="hljs-string">"/dev/dri/card0:/dev/dri/card0:rwm"</span>,
    <span class="hljs-string">"/dev/dri/renderD128:/dev/dri/renderD128:rwm"</span>
  ]
}
</code></pre>
<p>and run the app with the following command:</p>
<pre><code class="lang-bash">vglrun -d <span class="hljs-variable">${KASM_EGL_CARD}</span> YOURCOMMANDHERE
</code></pre>
<p>To test the gpu is enabled, run the Ubuntu desktop image and within the terminal run:</p>
<pre><code class="lang-bash">glxinfo -B
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690026748046/8908947a-d872-4ea2-af21-08eccce88bb0.png" alt class="image--center mx-auto" /></p>
<p>Resources:</p>
<ul>
<li><p><a target="_blank" href="https://kasmweb.com/docs/latest/how_to/manual_intel_amd.html">https://kasmweb.com/docs/latest/how_to/manual_intel_amd.html</a></p>
</li>
<li><p><a target="_blank" href="https://kasmweb.com/docs/latest/how_to/gpu.html">https://kasmweb.com/docs/latest/how_to/gpu.html</a></p>
</li>
<li><p><a target="_blank" href="https://kasmweb.com/kasmvnc/docs/master/gpu_acceleration.html">https://kasmweb.com/kasmvnc/docs/master/gpu_acceleration.html</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Proxmox: Intel iGPU pass-through to unprivileged LXC containers for hardware transcoding]]></title><description><![CDATA[On proxmox host:
chmod 666 /dev/dri/renderD128

# For a persistent way to give the permissions (this worked for me the first time but later on stopped working)
cat > /etc/udev/rules.d/99-intel-chmod666.rules << 'EOF'
KERNEL=="renderD128", MODE="0666"...]]></description><link>https://blog.davidassigbi.com/proxmox-intel-igpu-pass-through-to-unprivileged-lxc-containers-for-hardware-transcoding</link><guid isPermaLink="true">https://blog.davidassigbi.com/proxmox-intel-igpu-pass-through-to-unprivileged-lxc-containers-for-hardware-transcoding</guid><category><![CDATA[proxmox]]></category><category><![CDATA[GPU]]></category><category><![CDATA[LXC]]></category><category><![CDATA[passthrough]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sat, 22 Jul 2023 10:59:26 GMT</pubDate><content:encoded><![CDATA[<p>On <em>proxmox</em> host:</p>
<pre><code class="lang-bash">chmod 666 /dev/dri/renderD128

<span class="hljs-comment"># For a persistent way to give the permissions (this worked for me the first time but later on stopped working)</span>
cat &gt; /etc/udev/rules.d/99-intel-chmod666.rules &lt;&lt; <span class="hljs-string">'EOF'</span>
KERNEL==<span class="hljs-string">"renderD128"</span>, MODE=<span class="hljs-string">"0666"</span>
KERNEL==<span class="hljs-string">"card0"</span>, MODE=<span class="hljs-string">"0666"</span>
EOF
<span class="hljs-comment"># Reboot proxmox host</span>

cat &gt;&gt; /etc/pve/lxc/xyz.conf &lt;&lt;EOF
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none <span class="hljs-built_in">bind</span>,optional,create=file 0 0
lxc.mount.entry: /dev/dri dev/dri none <span class="hljs-built_in">bind</span>,optional,create=dir
EOF

apt install vainfo
vainfo
</code></pre>
<p>To check the usage of the iGPU one can run on the proxmox host the command:</p>
<pre><code class="lang-bash">sudo apt install intel-gpu-tools
sudo intel_gpu_top
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690028499703/88cba78c-fcb9-4896-8137-377fc405ac28.png" alt class="image--center mx-auto" /></p>
<p>Resources:</p>
<ul>
<li><p><a target="_blank" href="https://ketanvijayvargiya.com/302-hardware-transcoding-inside-an-unprivileged-lxc-container-on-proxmox/">https://ketanvijayvargiya.com/302-hardware-transcoding-inside-an-unprivileged-lxc-container-on-proxmox/</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/blakeblackshear/frigate/discussions/5773">https://github.com/blakeblackshear/frigate/discussions/5773</a></p>
</li>
<li><p><a target="_blank" href="https://bookstack.swigg.net/books/linux/page/lxc-gpu-access">https://bookstack.swigg.net/books/linux/page/lxc-gpu-access</a></p>
</li>
<li><p><a target="_blank" href="https://forum.proxmox.com/threads/mediated-device-passthrough-to-lxc-container.128636/">https://forum.proxmox.com/threads/mediated-device-passthrough-to-lxc-container.128636/</a></p>
</li>
<li><p><a target="_blank" href="https://forum.proxmox.com/threads/lxc-emby-dont-detect-hardware-acceleration.117173/">https://forum.proxmox.com/threads/lxc-emby-dont-detect-hardware-acceleration.117173/</a></p>
</li>
<li><p><a target="_blank" href="https://forum.proxmox.com/threads/lxc-i9-12900t-gpu-plex-passthrough.109439/">https://forum.proxmox.com/threads/lxc-i9-12900t-gpu-plex-passthrough.109439/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to use "git clone" with a custom SSH key]]></title><description><![CDATA[GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa_custom" git clone git@github.com:user/repo.git your-folder-name

`~/.ssh/config`
Host github_ssh_connection
   HostName github.com
   IdentityFile ~/.ssh/id_rsa_custom

git clone git@github_ssh_connection:user/re...]]></description><link>https://blog.davidassigbi.com/how-to-use-git-clone-with-a-custom-ssh-key</link><guid isPermaLink="true">https://blog.davidassigbi.com/how-to-use-git-clone-with-a-custom-ssh-key</guid><category><![CDATA[Git]]></category><category><![CDATA[ssh-keys]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Mon, 05 Jun 2023 06:26:33 GMT</pubDate><content:encoded><![CDATA[<pre><code class="lang-bash">GIT_SSH_COMMAND=<span class="hljs-string">"ssh -i ~/.ssh/id_rsa_custom"</span> git <span class="hljs-built_in">clone</span> git@github.com:user/repo.git your-folder-name
</code></pre>
<p><strong><code>~/.ssh/config</code></strong></p>
<pre><code class="lang-apache"><span class="hljs-attribute">Host</span> github_ssh_connection
   <span class="hljs-attribute">HostName</span> github.com
   <span class="hljs-attribute">IdentityFile</span> ~/.ssh/id_rsa_custom
</code></pre>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> git@github_ssh_connection:user/repo.git your-folder-name
</code></pre>
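<p>Alternatively, once the repository is cloned, you can persist the custom key per repository instead of exporting <code>GIT_SSH_COMMAND</code> every time (a sketch; requires Git 2.10+, which introduced <code>core.sshCommand</code>):</p>
<pre><code class="lang-bash">cd your-folder-name
git config core.sshCommand "ssh -i ~/.ssh/id_rsa_custom"
git pull   # now uses the custom key without any environment variable
</code></pre>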
<p>Resources:</p>
<ul>
<li><a target="_blank" href="https://ralphjsmit.com/git-custom-ssh-key">https://ralphjsmit.com/git-custom-ssh-key</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[pfSense Change keyboard into French]]></title><description><![CDATA[# echo "kbdcontrol -l /usr/share/syscons/keymaps/fr.iso.kbd" >> /root/.cshrc
Resources:

https://blogmotion.fr/internet/pfsense-clavier-azerty-16564]]></description><link>https://blog.davidassigbi.com/pfsense-change-keyboard-into-french</link><guid isPermaLink="true">https://blog.davidassigbi.com/pfsense-change-keyboard-into-french</guid><category><![CDATA[pfsense]]></category><category><![CDATA[keyboard]]></category><category><![CDATA[change]]></category><category><![CDATA[terminal]]></category><category><![CDATA[layout]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Fri, 14 Oct 2022 20:56:13 GMT</pubDate><content:encoded><![CDATA[<pre><code># echo <span class="hljs-string">"kbdcontrol -l /usr/share/syscons/keymaps/fr.iso.kbd"</span> &gt;&gt; <span class="hljs-regexp">/root/</span>.cshrc
</code></pre><p>Resources:</p>
<ul>
<li>https://blogmotion.fr/internet/pfsense-clavier-azerty-16564</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[LXC/LXD commands cheat sheet]]></title><description><![CDATA[$ sudo snap install lxd
$ lxd init
$ sudo usermod -aG lxd $USER



$ lxc remote list

$ lxc image list images:
$ lxc image list images: ubuntu amd64
$ lxc image info images:ubuntu/jammy
$ lxc image list local:
$ lxc image list 

$ lxc image copy image...]]></description><link>https://blog.davidassigbi.com/lxclxd-commands-cheat-sheet</link><guid isPermaLink="true">https://blog.davidassigbi.com/lxclxd-commands-cheat-sheet</guid><category><![CDATA[lxd]]></category><category><![CDATA[LXC]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 15:20:52 GMT</pubDate><content:encoded><![CDATA[<pre><code class="lang-bash">$ sudo snap install lxd
$ lxd init
$ sudo usermod -aG lxd $USER



$ lxc remote list

$ lxc image list images:
$ lxc image list images: ubuntu amd64
$ lxc image info images:ubuntu/jammy
$ lxc image list <span class="hljs-built_in">local</span>:
$ lxc image list 

$ lxc image copy images:debian/12 <span class="hljs-built_in">local</span>: --auto-update --copy-aliases

$ lxc launch images:ubuntu/focal ubuntu
$ lxc launch images:alpine/3.16 alpine

$ lxc list 

$ lxc <span class="hljs-built_in">exec</span> ubuntu hostname
$ lxc <span class="hljs-built_in">exec</span> ubuntu -- hostname
$ lxc shell ubuntu
$ lxc console ubuntu <span class="hljs-comment"># to leave press ctrl-a and let go of ctrl and press q</span>

$ lxc file edit ubuntu/home/david/.bashrc

$ lxc start/stop/restart ubuntu

$ lxc snapshot ubuntu snap1
$ lxc snapshot ubuntu snap2

$ lxc info ubuntu
$ lxc delete ubuntu/snap1

$ lxc restore ubuntu snap2

$ lxc config <span class="hljs-built_in">set</span> ubuntu boot.autostart 1

$ lxc config <span class="hljs-built_in">set</span> ubuntu limits.memory 1GiB

$ lxc config <span class="hljs-built_in">set</span> ubuntu boot.autostart.delay 30

$ lxc config show ubuntu

$ lxc config <span class="hljs-built_in">set</span> ubuntu boot.autostart.order 2
</code></pre>
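<p>One command that pairs well with the snapshots above: publishing a snapshot as a reusable local image (a sketch; the alias and container names are illustrative):</p>
<pre><code class="lang-bash">$ lxc publish ubuntu/snap1 --alias my-template
$ lxc launch my-template ubuntu-clone
</code></pre>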
]]></content:encoded></item><item><title><![CDATA[Install a VNC server on Debian based distros]]></title><description><![CDATA[TLDR:
$ sudo apt update
$ sudo apt install lightdm
$ sudo reboot
$ sudo apt install x11vnc

$ sudo nano /lib/systemd/system/x11vnc.service

!Copy and paste these commands - change the password
[Unit]
Description=x11vnc service
After=display-manager.s...]]></description><link>https://blog.davidassigbi.com/install-a-vnc-server-on-debian-based-distros</link><guid isPermaLink="true">https://blog.davidassigbi.com/install-a-vnc-server-on-debian-based-distros</guid><category><![CDATA[VNC]]></category><category><![CDATA[debian]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 15:20:06 GMT</pubDate><content:encoded><![CDATA[<p>TLDR:</p>
<pre><code class="lang-bash">$ sudo apt update
$ sudo apt install lightdm
$ sudo reboot
$ sudo apt install x11vnc

$ sudo nano /lib/systemd/system/x11vnc.service

# Paste the following unit file into the editor - change the password
[Unit]
Description=x11vnc service
After=display-manager.service network.target syslog.target

[Service]
Type=simple
ExecStart=/usr/bin/x11vnc -forever -display :0 -auth guess -passwd password
ExecStop=/usr/bin/killall x11vnc
Restart=on-failure

[Install]
WantedBy=multi-user.target

# Save the file, then run these commands:

$ systemctl daemon-reload
$ systemctl <span class="hljs-built_in">enable</span> x11vnc.service
$ systemctl start x11vnc.service
$ systemctl status x11vnc.service
</code></pre>
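<p>Once the service is running, you can connect from another machine with any VNC client (a sketch; assumes TCP port 5900, x11vnc's default, is reachable through the firewall):</p>
<pre><code class="lang-bash">$ vncviewer &lt;server-ip&gt;:5900
</code></pre>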
<p>Resources:</p>
<ul>
<li>https://www.crazy-logic.co.uk/projects/computing/how-to-install-x11vnc-vnc-server-as-a-service-on-ubuntu-20-04-for-remote-access-or-screen-sharing</li>
<li>https://youtu.be/3K1hUwxxYek - [Ubuntu VNC Server - David Bombal]</li>
<li>https://youtu.be/633OWaW3cyo - [Linux Desktop in the Cloud Tutorial | Create and Access From Anywhere - LearnLinuxTV - Linode]</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[LXC/LXD/Proxmox container, custom uid/gid mappings for FreeIPA users]]></title><description><![CDATA[How to allow lxc containers to connect to users created in FreeIPA server
Add these to the lxc files for the containers you want to allow connecting to the freeipa server
lxc.idmap = u 1000000 1000000 200000
lxc.idmap = g 1000000 1000000 200000
lxc.idm...]]></description><link>https://blog.davidassigbi.com/lxclxdproxmox-container-custom-uidgid-mappings-for-freeipa-users</link><guid isPermaLink="true">https://blog.davidassigbi.com/lxclxdproxmox-container-custom-uidgid-mappings-for-freeipa-users</guid><category><![CDATA[LXC]]></category><category><![CDATA[proxmox]]></category><category><![CDATA[freeipa]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 15:19:07 GMT</pubDate><content:encoded><![CDATA[<p>How to allow lxc containers to connect to users created in FreeIPA server
Add these lines to the LXC config files of the containers you want to allow to connect to the FreeIPA server.</p>
<pre><code class="lang-bash">lxc.idmap = u 1000000 1000000 200000
lxc.idmap = g 1000000 1000000 200000
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
</code></pre>
<p>Make sure to add these in <code>/etc/{subuid,subgid}</code></p>
<pre><code># FreeIPA ids
<span class="hljs-attr">root</span>:<span class="hljs-number">1000000</span>:<span class="hljs-number">2000000</span>
# FreeIPA ids
</code></pre><p>And make sure to install the ipa server with the options: <code>ipa-server-install --setup-dns --no-ntp --mkhomedir --idstart=1000000 --idmax=1999999</code></p>
<p>Resources:</p>
<ul>
<li>https://kiwix.ounapuu.ee/serverfault.com_en_all_2019-02/A/question/848620.html</li>
<li>https://kiwix.ounapuu.ee/serverfault.com_en_all_2019-02/A/question/777095.html</li>
<li>https://forum.proxmox.com/threads/can-i-ask-an-uid-range-not-to-be-mapped-in-an-unprivileged-container.49544/</li>
<li>https://forum.proxmox.com/threads/problems-using-a-mount-point-and-lxc-idmap.77370/</li>
<li>https://superuser.com/questions/1518783/how-do-i-take-advantage-of-freeipa-centralized-authentication-in-an-lxc-containe</li>
<li>https://ubuntu.com/blog/nested-containers-in-lxd</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to copy a sparse file over the network from Linux]]></title><description><![CDATA[Simply $ rsync -aS <source> <destination> for rsync you need to have rsync on both machines
$ scp source destination [--sparse=always] @destination $ fallocate -d filename_on_destination_server
can also use dd to convert into or create sparse files
R...]]></description><link>https://blog.davidassigbi.com/how-to-copy-a-sparse-file-in-over-the-network-from-linux</link><guid isPermaLink="true">https://blog.davidassigbi.com/how-to-copy-a-sparse-file-in-over-the-network-from-linux</guid><category><![CDATA[sparse]]></category><category><![CDATA[rsync]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 15:03:49 GMT</pubDate><content:encoded><![CDATA[<p>Simply <code>$ rsync -aS &lt;source&gt; &lt;destination&gt;</code> for rsync you need to have rsync on both machines</p>
<p><code>scp</code> does not preserve sparseness; after copying, you can run <code>$ fallocate -d filename_on_destination_server</code> on the destination to dig the holes back into the zero-filled blocks.</p>
<p>You can also use <code>dd</code> with <code>conv=sparse</code> to convert files into, or create, sparse files.</p>
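<p>A quick way to see <code>dd</code>'s <code>conv=sparse</code> in action (a sketch; file names are illustrative):</p>
<pre><code class="lang-bash"># Create a 100 MiB file full of zeros, then copy it sparsely
$ dd if=/dev/zero of=dense.img bs=1M count=100
$ dd if=dense.img of=sparse.img conv=sparse bs=1M
$ du -h dense.img sparse.img   # sparse.img should occupy almost no disk space
</code></pre>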
<p>Resources:</p>
<ul>
<li>https://serverfault.com/questions/665335/what-is-fastest-way-to-copy-a-sparse-file-what-method-results-in-the-smallest-f</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to change your hostname per network in Linux (Ubuntu desktop)]]></title><description><![CDATA[TLDR;
$ nmcli connection modify <connection-name> ipv4.dhcp.hostname=<new_name>]]></description><link>https://blog.davidassigbi.com/how-to-change-your-hostname-per-network-in-linux-ubuntu-desktop</link><guid isPermaLink="true">https://blog.davidassigbi.com/how-to-change-your-hostname-per-network-in-linux-ubuntu-desktop</guid><category><![CDATA[network-manager]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[dhcp]]></category><category><![CDATA[hostname]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 15:00:45 GMT</pubDate><content:encoded><![CDATA[<p>TLDR;</p>
<pre><code class="lang-bash">$ nmcli connection modify &lt;connection-name&gt; ipv4.dhcp.hostname=&lt;new_name&gt;
</code></pre>
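<p>You can then check the value and re-activate the connection so a new DHCP lease is requested with the new hostname:</p>
<pre><code class="lang-bash">$ nmcli connection show &lt;connection-name&gt; | grep dhcp.hostname
$ nmcli connection up &lt;connection-name&gt;
</code></pre>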
]]></content:encoded></item><item><title><![CDATA[XEN Hypervisor Project]]></title><description><![CDATA[For this extremely short tutorial, I suppose you are familiar with the Xen hypervisor and what you specifically want to do is turn one of your VMs or DomUs installation into a template you can use to install other VMs.

I needed to do exactly the sa...]]></description><link>https://blog.davidassigbi.com/xen-hypervisor-project</link><guid isPermaLink="true">https://blog.davidassigbi.com/xen-hypervisor-project</guid><category><![CDATA[xen]]></category><category><![CDATA[Linux]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[hypervisor]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 14:59:21 GMT</pubDate><content:encoded><![CDATA[<p>For this extremely short tutorial, I assume you are familiar with the Xen hypervisor and that what you specifically want to do is turn one of your VM (DomU) installations into a template you can use to install other VMs.
<br /></p>
<p>I needed to do exactly the same thing a few days ago but ran into some issues, so I am writing this article to help out anyone who may be facing similar problems.</p>
<p>Basically the workflow to go from a domU to a template is the following:</p>
<ul>
<li>Power off the domU</li>
<li>Make the domU disk available to the host system as a folder (mount the domu disk into the dom0)</li>
<li>Create a tar of the folder</li>
<li>Then, use the tar file as the installation source of other domUs  </li>
</ul>
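<p>The steps above can be sketched as commands (paths, device and VM names are illustrative; assumes the xl toolstack and an LVM-backed domU disk):</p>
<pre><code class="lang-bash">$ xl shutdown my-domu
$ mount /dev/vg0/my-domu-disk /mnt/domu
$ tar -czpf /var/lib/xen/images/my-template.tar.gz -C /mnt/domu .
$ umount /mnt/domu
</code></pre>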
<p>Resources:</p>
<ul>
<li>https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to configure networking in Debian]]></title><description><![CDATA[Resources:

https://wiki.debian.org/NetworkConfiguration]]></description><link>https://blog.davidassigbi.com/how-to-configure-networking-in-debian</link><guid isPermaLink="true">https://blog.davidassigbi.com/how-to-configure-networking-in-debian</guid><category><![CDATA[Linux]]></category><category><![CDATA[debian]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 14:57:15 GMT</pubDate><content:encoded><![CDATA[<p>Resources:</p>
<ul>
<li>https://wiki.debian.org/NetworkConfiguration</li>
</ul>
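<p>For reference, a minimal static-IPv4 stanza for <code>/etc/network/interfaces</code> (interface name and addresses are illustrative):</p>
<pre><code class="lang-bash"># /etc/network/interfaces
auto enp0s25
iface enp0s25 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
</code></pre>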
]]></content:encoded></item><item><title><![CDATA[How to bypass "Error: Nexthop has invalid gateway" when adding routes on Linux]]></title><description><![CDATA[I was having issues to add a route to my Linux home lab virtualization host, and it was not working and was instead outputting that error all the time.
I guess that had something with the actual complexity of my home lab architecture, especially how ...]]></description><link>https://blog.davidassigbi.com/how-to-bypass-error-nexthop-has-invalid-gateway-when-adding-routes-on-linux</link><guid isPermaLink="true">https://blog.davidassigbi.com/how-to-bypass-error-nexthop-has-invalid-gateway-when-adding-routes-on-linux</guid><category><![CDATA[routing]]></category><category><![CDATA[router]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 14:53:58 GMT</pubDate><content:encoded><![CDATA[<p>I was having trouble adding a route on my Linux home lab virtualization host: whatever I tried, the command kept failing with that error.
I guess that had something to do with the complexity of my home lab architecture, especially how my VMs had access to the internet and how I was accessing them from the Internet.</p>
<pre><code class="lang-bash">davidassigbi@hp-elitebook:~$ ip route 
default via 10.188.0.1 dev wlo1 proto dhcp metric 600 
10.10.10.0/24 dev enp0s25 proto kernel scope link src 10.10.10.2 metric 100 
10.188.0.0/16 dev wlo1 proto kernel scope link src 10.188.197.57 metric 600 
169.254.0.0/16 dev mybridge scope link metric 1000 
192.168.1.0/24 via 192.168.1.5 dev mybridge 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 

davidassigbi@hp-elitebook:~$ sudo ip route add 100.96.1.0/24 via 192.168.1.1 dev mybridge 
Error: Nexthop has invalid gateway.

davidassigbi@hp-elitebook:~$ ip a show mybridge
4: mybridge: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:5a:99:6e:19:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.5/24 brd 192.168.1.255 scope global mybridge
       valid_lft forever preferred_lft forever
    inet6 fe80::b85a:99ff:fe6e:1987/64 scope link 
       valid_lft forever preferred_lft forever

davidassigbi@hp-elitebook:~$ sudo brctl show mybridge
bridge name    bridge id        STP enabled    interfaces
mybridge        8000.ba5a996e1987    no        vnet1
                            vnet4
                            vnet6
</code></pre>
<p>And the final solution was to add the <code>onlink</code> flag, which, according to the <code>ip route</code> man page, makes the kernel pretend that the nexthop is directly attached to the link, even if it actually is not.</p>
<pre><code class="lang-bash">davidassigbi@hp-elitebook:~$ sudo ip route add 100.96.1.0/24 via 192.168.1.1 dev mybridge  onlink
</code></pre>
<p><strong>Resources:</strong></p>
<ul>
<li>https://man7.org/linux/man-pages/man8/ip-route.8.html : look up the onlink on the page</li>
<li>https://unix.stackexchange.com/a/644486 : a question on stackexchange discussing the same issue</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Keep your laptop up even when you close the lid on Linux]]></title><description><![CDATA[TLDR: You just have to edit the file /etc/systemd/logind.conf.
$ sudo vi /etc/systemd/logind.conf # and the change the parameter to ignore

[Login]
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore # This option specifically is the default on Debia...]]></description><link>https://blog.davidassigbi.com/keep-your-laptop-up-even-when-you-close-the-lid-on-linux</link><guid isPermaLink="true">https://blog.davidassigbi.com/keep-your-laptop-up-even-when-you-close-the-lid-on-linux</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 14:53:13 GMT</pubDate><content:encoded><![CDATA[<p><strong>TLDR:</strong> You just have to edit the file <code>/etc/systemd/logind.conf</code>.</p>
<pre><code class="lang-bash">$ sudo vi /etc/systemd/logind.conf <span class="hljs-comment"># and the change the parameter to ignore</span>

[Login]
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore <span class="hljs-comment"># This is already the default on Debian, so if you have not modified it you can simply omit this line</span>

$ sudo systemctl restart systemd-logind.service <span class="hljs-comment"># Or reboot for the change to take effect</span>
</code></pre>
<p>There are also a ton of other options in that file, so you can explore the content of the file just to have an idea of what you can do.</p>
<p>Resources:</p>
<ul>
<li>https://wiki.debian.org/Suspend </li>
<li>https://www.alphr.com/keep-laptop-when-closed/</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to fix Windows drive in read only mode when on Linux]]></title><description><![CDATA[Resources:

https://www.linuxuprising.com/2019/01/fix-windows-10-or-8-partition-mounted.html]]></description><link>https://blog.davidassigbi.com/how-to-fix-windows-drive-in-read-only-mode-when-on-linux</link><guid isPermaLink="true">https://blog.davidassigbi.com/how-to-fix-windows-drive-in-read-only-mode-when-on-linux</guid><category><![CDATA[read-only]]></category><category><![CDATA[Windows]]></category><category><![CDATA[Linux]]></category><category><![CDATA[ntfs]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 14:52:34 GMT</pubDate><content:encoded><![CDATA[<p>Resources:</p>
<ul>
<li>https://www.linuxuprising.com/2019/01/fix-windows-10-or-8-partition-mounted.html</li>
</ul>
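<p>For context: this usually happens because Windows Fast Startup / hibernation leaves the NTFS volume marked dirty, so Linux mounts it read-only. A common fix (a sketch; the device name is illustrative and assumes the ntfs-3g tools are installed):</p>
<pre><code class="lang-bash"># On Windows, as administrator: disable Fast Startup / hibernation
# powercfg /h off

# On Linux: clear the dirty flag on the NTFS partition, then remount it
$ sudo ntfsfix -d /dev/sdX1
</code></pre>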
]]></content:encoded></item><item><title><![CDATA[Enable nested virtualization]]></title><description><![CDATA[for hyper-v
PS > Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true

Virtual Box:
/home/<username> $ VBoxManage modifyvm <name> --nested-hw-virt on
PS C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyvm <name> --nested-h...]]></description><link>https://blog.davidassigbi.com/enable-nested-virtualization</link><guid isPermaLink="true">https://blog.davidassigbi.com/enable-nested-virtualization</guid><category><![CDATA[nested]]></category><category><![CDATA[hyper-v]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[KVM]]></category><category><![CDATA[VirtualBox ]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 14:51:18 GMT</pubDate><content:encoded><![CDATA[<p>for hyper-v</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">PS</span> &gt; <span class="hljs-built_in">Set-VMProcessor</span> <span class="hljs-literal">-VMName</span> &lt;VMName&gt; <span class="hljs-literal">-ExposeVirtualizationExtensions</span> <span class="hljs-variable">$true</span>
</code></pre>
<p>VirtualBox:</p>
<pre><code class="lang-bash">/home/&lt;username&gt; $ VBoxManage modifyvm &lt;name&gt; --nested-hw-virt on
PS C:\Program Files\Oracle\VirtualBox&gt; .\VBoxManage.exe modifyvm &lt;name&gt; --nested-hw-virt on
</code></pre>
<p>For KVM, resources:</p>
<ul>
<li><p>https://docs.fedoraproject.org/en-US/quick-docs/using-nested-virtualization-in-kvm/</p>
</li>
<li><p>https://ostechnix.com/how-to-enable-nested-virtualization-in-kvm-in-linux/</p>
</li>
<li><p>https://techviewleo.com/how-to-enable-nested-virtualization-on-kvm-qemu/</p>
</li>
</ul>
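<p>For KVM itself, nested virtualization is controlled by a kernel module parameter (a sketch; use <code>kvm_intel</code> on Intel CPUs, <code>kvm_amd</code> on AMD):</p>
<pre><code class="lang-bash">$ cat /sys/module/kvm_amd/parameters/nested    <span class="hljs-comment"># 1 or Y means enabled</span>
$ echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
$ sudo modprobe -r kvm_amd &amp;&amp; sudo modprobe kvm_amd
</code></pre>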
<p>VirtualBox resources:</p>
<ul>
<li><p><a target="_blank" href="https://docs.oracle.com/en/virtualization/virtualbox/6.0/admin/nested-virt.html">https://docs.oracle.com/en/virtualization/virtualbox/6.0/admin/nested-virt.html</a></p>
</li>
<li><p><a target="_blank" href="https://forums.virtualbox.org/viewtopic.php?t=109174">https://forums.virtualbox.org/viewtopic.php?t=109174</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Linux How to free up space]]></title><description><![CDATA[$ sudo journalctl --vacuum-size=100M
Remove old snap versions that might be taking space
#!/bin/bash
# Removes old revisions of snaps
# CLOSE ALL SNAPS BEFORE RUNNING THIS
set -eu
snap list --all | awk '/disabled/{print $1, $3}' |
    while read snap...]]></description><link>https://blog.davidassigbi.com/linux-how-to-free-up-space</link><guid isPermaLink="true">https://blog.davidassigbi.com/linux-how-to-free-up-space</guid><category><![CDATA[freeup]]></category><category><![CDATA[Linux]]></category><category><![CDATA[space]]></category><dc:creator><![CDATA[Mensah David Assigbi]]></dc:creator><pubDate>Sun, 09 Oct 2022 06:31:29 GMT</pubDate><content:encoded><![CDATA[<p><code>$ sudo journalctl --vacuum-size=100M</code></p>
<p>Remove old snap versions that might be taking space</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-comment"># Removes old revisions of snaps</span>
<span class="hljs-comment"># CLOSE ALL SNAPS BEFORE RUNNING THIS</span>
<span class="hljs-built_in">set</span> -eu
snap list --all | awk <span class="hljs-string">'/disabled/{print $1, $3}'</span> |
    <span class="hljs-keyword">while</span> <span class="hljs-built_in">read</span> snapname revision; <span class="hljs-keyword">do</span>
        snap remove <span class="hljs-string">"<span class="hljs-variable">$snapname</span>"</span> --revision=<span class="hljs-string">"<span class="hljs-variable">$revision</span>"</span>
    <span class="hljs-keyword">done</span>
</code></pre>
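<p>On Debian/Ubuntu, two more commands that commonly free space by removing orphaned dependencies and the package download cache:</p>
<pre><code class="lang-bash">$ sudo apt autoremove --purge
$ sudo apt clean
</code></pre>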
<p>Resources:</p>
<ul>
<li>https://ubuntuhandbook.org/index.php/2020/12/clear-systemd-journal-logs-ubuntu/</li>
<li>https://itsfoss.com/free-up-space-ubuntu-linux/</li>
</ul>
]]></content:encoded></item></channel></rss>