<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[JT's thoughts]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://thoughts.jonathantey.com/</link><image><url>https://thoughts.jonathantey.com/favicon.png</url><title>JT&apos;s thoughts</title><link>https://thoughts.jonathantey.com/</link></image><generator>Ghost 4.11</generator><lastBuildDate>Fri, 10 Apr 2026 16:08:47 GMT</lastBuildDate><atom:link href="https://thoughts.jonathantey.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Distillation vs. Quantization: Understanding the Trade-offs of LLM Compression using Images]]></title><description><![CDATA[<p>When researching LLMs, I found these two terms often used interchangeably &#x2014; <strong>distillation</strong> and <strong>quantization</strong>. 
After some thought, I began to see them like the process of storing a digital image.</p><p>When you take a picture of an object, you&#x2019;re not capturing the object itself but recording a</p>]]></description><link>https://thoughts.jonathantey.com/distillation-vs-quantization-understanding-the-trade-offs-of-llm-compression/</link><guid isPermaLink="false">68ea45bee6fd640001775c9f</guid><category><![CDATA[llm]]></category><category><![CDATA[distillation]]></category><category><![CDATA[quantization]]></category><category><![CDATA[machine learning]]></category><category><![CDATA[ai]]></category><category><![CDATA[ai compression]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Sat, 11 Oct 2025 12:05:29 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1591681354784-c684e483dae0?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI0fHxjYW1lcmF8ZW58MHx8fHwxNzYwMTE0Njk5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1591681354784-c684e483dae0?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI0fHxjYW1lcmF8ZW58MHx8fHwxNzYwMTE0Njk5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Distillation vs. Quantization: Understanding the Trade-offs of LLM Compression using Images"><p>When researching LLMs, I found these two terms often used interchangeably &#x2014; <strong>distillation</strong> and <strong>quantization</strong>. After some thought, I began to see them like the process of storing a digital image.</p><p>When you take a picture of an object, you&#x2019;re not capturing the object itself but recording a representation &#x2014; something that can later be projected or reconstructed. You decide how many pixels to keep, and for each pixel, how many bits to allocate. Every choice trades fidelity for practicality.</p><p><strong>Distillation</strong> is like reducing the pixel count. 
You retrain a smaller model to reproduce the behavior of a larger one, keeping structure and meaning while dropping fine details. It captures the &#x201C;shape&#x201D; of knowledge, not every contour. The result: faster, lighter, and usually good enough for the intended resolution.</p><p>Quantization is like lowering the bit depth. The architecture stays the same &#x2014; same number of layers, parameters, and connections &#x2014; but each weight or activation is stored with fewer bits. You keep the shape, but reduce the number of shades you can represent. Like playing a color movie on a black and white TV. </p><p>Both methods are forms of compression, but they act on different dimensions:</p><blockquote>Distillation trims the model&#x2019;s space &#x2014; fewer neurons and layers.<br>Quantization trims the model&#x2019;s depth &#x2014; fewer bits per value.</blockquote><p>Which to use depends on your target &#x201C;display.&#x201D; A small mobile device may need both: fewer pixels and lower bit depth. A server model with room to breathe might only quantize for efficiency.</p><p>Like photography, model optimization is about preserving what matters most for the final audience. Every reduction is a decision about what&#x2019;s worth keeping.</p>]]></content:encoded></item><item><title><![CDATA[WSL ssh-agent]]></title><description><![CDATA[<h2 id="introduction">Introduction</h2><p>WSL on Windows is great. After using it for some months, I have learned way more about how Windows and Linux systems interact. </p><p>One of the things that bothered me was that for some reason my SSH keys on the WSL system never works. 
On my previous machine running</p>]]></description><link>https://thoughts.jonathantey.com/wsl-ssh-agent/</link><guid isPermaLink="false">5f1168be439ce30001ba0dd4</guid><category><![CDATA[Ubuntu 18.04]]></category><category><![CDATA[Windows]]></category><category><![CDATA[WSL]]></category><category><![CDATA[SSH]]></category><category><![CDATA[SSH-agent]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Tue, 21 Jul 2020 00:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1580618432485-1e08c5039909?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://images.unsplash.com/photo-1580618432485-1e08c5039909?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="WSL ssh-agent"><p>WSL on Windows is great. After using it for some months, I have learned way more about how Windows and Linux systems interact. </p><p>One of the things that bothered me was that for some reason my SSH keys on the WSL system never worked. On my previous machine running Ubuntu, the SSH agent would automatically select the correct key to use from my ~/.ssh folder.</p><h2 id="discovery">Discovery</h2><p>I stumbled across this project <a href="https://github.com/rupor-github/wsl-ssh-agent">https://github.com/rupor-github/wsl-ssh-agent</a>, and it seemed promising, so I tried it out.</p><p>After downloading the binary, I extracted the contents to <code>C:\tools\wsl-ssh-agent</code>. Then from PowerShell I ran <code>wsl-ssh-agent-gui.exe -socket c:\tools\wsl-ssh-agent\ssh-agent.sock</code>. From a WSL terminal: <code>export SSH_AUTH_SOCK=/c/tools/wsl-ssh-agent/ssh-agent.sock</code>.</p><p>Now that the SSH agent is set up, we need to register our SSH keys with the Windows ssh-agent. 
From PowerShell, run <code>ssh-add ${path to ssh private key}</code>. </p><p>Now when I run ssh from WSL, it will automatically run through the available SSH keys on my system to log into the remote machine. Win!</p><h2 id="persisting">Persisting</h2><p>Now we need a way to persist this setup across reboots.</p><p>First, add <code>wsl-ssh-agent-gui.exe</code> to start up on boot; next, add the environment variable in WSL so it uses the correct ssh-agent.</p><p>To add to startup, navigate to <code>C:\Users\[User Name]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup</code>. From there, right-click and create a new shortcut. Set Target to <code>C:\tools\wsl-ssh-agent\wsl-ssh-agent-gui.exe -socket C:\tools\wsl-ssh-agent\ssh-agent.sock</code> and Start in to <code>C:\tools\wsl-ssh-agent</code>. </p><p>In WSL, add <code>export SSH_AUTH_SOCK=/c/tools/wsl-ssh-agent/ssh-agent.sock</code> to <code>~/.profile</code>.</p>]]></content:encoded></item><item><title><![CDATA[How to set up passthrough shared folder on KVM]]></title><description><![CDATA[<p>Create the shared folder</p><p>From virt-manager <code>Add Hardware</code> then choose FileSystem</p><figure class="kg-card kg-image-card"><img src="https://thoughts.jonathantey.com/content/images/2019/04/Screenshot-from-2019-04-20-12-09-42.png" class="kg-image" alt loading="lazy"></figure><p>The source path is the path to the folder on your KVM host</p><p>The target path is the mount tag that the guest uses as the device name when mounting</p><p>In this case, on the guest I want to mount it to /host</p><pre><code>mount -t</code></pre>]]></description><link>https://thoughts.jonathantey.com/how-to-set-up-passthrough-shared-folder-on-kvm/</link><guid isPermaLink="false">5cba9b4a0ab13100011e9128</guid><category><![CDATA[kvm]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Thu, 21 May 2020 04:00:00 GMT</pubDate><media:content 
url="https://images.unsplash.com/photo-1519418400048-ebff930ec198?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1519418400048-ebff930ec198?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="How to set up passthrough shared folder on KVM"><p>Create the shared folder</p><p>From virt-manager <code>Add Hardware</code> then choose FileSystem</p><figure class="kg-card kg-image-card"><img src="https://thoughts.jonathantey.com/content/images/2019/04/Screenshot-from-2019-04-20-12-09-42.png" class="kg-image" alt="How to set up passthrough shared folder on KVM" loading="lazy"></figure><p>The source path is the path to the folder on your KVM host</p><p>The target path is the mount tag that the guest uses as the device name when mounting</p><p>In this case, on the guest I want to mount it to /host</p><pre><code>mount -t 9p -o trans=virtio,version=9p2000.L,rw /dev/backup /host</code></pre><p>If you try to create any file on <code>/host</code> now, you will get permission denied (even as root!)</p><p>You need to set the permission for the guest to write to folders on your host machine</p><p>On your host machine</p><p><code>sudo setfacl -m libvirt-qemu:rwx /files/backup/fm</code></p><blockquote>`/files/backup` is not referring to my backup location, it&apos;s just a convenient way for me to note that the contents in this folder need to be backed up.</blockquote><p>Now you should be able to write files from your guest machine to the host</p><p>To mount it on boot, add the following to <code>/etc/fstab</code></p><p><code>/dev/backup &#xA0; /host &#xA0; &#xA0;9p &#xA0;trans=virtio,version=9p2000.L,rw &#xA0; &#xA0;0 &#xA0; 0</code></p><p>*There are definitely security concerns with mounting your host disk on your kvm guest. 
I&apos;m doing this on my homelab, as I trust what I run and it makes sense for what I am trying to accomplish.</p>]]></content:encoded></item><item><title><![CDATA[Git file permissions on WSL]]></title><description><![CDATA[<p>Due to how docker volumes are mounted on Windows, I had to checkout git repositories on Ubuntu WSL to the Windows directory mount, eg. <code>/c/workspace/...</code></p><p>File permissions don&apos;t translate well from Linux to Windows, hence, the git files have their permissions &quot;changed&quot; to 755, which</p>]]></description><link>https://thoughts.jonathantey.com/git-file-permissions-on-wsl/</link><guid isPermaLink="false">5ebcbd00a94424000121816b</guid><category><![CDATA[Windows]]></category><category><![CDATA[Ubuntu 18.04]]></category><category><![CDATA[WSL]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Thu, 14 May 2020 04:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1574144611937-0df059b5ef3e?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1574144611937-0df059b5ef3e?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Git file permissions on WSL"><p>Due to how docker volumes are mounted on Windows, I had to checkout git repositories on Ubuntu WSL to the Windows directory mount, eg. <code>/c/workspace/...</code></p><p>File permissions don&apos;t translate well from Linux to Windows, hence, the git files have their permissions &quot;changed&quot; to 755, which is the default mounted permissions. Now when viewing changes from WSL, it is crowded by these &quot;changes&quot;, which makes it hard to do <code>git add .</code> without adding all the unchanged files. 
</p><p>The fix is to add a git config on WSL to ignore the file permission changes</p><pre><code>git config --global core.filemode false</code></pre><p>If you need to change permissions on a file in the future, e.g. to add an executable bit</p><pre><code>git update-index --chmod=+x &apos;scriptname.ext&apos;</code></pre><p>It takes an extra step to remember, but it doesn&apos;t happen too often so I&apos;m okay with it.</p>]]></content:encoded></item><item><title><![CDATA[Docker Quick Install]]></title><description><![CDATA[Copy and paste script to install docker on Ubuntu]]></description><link>https://thoughts.jonathantey.com/docker-install-commands/</link><guid isPermaLink="false">5cca88d5fcde200001a231d3</guid><category><![CDATA[docker]]></category><category><![CDATA[Ubuntu 18.04]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Thu, 02 May 2019 07:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1491847352009-6db18bb24656?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1491847352009-6db18bb24656?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Docker Quick Install"><p>Scripts to install docker on Ubuntu (with docker-compose)</p><pre><code>curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
rm get-docker.sh</code></pre><pre><code>COMPOSE_VERSION=`git ls-remote https://github.com/docker/compose | grep refs/tags | grep -oP &quot;[0-9]+\.[0-9][0-9]+\.[0-9]+$&quot; | tail -n 1`
sudo sh -c &quot;curl -L https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` &gt; /usr/local/bin/docker-compose&quot;
sudo chmod +x /usr/local/bin/docker-compose
sudo sh -c &quot;curl -L https://raw.githubusercontent.com/docker/compose/${COMPOSE_VERSION}/contrib/completion/bash/docker-compose &gt; /etc/bash_completion.d/docker-compose&quot;</code></pre>]]></content:encoded></item><item><title><![CDATA[Pitfalls to Avoid using RxJS]]></title><description><![CDATA[If you are using nested subscribe, you are doing it wrong]]></description><link>https://thoughts.jonathantey.com/pitfalls-to-avoid-using-rxjs/</link><guid isPermaLink="false">5cb5651b0ab13100011e9113</guid><category><![CDATA[RxJS 6]]></category><category><![CDATA[RxJS]]></category><category><![CDATA[ReactiveX]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[NEM Catapult]]></category><category><![CDATA[NEM]]></category><category><![CDATA[nested subscribe]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Tue, 16 Apr 2019 06:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1542046531233-d40a7d9c53fd?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1542046531233-d40a7d9c53fd?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Pitfalls to Avoid using RxJS"><p>I first encountered ReactiveX while working on NEM2 blockchain, aka Catapult. On why ReactiveX was chosen rather than Promises, etc, there is a good writeup <a href="https://nemlog.nem.social/blog/2648">here</a>. (The original article is no longer available).</p>
<p>Writing reactive apps requires a change in how you think about the flow of the app. The following is one of the most common mistakes (and frustrations) I encountered while working with RxJS.</p>
<h2 id="avoidnestedsubscribe">Avoid nested subscribe()</h2>
<blockquote>
<p>Functions referenced here are from <a href="https://github.com/nemtech/nem2-sdk-typescript-javascript">https://github.com/nemtech/nem2-sdk-typescript-javascript</a></p>
</blockquote>
<p><code>&lt;Observable&gt;.subscribe()</code> should only be used at the end of a chain. If you are nesting one <code>subscribe()</code> inside another, most likely you are doing it wrong.</p>
<blockquote>
<p>One of the most common reasons for nesting subscribe is that we want to chain observables without caring what the first observable returns.</p>
</blockquote>
<p>For example:</p>
<p>Given a list of transaction hashes, I want to determine whether each transaction I announced succeeded or failed.</p>
<p>To do this, I can use <code>transactionHttp.getTransactionsStatuses(transactionHashes)</code> to retrieve the current status of the transactions on chain.</p>
<p>However, not all transactions are included in the same block, so I would run a while loop that keeps checking the blockchain until every transaction has a status of success or failure.</p>
<p>Resulting in</p>
<pre><code>while (transactionHashes.length &gt; 0) {
	transactionHttp.getTransactionsStatuses(transactionHashes).subscribe(
		(transactionStatuses) =&gt; {
			// remove hashes that are no longer pending (confirmed or failed)
			for (const status of transactionStatuses) {
				if (status.group !== &apos;unconfirmed&apos;) {
					transactionHashes = transactionHashes.filter((hash) =&gt; hash !== status.hash);
				}
			}
		});
}
</code></pre>
<p>Now, it is obvious that we can improve this.</p>
<p>Rather than fetching transaction statuses in a tight loop, we can fetch them only when a new block is harvested.</p>
<p>So our resulting function will look like this</p>
<pre><code>Listener.newBlock().subscribe(
	() =&gt; {
		transactionHttp.getTransactionsStatuses(transactionHashes).subscribe(
			(transactionStatuses) =&gt; {
				// remove hashes that are no longer pending (confirmed or failed)
				for (const status of transactionStatuses) {
					if (status.group !== &apos;unconfirmed&apos;) {
						transactionHashes = transactionHashes.filter((hash) =&gt; hash !== status.hash);
					}
				}
			});
	});
</code></pre>
<p>Unfortunately there is no way for us to retrieve the <code>transactionStatusResponse</code> from outside the inner subscription. The newBlock observable also never knows when to complete, since it cannot see the status of the inner subscription.</p>
<p>We need a more reactive way to do this.</p>
<hr>
<p>What we need is to be able to listen to new blocks, then fetch the transactionStatuses within a single subscription.</p>
<p>First we start listening to new blocks</p>
<pre><code>const ob1 = Listener.newBlock()
</code></pre>
<p>Now, instead of subscribing immediately, we will use <code>pipe()</code> to keep the chain reactive.</p>
<pre><code>Listener.newBlock().pipe(
	// Do stuff each time newBlock gives us a new block
)
</code></pre>
<p>Within <code>pipe()</code> we can specify operators that transform each emission. However, if we subscribe to <code>Listener.newBlock()</code> right now, it will only give us <code>BlockInfo</code> objects. To emit a different value, we use the <code>switchMap</code> operator</p>
<pre><code>// Within pipe()
switchMap(
	() =&gt; transactionHttp.getTransactionsStatuses(transactionHashes)
),
</code></pre>
<p>Now, if we subscribe to this piped observable, we will get <code>transactionStatus</code> objects instead of <code>BlockInfo</code>.</p>
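<p>Putting the pieces together, the whole flow fits in one subscription. The sketch below is self-contained for illustration: <code>Observable</code>, <code>switchMap</code>, <code>Listener</code>, and <code>transactionHttp</code> are simplified toy stand-ins (not the real rxjs or nem2-sdk implementations), just enough to show how a piped chain forwards the inner values to a single subscriber.</p>

```javascript
// Toy stand-ins for rxjs / nem2-sdk, for illustration only:
// a minimal synchronous Observable with pipe() and subscribe().
class Observable {
  constructor(producer) {
    this.producer = producer;
  }
  subscribe(next) {
    this.producer(next);
  }
  pipe(...operators) {
    return operators.reduce((source, op) => op(source), this);
  }
}

// Simplified switchMap: project each source value to an inner
// observable and forward the inner values to the subscriber.
const switchMap = (project) => (source) =>
  new Observable((next) => source.subscribe((value) => project(value).subscribe(next)));

// Mock listener that emits two new blocks.
const Listener = {
  newBlock: () => new Observable((next) => {
    next({ height: 1 });
    next({ height: 2 });
  }),
};

// Mock endpoint that returns the current statuses as an observable.
const transactionHttp = {
  getTransactionsStatuses: (hashes) =>
    new Observable((next) => next(hashes.map((hash) => ({ hash, group: 'confirmed' })))),
};

const transactionHashes = ['AABB', 'CCDD'];

// One subscription at the end of the chain: every new block triggers
// a status lookup, and the subscriber sees the statuses directly.
Listener.newBlock()
  .pipe(switchMap(() => transactionHttp.getTransactionsStatuses(transactionHashes)))
  .subscribe((statuses) => {
    console.log(statuses.map((s) => `${s.hash}:${s.group}`).join(' '));
  });
// prints "AABB:confirmed CCDD:confirmed" twice, once per block
```

<p>In the real SDK you would import <code>switchMap</code> from rxjs, and you would likely add an operator such as <code>takeWhile</code> so the chain completes once <code>transactionHashes</code> is empty; the mock object shapes above (<code>height</code>, <code>group</code>) are assumptions for this sketch.</p>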
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Retrieving list of dynamically linked libraries from a binary]]></title><description><![CDATA[<p>Using docker to build binaries can cause the docker image to be really large as it includes all the source code, transient build files, linkers, etc. For production it is recommended to use docker multi-stage builds to produce a container that only has the binary and needed libraries to run.</p>]]></description><link>https://thoughts.jonathantey.com/dynamic-linked-library-list/</link><guid isPermaLink="false">5cad8fe20ab13100011e90e0</guid><category><![CDATA[docker]]></category><category><![CDATA[c++]]></category><category><![CDATA[executables]]></category><category><![CDATA[dll]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Wed, 10 Apr 2019 07:00:00 GMT</pubDate><media:content url="https://thoughts.jonathantey.com/content/images/2019/04/daniel-cheung-129839-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://thoughts.jonathantey.com/content/images/2019/04/daniel-cheung-129839-unsplash.jpg" alt="Retrieving list of dynamically linked libraries from a binary"><p>Using docker to build binaries can cause the docker image to be really large as it includes all the source code, transient build files, linkers, etc. For production it is recommended to use docker multi-stage builds to produce a container that only has the binary and needed libraries to run.</p><p>It is easy if the binary produced has all the needed libraries statically linked. But... what if there are dynamically linked libraries? How do we find them?</p><h2 id="ldd">ldd</h2><p><code>ldd</code> would provide a list of dynamically linked libraries given a binary. However, it doesn&apos;t do multiple binaries at once. Hence pipes to the rescue.</p><pre><code>find * -type f -perm /a+x -exec ldd {} \; \
| grep so \
| sed -e &apos;/^[^\t]/ d&apos; \
| sed -e &apos;s/\t//&apos; \
| sed -e &apos;s/.*=..//&apos; \
| sed -e &apos;s/ (0.*)//&apos; \
| sort \
| uniq -c \
| sort -n</code></pre><p>This pipeline gives us a list of dynamically linked libraries for all binaries found in the current folder. </p><p>Now we just have to copy the list of libraries with the binaries to a scratch docker image. #Win</p><p>Source: <a href="https://stackoverflow.com/a/50218/8827732">https://stackoverflow.com/a/50218/8827732</a></p>]]></content:encoded></item><item><title><![CDATA[Waking up Wi-Fi on Ubuntu 18.04]]></title><description><![CDATA[<p>I faced an issue with my Ubuntu boxes where the WiFi doesn&apos;t come back on when the machine wakes up. </p><p>After trying a few suggestions on stackoverflow, I finally found a command that works. </p><pre><code>sudo service network-manager restart</code></pre><p>And to automate it to work on wake. We follow</p>]]></description><link>https://thoughts.jonathantey.com/waking-up-wi-fi-on-ubuntu-18-04/</link><guid isPermaLink="false">5c9461eb5683fe0001e878e0</guid><category><![CDATA[Ubuntu 18.04]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Fri, 22 Mar 2019 04:25:55 GMT</pubDate><media:content url="https://thoughts.jonathantey.com/content/images/2019/03/annie-spratt-76930.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://thoughts.jonathantey.com/content/images/2019/03/annie-spratt-76930.jpg" alt="Waking up Wi-Fi on Ubuntu 18.04"><p>I faced an issue with my Ubuntu boxes where the WiFi doesn&apos;t come back on when the machine wakes up. </p><p>After trying a few suggestions on stackoverflow, I finally found a command that works. </p><pre><code>sudo service network-manager restart</code></pre><p>And to automate it to work on wake. 
We follow the steps here</p><p><a href="https://askubuntu.com/questions/741620/what-are-the-possible-commands-to-reset-a-wifi-connection">https://askubuntu.com/questions/741620/what-are-the-possible-commands-to-reset-a-wifi-connection</a></p>]]></content:encoded></item><item><title><![CDATA[Splitting a large csv file into chunks]]></title><description><![CDATA[Learning to split a csv file into smaller chunks using terminal]]></description><link>https://thoughts.jonathantey.com/split-a-large-file-into-chunks/</link><guid isPermaLink="false">5c73a46d8b1721000110afb9</guid><category><![CDATA[Ubuntu 18.04]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[command line]]></category><category><![CDATA[terminal]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Mon, 04 Mar 2019 14:00:00 GMT</pubDate><media:content url="https://thoughts.jonathantey.com/content/images/2019/03/jaxon-lott-535134-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://thoughts.jonathantey.com/content/images/2019/03/jaxon-lott-535134-unsplash.jpg" alt="Splitting a large csv file into chunks"><p>Recently I had to split a 30 thousand line csv file into smaller files of 1 thousand lines. Naturally I had to look for a tool that already does this and I found that this utility <code>split</code> comes installed on most Ubuntu versions (sweet)!</p><p>So I removed the first line (which is the header) and went to town. </p><p>On first try</p><!--kg-card-begin: markdown--><pre><code>split -l 1000 -d 30k-lines-file.csv 1k-lines-file-
</code></pre>
<!--kg-card-end: markdown--><p>Ok, but it produced the following file names</p><pre><code>1k-lines-file-00
1k-lines-file-01
1k-lines-file-02
...</code></pre><p>Not good. How do I get back the <code>.csv</code> extension?</p><p>A little more googling and I found a patch from the split mailing list from 2007! Apparently there is a parameter <code>--additional-suffix</code>.</p><!--kg-card-begin: markdown--><pre><code>split -l 1000 -d --additional-suffix=.csv 30k-lines-file.csv 1k-lines-file-
</code></pre>
<!--kg-card-end: markdown--><p>Ahh... a little better</p><p>Now to add back the header to the first line of each file. </p><!--kg-card-begin: markdown--><pre><code>sed -i &apos;1 i header1, header2&apos; 1k-lines-file-*.csv
</code></pre>
<!--kg-card-end: markdown--><p>That&apos;s all folks!</p><hr><p>Cover photo by <a href="https://unsplash.com/photos/d91KFAekDtQ?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Jaxon Lott</a> on <a href="https://unsplash.com/search/photos/crack?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[Running Portainer on Docker Toolbox]]></title><description><![CDATA[Where is docker.sock on my Windows machine?]]></description><link>https://thoughts.jonathantey.com/running-portainer-on-docker-toolbox/</link><guid isPermaLink="false">5c4fc1438b1721000110af88</guid><category><![CDATA[docker toolbox]]></category><category><![CDATA[docker]]></category><category><![CDATA[portainer]]></category><category><![CDATA[Windows]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Tue, 29 Jan 2019 03:30:00 GMT</pubDate><media:content url="https://thoughts.jonathantey.com/content/images/2019/01/mauro-licul-388509-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://thoughts.jonathantey.com/content/images/2019/01/mauro-licul-388509-unsplash.jpg" alt="Running Portainer on Docker Toolbox"><p>Can&apos;t seem to find where <code>docker.sock</code> is on your Windows machine? </p><p>After trying many docker management GUIs, I finally settled on Portainer. It is very simple to use and the ability to manage more than one docker environment makes it perfect for small-scale testing.</p><p>Unfortunately on my Windows machine, I can only run docker using Docker Toolbox, which uses VirtualBox to create a virtual environment for Docker. So where can I bind to <code>docker.sock</code>? </p><p>After days of unsuccessfully scouring the internet for a solution, I finally found it! Apparently all you need is to add an extra leading slash to the socket path. 
</p><p>So using the Portainer example from <a href="https://www.portainer.io/installation/">https://www.portainer.io/installation/</a></p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
</code></pre>
<!--kg-card-end: markdown--><p>becomes</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ docker run -d -p 9000:9000 -v //var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
</code></pre>
<!--kg-card-end: markdown--><p>Easy</p>]]></content:encoded></item><item><title><![CDATA[Adding swap space on Ubuntu 18.04]]></title><description><![CDATA[Simple copy-paste commands to create and assign swapfiles in Ubuntu 18.04]]></description><link>https://thoughts.jonathantey.com/creating-swapfile-on-ubuntu-18-04/</link><guid isPermaLink="false">5baef826d42d3b000128f3f6</guid><category><![CDATA[Ubuntu 18.04]]></category><category><![CDATA[swapfile]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Tue, 04 Sep 2018 00:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1504355080015-bba52674577b?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=54733a3c1965933aab5f6c9a509f4ec3" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1504355080015-bba52674577b?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=54733a3c1965933aab5f6c9a509f4ec3" alt="Adding swap space on Ubuntu 18.04"><p>If you are running low on RAM on your system, you might want to consider adding some swap space. The following will allocate and assign a 2GB swap space.</p><pre><code>sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo swapon --show</code></pre><p>It is possible to assign a second swap space. Just repeat the above commands but replace <code>/swapfile</code> with another name, e.g. <code>/swapfile2</code>.</p><h1 id="removing-swap-space">Removing swap space</h1><pre><code>sudo swapoff /swapfile
sudo rm /swapfile</code></pre><h2 id="to-make-permanent">To make permanent</h2><p>Add the following line to <code>/etc/fstab</code></p><pre><code>/swapfile swap swap defaults 0 0</code></pre>]]></content:encoded></item><item><title><![CDATA[Increasing ulimit and file descriptors limit on Ubuntu 18.04]]></title><description><![CDATA[What is ulimit and how to increase it]]></description><link>https://thoughts.jonathantey.com/increasing-ulimit-and-file-descriptors-limit-on-linux/</link><guid isPermaLink="false">5baef826d42d3b000128f3f5</guid><category><![CDATA[Ubuntu 18.04]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Mon, 03 Sep 2018 16:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1530912162784-514d437f58f7?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=21d4b15e18410a113ee89d7cef97e59f" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1530912162784-514d437f58f7?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=21d4b15e18410a113ee89d7cef97e59f" alt="Increasing ulimit and file descriptors limit on Ubuntu 18.04"><p>Most UNIX-like operating systems, including Linux and macOS, provide ways to limit and control the usage of system resources such as threads, files, and network connections on a per-process and per-user basis. These &#x201C;ulimits&#x201D; prevent single users from using too many system resources. (<a href="https://docs.mongodb.com/manual/reference/ulimit/">https://docs.mongodb.com/manual/reference/ulimit/</a>)</p><h2 id="how-to-increase-ulimit">How to increase ulimit</h2><pre><code># vi /etc/sysctl.conf
fs.file-max = 500000
# sysctl -p</code></pre><pre><code># vi /etc/security/limits.conf
* soft nofile 60000
* hard nofile 60000</code></pre><p><strong>Note: </strong>&apos;*&apos; would apply the limit to all users on your system (except for root). If you only want the limit to apply for a single user or for the root user, you would need to replace * with the user.</p><pre><code>reboot</code></pre><h3 id="resource">Resource</h3><p><a href="https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/">https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/</a></p>]]></content:encoded></item><item><title><![CDATA[Force redirect to https for Nginx and Apache]]></title><description><![CDATA[Easy copy and paste configuration to force use of https on your website.]]></description><link>https://thoughts.jonathantey.com/force-redirect-to-https-for-nginx-and-apache/</link><guid isPermaLink="false">5baef826d42d3b000128f3f7</guid><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Mon, 03 Sep 2018 09:35:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1480160734175-e2209654433c?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=08213f528056fd02bb992f5d5cfed9cf" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1480160734175-e2209654433c?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=08213f528056fd02bb992f5d5cfed9cf" alt="Force redirect to https for Nginx and Apache"><p>A simple way to set redirect to https for any site on your nginx/apache configuration.</p><h2 id="apache">Apache</h2><pre><code>RewriteEngine On
RewriteCond %{HTTPS} !=on
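# the condition above matches requests that did not arrive over HTTPS;
# the rule below issues a permanent (301) redirect to the HTTPS version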
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R=301,L]</code></pre><h2 id="nginx">Nginx</h2><pre><code>server {
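	# listen for plain HTTP on port 80 and redirect everything to HTTPS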
	listen 80;
	server_name ${DOMAIN};
	return 301 https://$server_name$request_uri;
}</code></pre>]]></content:encoded></item><item><title><![CDATA[Throttling Network Programmatically on Ubuntu 18.04]]></title><description><![CDATA[Network simulation in essence involves three parts
1) Latency
2) Bandwidth, and
3) Packet loss]]></description><link>https://thoughts.jonathantey.com/throttling-network-programmatically-on-ubuntu-18-04/</link><guid isPermaLink="false">5baef826d42d3b000128f3f4</guid><category><![CDATA[wondershaper]]></category><category><![CDATA[Ubuntu 18.04]]></category><category><![CDATA[network simulation]]></category><category><![CDATA[throttle bandwidth]]></category><category><![CDATA[limit network]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[iperf]]></category><category><![CDATA[bandwidth]]></category><category><![CDATA[test bandwidth]]></category><category><![CDATA[packet loss]]></category><dc:creator><![CDATA[Jonathan Tey]]></dc:creator><pubDate>Mon, 03 Sep 2018 09:20:23 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1521106047354-5a5b85e819ee?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=76b3624c13306ab8abea556bda0fb1f2" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1521106047354-5a5b85e819ee?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=76b3624c13306ab8abea556bda0fb1f2" alt="Throttling Network Programmatically on Ubuntu 18.04"><p>When testing distributed systems, it is sometimes necessary to find out how much latency, bandwidth restriction, and packet loss the system can tolerate before deploying to production.</p><!--kg-card-begin: markdown--><p>Network simulation in essence involves three parts</p>
<ol>
<li>Latency</li>
<li>Bandwidth</li>
<li>Packet loss</li>
</ol>
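<p>All three can be emulated on Linux with the <code>tc</code> command and its netem queueing discipline. To see what shaping is currently applied to an interface (here assuming <code>eth0</code>), run:</p><pre><code>tc qdisc show dev eth0</code></pre>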
<!--kg-card-end: markdown--><hr><h2 id="setup">Setup</h2><!--kg-card-begin: markdown--><p>The following assumes you have two nodes set up:</p>
<ol>
<li>server</li>
<li>client</li>
</ol>
<!--kg-card-end: markdown--><h2 id="latency">Latency</h2><p>Before starting any tests, it is good to get a benchmark of your current setup.</p><p>From your <code>client</code> node run</p><pre><code>ping &lt;server-ip-address&gt;</code></pre><p>Now let&apos;s add a 500ms latency</p><pre><code># tc qdisc add dev eth0 root netem delay 500ms</code></pre><p>If you run <code>ping</code> again, round-trip times should now be roughly 500ms higher than your benchmark</p><hr><p>Let&apos;s say we want to add some variance to the latency</p><p>Here we add a 10ms variance along a normal distribution in addition to the 500ms latency</p><p><em>Notice the use of <code>change</code> instead of <code>add</code>, since a netem qdisc is already in place</em></p><pre><code>tc qdisc change dev eth0 root netem delay 500ms 10ms distribution normal</code></pre><h2 id="packet-loss">Packet loss</h2><p>The following line adds a 10% packet loss in addition to a 250ms delay</p><pre><code>tc qdisc change dev eth0 root netem loss 10% delay 250ms</code></pre><p><strong>Note: </strong>These changes do not persist across restarts. To undo them immediately, delete the root qdisc with <code>tc qdisc del dev eth0 root</code></p><h2 id="bandwidth">Bandwidth</h2><h3 id="using-wondershaper">Using Wondershaper</h3><p><a href="https://github.com/magnific0/wondershaper">https://github.com/magnific0/wondershaper</a></p><p>For bandwidth control, you can use wondershaper</p><p>After installation, the first command below limits <code>eth0</code> to 30000 Kbps upload and 10000 Kbps download; the second clears the limits again</p><pre><code>wondershaper -a eth0 -u 30000 -d 10000
wondershaper -c -a eth0</code></pre><h3 id="test-bandwidth">Test bandwidth</h3><p>Use iperf to test if bandwidth limitations are applied</p><p>On the server run <code>iperf -s</code> to start the server</p><p>On the second node run <code>iperf -c &lt;server-ip-address&gt;</code> </p>]]></content:encoded></item></channel></rss>