From 9fe0b735c673bf13ddf120871d7bb61876f9542a Mon Sep 17 00:00:00 2001 From: Rohan Verma Date: Sat, 13 Apr 2019 21:12:31 +0530 Subject: [PATCH] fix: youtube and formatting in blogs --- .../2016-01-29-twitter-bots-using-tweepy.md | 2 + content/blog/2016-03-22-blip.md | 4 +- .../2016-04-15-foodify-app-hacknsit-2016.md | 2 +- ...ector-instructions-to-8051-architecture.md | 2 +- ...topological-sort-for-problems-using-dag.md | 81 ++++++------ ...-featured-in-sirajologys-youtube-videos.md | 8 +- ...days-git-tip-in-gitconfig-url-gitgithub.md | 2 +- ...onder-what-linus-torvalds-view-is-about.md | 1 + ...-11-07-a-tip-on-using-fsck-when-you-are.md | 2 +- ...-publications-require-you-to-put-author.md | 2 +- ...ay-thanks-to-https-www-feelthecitytours.md | 2 +- ...cently-corrupted-my-zsh-history-and-was.md | 2 +- .../blog/2016-11-29-octoshark-hackathon.md | 4 +- ...2-sorting-out-my-todo-list-for-the-next.md | 1 + content/blog/2017-01-02-.md | 12 -- content/blog/2017-01-06-.md | 12 -- content/blog/2017-01-07-.md | 12 -- ...01-12-.md => 2017-01-12-snu-data-limit.md} | 7 +- .../2017-02-04-i-used-to-use-the-l-flag.md | 2 +- content/blog/2017-02-09-.md | 13 -- .../2017-02-09-vorstellungsreprasentanz.md | 4 +- ...on-security-in-wireless-sensor-networks.md | 2 +- ...retrofitting-led-lamps-into-smart-lamps.md | 2 +- content/blog/2017-05-20-.md | 15 --- content/blog/2017-07-27-216.md | 3 +- content/blog/2017-10-03-.md | 12 -- ...formed​-​in-​​the-​​ancient.md | 11 +- .../2017-11-30-emotive-adsense-project.md | 3 +- .../2017-12-19-what-thefuck-is-wrong-with.md | 1 + ...017-12-20-setting-up-latex-on-spacemacs.md | 6 +- ...rough-the-lens-of-the-information-plane.md | 116 +++++++++++------- ...pacemacs-and-using-pyenv-to-use-python3.md | 5 +- ...featured-on-googles-instagram-instagram.md | 81 +----------- ...ract-filenames-without-their-extensions.md | 2 +- ...-companion-winner-dell-intern-hackathon.md | 2 +- .../2018-06-07-emacs-starts-a-bit-slow.md | 2 +- 
...ons-for-testing-without-mocks-in-golang.md | 58 ++++----- ...-project-winner-ethindia-2018-hackathon.md | 12 +- ...ux-to-android-using-pulseaudio-over-lan.md | 6 +- ...t-and-similar-socket-options-in-go-1-11.md | 12 +- content/blog/2019-02-23-.md | 11 -- ...9-03-17-a-review-of-the-siempo-launcher.md | 6 +- layouts/index.html | 4 +- layouts/section/blog_list.html | 4 +- 44 files changed, 210 insertions(+), 343 deletions(-) delete mode 100644 content/blog/2017-01-02-.md delete mode 100644 content/blog/2017-01-06-.md delete mode 100644 content/blog/2017-01-07-.md rename content/blog/{2017-01-12-.md => 2017-01-12-snu-data-limit.md} (61%) delete mode 100644 content/blog/2017-02-09-.md delete mode 100644 content/blog/2017-05-20-.md delete mode 100644 content/blog/2017-10-03-.md delete mode 100644 content/blog/2019-02-23-.md diff --git a/content/blog/2016-01-29-twitter-bots-using-tweepy.md b/content/blog/2016-01-29-twitter-bots-using-tweepy.md index 16a385d..f6e51b9 100644 --- a/content/blog/2016-01-29-twitter-bots-using-tweepy.md +++ b/content/blog/2016-01-29-twitter-bots-using-tweepy.md @@ -10,6 +10,8 @@ categories: --- Unable to think what to tweet about? Have you ever faced a similar situation? + + Well, it’s very easy to create your own bots using python’s Tweepy module. You can use these skeletons I recently made for a workshop on the same topic. All you need to make your own bot is add some logic to these skeletons. * * * diff --git a/content/blog/2016-03-22-blip.md b/content/blog/2016-03-22-blip.md index d09e400..631c6a1 100644 --- a/content/blog/2016-03-22-blip.md +++ b/content/blog/2016-03-22-blip.md @@ -18,7 +18,7 @@ tags: We were inspired by the The Time Machine (2002) movie’s scene where the protagonist enters a museum in the future. 
- + During the hackathon we were able to make an app that relays RSSI values to our real-time database (RethinkDB), which works on a pub-sub model, queries the real-time database for its calculated position and receives contextual information relating to its predicted position inside the building where beacons have been set up. @@ -26,7 +26,7 @@ During the hackathon we were able to make an app that relays RSSI values to our Since the final submission deadline was extended, we were able to get back to our campus at night and shoot a demo video at our university’s library. - + Finally, we were selected in the top 20 for the offline finals of IndiaHacks and went to Taj Vivanta, Bangalore. It was a nice experience where we got to improve our idea with the help of mentors available there. We tweaked the algorithm and the variables a bit for the demo room we made at the venue. We were surprised to be among the few student teams at the finale. diff --git a/content/blog/2016-04-15-foodify-app-hacknsit-2016.md b/content/blog/2016-04-15-foodify-app-hacknsit-2016.md index 0992bde..b8b9cca 100644 --- a/content/blog/2016-04-15-foodify-app-hacknsit-2016.md +++ b/content/blog/2016-04-15-foodify-app-hacknsit-2016.md @@ -29,7 +29,7 @@ Since we were a team of 4 composed of two python developers ([rhnvrm][2], [mrkar You can see the demo video here: - +   diff --git a/content/blog/2016-05-07-adding-support-for-vector-instructions-to-8051-architecture.md b/content/blog/2016-05-07-adding-support-for-vector-instructions-to-8051-architecture.md index b4c2f77..6baf861 100644 --- a/content/blog/2016-05-07-adding-support-for-vector-instructions-to-8051-architecture.md +++ b/content/blog/2016-05-07-adding-support-for-vector-instructions-to-8051-architecture.md @@ -12,4 +12,4 @@ This was a group project for the Computer Architecture course at SNU under Prof. 
[View Fullscreen][1] - [1]: /wp-content/plugins/pdfjs-viewer-shortcode/pdfjs/web/viewer.php?file=/wp-content/uploads/2016/12/8051_Vectorization.pdf&download=true&print=true&openfile=false \ No newline at end of file + [1]: /wp-content/uploads/2016/12/8051_Vectorization.pdf \ No newline at end of file diff --git a/content/blog/2016-08-06-topological-sort-for-problems-using-dag.md b/content/blog/2016-08-06-topological-sort-for-problems-using-dag.md index aac14df..19d525d 100644 --- a/content/blog/2016-08-06-topological-sort-for-problems-using-dag.md +++ b/content/blog/2016-08-06-topological-sort-for-problems-using-dag.md @@ -30,20 +30,18 @@ not have a topological sort. The proof for this can be found [here][1] Suppose we have the following graphs: -
-
<span class="n">graph1</span> <span class="o">=</span> <span class="p">{</span> <span class="s">"x"</span> <span class="p">:</span> <span class="p">[</span><span class="s">"y"</span><span class="p">],</span>
-                <span class="s">"z"</span> <span class="p">:</span> <span class="p">[</span><span class="s">"y"</span><span class="p">],</span>
-                <span class="s">"y"</span> <span class="p">:</span> <span class="p">[],</span>
-                <span class="s">"a"</span> <span class="p">:</span> <span class="p">[</span><span class="s">"b"</span><span class="p">],</span>
-                <span class="s">"b"</span> <span class="p">:</span> <span class="p">[</span><span class="s">"c"</span><span class="p">],</span>
-                <span class="s">"c"</span> <span class="p">:</span> <span class="p">[]</span> <span class="p">}</span>
-
-
- -
-
<span class="n">graph2</span> <span class="o">=</span> <span class="p">{</span><span class="s">"x"</span> <span class="p">:</span> <span class="p">[</span><span class="s">"y"</span><span class="p">],</span> <span class="s">"y"</span><span class="p">:</span> <span class="p">[</span><span class="s">"x"</span><span class="p">]}</span>
-
-
+```python
+graph1 = { "x" : ["y"],
+                "z" : ["y"],
+                "y" : [],
+                "a" : ["b"],
+                "b" : ["c"],
+                "c" : [] }
+
+graph2 = {"x" : ["y"], "y": ["x"]}
+```

Here, you can notice how graph1 has a toposort but for graph2, it does not exist. This is because of the fact there @@ -59,40 +57,39 @@ on calculating the indegree of all the vertices and using Queue (although it can Here is my implementation using Modified DFS and an array as a (kind-of) stack: -
-
<span class="k">def</span> <span class="nf">dfs_toposort</span><span class="p">(</span><span class="n">graph</span><span class="p">):</span>
-    <span class="n">L</span> <span class="o">=</span> <span class="p">[]</span>
-    <span class="n">color</span> <span class="o">=</span> <span class="p">{</span> <span class="n">u</span> <span class="p">:</span> <span class="s">"white"</span> <span class="k">for</span> <span class="n">u</span> <span class="ow">in</span> <span class="n">graph</span> <span class="p">}</span>
-    <span class="n">found_cycle</span> <span class="o">=</span> <span class="p">[</span><span class="bp">False</span><span class="p">]</span>
+```python
+def dfs_toposort(graph):
+    L = []
+    color = { u : "white" for u in graph }
+    found_cycle = [False]
     
-    <span class="k">for</span> <span class="n">u</span> <span class="ow">in</span> <span class="n">graph</span><span class="p">:</span>
-        <span class="k">if</span> <span class="n">color</span><span class="p">[</span><span class="n">u</span><span class="p">]</span> <span class="o">==</span> <span class="s">"white"</span><span class="p">:</span>
-            <span class="n">dfs_visit</span><span class="p">(</span><span class="n">graph</span><span class="p">,</span> <span class="n">u</span><span class="p">,</span> <span class="n">color</span><span class="p">,</span> <span class="n">L</span><span class="p">,</span> <span class="n">found_cycle</span><span class="p">)</span>
-        <span class="k">if</span> <span class="n">found_cycle</span><span class="p">[</span><span class="mi">0</span><span class="p">]:</span>
-            <span class="k">break</span>
+    for u in graph:
+        if color[u] == "white":
+            dfs_visit(graph, u, color, L, found_cycle)
+        if found_cycle[0]:
+            break
     
-    <span class="k">if</span> <span class="n">found_cycle</span><span class="p">[</span><span class="mi">0</span><span class="p">]:</span>
-        <span class="n">L</span> <span class="o">=</span> <span class="p">[]</span>
+    if found_cycle[0]:
+        L = []
     
-    <span class="n">L</span><span class="o">.</span><span class="n">reverse</span><span class="p">()</span>
-    <span class="k">return</span> <span class="n">L</span>
+    L.reverse()
+    return L
 
-<span class="k">def</span> <span class="nf">dfs_visit</span><span class="p">(</span><span class="n">graph</span><span class="p">,</span> <span class="n">u</span><span class="p">,</span> <span class="n">color</span><span class="p">,</span> <span class="n">L</span><span class="p">,</span> <span class="n">found_cycle</span><span class="p">):</span>
-    <span class="k">if</span> <span class="n">found_cycle</span><span class="p">[</span><span class="mi">0</span><span class="p">]:</span>
-        <span class="k">return</span>
-    <span class="n">color</span><span class="p">[</span><span class="n">u</span><span class="p">]</span> <span class="o">=</span> <span class="s">"gray"</span>
+def dfs_visit(graph, u, color, L, found_cycle):
+    if found_cycle[0]:
+        return
+    color[u] = "gray"
     
-    <span class="k">for</span> <span class="n">v</span> <span class="ow">in</span> <span class="n">graph</span><span class="p">[</span><span class="n">u</span><span class="p">]:</span>
-        <span class="k">if</span> <span class="n">color</span><span class="p">[</span><span class="n">v</span><span class="p">]</span> <span class="o">==</span> <span class="s">"gray"</span><span class="p">:</span>
-            <span class="n">found_cycle</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">=</span> <span class="bp">True</span>
-            <span class="k">return</span>
-        <span class="k">if</span> <span class="n">color</span><span class="p">[</span><span class="n">v</span><span class="p">]</span> <span class="o">==</span> <span class="s">"white"</span><span class="p">:</span>
-            <span class="n">dfs_visit</span><span class="p">(</span><span class="n">graph</span><span class="p">,</span> <span class="n">v</span><span class="p">,</span> <span class="n">color</span><span class="p">,</span> <span class="n">L</span><span class="p">,</span> <span class="n">found_cycle</span><span class="p">)</span>
+    for v in graph[u]:
+        if color[v] == "gray":
+            found_cycle[0] = True
+            return
+        if color[v] == "white":
+            dfs_visit(graph, v, color, L, found_cycle)
     
-    <span class="n">color</span><span class="p">[</span><span class="n">u</span><span class="p">]</span> <span class="o">=</span> <span class="s">"black"</span>
-    <span class="n">L</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">u</span><span class="p">)</span>
-
-
+    color[u] = "black"
+    L.append(u)
+```

The function dfs_toposort returns an empty array if there exists a cycle in the graph. 

diff --git a/content/blog/2016-10-22-labeled-tweet-generator-and-galaxy-image-classifier-featured-in-sirajologys-youtube-videos.md b/content/blog/2016-10-22-labeled-tweet-generator-and-galaxy-image-classifier-featured-in-sirajologys-youtube-videos.md index 57c6cc1..9a67983 100644 --- a/content/blog/2016-10-22-labeled-tweet-generator-and-galaxy-image-classifier-featured-in-sirajologys-youtube-videos.md +++ b/content/blog/2016-10-22-labeled-tweet-generator-and-galaxy-image-classifier-featured-in-sirajologys-youtube-videos.md @@ -16,20 +16,20 @@ The first project I made was a Galaxy Image Classifier ( + And it was featured in the next video in the series: - + The second project was a Labeled Tweet Dataset Generator (). Using this project, a data scientist can type a query in the search box, review the results and, if happy with them, click the download-as-CSV button to save them and work on the data.
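The dfs_toposort implementation shown above can be exercised end-to-end. Below is a minimal, self-contained sketch that assembles the post's snippets (the two example graphs and the two functions) into a runnable script; it adds only comments and the final print calls:

```python
# Self-contained version of the post's DFS-based topological sort,
# combined with the two example graphs from the post.

def dfs_toposort(graph):
    L = []                               # finished vertices, in finish order
    color = {u: "white" for u in graph}  # white = unvisited, gray = on stack, black = done
    found_cycle = [False]                # boxed flag so nested calls can set it

    for u in graph:
        if color[u] == "white":
            dfs_visit(graph, u, color, L, found_cycle)
        if found_cycle[0]:
            break

    if found_cycle[0]:                   # a cycle means no topological order exists
        L = []

    L.reverse()                          # reversed finish order is a topological order
    return L

def dfs_visit(graph, u, color, L, found_cycle):
    if found_cycle[0]:
        return
    color[u] = "gray"
    for v in graph[u]:
        if color[v] == "gray":           # back edge: we found a cycle
            found_cycle[0] = True
            return
        if color[v] == "white":
            dfs_visit(graph, v, color, L, found_cycle)
    color[u] = "black"
    L.append(u)

graph1 = {"x": ["y"], "z": ["y"], "y": [], "a": ["b"], "b": ["c"], "c": []}
graph2 = {"x": ["y"], "y": ["x"]}

print(dfs_toposort(graph1))  # one valid topological order of graph1
print(dfs_toposort(graph2))  # [] -- the x <-> y cycle rules out any topological order
```

For graph1 every edge constraint (x before y, z before y, a before b before c) holds in the returned order, while the x↔y cycle in graph2 makes the function return an empty list.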
It was based on this video: - + and was featured in this one: - +   \ No newline at end of file diff --git a/content/blog/2016-11-01-todays-git-tip-in-gitconfig-url-gitgithub.md b/content/blog/2016-11-01-todays-git-tip-in-gitconfig-url-gitgithub.md index 3b37b2f..788b797 100644 --- a/content/blog/2016-11-01-todays-git-tip-in-gitconfig-url-gitgithub.md +++ b/content/blog/2016-11-01-todays-git-tip-in-gitconfig-url-gitgithub.md @@ -1,5 +1,5 @@ --- -title: 'Today’s git tip In gitconfig url git@github…' +title: 'Gitconfig tip for github' author: rhnvrm type: post date: 2016-11-01T09:45:37+00:00 diff --git a/content/blog/2016-11-02-i-wonder-what-linus-torvalds-view-is-about.md b/content/blog/2016-11-02-i-wonder-what-linus-torvalds-view-is-about.md index 0995372..02a7a8c 100644 --- a/content/blog/2016-11-02-i-wonder-what-linus-torvalds-view-is-about.md +++ b/content/blog/2016-11-02-i-wonder-what-linus-torvalds-view-is-about.md @@ -9,6 +9,7 @@ categories: tags: - git format: link +draft: true --- I wonder what Linus Torvald’s view is about “Gitless” diff --git a/content/blog/2016-11-07-a-tip-on-using-fsck-when-you-are.md b/content/blog/2016-11-07-a-tip-on-using-fsck-when-you-are.md index e285f07..534f02b 100644 --- a/content/blog/2016-11-07-a-tip-on-using-fsck-when-you-are.md +++ b/content/blog/2016-11-07-a-tip-on-using-fsck-when-you-are.md @@ -1,5 +1,5 @@ --- -title: A tip on using fsck when you are… +title: A tip on using fsck author: rhnvrm type: post date: 2016-11-07T22:24:09+00:00 diff --git a/content/blog/2016-11-09-some-journal-publications-require-you-to-put-author.md b/content/blog/2016-11-09-some-journal-publications-require-you-to-put-author.md index 1ea056c..018a659 100644 --- a/content/blog/2016-11-09-some-journal-publications-require-you-to-put-author.md +++ b/content/blog/2016-11-09-some-journal-publications-require-you-to-put-author.md @@ -1,5 +1,5 @@ --- -title: Some journal publications require you to put author… +title: Author Biography Alongside 
Pictures in Latex author: rhnvrm type: post date: 2016-11-09T16:59:35+00:00 diff --git a/content/blog/2016-11-13-toured-seville-today-thanks-to-https-www-feelthecitytours.md b/content/blog/2016-11-13-toured-seville-today-thanks-to-https-www-feelthecitytours.md index 1668494..403847e 100644 --- a/content/blog/2016-11-13-toured-seville-today-thanks-to-https-www-feelthecitytours.md +++ b/content/blog/2016-11-13-toured-seville-today-thanks-to-https-www-feelthecitytours.md @@ -1,5 +1,5 @@ --- -title: Toured Seville today thanks to https www feelthecitytours… +title: Toured Seville today thanks to FeelTheCityTours author: rhnvrm type: post date: 2016-11-13T22:12:21+00:00 diff --git a/content/blog/2016-11-25-i-recently-corrupted-my-zsh-history-and-was.md b/content/blog/2016-11-25-i-recently-corrupted-my-zsh-history-and-was.md index f094b0a..b36e1a7 100644 --- a/content/blog/2016-11-25-i-recently-corrupted-my-zsh-history-and-was.md +++ b/content/blog/2016-11-25-i-recently-corrupted-my-zsh-history-and-was.md @@ -1,5 +1,5 @@ --- -title: I recently corrupted my zsh history and was… +title: Fixing my zsh history author: rhnvrm type: post date: 2016-11-25T18:35:39+00:00 diff --git a/content/blog/2016-11-29-octoshark-hackathon.md b/content/blog/2016-11-29-octoshark-hackathon.md index 12a9c60..bb16b6b 100644 --- a/content/blog/2016-11-29-octoshark-hackathon.md +++ b/content/blog/2016-11-29-octoshark-hackathon.md @@ -29,11 +29,11 @@ The backend server of OctoShark on receiving a `GET` request on the `/create` ### Demo Video - + ### Presentation Video - + ### Future Work diff --git a/content/blog/2016-12-12-sorting-out-my-todo-list-for-the-next.md b/content/blog/2016-12-12-sorting-out-my-todo-list-for-the-next.md index ba1f3aa..ee744de 100644 --- a/content/blog/2016-12-12-sorting-out-my-todo-list-for-the-next.md +++ b/content/blog/2016-12-12-sorting-out-my-todo-list-for-the-next.md @@ -9,6 +9,7 @@ categories: tags: - misc format: status +draft: true --- Sorting out my todo list for 
the next 3 weeks. \ No newline at end of file diff --git a/content/blog/2017-01-02-.md b/content/blog/2017-01-02-.md deleted file mode 100644 index 330da05..0000000 --- a/content/blog/2017-01-02-.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Twenty Sixteen -author: rhnvrm -type: post -date: -001-11-30T00:00:00+00:00 -draft: true -url: blog/?p=126 -categories: - - uncategorized -format: status - ---- diff --git a/content/blog/2017-01-06-.md b/content/blog/2017-01-06-.md deleted file mode 100644 index a84e49c..0000000 --- a/content/blog/2017-01-06-.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Postmortem Week 1 – 2017 -author: rhnvrm -type: post -date: -001-11-30T00:00:00+00:00 -draft: true -url: blog/?p=128 -categories: - - uncategorized -format: status - ---- diff --git a/content/blog/2017-01-07-.md b/content/blog/2017-01-07-.md deleted file mode 100644 index 930bfc4..0000000 --- a/content/blog/2017-01-07-.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Tower of Hanoi -author: rhnvrm -type: post -date: -001-11-30T00:00:00+00:00 -draft: true -url: blog/?p=130 -categories: - - uncategorized -format: status - ---- diff --git a/content/blog/2017-01-12-.md b/content/blog/2017-01-12-snu-data-limit.md similarity index 61% rename from content/blog/2017-01-12-.md rename to content/blog/2017-01-12-snu-data-limit.md index d53d36d..dcf0f70 100644 --- a/content/blog/2017-01-12-.md +++ b/content/blog/2017-01-12-snu-data-limit.md @@ -2,9 +2,8 @@ title: SNU Datalimit Chrome Extension author: rhnvrm type: post -date: -001-11-30T00:00:00+00:00 -draft: true -url: blog/?p=144 +date: 2017-07-22T23:43:41+00:00 +url: blog/2016/03/22/snu-data-limit categories: - projects tags: @@ -15,6 +14,6 @@ tags: --- [][1] -The +I developed a chrome extension to track usage. 
You can view the code on [GitHub](https://github.com/rhnvrm/snu-data-limit/) [1]: https://chrome.google.com/webstore/detail/snudatalimit/mfjinloagcpmfacpjnlabcflnkbajidd \ No newline at end of file diff --git a/content/blog/2017-02-04-i-used-to-use-the-l-flag.md b/content/blog/2017-02-04-i-used-to-use-the-l-flag.md index bac0d19..f1ef087 100644 --- a/content/blog/2017-02-04-i-used-to-use-the-l-flag.md +++ b/content/blog/2017-02-04-i-used-to-use-the-l-flag.md @@ -1,5 +1,5 @@ --- -title: I used to use the ` L` flag… +title: SOCKS Proxy author: rhnvrm type: post date: 2017-02-04T18:21:25+00:00 diff --git a/content/blog/2017-02-09-.md b/content/blog/2017-02-09-.md deleted file mode 100644 index 6c4becd..0000000 --- a/content/blog/2017-02-09-.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: I’m taking a class on Psychoanalysis of Films… -author: rhnvrm -type: post -date: -001-11-30T00:00:00+00:00 -draft: true -url: blog/?p=177 -categories: - - uncategorized -format: status - ---- -I’m taking a class on Psychoanalysis of Films. One of the tasks of the course is to make a 10 page screenplay by the end of the course. I recently read about Lacan’s interpretation of Freud’s Vorstellungsrepräsentanz. \ No newline at end of file diff --git a/content/blog/2017-02-09-vorstellungsreprasentanz.md b/content/blog/2017-02-09-vorstellungsreprasentanz.md index 26e3598..6509bc1 100644 --- a/content/blog/2017-02-09-vorstellungsreprasentanz.md +++ b/content/blog/2017-02-09-vorstellungsreprasentanz.md @@ -14,7 +14,7 @@ tags: - sociology --- -
[][1]
Las Meninas
+
Las Meninas[1]
In Lacan’s seminars, he discussed the artists Cézanne, Holbein and Velasquez. In each case, the fil rouge connecting Lacan’s thought was the idea of shifts in perspective leading to ways in which the artist had produced a work that evoked the experience of the “gaze”. In Seminar XIII, in discussing Velasquez’ Las Meninas, Lacan identifies the “picture within the picture”, which we see Velasquez working on, as the Vorstellungsrepräsentanz, the representative of the representation. Lacan very clearly distinguished representation as being on the side of signification, whereas the “representative of representation” is on the side of the signifier. In Las Meninas the “picture in the picture” is painted by Velasquez at the conjunction of two perspectives which are impossible in one space. Lacan said the “picture in the picture”, as the “representative of representation”, casts uncertainty on the other “representations” in the painting. These other “objects” take on this disturbance of perspective in a domino effect, which allows many elements of the painting to take on this “representative of the representation” effect. This destabilizing of the visual space of the painting allows for displacements and condensations of images in the painting. An endless series of questions arises about the relations between the elements in the painting. People have talked about this painting for 350 years! What grounds the artist’s ability to do this is a masterful knowledge of his craft and an appreciation of a beyond of representation. With Las Meninas, it is Velasquez’ ability to construct an impossible melding of perspectives that keeps the viewer in suspense. 
@@ -22,7 +22,7 @@ tags: **An example of this in cinema is the ending of the movie 2001: A Space Odyssey, which was written and directed by Stanley Kubrick.** - + The trip through the wormhole takes our protagonist to a particularly ambiguous environment, adorned with luxurious furnishings but maintaining a clinical, or rather detached, oddly misunderstood and superficial facsimile of luxury. Here Dave runs through his life in fast forward until he dies and is reborn in the form of the ‘Star Child’. The cuts we see here have Dave observing himself in the third person; then we switch over to the other Dave and follow him. This device is an ingenious way in which Kubrick elegantly side-steps the use of the montage technique, simultaneously progressing time without resorting to fades, whilst furthering the artificiality of the environment with a deliberate manipulation of time. diff --git a/content/blog/2017-02-14-survey-paper-on-security-in-wireless-sensor-networks.md b/content/blog/2017-02-14-survey-paper-on-security-in-wireless-sensor-networks.md index a0a70fc..38fe718 100644 --- a/content/blog/2017-02-14-survey-paper-on-security-in-wireless-sensor-networks.md +++ b/content/blog/2017-02-14-survey-paper-on-security-in-wireless-sensor-networks.md @@ -17,4 +17,4 @@ Wireless Sensor Network is an emerging area that shows great future prospects. 
T [View Fullscreen][1] - [1]: /wp-content/plugins/pdfjs-viewer-shortcode/pdfjs/web/viewer.php?file=http%3A%2F%2F13.232.63.7%2Fwp-content%2Fuploads%2F2017%2F07%2FTP_WSN2017_Group_15-1.pdf&download=true&print=true&openfile=false \ No newline at end of file + [1]: /wp-content/uploads/2017/07/TP_WSN2017_Group_15-1.pdf \ No newline at end of file diff --git a/content/blog/2017-04-20-retrofitting-led-lamps-into-smart-lamps.md b/content/blog/2017-04-20-retrofitting-led-lamps-into-smart-lamps.md index 5f66457..661c014 100644 --- a/content/blog/2017-04-20-retrofitting-led-lamps-into-smart-lamps.md +++ b/content/blog/2017-04-20-retrofitting-led-lamps-into-smart-lamps.md @@ -18,4 +18,4 @@ Objective of this project was to show as a proof of concept that we can pick up   - [1]: /wp-content/plugins/pdfjs-viewer-shortcode/pdfjs/web/viewer.php?file=http%3A%2F%2F13.232.63.7%2Fwp-content%2Fuploads%2F2017%2F07%2FWSN-Project-Report.pdf&download=true&print=true&openfile=false \ No newline at end of file + [1]: /wp-content/uploads/2017/07/WSN-Project-Report.pdf \ No newline at end of file diff --git a/content/blog/2017-05-20-.md b/content/blog/2017-05-20-.md deleted file mode 100644 index 4e9b0ea..0000000 --- a/content/blog/2017-05-20-.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "2016" -author: rhnvrm -type: post -date: -001-11-30T00:00:00+00:00 -draft: true -url: blog/?p=192 -categories: - - uncategorized -format: status - ---- -2017 has been a dull year. It has felt even more dull after the fast paced year that 2016 was. 
- -  \ No newline at end of file diff --git a/content/blog/2017-07-27-216.md b/content/blog/2017-07-27-216.md index b8f29a4..add94c5 100644 --- a/content/blog/2017-07-27-216.md +++ b/content/blog/2017-07-27-216.md @@ -1,8 +1,9 @@ --- +title: Death of Ivan Illyich author: rhnvrm type: post date: 2017-07-27T20:40:51+00:00 -url: blog/2017/07/27/216/ +url: blog/2017/07/27/death-of-ivn-illtich/ categories: - uncategorized tags: diff --git a/content/blog/2017-10-03-.md b/content/blog/2017-10-03-.md deleted file mode 100644 index 135e894..0000000 --- a/content/blog/2017-10-03-.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: I’m moving back to Firefox -author: rhnvrm -type: post -date: -001-11-30T00:00:00+00:00 -draft: true -url: blog/?p=221 -categories: - - uncategorized -format: status - ---- diff --git a/content/blog/2017-10-16-was-codification-of-odissi-successful-in-capturing-the-true-essence-of-the-dance-as-it-was-prevalent-or-even-as-it-was-performed​-​in-​​the-​​ancient.md b/content/blog/2017-10-16-was-codification-of-odissi-successful-in-capturing-the-true-essence-of-the-dance-as-it-was-prevalent-or-even-as-it-was-performed​-​in-​​the-​​ancient.md index 061d9ea..5304ce6 100644 --- a/content/blog/2017-10-16-was-codification-of-odissi-successful-in-capturing-the-true-essence-of-the-dance-as-it-was-prevalent-or-even-as-it-was-performed​-​in-​​the-​​ancient.md +++ b/content/blog/2017-10-16-was-codification-of-odissi-successful-in-capturing-the-true-essence-of-the-dance-as-it-was-prevalent-or-even-as-it-was-performed​-​in-​​the-​​ancient.md @@ -10,10 +10,13 @@ tags: - odissi --- -Was codification of Odissi successful in capturing the true essence of the dance as it was prevalent or even as it was performed​ ​in ​​the ​​ancient ​​era? -  +The transfer of knowledge required for the continued existence of any performance art requires intense and deliberate training from both the Guru and the Shishya. 
Through codification and written text, the need to rely on this tradition to study the art form decreases, but the difficulty of mastery increases due to standardization. In my study of the readings by Anita Cherian and the Odissi Renaissance, along with my understanding of linguistics and language theory, I wish to answer the question of how the codification of Odissi Dance, a performance art, has resulted in the birth of a modern classical dance form, far from what was probably performed by the ancients. I will first give a background on the situation before independence for someone unfamiliar with the scenario. I will assert here that the true essence of the dance was lost and it was only after the revival of classical forms due to nationalistic planning that modern Odissi was born. I then look upon how the institutionalization of performance art in India was necessitated by the Sangeet Natak Akademi (SNA) and India’s cultural planning, the impact it had on the local art forms of the new nation state, and how it led to the codification of Odissi, and argue that it was indeed the policy framework and patronage that pushed the Gotipuas to, in essence, codify and revive Classical Odissi according to the Natyashastra. With the emergence of this totally new dance form, differing from both ancient and actually practiced forms of the time and evolving to the present day, I discuss, through a lens of linguistics, how, like spoken languages, dynamic art performances are not truly captured by a codified grammar. +Indian Classical Dances, such as Odissi, are disseminated to students via oral tradition and usually adhere to no written syllabus other than the actual toil, blood and sweat of the disciple with the Guru. The Natyashastra, the ancient treatise on dance, music and drama, mentions the Odramagadhi style of dance. From this it is concluded that Odissi has existed within a classical framework for 2,000 years. 
The evidence was compounded by the sculptures of dance poses found in temples and archeological sites. The art was suppressed by the Islamic rule and the British rule that followed. The maharis, the temple dancers who held the knowledge of the original form, stopped practicing the dance due to this suppression. The dance form was carried on by the Gotipuas, boys aged between 9 and 14 years dressed in drag, who continued it in their own style. Hence, due to the lack of writing by the maharis, there were not many written records about the dance in the recent era. Also, the original temple dance was lost and survived only through the archeological remnants and the Gotipuas. Here, we can see that since the original Gurus of these forms were lost, there was no way to carry on the tradition, and hence the essence was indeed lost. However, with the independence of India, a new wave of cultural revival arose, along with a passion for identity amongst the Aanchalis to assert their Odissi style of music and dance amongst the Indian Diaspora. -[View Fullscreen][1] - [1]: /wp-content/plugins/pdfjs-viewer-shortcode/pdfjs/web/viewer.php?file=http%3A%2F%2F13.232.63.7%2Fwp-content%2Fuploads%2F2017%2F11%2Fdoc.pdf&download=true&print=true&openfile=false \ No newline at end of file +Institutionalization and standardization of the arts by the SNA was a huge influence and motivation behind the codification of Odissi. As mentioned by Anita Cherian, the theatre was indirectly controlled by the SNA, which was influenced by the Government and its idea of culture and cultural unification. The theatre is where the middle class went, and for any performer to showcase their art, it was clear they would have to conform to the SNA’s ideals. Not only this, but the awards and scholarships were also directly controlled by the SNA. 
A clear example of this is in fact Odissi, which was not accepted as classical enough until it was reformed with the Natyashastra and the Abinaya Darpana by Guru Mayadhar Raut, as mentioned in the Odissi Renaissance. The SNA had replaced the patronage of the royalty of India, and there was no way to not be in tandem with the SNA if you were an artiste in India. For example, it was mentioned in the lectures that there was no evidence in the ancient era for the silver jewellery now ubiquitously associated with Odissi. But, upon delving further, we find that one of the biggest patrons of Odissi was a silversmith. Later, the SNA took up this role of being the nourisher of the arts. Dhirendranath Patnaik comments that the state of Odissi was poor, with underdeveloped music, costumes and repertory. It is therefore clear why the Jayantika Association, composed of practicing gurus, dancers and scholars, got together to rebuild the repertory of the form. The combined form, drawn from various practiced styles, was incorporated into the mutually agreed Jayantika Association codes, styles and repertoire, influenced by the sculpturesque poses along with the mudras. Only after this, and a few highly praised performances by the troupes, did the Central Sangeet Natak Akademi accept Odissi dance as a classical school of dance. +The premise that the classical arts are not living arts has led to individual performance pieces or choreographies that do not follow a certain style being categorized as not Odissi enough. For example, the Ramli Ibrahim style of Odissi is often remarked to be so. People who study modern linguistics consider spoken language to be the true form of language. Spoken language is the primary language, while written language is an imperfect reflection of spoken language, conveyed through an imperfect technology, namely writing. Spoken language comes naturally to all normal human children. 
Similar to language, dance is also an expression of the human mind. Normal human children naturally develop rhythm and can perform basic movements to rhythms at an early age. There have been numerous cases where spoken languages without a script have been codified, such as Korean. A case familiar to us Indians is “Indian English”, which is not yet codified or even accepted to exist, although, according to linguistics, it is a real phenomenon. Here, the written language fails to capture the dynamic and changing spoken language. Similar to grammar for languages, Odissi has been codified and is composed of motifs, movements and abhinaya. The codified structure serves as the grammar and helps the Gurus make their choreographies. The sculpturesque poses, which are considered an essential quality of modern Odissi, were nowhere to be seen in the dances of the Gotipuas or the Maharis. The evidence for the dynamic nature of Odissi, or of any other dance form, is in the flourishing Odissi Paddhatis (or Gharanas). This system allows disciples to build upon the work of their Guru and add to the style of their Gharana while remaining within its style and formalities. We can see that four flavors of Odissi prevail in modern times. All of them vary slightly from what the Jayantika Association codified earlier; in actuality, Odissi exists as a dynamic art.
+
+
+As discussed above, I find that the codification of Odissi has served the purpose for which it was done. Still, the arguments presented above suffice to assert that classical Odissi is far from how it was practiced in the ancient era and, moreover, did not even capture how it was performed at the time it was being codified.
The institutionalization of the arts in India played a major role, and its influence was enhanced by the fact that the state was the only major patron, providing the stage, the awards and the recognition for talent. One can easily study and find that performance art is clearly dynamic, not static, and is uniquely tied to the socio-political scenario at any given moment in time. Therefore, although the Odissi we know today is unlike what was practised years ago, we find that the codification has helped in the adequate preservation and revival of the form. diff --git a/content/blog/2017-11-30-emotive-adsense-project.md b/content/blog/2017-11-30-emotive-adsense-project.md index fb4fcf2..dfa80ca 100644 --- a/content/blog/2017-11-30-emotive-adsense-project.md +++ b/content/blog/2017-11-30-emotive-adsense-project.md @@ -8,13 +8,12 @@ categories: - projects --- - ## Objective Use Facial Expressions to find segments of the video where engagement is above a threshold and display advertisements during those segments. -  + ## Domain Background diff --git a/content/blog/2017-12-19-what-thefuck-is-wrong-with.md b/content/blog/2017-12-19-what-thefuck-is-wrong-with.md index 68127a9..8b65b44 100644 --- a/content/blog/2017-12-19-what-thefuck-is-wrong-with.md +++ b/content/blog/2017-12-19-what-thefuck-is-wrong-with.md @@ -9,6 +9,7 @@ client-modified: categories: - uncategorized format: aside +draft: true --- **What thefuck is wrong with my zsh?** diff --git a/content/blog/2017-12-20-setting-up-latex-on-spacemacs.md b/content/blog/2017-12-20-setting-up-latex-on-spacemacs.md index b09a6cf..0f313db 100644 --- a/content/blog/2017-12-20-setting-up-latex-on-spacemacs.md +++ b/content/blog/2017-12-20-setting-up-latex-on-spacemacs.md @@ -25,9 +25,11 @@ Then, all that needs to be done is press, `SPC-m-b` to build and `SPC-m-v` to vi Although, by default, Emacs will open it in your default PDF viewer.
Emacs also provides another layer, `pdf-tools`, briefly mentioned above, which allows rendering PDF files inside Emacs itself. After adding this layer, you can add the following to your config file to set PDF Tools as your default PDF viewer inside Emacs.
-
(setq TeX-view-program-selection '((output-pdf "PDF Tools"))
+```lisp
+(setq TeX-view-program-selection '((output-pdf "PDF Tools"))
   TeX-view-program-list '(("PDF Tools" TeX-pdf-tools-sync-view))
   TeX-source-correlate-start-server t
-)
+) +``` Similarly, we can also setup syncing between TeX and the PDF which I will cover sometime later when the need arises. \ No newline at end of file diff --git a/content/blog/2017-12-21-deep-learning-through-the-lens-of-the-information-plane.md b/content/blog/2017-12-21-deep-learning-through-the-lens-of-the-information-plane.md index b53d20a..54e5f77 100644 --- a/content/blog/2017-12-21-deep-learning-through-the-lens-of-the-information-plane.md +++ b/content/blog/2017-12-21-deep-learning-through-the-lens-of-the-information-plane.md @@ -37,34 +37,34 @@ A Markov process is a “memory-less” (also called “Markov Property ### **2.2 KL Divergence** -KL divergence measures how one probability distribution {p}diverges from a second expected probability distribution {q}. It is asymmetric. [5] +KL divergence measures how one probability distribution {p}diverges from a second expected probability distribution {q}. It is asymmetric. [5] -D_{KL}(p \| q) = \sum_x p(x) \log \frac{p(x)}{q(x)} dx  = - \sum_x p(x)\log q(x) + \sum_x p(x)\log p(x)  = H(P, Q) - H(P)  +D_{KL}(p \| q) = \sum_x p(x) \log \frac{p(x)}{q(x)} dx  = - \sum_x p(x)\log q(x) + \sum_x p(x)\log p(x)  = H(P, Q) - H(P)  - {D_{KL}}achieves the minimum zero when {p(x) == q(x)}everywhere. + {D_{KL}}achieves the minimum zero when {p(x) == q(x)}everywhere. ### **2.3 Mutual Information** Mutual information measures the mutual dependence between two variables. It quantifies the “amount of information” obtained about one random variable through the other random variable. Mutual information is symmetric. 
[5] -I(X;Y) = D_{KL}\left[~p(x,y) ~\|~ p(x)p(y)~\right]  = \sum_{x \in X, y \in Y} p(x, y) \log\left(\frac{p(x, y)}{p(x)p(y)}\right)  = \sum_{x \in X, y \in Y} p(x, y) \log\left(\frac{p(x|y)}{p(x)}\right)  = H(X) - H(X|Y)  +I(X;Y) = D_{KL}\left[~p(x,y) ~\|~ p(x)p(y)~\right]  = \sum_{x \in X, y \in Y} p(x, y) \log\left(\frac{p(x, y)}{p(x)p(y)}\right)  = \sum_{x \in X, y \in Y} p(x, y) \log\left(\frac{p(x|y)}{p(x)}\right)  = H(X) - H(X|Y)  

### **2.4 Data Processing Inequality**

-For any markov chain: {X \rightarrow Y \rightarrow Z}, we would have [5] +For any Markov chain {X \rightarrow Y \rightarrow Z}, we have [5]

- \displaystyle I(X; Y) \geq I(X; Z) \ \ \ \ \ (1) + \displaystyle I(X; Y) \geq I(X; Z) \ \ \ \ \ (1)
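The inequality in (1) is easy to check numerically. The following is a toy sketch of my own (not part of the original post); the channel probabilities are invented purely for illustration: we build the joint distributions of a small discrete chain {X \rightarrow Y \rightarrow Z} and verify that further processing cannot add information about {X}.

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """I(A;B) in bits, from a dict {(a, b): probability}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Toy chain X -> Y -> Z: Y is a noisy reading of X, Z a further noisy
# processing of Y.  All numbers below are made up for illustration.
p_x = {0: 0.5, 1: 0.5}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_z_given_y = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.3, 1: 0.7}}

joint_xy = {(x, y): p_x[x] * p for x, row in p_y_given_x.items()
            for y, p in row.items()}
joint_xz = defaultdict(float)
for (x, y), p in joint_xy.items():
    # Markov property: Z depends on X only through Y.
    for z, q in p_z_given_y[y].items():
        joint_xz[(x, z)] += p * q

i_xy = mutual_information(joint_xy)
i_xz = mutual_information(dict(joint_xz))
assert i_xy >= i_xz  # Eq. (1): processing Y into Z cannot add information
print(f"I(X;Y) = {i_xy:.4f} bits, I(X;Z) = {i_xz:.4f} bits")
```

Composing the two noisy channels makes Z a strictly blurrier view of X than Y is, so the gap in (1) is strict here.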

A deep neural network can be viewed as a Markov chain, and thus, as we move down the layers of a DNN, the mutual information between a layer and the input can only decrease.

### **2.5 Reparameterization Invariance**

-For two invertible functions {\phi}, {\psi}, the mutual information still holds: +For two invertible functions {\phi} and {\psi}, the mutual information is preserved:

- \displaystyle I(X; Y) = I(\phi(X); \psi(Y)) \ \ \ \ \ (2) + \displaystyle I(X; Y) = I(\phi(X); \psi(Y)) \ \ \ \ \ (2)
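To make (2) concrete, here is a small sketch of my own (the samples and the relabelling maps are invented for illustration): the empirical mutual information, computed from counts via {I = H(X) + H(Y) - H(X,Y)}, is unchanged when both variables are pushed through invertible maps.

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical entropy in bits of a list of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def mi(xs, ys):
    """Empirical I(X;Y) via I = H(X) + H(Y) - H(X,Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

xs = [0, 0, 1, 1, 2, 2, 0, 1]   # invented samples of X
ys = [0, 1, 1, 1, 2, 0, 0, 2]   # invented samples of Y

phi = {0: "a", 1: "b", 2: "c"}  # an invertible relabelling of X
psi = {0: 10, 1: 20, 2: 30}     # an invertible relabelling of Y

before = mi(xs, ys)
after = mi([phi[x] for x in xs], [psi[y] for y in ys])
assert math.isclose(before, after)  # Eq. (2): I(X;Y) = I(phi(X); psi(Y))
```

A bijection only permutes the outcome labels, so every count (and hence every entropy term) is untouched.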

  @@ -73,30 +73,30 @@ For example, if we shuffle the weights in one layer of DNN, it would not affect ### **2.6 The Asymptotic Equipartition Property** -This theorem is a simple consequence of the weak law of large numbers. It states that if a set of values {X_1, X_2, ..., X_n}is drawn independently from a random variable X distributed according to {P(x)}, then the joint probability {P(X_1,...,X_n)}satisfies [5] +This theorem is a simple consequence of the weak law of large numbers. It states that if a set of values {X_1, X_2, ..., X_n}is drawn independently from a random variable X distributed according to {P(x)}, then the joint probability {P(X_1,...,X_n)}satisfies [5]

- \displaystyle \frac{-1}{n} \log_{2}{P(X_1,X_2,...,X_n)} \rightarrow H(X) \ \ \ \ \ (3) + \displaystyle \frac{-1}{n} \log_{2}{P(X_1,X_2,...,X_n)} \rightarrow H(X) \ \ \ \ \ (3)

-where {H(X)}is the entropy of the random variable {X}. +where {H(X)} is the entropy of the random variable {X}.

-Although, this is out of bounds of the scope of this work, for the sake of completeness I would like to mention how the authors of [2] use this to argue that for a typical hypothesis class the size of {X}is approximately {2^{H(X)}}. Considering an {\epsilon}-partition, {T_\epsilon}, on {X}, the cardinality of the hypothis class, {|H_\epsilon|}, can be written as {|H_\epsilon| \sim 2^{|X|} \rightarrow 2^{|T_\epsilon|}}and therefore we have, +Although this is outside the scope of this work, for the sake of completeness I would like to mention how the authors of [2] use this to argue that for a typical hypothesis class the size of {X} is approximately {2^{H(X)}}. Considering an {\epsilon}-partition, {T_\epsilon}, on {X}, the cardinality of the hypothesis class, {|H_\epsilon|}, can be written as {|H_\epsilon| \sim 2^{|X|} \rightarrow 2^{|T_\epsilon|}}, and therefore we have,

- \displaystyle \vert T_\epsilon \vert \sim \frac{2^{H(X)}}{2^{H(X \vert T_\epsilon)}} = 2^{I(T_\epsilon; X)} \ \ \ \ \ (4) + \displaystyle \vert T_\epsilon \vert \sim \frac{2^{H(X)}}{2^{H(X \vert T_\epsilon)}} = 2^{I(T_\epsilon; X)} \ \ \ \ \ (4)

Then the input compression bound,

- \displaystyle \epsilon^2 < \frac{\log|H_\epsilon| + \log{1/\delta}}{2m} \ \ \ \ \ (5) + \displaystyle \epsilon^2 < \frac{\log|H_\epsilon| + \log{1/\delta}}{2m} \ \ \ \ \ (5)

becomes,

- \displaystyle \epsilon^2 < \frac{2^{I(T_\epsilon; X)} + \log{1/\delta}}{2m} \ \ \ \ \ (6) + \displaystyle \epsilon^2 < \frac{2^{I(T_\epsilon; X)} + \log{1/\delta}}{2m} \ \ \ \ \ (6)

The authors then further develop this to provide a general bound on learning by combining it with the Information Bottleneck theory [6]. @@ -105,33 +105,46 @@ The authors then further develop this to provide a general bound on learning by ### **3.1 DNN Layers as Markov Chain** -In supervised learning, the training data contains sampled observations from the joint distribution of {X}and {Y}. The input variable {X}and weights of hidden layers are all high-dimensional random variable. The ground truth target {Y}and the predicted value {\hat{Y}}are random variables of smaller dimensions in the classification settings. Moreover, we want to efficiently learn such representations from an empirical sample of the (unknown) joint distribution {P(X,Y)}, in a way that provides good generalization. +In supervised learning, the training data contains sampled observations from the joint distribution of {X} and {Y}. The input variable {X} and the weights of the hidden layers are all high-dimensional random variables. The ground-truth target {Y} and the predicted value {\hat{Y}} are random variables of smaller dimension in the classification setting. Moreover, we want to efficiently learn such representations from an empirical sample of the (unknown) joint distribution {P(X,Y)}, in a way that provides good generalization.
-
The structure of a deep neural network, which consists of the target label {Y}, input layer {X}, hidden layers {h_1,\dots,h_m}and the final prediction {\hat{Y}}. (Image Source: Tishby 2015)[3]
If we label the hidden layers of a DNN as {h_1,h_2,...,h_m}as in Figure above, we can view each layer as one state of a Markov Chain: {h_i \rightarrow h_{i+1}}. According to DPI, we would have: +
The structure of a deep neural network, which consists of the target label {Y}, input layer {X}, hidden layers {h_1,\dots,h_m}and the final prediction {\hat{Y}}. (Image Source: Tishby 2015)[3]
-H(X) \geq I(X; h_1) \geq I(X; h_2) \geq ... \geq I(X; h_m) \geq I(X; \hat{Y})  I(X; Y) \geq I(h_1; Y) \geq I(h_2; Y) \geq ... \geq I(h_m; Y) \geq I(\hat{Y}; Y)  -A DNN is designed to learn how to describe {X}to predict {Y}and eventually, to compress {X}to only hold the information related to {Y}. Tishby describes this processing as “successive refinement of relevant information” [3]. +If we label the hidden layers of a DNN as {h_1,h_2,...,h_m}, as in the figure above, we can view each layer as one state of a Markov chain: {h_i \rightarrow h_{i+1}}.
-
The DNN layers form a Markov chain of successive internal representations of the input layer {X}. (Image Source: Schwartz-Ziv and Tishby 2017 [2])
As long as these transformations on {X}in {Y}about {\hat{Y}}preserve information, we don’t really care which individual neurons within the layers encode which features of the input. This can be captured by finding the mutual information of {T}with respect to {X}and {\hat{Y}}. Schwartz-Ziv and Tishby (2017) treat the whole layer, {T}, as a single random variable, charachterized by {P(T|X)}and {P(Y|T)}, the encoder and decoder distributions respectively, and use the Reparameterization Invariance given in [(2)][1] to argue that since layers related by invertible re-parameterization appear in the same point, each information path in the plane corresponds to many different DNN’s, with possibly very different architectures. [3] +According to DPI, we would have: -I(X; Y) \geq I(T_1; Y) \geq I(T_2; Y) \geq ... \geq I(T_k; Y) \geq I(\hat{Y}; Y)  H(X) \geq I(X; T_1) \geq I(X; T_2) \geq ... \geq I(X; T_k) \geq I(X; \hat{Y})  +H(X) \geq I(X; h_1) \geq I(X; h_2) \geq ... \geq I(X; h_m) \geq I(X; \hat{Y})  I(X; Y) \geq I(h_1; Y) \geq I(h_2; Y) \geq ... \geq I(h_m; Y) \geq I(\hat{Y}; Y)  -This is to say that after training, when the trained network, the new input passes through the layers which form a Markov Chain, to the predicted output {\hat{Y}}. The information plane has been discussed further in Section [3][2]. +A DNN is designed to learn how to describe {X}to predict {Y}and eventually, to compress {X}to only hold the information related to {Y}. Tishby describes this processing as “successive refinement of relevant information” [3]. + +
The DNN layers form a Markov chain of successive internal representations of the input layer {X}. (Image Source: Schwartz-Ziv and Tishby 2017 [2])
+
+
+As long as these transformations preserve the information in {X} about {Y}, we don’t really care which individual neurons within the layers encode which features of the input. This can be captured by finding the mutual information of {T} with respect to {X} and {\hat{Y}}. Shwartz-Ziv and Tishby (2017) treat the whole layer, {T}, as a single random variable, characterized by {P(T|X)} and {P(Y|T)}, the encoder and decoder distributions respectively, and use the Reparameterization Invariance given in [(2)][1] to argue that, since layers related by an invertible re-parameterization appear at the same point, each information path in the plane corresponds to many different DNNs, with possibly very different architectures. [3]
+
+I(X; Y) \geq I(T_1; Y) \geq I(T_2; Y) \geq ... \geq I(T_k; Y) \geq I(\hat{Y}; Y)  H(X) \geq I(X; T_1) \geq I(X; T_2) \geq ... \geq I(X; T_k) \geq I(X; \hat{Y})  
+
+This is to say that, after training, a new input passes through the layers of the trained network, which form a Markov chain, to the predicted output {\hat{Y}}. The information plane is discussed further in Section [3][2].

### **3.2 The Information Plane**

-Using the representation in Fig. [3][3], the encoder and decoder distributions; the encoder can be seen as a representation of {X}, while the decoder translates the information in the current layer to the target output {Y}. +Using the representation in Fig. [3][3] of the encoder and decoder distributions: the encoder can be seen as a representation of {X}, while the decoder translates the information in the current layer to the target output {Y}.

+
+The information can be interpreted and visualized as a plot between the encoder mutual information {I(X;T_{i})} and the decoder mutual information {I(T_{i};Y)}:

-The information can be interpreted and visualized as a plot between the encoder mutual information {I(X;T_{i})}and the decoder mutual information {I(T_{i};Y)}; 
+
The encoder vs decoder mutual information of DNN hidden layers of 50 experiments. Different layers are color-coded, with green being the layer right next to the input and the orange being the furthest. There are three snapshots, at the initial epoch, 400 epochs and 9000 epochs respectively. (Image source: Shwartz-Ziv and Tishby, 2017) [2])
-
The encoder vs decoder mutual information of DNN hidden layers of 50 experiments. Different layers are color-coded, with green being the layer right next to the input and the orange being the furthest. There are three snapshots, at the initial epoch, 400 epochs and 9000 epochs respectively. (Image source: Shwartz-Ziv and Tishby, 2017) [2])
Each dot in Fig. [3][4]. marks the encoder/ decoder mutual information of one hidden layer of one network simulation (no regularization is applied; no weights decay, no dropout, etc.). They move up as expected because the knowledge about the true labels is increasing (accuracy increases). At the early stage, the hidden layers learn a lot about the input X, but later they start to compress to forget some information about the input. Tishby believes that “the most important part of learning is actually forgetting”. [7] + +Each dot in Fig. [3][4]. marks the encoder/ decoder mutual information of one hidden layer of one network simulation (no regularization is applied; no weights decay, no dropout, etc.). They move up as expected because the knowledge about the true labels is increasing (accuracy increases). At the early stage, the hidden layers learn a lot about the input X, but later they start to compress to forget some information about the input. Tishby believes that “the most important part of learning is actually forgetting”. [7] Early on the points shoot up and to the right, as the hidden layers learn to retain more mutual information both with the input and also as needed to predict the output. But after a while, a phase shift occurs, and points move more slowly up and to the left. -
The evolution of the layers with the training epochs in the information plane, for different training samples. On the left – 5% of the data, middle – 45% of the data, and right – 85% of the data. The colors indicate the number of training epochs with Stochastic Gradient Descent. (Image source: Shwartz-Ziv and Tishby, 2017) [2])
Schwartz-Ziv and Tishby name these two phases Empirical eRror Minimization (ERM) and the phase that follows as the Representation Compression Phase. Here the gradient means are much larger than their standard deviations, indicating small gradient stochasticity (high SNR). The increase in {I_Y}is what we expect to see from cross-entropy loss minimization. The second diffusion phase minimizes the mutual information {I(X;T_i)}– in other words, we’re discarding information in X that is irrelevant to the task at hand. +
The evolution of the layers with the training epochs in the information plane, for different training samples. On the left – 5% of the data, middle – 45% of the data, and right – 85% of the data. The colors indicate the number of training epochs with Stochastic Gradient Descent. (Image source: Shwartz-Ziv and Tishby, 2017) [2])
+
+Shwartz-Ziv and Tishby name these two phases the Empirical Error Minimization (ERM) phase and the Representation Compression phase. In the ERM phase, the gradient means are much larger than their standard deviations, indicating small gradient stochasticity (high SNR). The increase in {I_Y} is what we expect to see from cross-entropy loss minimization. The second, diffusion-like phase minimizes the mutual information {I(X;T_i)}; in other words, we’re discarding information in {X} that is irrelevant to the task at hand.

A consequence of this is pointed out by Schwartz-Ziv and Tishby indicating that there is a huge number of different networks with essentially optimal performance, and attempts to interpret single weights or even single neurons in such networks can be meaningless due to the randomised nature of the final weights of the DNN. [2]

@@ -145,54 +158,63 @@ Variations were made to the activation function to Rectified Linear Unit (ReLu) ### **4.2. Results**
-
Loss Function observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with tanh as activation function. The X-Axis represents training losses and the Y-Axis represents steps
Information Plane observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with tanh as activation function. The X-Axis represents {I(X;T)}and the Y-Axis represents {I(T;Y)}
+
Loss Function observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with tanh as activation function. The X-Axis represents training losses and the Y-Axis represents steps
Information Plane observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with tanh as activation function. The X-Axis represents {I(X;T)}and the Y-Axis represents {I(T;Y)}
The results were plotted using the experimental setup and tanh as the activation function. It is important to note that it’s the lowest layer which appears in the top-right of this plot (maintains the most mutual information), and the top-most layer which appears in the bottom-left (has retained almost no mutual information before any training). So the information path being followed goes from the top-right corner to the bottom-left traveling down the slope. Early on the points shoot up and to the right, as the hidden layers learn to retain more mutual information both with the input and also as needed to predict the output. But after a while, a phase shift occurs, and points move more slowly up and to the left.
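For readers wondering how such information-plane points are obtained at all: the usual approach (whose binning choices are what the analysis in §4.3 refers to) is to discretize each hidden activation into a fixed number of bins and compute a plug-in mutual-information estimate from empirical counts. The following is a toy sketch of my own, with an invented 12-bit input and a fake three-unit tanh "layer"; it is not the experiment's actual code.

```python
import math
import random
from collections import Counter

def bin_activations(t, n_bins=30, lo=-1.0, hi=1.0):
    """Discretize a vector of tanh activations into bin indices."""
    width = (hi - lo) / n_bins
    return tuple(min(int((v - lo) / width), n_bins - 1) for v in t)

def mi(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

random.seed(0)
# Invented data: 12-bit binary inputs X and a fake 3-unit tanh "layer" T(X).
inputs = [tuple(random.randint(0, 1) for _ in range(12)) for _ in range(200)]
layer = [tuple(math.tanh(sum(x) / 6.0 - 1.0 + random.gauss(0, 0.05))
               for _ in range(3)) for x in inputs]
binned = [bin_activations(t) for t in layer]

print(f"binned-MI estimate of I(X;T): {mi(inputs, binned):.3f} bits")
```

Repeating this for every layer at every epoch yields one {(I(X;T_i), I(T_i;Y))} point per layer per snapshot; as §4.3 notes, the bin count and binning strategy can change the picture, especially for unbounded activations like ReLU.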
-
Loss Function observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with ReLu as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
Information Plane observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with ReLu as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
Information Plane observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with Sigmoid as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
Loss Function observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with Sigmoid as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
+
Loss Function observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with ReLu as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
Information Plane observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with ReLu as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
Information Plane observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with Sigmoid as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
Loss Function observed with a network having layers of 12-10-7-5-4-3-2 widths when trained with Sigmoid as activation function. The X-Axis on the left represents training losses and the Y-Axis represents steps. The X-Axis represents for the figure on the right {I(X;T)}and the Y-Axis represents {I(T;Y)}
### **4.3. Analysis**

-The results of using the hyperbolic tan function (tanh) as the choice for activation function corresponds with results obtained by Schwartz-Ziv and Tishby (2017) [2]. Although, the same can’t be said about the results obtained when ReLu or Sigmoid function was used as the activation function. The network seems to stabilize much faster when trained with ReLu but does not show any of the charachteristics mentioned by Schwartz-Ziv and Tishby (2017) such as compression and diffusion in the information plane. This is in line with [4], although the authors have commented in the open review [4] that they have used other strategies for binning during MI calculation which give correct results. The compression and diffusion phases can be clearly seen in Fig. [4][5]. The corresponding plot of the loss function also shows that the DNN actually learned the input variable {X}with respect to the ground truth {Y}. +The results of using the hyperbolic tangent (tanh) as the activation function correspond with the results obtained by Shwartz-Ziv and Tishby (2017) [2]. However, the same can’t be said about the results obtained when ReLU or the sigmoid function was used as the activation function. The network seems to stabilize much faster when trained with ReLU, but does not show any of the characteristics mentioned by Shwartz-Ziv and Tishby (2017), such as compression and diffusion in the information plane. This is in line with [4], although those authors have commented in the open review [4] that other binning strategies during MI calculation give the expected results. The compression and diffusion phases can be clearly seen in Fig. [4][5]. The corresponding plot of the loss function also shows that the DNN actually learned to predict the ground truth {Y} from the input variable {X}.

## References

-[1] Y. LeCun, Y. Bengio, and G. E. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. [Online]. 
Available: http://sci-hub.tw/10.1038/nature14539 +1. Y. LeCun, Y. Bengio, and G. E. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. [Online]. Available: http://sci-hub.tw/10.1038/nature14539

-[2] R. Shwartz-Ziv and N. Tishby, “Opening the black box of deep neural networks via information,” CoRR, vol. abs/1703.00810, 2017. [Online]. Available: http://arxiv.org/abs/1703.00810 +2. R. Shwartz-Ziv and N. Tishby, “Opening the black box of deep neural networks via information,” CoRR, vol. abs/1703.00810, 2017. [Online]. Available: http://arxiv.org/abs/1703.00810

-[3] N. Tishby and N. Zaslavsky, “Deep learning and the information bottleneck principle,” CoRR, vol. abs/1503.02406, 2015. [Online]. Available: http://arxiv.org/abs/1503.02406 +3. N. Tishby and N. Zaslavsky, “Deep learning and the information bottleneck principle,” CoRR, vol. abs/1503.02406, 2015. [Online]. Available: http://arxiv.org/abs/1503.02406

-[4] Anonymous, “On the information bottleneck theory of deep learning,” International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=ry WPG-A- +4. Anonymous, “On the information bottleneck theory of deep learning,” International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=ry_WPG-A-

-[5] T. M. Cover and J. A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, 2006. +5. T. M. Cover and J. A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, 2006.

-[6] N. Tishby, F. C. N. Pereira, and W. Bialek, “The information bottleneck method,” CoRR, vol. physics/0004057, 2000. [Online]. Available: http://arxiv.org/abs/physics/0004057 +6. N. Tishby, F. C. N. Pereira, and W. Bialek, “The information bottleneck method,” CoRR, vol. physics/0004057, 2000. [Online]. 
Available: http://arxiv.org/abs/physics/0004057

-[7] L.Weng. Anatomize deep learning with informa-tion theory. [Online]. Available: https://lilianweng.github.io/lillog/2017/09/28/anatomize-deep-learning-with-information-theory.html +7. L. Weng. Anatomize deep learning with information theory. [Online]. Available: https://lilianweng.github.io/lil-log/2017/09/28/anatomize-deep-learning-with-information-theory.html

-[8] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. [Online]. Available: https://www.tensorflow.org/ +8. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. [Online]. Available: https://www.tensorflow.org/

-[9] E. Jones, T. Oliphant, P. Peterson et al., “SciPy: Open source scientific tools for Python,” 2001–, [Online; accessed ¡today¿]. [Online]. Available: http://www.scipy.org/ +9. E. Jones, T. Oliphant, P. Peterson et al., “SciPy: Open source scientific tools for Python,” 2001–. [Online]. 
Available: http://www.scipy.org/

-[10] S. Prabh. Prof. shashi prabh homepage. [Online]. Available: https://sites.google.com/a/snu.edu.in/shashi-prabh/home +10. S. Prabh. Prof. Shashi Prabh homepage. [Online]. Available: https://sites.google.com/a/snu.edu.in/shashi-prabh/home

-[11] N. Wolchover. New theory cracks open the black box of deep learning — quanta magazine. Quanta Magazine. [On-line]. Available: https://www.quantamagazine.org/new-theory-cracks- +11. N. Wolchover. “New theory cracks open the black box of deep learning,” Quanta Magazine. [Online]. Available: https://www.quantamagazine.org/new-theory-cracks- open-the-black-box-of-deep-learning-20170921/

-[12] Machine learning subreddit. [Online]. Available: https://www.reddit.com/r/MachineLearning/ +12. Machine learning subreddit. [Online]. Available: https://www.reddit.com/r/MachineLearning/

-This work has been undertaken in the Course Project component for the elective titled “Information Theory (Fall 2017)” [https://sites.google.com/a/snu.edu.in/shashi-prabh/teaching/information-theory-2017] at Shiv Nadar University under the guidance of Prof. Shashi Prabh +This work has been undertaken in the Course Project component for the elective titled “Information Theory (Fall 2017)” [https://sites.google.com/a/snu.edu.in/shashi-prabh/teaching/information-theory-2017] at Shiv Nadar University under the guidance of Prof. 
Shashi Prabh     - [1]: #RepInv - [2]: #ssecIP - [3]: #encdec - [4]: #infoplane - [5]: #FigTanhIP \ No newline at end of file + [1]: #1 + [2]: #2 + [3]: #3 + [4]: #4 + [5]: #5 + [6]: #6 + [7]: #7 + [8]: #8 + [9]: #9 + [10]: #10 + [11]: #11 + [12]: #12 + + \ No newline at end of file diff --git a/content/blog/2017-12-21-setting-up-python-on-spacemacs-and-using-pyenv-to-use-python3.md b/content/blog/2017-12-21-setting-up-python-on-spacemacs-and-using-pyenv-to-use-python3.md index f2f1f17..fdb060d 100644 --- a/content/blog/2017-12-21-setting-up-python-on-spacemacs-and-using-pyenv-to-use-python3.md +++ b/content/blog/2017-12-21-setting-up-python-on-spacemacs-and-using-pyenv-to-use-python3.md @@ -20,9 +20,10 @@ Getting started with the setup along with the basic packages etc. was easy by ju The problem I mentioned above about the python versions came into my view when I ran a simple print function, which gave an error as I did not have any shebang on top of the file. This made me realize a potential problem in the future as Python development heavily depends upon virtual environments. Thankfully, the python layer had already added pyvenv and pyenv. Although, pyenv only listed one `system` version, and that too it was of python2. So to solve this, I ran the following: -
pyenv virtualenv -p /usr/bin/python2 venv2
+```bash
+pyenv virtualenv -p /usr/bin/python2 venv2
 pyenv virtualenv -p /usr/bin/python3 venv3
-
+```   diff --git a/content/blog/2018-02-23-featured-on-googles-instagram-instagram.md b/content/blog/2018-02-23-featured-on-googles-instagram-instagram.md index 37e022f..53d9596 100644 --- a/content/blog/2018-02-23-featured-on-googles-instagram-instagram.md +++ b/content/blog/2018-02-23-featured-on-googles-instagram-instagram.md @@ -13,81 +13,8 @@ categories: format: aside --- -Featured on Google’s Instagram - - +Featured on Google’s Instagram Page. -
- -
\ No newline at end of file +
+ + \ No newline at end of file diff --git a/content/blog/2018-03-18-extract-filenames-without-their-extensions.md b/content/blog/2018-03-18-extract-filenames-without-their-extensions.md index ca9188f..877db58 100644 --- a/content/blog/2018-03-18-extract-filenames-without-their-extensions.md +++ b/content/blog/2018-03-18-extract-filenames-without-their-extensions.md @@ -16,4 +16,4 @@ format: aside --- Extract filenames without their extensions and put it in the clipboard -
ls -C | awk -F"." '{print $1}' | xclip -selection c
\ No newline at end of file +```ls -C | awk -F"." '{print $1}' | xclip -selection c``` \ No newline at end of file diff --git a/content/blog/2018-05-11-genie-the-voice-enabled-coding-companion-winner-dell-intern-hackathon.md b/content/blog/2018-05-11-genie-the-voice-enabled-coding-companion-winner-dell-intern-hackathon.md index b715f16..6eb6999 100644 --- a/content/blog/2018-05-11-genie-the-voice-enabled-coding-companion-winner-dell-intern-hackathon.md +++ b/content/blog/2018-05-11-genie-the-voice-enabled-coding-companion-winner-dell-intern-hackathon.md @@ -13,6 +13,6 @@ tags: - python --- - + Genie is a Voice Assistant made up of three agents who talk to you and help you automate software engineering tasks. Watch the video to understand what it can do for you. \ No newline at end of file diff --git a/content/blog/2018-06-07-emacs-starts-a-bit-slow.md b/content/blog/2018-06-07-emacs-starts-a-bit-slow.md index 6f3bc08..59c773f 100644 --- a/content/blog/2018-06-07-emacs-starts-a-bit-slow.md +++ b/content/blog/2018-06-07-emacs-starts-a-bit-slow.md @@ -13,4 +13,4 @@ format: aside --- Emacs starts a bit slow but it can be started as a daemon -
emacsclient -c -n -e '(switch-to-buffer nil)'
\ No newline at end of file +`emacsclient -c -n -e '(switch-to-buffer nil)'` \ No newline at end of file diff --git a/content/blog/2018-07-30-functional-options-for-testing-without-mocks-in-golang.md b/content/blog/2018-07-30-functional-options-for-testing-without-mocks-in-golang.md index 47e076d..92ed0c6 100644 --- a/content/blog/2018-07-30-functional-options-for-testing-without-mocks-in-golang.md +++ b/content/blog/2018-07-30-functional-options-for-testing-without-mocks-in-golang.md @@ -18,39 +18,39 @@ Usually, structs are created with Option structs which hold parameters which are Another way is to use Functional Options, for example -

-    type Server struct {
-    	logger *logrus.Logger // optional
-    	store databaste.Store // required
-    }
-    
-    type ServerOption func(Server) Server
-    
-    func WithLogger(logger *logrus.Logger) ServerOption {
-    	return func(s Server) Server {
-    		s.logger = logger
-    		return s
-    	}
-    }
-    
-    func NewServer(store database.Store, options ...ServerOption) *Server {
-    	s := Server{store: store}
-    	for _, option := range options {
-    		s = option(s)
-    	}
-    	return &s
-    }
-    
-    func main() {
-    	myServer := NewServer(myStore, WithLogger(myLogger))
-    }
-
+```go
+type Server struct {
+	logger *logrus.Logger // optional
+	store  database.Store // required
+}
+
+type ServerOption func(Server) Server
+
+func WithLogger(logger *logrus.Logger) ServerOption {
+	return func(s Server) Server {
+		s.logger = logger
+		return s
+	}
+}
+
+func NewServer(store database.Store, options ...ServerOption) *Server {
+	s := Server{store: store}
+	for _, option := range options {
+		s = option(s)
+	}
+	return &s
+}
+
+func main() {
+	myServer := NewServer(myStore, WithLogger(myLogger))
+}
+```

In the above example, we can set the logger without having to depend on config structs or obfuscating the API. Now that we have potentially solved configuration issues, we can move on to testing. To avoid writing mock functions, we can inject a function that actually performs the request. This way, the default will be to use the actual implementation, but the test can inject a function which simply returns the data we want to check, in a way that is easier for us to test with.

-

+```go
 // app.go
 // WithRequestSender sets the RequestSender for MyStruct.
 func WithRequestSender(fn func([]byte, *MyStruct)) Option {
@@ -94,7 +94,7 @@ func TestMyStruct_save(t *testing.T) {
     })
   })
 }
-
+```

The above way enables us to check data that might be coming to us in some convoluted way, without ever having to write complicated, unreadable code or having to modify much of the actual implementation.

diff --git a/content/blog/2018-09-25-whistle-project-winner-ethindia-2018-hackathon.md b/content/blog/2018-09-25-whistle-project-winner-ethindia-2018-hackathon.md
index fa80036..88caa75 100644
--- a/content/blog/2018-09-25-whistle-project-winner-ethindia-2018-hackathon.md
+++ b/content/blog/2018-09-25-whistle-project-winner-ethindia-2018-hackathon.md
@@ -12,21 +12,15 @@
 - hackathon
 ---

-
- -
- -
Demo Video
+

 Recently, I took part in the EthIndia Hackathon in Bengaluru. This time I was participating without a team after a long time, and made a team on the day of the event. All three of us (Ronak, Ayush and I) had different ideas of what we should work on, but we finally came to a consensus on an idea that I had got from my current workplace’s CTO (Kailash Nadh). He had discussed a problem statement where he wanted to distribute asset-holding information of people who have died to their family members. This is a common task called the Dead Man's Switch, which has been covered in a lot of movies as well as various experimental ideas. This was a big problem to solve, not only in size but also in the number of question marks it raises. After a lot of discussion with various mentors from the Ethereum community, we decided on and implemented the following idea by reducing the scope (instead of covering all assets, stick to only sending videos through IPFS) and deciding to skip the big issues (like missed heartbeats).

 Whistle – a platform to empower whistleblowers and those who live under constant fear of death. Using smart contracts and the NuCypher proxy re-encryption MockNet, we store the re-encrypted IPFS hash of the recorded video on the smart contract, which can be interacted with using our heartbeat function interface that resets the decryption timer to a future date. In case a heartbeat is missed, the contract triggers emails containing the decrypted IPFS hash of the video, which can then be streamed by anyone else.

-The best part about the event was the mentorship which guided us throughout the duration of the hackathon. We learnt that any good product, needs a few use cases which it is trying to solve and it should solve those perfectly. Based on those lines, we did a bit of research and found a bit more about this issue. 
Recently, Latifa Al Maktoum, a woman belonging to the royal family of Dubai, ran away and came to India as she was being tortured and drugged. She released a video on youtube, where she tells her viewers that if they are watching this, she might already be dead!
+The best part about the event was the mentorship that guided us throughout the hackathon. We learnt that any good product needs a few use cases which it is trying to solve, and it should solve those perfectly. Based on those lines, we did a bit of research and found out more about this issue. Recently, Latifa Al Maktoum, a woman belonging to the royal family of Dubai, ran away and came to India as she was being tortured and drugged. She released a video on YouTube, where she tells her viewers that if they are watching it, she might already be dead!

-
- -
The full video
+

 Using a unique combination of heartbeat transactions and the NuCypher MockNet, we can enable them to allow decryption of the video only after their demise. We also integrated a small platform on top, through which whistleblowers can assign recipients such as news agencies. The recipients stored on the contract can then be sent emails with the link to the data stored on IPFS, once the video’s hash stored on the contract is decrypted using our method.

 A few other examples are people who may be related to influential families or groups, ex-members of cults, people stuck in legal loopholes, or someone who is just afraid that they may die before publishing their findings, such as a whistleblower. In India, there are multitudes of cases; one such example is the Vyapam scam, where “[more than 40 people associated with the scam have died since the story broke in 2013][1]”, many of whom were critical witnesses and whistleblowers whose testimony was lost due to their murder. Our platform, Whistle, hence enables its users to anonymously store information until their demise.

diff --git a/content/blog/2018-11-19-streaming-audio-from-linux-to-android-using-pulseaudio-over-lan.md b/content/blog/2018-11-19-streaming-audio-from-linux-to-android-using-pulseaudio-over-lan.md
index 2a9b71c..2548ceb 100644
--- a/content/blog/2018-11-19-streaming-audio-from-linux-to-android-using-pulseaudio-over-lan.md
+++ b/content/blog/2018-11-19-streaming-audio-from-linux-to-android-using-pulseaudio-over-lan.md
@@ -18,15 +18,15 @@ PulseAudio provides streaming via SimpleProtocol on TCP via a simple command. Al

 You can find the source by running this command:

-
pactl list | grep "Monitor Source"
+```pactl list | grep "Monitor Source"``` After this, you can run: -
pactl load-module module-simple-protocol-tcp rate=48000 format=s16le channels=2 source=<SOURCE> record=true port=<PORT (eg 8000)>
+```pactl load-module module-simple-protocol-tcp rate=48000 format=s16le channels=2 source=<SOURCE> record=true port=<PORT (eg 8000)>``` Next, you will need to download PulseDroid, the apk can be found in the Github repository or you can use the following command to download it using wget: -
wget https://github.com/dront78/PulseDroid/raw/master/bin/PulseDroid.apk
+```wget https://github.com/dront78/PulseDroid/raw/master/bin/PulseDroid.apk``` Just enter the IP address of your machine (you can find it by running ifconfig) and the port you chose and press the Start button. \ No newline at end of file diff --git a/content/blog/2019-01-08-setting-so_reuseport-and-similar-socket-options-in-go-1-11.md b/content/blog/2019-01-08-setting-so_reuseport-and-similar-socket-options-in-go-1-11.md index d2f3dc9..59adc15 100644 --- a/content/blog/2019-01-08-setting-so_reuseport-and-similar-socket-options-in-go-1-11.md +++ b/content/blog/2019-01-08-setting-so_reuseport-and-similar-socket-options-in-go-1-11.md @@ -17,7 +17,8 @@ By reading how support for this has been added, we can get an idea about how to Let us see how one would start a UDP reader that performs a callback on receiving a packet. -
type UDPOptions struct {
+```go
+type UDPOptions struct {
 	Address         string
 	MinPacketLength int
 	MaxPacketLength int
@@ -41,11 +42,13 @@ func StartUDPReader(opt UDPOptions, callback func([]byte)) {
 			callback(packet)
 		}
 	}
-}
+} +``` This is how the reader would look after adding SO_REUSEPORT using the new way. -
func StartUDPReader(opt UDPOptions, callback func([]byte)) {
+```go
+func StartUDPReader(opt UDPOptions, callback func([]byte)) {
 	lc := net.ListenConfig{
 		Control: func(network, address string, c syscall.RawConn) error {
 			var opErr error
@@ -77,7 +80,8 @@ This is how the reader would look after adding SO_REUSEPORT using the new way.
 			callback(packet)
 		}
 	}
-}
+}
+```

Using this approach we can reuse the port and have zero downtime between restarts, by starting the new reader before stopping the currently running reader.

diff --git a/content/blog/2019-02-23-.md b/content/blog/2019-02-23-.md
deleted file mode 100644
index 44f1a84..0000000
--- a/content/blog/2019-02-23-.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: t
-author: rhnvrm
-type: post
-date: -001-11-30T00:00:00+00:00
-draft: true
-url: blog/?p=459
-categories:
-  - notes
-
----
diff --git a/content/blog/2019-03-17-a-review-of-the-siempo-launcher.md b/content/blog/2019-03-17-a-review-of-the-siempo-launcher.md
index 7a450f3..559ca62 100644
--- a/content/blog/2019-03-17-a-review-of-the-siempo-launcher.md
+++ b/content/blog/2019-03-17-a-review-of-the-siempo-launcher.md
@@ -16,11 +16,9 @@ Last December, I decided to start an experiment and adopt a new launcher called

 After surveying all the options, the only fully featured launcher (that was usable) I found was [Siempo][1]. Another notable mention was the Minimal Launcher, but it did not have a free dark mode or even proper app search, making it unusable apart from phone calls and messages. I did not want to go to the extreme with this experiment, so Siempo seemed to be the best option out there for Android. A few notable features of this app, based on my experience, are mentioned below. But before that, I must mention what I guess are mostly the ideas on which the app is based.

-Tristan Harris, a Former Design Ethicist at Google had around 2-3 years ago started a movement called _[Time Well Spent][2]_ [now called][2] _[Humane Tech][2]._ Nothing better to explain this than his TED Talk on “How a handful of tech companies control billions of minds every day”
+Tristan Harris, a former Design Ethicist at Google, started a movement around 2-3 years ago called _[Time Well Spent][2]_, [now called][2] _[Humane Tech][2]_. There is nothing better to explain this than his TED Talk on “How a handful of tech companies control billions of minds every day”.

-
- -
+ A few notable things from the website are copied below for reference diff --git a/layouts/index.html b/layouts/index.html index e0beaeb..4ce020a 100644 --- a/layouts/index.html +++ b/layouts/index.html @@ -44,12 +44,12 @@
categories: {{range ($.Site.GetPage "taxonomyTerm" "categories").Pages }} - {{.Title}} + {{lower .Title}} {{end}}

tags: {{range ($.Site.GetPage "taxonomyTerm" "tags").Pages }} - {{.Title}} + {{lower .Title}} {{end}}
diff --git a/layouts/section/blog_list.html b/layouts/section/blog_list.html index 8ced6c8..c0e58fb 100644 --- a/layouts/section/blog_list.html +++ b/layouts/section/blog_list.html @@ -1,5 +1,5 @@ {{ define "title" -}} - {{ .Site.Title }} + Blog List | {{ .Site.Title }} {{- end }} {{ define "header" }} {{ partial "masthead.html" . }} @@ -12,8 +12,10 @@
    {{range .Site.RegularPages}} + {{if .Date}}
  • {{.Date.Format "2006-01-02"}} {{.Title}}
  • {{end}} + {{end}}