All Posts

Here is the main logic of the Jaeger agent and Jaeger collector (based on Jaeger 1.13.1).

Thanks to the articles I list at the end of this post, I understand how Transformers work. Those posts are comprehensive, but some points confused me. First, this is the graph referenced by almost every post related to the Transformer. The Transformer consists of these parts: Input, Encoder*N, Decoder*N, Output. I'll explain them step by step. Input: each input word is mapped to a 512-dimensional vector.
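The input step above can be sketched as a simple embedding lookup (a minimal NumPy illustration; the vocabulary size and variable names are made up for this sketch, only d_model = 512 comes from the post):

```python
import numpy as np

# Illustrative embedding table: one 512-dimensional row per token id.
vocab_size, d_model = 1000, 512
rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((vocab_size, d_model))

def embed(token_ids):
    # A lookup: row i of the table is the vector for token id i.
    return embedding_table[token_ids]

vectors = embed(np.array([5, 42, 7]))  # shape (3, 512): three words, 512 dims each
```

In a real Transformer this table is a learned parameter (e.g. `nn.Embedding` in PyTorch) and positional encodings are added on top.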

$$s_t$$ is the target hidden state and $$h_i$$ are the source hidden states; each has shape (n,1). $$c_t$$ is the final context vector, and $$\alpha_{t,i}$$ is the alignment score. $$\begin{aligned} c_t&=\sum_{i=1}^n \alpha_{t,i}h_i \\ \alpha_{t,i}&= \frac{\exp(score(s_t,h_i))}{\sum_{j=1}^n \exp(score(s_t,h_j))} \end{aligned}$$ Global (Soft) VS Local (Hard): global attention takes all source hidden states into account, while local attention only uses part of the source hidden states. Content-based VS Location-based: content-based attention uses both the source and target hidden states, while location-based attention only uses the target hidden state.
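The equations above can be sketched numerically (an illustrative NumPy version using a simple dot-product score; the shapes follow the (n,1) convention from the post, and the number of source states is made up):

```python
import numpy as np

n = 4                                 # hidden-state dimension
rng = np.random.default_rng(0)
s_t = rng.standard_normal((n, 1))     # target hidden state, shape (n, 1)
H = rng.standard_normal((n, 5))       # 5 source hidden states h_i as columns

scores = H.T @ s_t                    # score(s_t, h_i) = s_t . h_i, shape (5, 1)
alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over source positions
c_t = H @ alpha                       # c_t = sum_i alpha_i * h_i, shape (n, 1)
```

The alignment scores sum to 1, so the context vector is a weighted average of the source hidden states.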

Load separate files. The data.Field parameters are here:

```python
INPUT = data.Field(lower=True, batch_first=True)
TAG = data.Field(batch_first=True, unk_token=None, is_target=True)
train, val, test = data.TabularDataset.splits(
    path=base_dir.as_posix(), train='train_data.csv',
    validation='val_data.csv', test='test_data.csv',
    format='tsv', fields=[(None, None), ('input', INPUT), ('tag', TAG)])
```

Load a single file:

```python
all_data = data.TabularDataset(
    path=base_dir / 'gossip_train_data.csv', format='tsv',
    fields=[('text', TEXT), ('category', CATEGORY)])
train, val, test = all_data.split([0.7, 0.2, 0.1])
```

Create iterators:

```python
train_iter, val_iter, test_iter = data.BucketIterator.splits(
    (train, val, test), batch_sizes=(32, 256, 256),
    shuffle=True, sort_key=lambda x: x.input)
```

Load pretrained vectors: vectors = Vectors(name='cc.

After Inoreader changed its free plan, limiting the maximum number of subscriptions to 150, I began looking for an alternative. Finally, I found Tiny Tiny RSS. It has a nice web interface and a Fever API plugin, which is supported by most RSS reader apps, so you can read RSS on all of your devices. This post will show you how to deploy it on your server. Prerequisite: you need to install Docker and Docker Compose before using docker-compose.

Using the right Emacs version. I failed to preview LaTeX with emacs-plus. If you have installed d12frosted/emacs-plus, uninstall it and use emacs-mac:

```shell
brew tap railwaycat/emacsmacport
brew install emacs-mac
```

If you like the fancy Spacemacs icon, install it with Cask:

```shell
brew cask install emacs-mac-spacemacs-icon
```

Install TeX. Download and install BasicTeX.pkg here, then add /Library/TeX/texbin to PATH. Install dvisvgm:

```shell
sudo tlmgr update --self && sudo tlmgr install dvisvgm collection-fontsrecommended
```

Emacs settings. Add the TeX-related bin directory to the path, and tell Org mode to create SVG images:

```elisp
(setenv "PATH" (concat (getenv "PATH") ":/Library/TeX/texbin"))
(setq org-latex-create-formula-image-program 'dvisvgm)
```

Now you can see the rendered LaTeX equation by calling org-preview-latex-fragment or using the shortcut ,Tx.

PyTorch provides a simple DQN implementation to solve the CartPole game. However, the code is incorrect: it diverges after training (this has been discussed here). The official code's training curve is below; its high score is about 50, and it finally diverges. There are many reasons that lead to divergence. First, the tutorial uses the difference of two frames as input; not only does this lose the cart's absolute position (useful information, since the game terminates if the cart moves too far from the centre), but it also confuses the agent when the difference is the same but the underlying states differ.
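The ambiguity of frame differences can be shown with a toy example (the numbers below are made up for illustration): two very different cart positions can produce an identical difference, so the agent cannot tell the states apart.

```python
# Toy illustration: identical frame differences from different states.
s1_prev, s1_curr = 0.0, 0.5   # cart near the centre, safe
s2_prev, s2_curr = 2.0, 2.5   # cart near the edge, about to terminate

diff1 = s1_curr - s1_prev     # 0.5
diff2 = s2_curr - s2_prev     # 0.5, identical input to the agent
```

Both transitions look identical to a difference-based agent, even though the second one calls for a very different action; feeding the full state (including absolute position) avoids this.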

Recently, I found a really good example of a Python circular import, and I'd like to record it here. Here is the code:

```python
# X.py
def X1():
    return "x1"

from Y import Y2

def X2():
    return "x2"
```

```python
# Y.py
def Y1():
    return "y1"

from X import X1

def Y2():
    return "y2"
```

Guess what will happen if you run python X.

Overview. CPython allocates memory to store a dictionary; the initial table size is 8, and entries are saved as <hash, key, value> in each slot (the slot layout changed after Python 3.6). When a new key is added, Python uses i = hash(key) & mask, where mask = table_size - 1, to calculate which slot it should be placed in. If the slot is occupied, CPython uses a probing algorithm to find an empty slot to store the new item.
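The slot calculation above can be sketched in a few lines of Python (an illustrative model, not CPython's actual C implementation, which lives in Objects/dictobject.c):

```python
# A minimal sketch of the initial slot calculation described above.
TABLE_SIZE = 8            # initial dict table size in CPython
mask = TABLE_SIZE - 1     # 0b111; works because the table size is a power of 2

def initial_slot(key):
    # Keep only the low bits of the hash, giving an index in [0, TABLE_SIZE).
    return hash(key) & mask

slot = initial_slot('spam')  # always in the range 0..7
```

Because the table size is a power of two, `& mask` is a cheap way to reduce the hash modulo the table size; on a collision, CPython then probes other slots until it finds a free one.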

PyTorch is a really powerful framework for building machine learning models. Although some features are missing compared with TensorFlow (for example, early stopping and a History object for plotting), its code style is more intuitive. Torchtext is an NLP package also made by the PyTorch team. It provides a way to read, process, and iterate over texts. Google Colab is a Jupyter notebook environment hosted by Google; you can use a free GPU or TPU to run your model.