After a fair chunk of research I’ve landed on getting a Thelio Mira from System76. It ships with a flavor of Ubuntu called “Pop!_OS” and it seems to be pretty darn good. This blog post is the first thing I’ve done with the new machine but it won’t be the last :D.
Wordle. With it you guess a word and it highlights which letters are in the word and in the correct location, or it will signal if the letter is correct but isn’t in the proper location. After watching a few game rounds I decided to write something to solve it.
The code is on GitHub.
The demo is self-hosted.
The Any trait came up this week and it got me thinking about how it could be used to store different kinds of data at runtime. In this post we’ll look at this magical trait and get a better understanding of what’s possible.
Any itself can be used to get a TypeId, and it has more features when used as a trait object. As &dyn Any (a borrowed trait object), it has the is and downcast_ref methods, to test if the contained value is of a given type and to get a reference to the inner value as that type. As &mut dyn Any, there is also the downcast_mut method, for getting a mutable reference to the inner value. Box<dyn Any> adds the downcast method, which attempts to convert to a Box<T>. See the Box documentation for the full details.
So what does that mean? Essentially code such as the following is possible:
|
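My own minimal sketch of the kind of thing this enables (the types and values here are just for illustration, not the original snippet):

```rust
use std::any::Any;

fn describe(value: &dyn Any) {
    // `is` tests whether the erased value is a particular concrete type.
    if value.is::<String>() {
        println!("a String");
    }
    // `downcast_ref` returns Some(&T) only when the types match.
    if let Some(n) = value.downcast_ref::<i32>() {
        println!("an i32: {n}");
    }
}

fn main() {
    describe(&42_i32);
    describe(&String::from("hello"));
}
```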
If you check out the is function, what it’s really doing under the hood is the following:
|
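In spirit it’s just a TypeId comparison; a simplified stand-in for the standard library’s version looks like this:

```rust
use std::any::{Any, TypeId};

// Compare the TypeId of the requested type with the TypeId of the value
// actually stored behind the trait object.
fn is_type<T: Any>(value: &dyn Any) -> bool {
    TypeId::of::<T>() == value.type_id()
}
```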
The TypeId appears to be the real star of the show, and if you look into it a bit more you’ll find that not only can it be compared for equality, it can also be hashed!
|
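A quick illustration of why that matters: because TypeId is Eq and Hash it can key a HashMap, which is the backbone of simple type-keyed storage. This is my own toy example, not production code:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

fn main() {
    // One slot per type, keyed by the type's TypeId.
    let mut registry: HashMap<TypeId, Box<dyn Any>> = HashMap::new();
    registry.insert(TypeId::of::<i32>(), Box::new(7_i32));
    registry.insert(TypeId::of::<String>(), Box::new("hi".to_string()));

    if let Some(boxed) = registry.get(&TypeId::of::<i32>()) {
        let n = boxed.downcast_ref::<i32>().expect("stored under the i32 key");
        println!("{n}");
    }
}
```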
To illustrate my point, let’s come up with a somewhat believable schema that you could likely have in your application.
|
If you wanted posts and related data to be searchable in Elasticsearch your document structure would likely look like this:
|
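As a sketch (the field names here are guesses based on the description, not the exact document), such a denormalized document might look like:

```json
{
  "id": 42,
  "title": "My first post",
  "body": "…",
  "author": { "id": 7, "name": "Jane Doe" },
  "categories": [
    { "id": 1, "name": "rust" },
    { "id": 2, "name": "elasticsearch" }
  ]
}
```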
As you can see, this document is essentially a denormalized version of what you’d expect to find in the schema. It comes with the same benefits and drawbacks you get any time you denormalize data. It’s fast because you don’t have to look at any other data sources; however, the data contains duplication, which increases its size. Every Post carries with it an Author and any number of Category objects.
If you search by author.name there is a good chance you’ll have duplicate authors in each of the posts. If you fetch N documents and all of them have the same Author, you’re needlessly deserializing N-1 copies of it. In this example that doesn’t seem bad, but suppose it was a more complex model… you’re compounding the number of allocations you’ll incur, and the size of the total payload just keeps growing.
If you run into these kinds of scenarios they can be mitigated by using the _source and fields directives in your Elasticsearch queries to pare down the data you need. There is a lot of nuance with these and it is highly recommended that you read the docs for yourself. Having said that, what follows are a couple of examples you may want to think about in your application.
One strategy may be to fetch only the keys from the document and then, in a second (potentially parallel) step, retrieve the data from the cache or database it came from.
Request:
|
Reply:
|
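To make the shape concrete, here’s a sketch of what such an exchange might look like; the index and field names are assumptions on my part, and this style of fields retrieval is the one available in Elasticsearch 7.10 and later:

```json
{
  "_source": false,
  "fields": ["id", "author.id"],
  "query": { "match": { "author.name": "Jane" } }
}
```

and a trimmed reply might come back as:

```json
{
  "hits": {
    "hits": [
      { "_id": "42", "fields": { "id": [42], "author.id": [7] } }
    ]
  }
}
```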
Slight side note: if you noticed that the author.id field is an array even though our document has a single author, it’s because the underlying mappings actually store these leaves as lists.
This is a tremendous savings for Elasticsearch, as it doesn’t need to send the original document; all it needs to do is supply the mapped field values that matched instead. The trade-off, of course, is that you now need to wait for the response from Elasticsearch and then potentially kick off extra requests to other data stores, so weigh your application’s needs accordingly. Obviously, if all you care about is a handful of fields in your document, this is a big win indeed.
A second strategy, which you can actually combine with the previous one, is to have Elasticsearch pare down the JSON document it sends to you.
Request:
|
Reply:
|
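Again as a sketch with assumed field names, a _source include filter looks like:

```json
{
  "_source": { "includes": ["id", "title", "author.id"] },
  "query": { "match": { "author.name": "Jane" } }
}
```

and the pruned reply keeps the document’s nested shape, minus everything that wasn’t requested:

```json
{
  "hits": {
    "hits": [
      {
        "_id": "42",
        "_source": { "id": 42, "title": "My first post", "author": { "id": 7 } }
      }
    ]
  }
}
```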
This prunes the document down to just the fields you’ll be using but maintains its relative structure. You’ll still be incurring allocation costs to deserialize duplicates, but at least it’s only for the data you’ll be directly using. One giant caveat to _source filtering is that it’s not free. By default Elasticsearch will simply pass the whole document along without parsing it at all; however, source filtering requires Elasticsearch to deserialize the document to derive the filtered structure. You’ll see a fair amount more memory being utilized with this, so at larger scale and volume of data you’ll probably want to find other optimizations.
Using Elasticsearch as a document store is pretty decent, but as your data requirements change and grow it can be easy to simply keep tacking on fields to your documents. There are a number of solutions to mitigate how much data you transfer and avoid large chunks of overhead, and sometimes they come with their own costs. Here we covered the two primary ways of filtering data: fields and _source filtering. If at all possible you should reach for fields.
So it’s not anything directly related to NeoVim; however, if you’re looking at that sweet eye candy and digging it, then you’ll most likely want the terminal emulator that works the best with it across every OS. I rock it on both OSX and Ubuntu personally, and I’ve heard it works well on Windows as well. I also highly recommend Nerd Fonts, with the patched Fira Code font and ligature support. This is what gives me the icons in the file explorer (NerdTree) shown in the demo picture above.
This is an incredibly useful plugin that can really replace or augment some of the plugins you might already have in your tool-belt. It has a very good interface that lets plugin makers use it for all kinds of things, from the simple file fuzzy finder, to text searching capabilities, and even for things such as searching for all places a reference is located in a project.
If you’ve wished that syntax color highlighting was better, then you probably haven’t heard of or used nvim-treesitter. This amazing plugin has a deeper understanding of code and can give you richer coloring. I’ve noticed with Ruby it does better than what I’ve seen with any of the bigger IDEs people use. The same is true for C# and Rust. I highly recommend you try it with a treesitter-compatible color theme.
If you’ve tried setting up different language servers with NeoVim you know it can be a pain. This largely takes away that headache with its own system of additional plugins. If you’ve ever used VSCode, it is very much like that in my opinion. I believe the story for setting up language servers has gotten a lot easier; however, this is extremely easy-mode, so I haven’t tried anything else in almost three years when it comes to autocompletion.
First we’ll need to start with a small snippet of code to get an idea of where it started. In this case I was tinkering with a simple cache store: a RwLock around a HashMap where the values are wrapped in an Arc.
|
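A minimal sketch of that kind of store (the names are mine, not necessarily the originals):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// A simple cache: values live behind an Arc so callers can hold onto them
// after the read lock is released.
struct Cache<K, V> {
    inner: RwLock<HashMap<K, Arc<V>>>,
}

impl<K: std::hash::Hash + Eq, V> Cache<K, V> {
    fn get(&self, key: &K) -> Option<Arc<V>> {
        self.inner.read().unwrap().get(key).cloned()
    }
}
```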
I then wanted to add a bulk fetch method which would utilize a single read lock to fetch any number of cached items by their keys. My first pass ended up looking like this:
|
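Roughly, that first pass took a slice of keys; in sketch form:

```rust
impl<K: std::hash::Hash + Eq, V> Cache<K, V> {
    // First pass: take the keys as a slice and look each one up under a
    // single read lock.
    fn get_bulk(&self, keys: &[K]) -> Vec<Option<Arc<V>>> {
        let guard = self.inner.read().unwrap();
        keys.iter().map(|k| guard.get(k).cloned()).collect()
    }
}
```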
Sweet, tests pass and we are in business! Now let’s roughly plug it into the code path I was hoping to use it in…
|
It turns out a slice won’t work, so it’s back to the drawing board. I’m not actually even using the slice directly; I’m just using the iterator it provides via .iter(). So why not just require an iterator to begin with? Here is what my next try looked like.
|
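In sketch form, the signature shifts to accepting any iterator of borrowed keys:

```rust
impl<K: std::hash::Hash + Eq, V> Cache<K, V> {
    // Second pass: accept any iterator of borrowed keys.
    fn get_bulk<'a, I>(&self, keys: I) -> Vec<Option<Arc<V>>>
    where
        I: Iterator<Item = &'a K>,
        K: 'a,
    {
        let guard = self.inner.read().unwrap();
        keys.map(|k| guard.get(k).cloned()).collect()
    }
}
```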
This now works for any iterator; however, I’m not a fan of the (&keys).iter() that is needed to get it to work for different collections that can produce an iterator. It turns out there is a trait that covers this as well, and it’s called IntoIterator.
|
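And the final shape, as a sketch, bounds the argument by IntoIterator instead:

```rust
impl<K: std::hash::Hash + Eq, V> Cache<K, V> {
    // Final pass: accept anything that can be turned into an iterator of
    // borrowed keys, e.g. &Vec<K>, &[K], or &HashSet<K>.
    fn get_bulk<'a, I>(&self, keys: I) -> Vec<Option<Arc<V>>>
    where
        I: IntoIterator<Item = &'a K>,
        K: 'a,
    {
        let guard = self.inner.read().unwrap();
        keys.into_iter().map(|k| guard.get(k).cloned()).collect()
    }
}

// Usage: with `keys: Vec<K>`, calling `cache.get_bulk(&keys)` just works.
```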
This lets you use a reference to any collection which implements IntoIterator.
I’m pretty happy with what I learned while tinkering with this caching idea. I can’t say if the whole thing in total is worth anything, but this little bit of insight gained is a big one. When first starting with Rust it felt like I wanted to work with slices; however, more often than not what’s being called for is an iterator. If you find yourself in the same spot then hopefully this has opened your eyes!
I haven’t really posted much in the last six years. Eager readers who quickly scan back for other posts will see I made a lackluster attempt to get back into it. Somewhere in between the moves and a slump I was in, I was kept from exercising this practice, which has atrophied so far that this feels very foreign.
So what am I up to? Well, I’m back to slinging Ruby for a company called TrueCar. While I’m not really a fan of Ruby anymore, I’ve come to a point in my career where the language used isn’t as important to me as the problems I’m solving and the people I work with. Now, having said that, there are still some no-go languages I just won’t approach if I can avoid them, such as Java and PHP. On the flip side of the coin, the languages I keep gravitating back to are Elixir, Rust, and C#.
Aside from my professional track, I’ve been diving deeper and deeper into Rust as a result of a side journey I took in my life about five years ago. Having helped in the development of a “FreeShard”, a reverse-engineered server emulator for an MMORPG client, I have become more and more entranced by the idea of creating some of my own multiplayer games from the ground up.
I’ve even dabbled in learning Blender over the last year or two, and I hope to show off some of that work in posts to come, but here is one of the first pieces I made which I was pretty proud of:
I’ve spent the last couple of years growing my Elixir craft as well as picking up on dotnet-core; however, I don’t think I’m where I should be in my growth. After really taking stock of my life both professionally and personally, I’m not where I thought I’d be. Blogging is really a big part of that, and it’s for that reason I’m picking it back up. There is so much I could be and will be talking about.
So… what am I really saying? I’ve taken stock of my life recently and am ready to make some changes in it, and blogging again is one of them. For a starting goal I’ll try to post at least twice a week. Hopefully as time goes on it will pick up more consistently.
At first glance, if you’ve ever had to combine corn in the field, it feels like it’s roughly the same activity. However, if you try this tactic you’ll quickly find you’re in for a world of hurt. The trick to operating a combine is to keep an eye on where you’re going while glancing at how the corn is being brought in as you go. If you focus too much on the corn currently being brought in, you’ll get too wrapped up in each set and quickly drown in a series of micro-corrections.
This applies to so many things in life. It feels natural to want to focus on what’s immediately coming up and lose focus on the planned-out row ahead of you. I find myself doing this with so many things in my life, and being back on the farm has helped bring it to my attention. Don’t get hung up on the immediate success or failure of what’s going on; rather, use it as an indicator to guide what you’re striving for: the overall goal.
I’ve been writing Phoenix applications for about four months now and really enjoy it so far; however, I’ve been stuck working mostly on boring web APIs and haven’t had a chance to build anything richer and more interactive with a specific user application in mind. That’s all changed, though, as I decided to beef up my front-end skills a bit and work on a pet project I’ve had cooking in the noodle for a while now.
Chances are, if you’ve heard about Phoenix you’ve also heard people brag up the “Channel” system that ships with it. It gives you a great way to send real-time updates to the browser and doesn’t require a crazy amount of hardware to do it either! If you’re familiar with MVC then you can think of a channel as a controller that maintains state and keeps a constant socket open with the browser.
What does that mean, though? Ask any web developer and they’ll be able to tell you about the life-cycle of a web request, which at its heart is stateless. This means every request you make to a web server requires it to build up state every time you hop to a new page. The overhead to build that up can be pretty incredible. With channels you’re able to store that state and keep it around for any requests that happen.
Let’s peel back what is going on with these channels by snooping through some of the code from a fresh install. First you’ll want to direct your attention to lib/project_name/endpoint.ex. This is the starting point for a request, and right away one of the things we find is this (assuming your application is named MyApp):
|
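In the Phoenix versions from around this time, the piece we care about is a socket mount along these lines (just a sketch of the relevant part, not the full file):

```elixir
defmodule MyApp.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app

  # Requests to /socket are handed off to the UserSocket module.
  socket "/socket", MyApp.UserSocket

  # ... the rest of the plug pipeline ...
end
```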
If you’ve worked with Plug routing this should feel pretty similar. What’s going on here is that any requests to /socket are being handled by the MyApp.UserSocket module. Let’s crack that open next and take a peek!
|
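For reference, a freshly generated UserSocket from that era looks roughly like this (the channel name is a placeholder):

```elixir
defmodule MyApp.UserSocket do
  use Phoenix.Socket

  ## Channels: wire up topic patterns to channel modules.
  channel "rooms:*", MyApp.RoomChannel

  ## Transports
  transport :websocket, Phoenix.Transports.WebSocket
  # transport :longpoll, Phoenix.Transports.LongPoll

  def connect(_params, socket), do: {:ok, socket}

  def id(_socket), do: nil
end
```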
Quite a bit is going on here; luckily a lot of it is being handled for us by the use Phoenix.Socket statement. This is where your web socket connection becomes a channel. Think of this module as an initial starting point and router where the channel-specific protocol takes shape. The channel macro wires up “topics” to more specific modules. I like to think of topics as web routes. The * in the channel does what you would expect and allows anything to match at that location.
Next up is the transport. I don’t know much about this, but my assumption is it specifies the underpinnings of how to actually talk with the web client. Based on the commented-out option of :longpoll, it looks like this would support older clients that don’t have web socket support. There are some libraries out there that use a long-polling AJAX request to simulate websockets.
The comments do a pretty great job of explaining the rest of what’s going on here! But what would a channel look like? Here is the code for a channel that I’m working on at the moment:
|
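In broad strokes it follows the standard channel shape; a representative sketch (the topic and payload names are made up, not my actual project) looks like:

```elixir
defmodule MyApp.RoomChannel do
  use Phoenix.Channel

  # join/3 authorizes a socket for a given topic.
  def join("rooms:lobby", _payload, socket) do
    {:ok, socket}
  end

  # handle_in/3 receives events pushed from the client, much like
  # handle_call/handle_cast in a GenServer.
  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast!(socket, "new_msg", %{body: body})
    {:noreply, socket}
  end
end
```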
If you’ve worked with GenServer, some of this is going to look eerily similar. This is as much experience as I’ve had so far, so I’ll let the code do most of the talking for now… This post has gone on a bit long, so we’ll wrap up here and dive in again in a second post covering how to test this stuff. Stay tuned!
Some simple Googling and you’ll find a quick little strategy to move from Vim over to NeoVim by creating the new standard config directory and copying your vimrc file over to the new init.vim format. While that may work, I took this opportunity to take a hard look at the tools I was using and instead elected to start with a blank slate.
Every plugin I had been using went through a simple checklist to determine if and how it was ported over. Some fell into the “no” camp, and I flat out just didn’t bring them over. For almost all of the rest the answer was yes; however, I did run into one that had to get the boot: Powerline.
I was using the popular YouCompleteMe; however, with NeoVim there is a better option which takes advantage of its asynchronous architecture: deoplete. I was a bit taken aback at first when the TAB key didn’t cycle through the completion options; however, with a bit of help from a member of the community I was back on track pretty quickly. Here is the solution to get your tab key to select auto-complete options:
In your init.vim
|
nvim/autoload/utils.vim
|
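I won’t pretend these are the exact snippets, but the commonly shared recipe pairs a TAB mapping in init.vim with a small helper in the autoload file, roughly like so (the helper name here is just the conventional one):

```vim
" init.vim: TAB selects the next completion when the popup is open, inserts a
" literal tab after whitespace, and otherwise asks deoplete for completions.
inoremap <silent><expr> <TAB>
      \ pumvisible() ? "\<C-n>" :
      \ utils#check_back_space() ? "\<TAB>" :
      \ deoplete#manual_complete()

" nvim/autoload/utils.vim: true when only whitespace precedes the cursor.
function! utils#check_back_space() abort
  let col = col('.') - 1
  return !col || getline('.')[col - 1] =~# '\s'
endfunction
```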
Powerline just wasn’t working. I switched over to vim-airline and was pretty pleased with how it looks and functions, so no complaints there.
I switched away from Vundle to Plug. Like deoplete, it takes advantage of the asynchronous capabilities of NeoVim and can install a full range of plugins pretty quickly.
A whirlwind of change has rocked my life since I’ve posted last, some of which has been the culprit behind my lack of posts… I would look at the calendar and think to myself, “Wow, it’s been a couple weeks. I should capture some of what’s been happening.” Here I am now trying to highlight some of the major events and happenings since I last posted about five months ago. I’ll start with the highs, move into the informative, then end on the low.
About the time I stopped writing posts is the time I started getting serious about my health. I’ve really been watching what I eat and try to do about 15 minutes of exercise every day. Since tracking it, I’ve lost 75 lbs and am almost back down to 200 lbs. I’m back to where I was before I left Austin, TX four years ago and it feels great! Being physically healthy has really become a big part of my life so I plan to have more posts on what I’ve done so far and how I plan to keep it going.
Almost a month ago my wife and I closed on and moved into our first house. We’ve spent the first five years of our marriage traveling from place to place and living in apartments so this is a brand new and amazing feeling for both of us. I enjoy living in a rural area with a big back yard and neighbors that are more than five yards away from me! Looking forward to working on the pole barn and turning it into a killer office to work out of!
I’ve been using Ruby for a long time now, and it was truly the first programming language that I loved to write code in. These last four months I’ve written mostly Elixir both in and outside of work and it has captured me with the same great joy I got when starting off with Ruby. The performance has been amazing and the community top notch so far. While I’ll always keep a weather eye on my first love, Elixir is now my go-to tool for most jobs that get sent my way.
Several weeks ago I participated in a company internal hackathon in which we broke up into teams, designed small applications, and went to town trying to implement them in 36 hours. I took that as an opportunity to play with Phoenix, a popular web framework for Elixir. While I have been getting a lot of good experience in with Phoenix, one of its big selling points, web-sockets, is something I hadn’t yet dipped into. Naturally I was able to focus our demo around using web-sockets for real-time collaboration and fell in love with how much you could do.
Since then I’ve looked into tools that meld well with the functional languages I’m growing accustomed to. I’ve decided to spend some time each week learning Elm, a web-client based language with some very interesting features. I haven’t done serious front-end development in quite some time, we’re talking since the MooTools and Dojo days for any who remember; however, this may be enough to get me dabbling in it again!
A little over a month ago my uncle was diagnosed with very aggressive, terminal pancreatic cancer. There isn’t a day that goes by where I don’t find my mind drifting toward thinking of him and the struggle he is going through. Rage, sadness, and an overall feeling of helplessness invades me every time I think about it. For those who know him, he’s an argumentative and stubborn person who lives life his own way, and I admire his fervor and strength.
JSON Hyper-schema is a schema built on top of JSON Schema that describes the URLs that can be built from a given resource. Let’s look at how we can use this spec to help supplement our APIs.
Like last time, let’s work with an example. Here is a schema similar to the one from the last post:
|
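As a refresher, that person schema looks roughly like this in draft-04 terms (a sketch, not the exact document):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Person",
  "type": "object",
  "required": ["first_name", "last_name", "birthdate"],
  "properties": {
    "first_name": { "type": "string" },
    "last_name": { "type": "string" },
    "birthdate": { "type": "string", "format": "date" },
    "friends": {
      "type": "array",
      "items": { "$ref": "#" }
    }
  }
}
```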
Here is how we can enrich this with the hyper-schema spec. With this schema we have defined how to update the data, where to get the friends list, how to add a friend, and lastly how to delete the person.
|
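In sketch form, with made-up paths and rel names, the hyper-schema’s links section might look like:

```json
{
  "$schema": "http://json-schema.org/draft-04/hyper-schema#",
  "title": "Person",
  "type": "object",
  "links": [
    { "rel": "self",    "href": "/people/{id}" },
    { "rel": "update",  "href": "/people/{id}", "method": "PUT" },
    { "rel": "friends", "href": "/people/{id}/friends" },
    { "rel": "create",  "href": "/people/{id}/friends", "method": "POST" },
    { "rel": "delete",  "href": "/people/{id}", "method": "DELETE" }
  ]
}
```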
This is a great way to describe to clients how they can interact with your API in a way that can be automated. While a lot of clients still don’t use it much, you can use the HTTP Link header to point to these.
|
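For example (the URL here is made up), a response can advertise its schema like so:

```
Link: <https://api.example.com/schemas/person.json#>; rel="describedby"
```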
I like to think of this as “CSS for your API”. It describes to the client how it can style the data with links to more resources. Not many clients understand this yet; however, with companies like GitHub using it for things like pagination it feels like adoption will only get better as time goes by.
The JSON Hyper-Schema is very extensive and we’ve only covered a small part of it. It also makes provisions for media types and URI validations. If you want to read more in depth on the subject, I recommend heading on over to json-schema.org and reading up on it.
I did all of the normal things I could think of, such as checking the running tasks on her computer, making sure nothing inside of our network was hogging the bandwidth, etc. Everything I checked seemed to suggest the problem was outside of our home network. Every test I ran back out from our router was great, so I quickly began to suspect the problem was further upstream. After sniffing the network traffic from League I was able to do some trace-routes to determine that the problem seemed to reside on the connection somewhere between a router in Chicago and Riot, the company which owns League of Legends. I contacted others in my area that also used my ISP and they were reporting the same problem.
I reached out to Riot tech support with all of the logs they wanted and got back what was, frankly, perhaps the poorest tech support response I’ve heard in a long time: “Don’t play when this is happening….” After several more back-and-forth emails with Riot I decided to give up and try an alternative, which has fixed this problem whenever it comes up.
What I did was set up OpenVPN on a Digital Ocean droplet out in San Francisco and installed the OpenVPN client on my wife’s computer. When she starts to have packet loss she can connect to the OpenVPN server and her packet loss magically goes away! I’m sure there are some other methods I could have used to route her traffic around this problem area, but this seemed like the surest way to go since I was never able to reproduce the same latency issues through the web-host droplet I have running.
If anyone else is having this problem, where the lag seems to start up about the same time every night, this may work for you as well. Here is a walkthrough I found for getting a droplet set up on Digital Ocean and installing the client on your local machine. If you’re just playing the game the $5 droplet is probably enough for you.
Riot – if you are reading this I would be happy to help you fix this problem, provided I can work with tech support that stops insisting the problem is on my wife’s computer or is because of something internal to our network.
JSON Schema is a spec used to describe complex data structures. Because it has an official spec behind it, there are quite a few tools out there that you can use to take advantage of it. This gives you a way to publish agreed-upon documents that other vendors can use to model the data from your API.
This schema is simple, but shows off how data can be described. This models a person as having a first_name, last_name, birthdate, and optionally friends. Just glancing at it, this all may seem pretty obvious except for maybe the friends part. The $ref is a way in the spec to reference another part of your schema document. In this case it references the top level of the schema, which means that friends can be an array of Person objects.
|
Part of the spec sets aside the key definitions
as an area where you can
define types so you don’t have to repeat them in your schema. Here is the same
schema from above using that as an example.
|
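A sketch of what that can look like with a definitions block (again in draft-04 style, not the exact document):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "id": "http://example.com/person.json#",
  "definitions": {
    "person": {
      "type": "object",
      "required": ["first_name", "last_name", "birthdate"],
      "properties": {
        "first_name": { "type": "string" },
        "last_name": { "type": "string" },
        "birthdate": { "type": "string", "format": "date" },
        "friends": {
          "type": "array",
          "items": { "$ref": "#/definitions/person" }
        }
      }
    }
  },
  "$ref": "#/definitions/person"
}
```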
Of course, sometimes you’ll have a rich set of models which reference each other. Don’t worry, the spec also has a way to reference other documents via HTTP.
This new schema expands upon the last one by adding a hobbies key, which references /hobby.json#. So what is happening here? The magic is in the id. When a reference is relative like this, the spec says to default to the host found in the id field to resolve the other schema file. If the URI in the reference is a fully qualified URL, then the id is ignored and it will look for it at the location given.
|
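As a sketch of that arrangement, with placeholder URLs:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "id": "http://api.example.com/person.json#",
  "type": "object",
  "properties": {
    "first_name": { "type": "string" },
    "last_name": { "type": "string" },
    "hobbies": {
      "type": "array",
      "items": { "$ref": "/hobby.json#" }
    }
  }
}
```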
A couple of gotchas here: the spec really only makes provisions for HTTP, and anything else, such as loading from disk, is unsupported. The other gotcha when referencing documents is that there doesn’t appear to be a relative-location ability built into the spec. This means if you start serving your documents under a new sub-directory you’ll have to go through all of your references and update them.
If you want to learn more about JSON Schema you can head on over to the official website at json-schema.org. In my next post I’ll cover some more advanced sections of the spec, including parts which are geared directly toward determining what kind of URLs you can build from the data models you have!
Sometimes we get complacent “dancing for the man”, staying comfortably distracted by the workday. Don’t let it happen! Always challenge and audit yourself to ensure you’re spending the time you have on the investments which are important to you.
If you’re working with the GenServer behaviour you may be interested in the Agent set of functions that ship with Elixir. They provide a lightweight mechanism to save and retrieve state. Here are two functionally equivalent modules, one written as a GenServer and the other with Agent.
|
|
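To give a flavor of the comparison with a trivial counter of my own (not the original modules), the GenServer version and its Agent equivalent look something like:

```elixir
defmodule Counter.Server do
  use GenServer

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  def value, do: GenServer.call(__MODULE__, :value)
  def increment, do: GenServer.cast(__MODULE__, :increment)

  # Callbacks
  def init(initial), do: {:ok, initial}
  def handle_call(:value, _from, count), do: {:reply, count, count}
  def handle_cast(:increment, count), do: {:noreply, count + 1}
end

defmodule Counter.Agent do
  def start_link(initial), do: Agent.start_link(fn -> initial end, name: __MODULE__)
  def value, do: Agent.get(__MODULE__, fn count -> count end)
  def increment, do: Agent.update(__MODULE__, fn count -> count + 1 end)
end
```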
Pretty nice how much you can shrink the code, and for things as trivial as this you probably don’t even need a module at all! To be fair, I used the anonymous-function version of Agent, which makes it seem a lot smaller. Here is the same one again with non-anonymous functions.
|
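That variant, sketched with the same hypothetical counter, passes module/function/args instead of closures:

```elixir
defmodule Counter.AgentMFA do
  # The same counter using the module/function/args variants of the Agent API.
  # The agent's current state is prepended to the args list on each call.
  def start_link(initial), do: Agent.start_link(__MODULE__, :init, [initial], name: __MODULE__)
  def value, do: Agent.get(__MODULE__, __MODULE__, :get_value, [])
  def increment, do: Agent.update(__MODULE__, __MODULE__, :bump, [])

  def init(initial), do: initial
  def get_value(count), do: count
  def bump(count), do: count + 1
end
```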
How hard is it for a new person to pick up your project and start working on it? Think about the supporting database, message queues, mailer software, and any other systems your project ties into. Are there any kind of development credentials they’ll need? What is the process to get a change into production? When you go beyond the run-of-the-mill framework inside the vacuum of your local machine, the barrier to getting started on a project can start to skyrocket. What are some ways we can help mitigate this?
Your README should contain instructions and links to other resources that will give a newcomer to the project answers to the questions above. When someone has to ask about how to get started, take it as an opportunity to document what is missing. If people can get your project set up and begin making changes without having to ask questions, you have succeeded.
As the project grows and changes, keeping the README up to date can be a challenge. Try to remember, when you make changes to your process or environment, to reflect them in your document. Make it part of your peer-review and pre-master checklist:
“Has anything changed that should be documented in the README?”
There are some fantastic ways to remove the headache of setting up the development environment for a project. You’d be surprised how far a good bash script goes. Look at all of the commands you have to type to get started on your project. Try to take those commands and put them into a script that can be run from checkout to get developers as close to ready as possible.
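As a sketch of the idea, assuming a Ruby-ish project (swap in whatever commands your stack actually needs):

```bash
#!/usr/bin/env bash
# bin/setup — from fresh checkout to "ready to work" in one command.
set -euo pipefail

bundle install            # install dependencies
cp -n .env.example .env   # seed local configuration if it doesn't exist yet
bin/rails db:setup        # create, migrate, and seed the database

echo "All set. Run 'bin/rails server' to get started."
```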
I also recommend Docker. This adds additional requirements that Docker be installed and developers know the basics of how it works; however, it gives you a great way to automate an environment that is identical for everyone. This is a huge winner when you are targeting developers who might not all be using the same platform. Removing the pain of knowing how to set up your environment for OSX versus Linux versus Windows is awesome.
Try to remember you won’t be the only or last person to work on a project. Make it as easy as possible for others to get started with automated scripts and documentation. When you lower the barrier of entry into your project, it ensures its continued survival.
If you work with a service like ElasticSearch where the keys are never in the same order, this is an amazing feature. I have used it with vimdiff to wire up some quick diffs. This example grabs two different documents from ElasticSearch and shows a side-by-side difference of the two. Note how it drills straight down into the ._source key, which is the data we care about.
|
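I can’t show the exact command, but assuming the feature in question is jq’s --sort-keys (-S) flag, the general shape is something like this (the ElasticSearch URLs are placeholders):

```bash
# Fetch two documents, normalize key order, and diff just the _source data.
vimdiff \
  <(curl -s 'http://localhost:9200/posts/_doc/1' | jq -S '._source') \
  <(curl -s 'http://localhost:9200/posts/_doc/2' | jq -S '._source')
```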
Sometimes what you are trying to extract out is pretty far down. For instance say you are using a pretty wordy hypermedia API that has the users in the structure separate from the comments.
|
The following will combine the user with the comment:
|
And produce the following output:
|
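To illustrate the idea with made-up data rather than the original payload, given a document like

```json
{
  "users": [
    { "id": 1, "name": "Jane" },
    { "id": 2, "name": "Bob" }
  ],
  "comments": [
    { "user_id": 1, "body": "First!" },
    { "user_id": 2, "body": "Nice post." }
  ]
}
```

a filter along these lines glues each comment to its user:

```bash
jq '.users as $users
    | [ .comments[] as $c
        | $c + { user: ($users[] | select(.id == $c.user_id)) } ]' response.json
```

and produces:

```json
[
  { "user_id": 1, "body": "First!", "user": { "id": 1, "name": "Jane" } },
  { "user_id": 2, "body": "Nice post.", "user": { "id": 2, "name": "Bob" } }
]
```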
Being able to manipulate JSON into an easily digestible format for scripts gets even better. Using the same example from above:
|
Produces the following output:
|
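Sticking with the same made-up data, raw output mode (-r) with string interpolation makes it script-friendly:

```bash
jq -r '.users as $users
       | .comments[] as $c
       | ($users[] | select(.id == $c.user_id)) as $u
       | "\($u.name)\t\($c.body)"' response.json
```

which prints:

```
Jane	First!
Bob	Nice post.
```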
There are a ton of features baked into this and I have only scratched the surface. As I run into uses for them I plan to put together a sticky page of recipes I have used. Stay tuned!