My current company - a UK energy startup - has always had an ongoing remote work mindset. I thought I'd share a thing or two about how we made it work for us.
In short - although we are mostly a traditional "work from office" company, for a software developer there's nothing that would stop you from being 100% productive while working remotely.
Step 1: No-meetings Wednesdays
This was a tiny idea that quickly became part of our culture. On Wednesdays we have no meetings. This also means you can work from anywhere. You can come and work from the office. You can fly to Budapest and work from there. Whatever works for you.
Most of our product teams work from home on Wednesdays, and this has created an environment where we had to test and maintain our toolset and best practices. Our VPN failed a couple of times, the issues were addressed, and now it is in great shape - I haven't had a connectivity issue in ages.
Step 2: VPN
As a fundamental security precaution, connectivity to business-critical resources should be limited to the office network or a VPN. This means a reliable VPN setup, able to handle the load of every developer, QA engineer and data scientist connected to it simultaneously. Load testing a VPN before going 100% remote is a great idea.
Step 3: Culture
Working from home is not the same as remote work. This has been highlighted a few times now in Hacker News discussions. We are not really doing remote work, but rather enabling office-style work from any chosen location. This means most of the teams start work at 9am and finish at 6pm. We don't do async work; instead everyone is more or less synchronised, even on WFH days.
This is not necessarily a bad thing.
Synchronicity allows interoperability. It means you can easily switch between working in the office and working remotely on any given day, not just on Wednesdays. It allows pretty good flexibility when you need to work from home - you don't have to adjust your working patterns, everything mostly stays the same.
This means there is no effect on your personal productivity. Things continue the usual way.
So yes, morning standups happen every day, everyone turns up - on site or online - and we plan the new day ahead together.
Step 4: Meetings and informal catch ups
Slack and Google Hangouts are really all the tools you need. But you have to build a culture of using them in the most efficient way.
Have a channel dedicated just to your immediate team. Encourage ongoing chat about who is doing what and whether there are any blockers. This is like the sound of an engine - you know it's working when there is a rumble going on.
An alternative would be to periodically ask everyone what they are doing right now. This is counterproductive and can be perceived as annoying micromanagement.
Every meeting should be available on Hangouts by default. Ideally, every meeting should also be recorded and available to view later, but there might be some privacy concerns there. At the very least, you should be able to join every meeting via a phone line.
Hangouts are also pretty handy when you want a quick informal catch-up. Those are irreplaceable - and are often an argument for working from the office. When everyone is in the office, you can interrupt and talk to any of your co-workers at any moment, right?
Not really. Even in the office, even when sitting side by side, there are good and bad times to interrupt your teammate. We try to indicate those moments with a headphones-on policy - when I have my headphones on, treat it as a do-not-disturb sign.
When it is a good moment to catch up, you might as well use Hangouts to do a bit of pairing, discuss a particular feature detail or do a mob review of a particularly challenging pull request. Reviews over Hangouts work equally well, if not better - no need to lean over each other around a laptop screen, you can hear everyone well (especially true when your office is a huge open space), and there are clear turns when it comes to providing feedback.
Step 5: Team autonomy
Team autonomy is probably the single most important enabler of remote work efficiency. Every team member should be able to deliver features either independently, or with the minimum interaction with the rest of the team.
This doesn't mean everyone decides for themselves what they are working on. In a traditional setup you would have sprints, goals and pre-defined tickets that are committed for delivery. Each team would hold refinement sessions where tickets are created, fleshed out and readied for implementation. Dependencies would be highlighted and resolved. Once the sprint starts, you can pick up a ticket from the sprint backlog and start cracking.
At this point there shouldn't be much stopping you from delivering a ticket all the way to production. There might be some gates to open - QA would potentially be one of them. You might want to implement automatic pipelines for QA, or you might be lucky enough to have a dedicated QA engineer on your team. Whatever the case, each engineer should be in a position to pick the right tools for the job, do the job, test it and ship it to production. And then verify it on production. And maybe create some alerts for a business-critical feature.
There is a danger of siloing here when every team does things in their own way. You can easily mitigate this with a good open engineering culture.
Good engineering culture
Have a tech-guild-wide presentation on the things you've been shipping and the challenges you've been solving recently. Extract good solutions into easily reusable libraries, and encourage others to do the same. Have an ongoing company-wide tech chat on Slack. Once a month we also run a mini tech conference where anyone can come up and talk about practices they'd like to spread across the company.
Most importantly, don't forget to get out of the house for a nice walk to support your local coffee provider. Not every area of London is equally well covered with fancy cafes, but I feel the situation has improved significantly in the past few years. Keep your step count up, even when your office is 5 steps from your bed.
Leaving Instagram for Open Web alternatives
"What, you're not using Instagram? Where do you post your photos then?"
This question has come up a few times recently in conversations with my friends, so I thought I'd write a more detailed explanation.
I've been an active user of Instagram for a number of years. Daily logins, thousands of posts. I never became a proper IG celebrity, or even a micro-influencer - but I've had my fun following a significant number of users posting pics from film cameras. It's been a long time since IG purists insisted that IG was only for pictures taken on a phone camera. Doing that today would be a direct path to failure, since there are a billion users with the latest iPhone cameras, and every single one of them wants to become an influencer.
You have to be more inventive to win the game today.
Do you remember a time when all photos were forced to be square? Oh, and a good selection of frames?
The answer to the question of why you shouldn't use Instagram today has a few slices, so let me try and split this cake a little.
You are a product
I don't really like ads. I mean, it's totally justified for an online product to use ads as a revenue stream. Often, ads are the only viable revenue for an online service.
I used to like Instagram ads - they nailed the targeting thing, and at some point it was recommending relevant events and things I actually was interested in. Lately, however, it feels like Facebook is grasping for ever-increasing revenue. Facebook's core product is accessible via the browser, hence ads can be blocked with extensions. I'm sure they are not happy about that. Instagram technically has a Web version, but it was deliberately strangled in order to push people towards the native app. This is where the majority of users are, and this is where FB has total control over the content.
This is where you can crank the revenue all the way to 11.
There are tricks for blocking ads in native apps, but none of them will work with Instagram, as I'm pretty sure that would block the rest of the content too. And, let's be completely honest, when you accept a service's T&Cs, you also agree to watch the ads.
The only other fair alternative is that you stop using the service and leave.
Instagram used to have a nice API - a way for third parties like you and me to build an alternative client with unique features: access pictures, leave comments, schedule posts and enjoy the experience on iPad (the core team has been ignoring tablets forever). The bad news is that FB wants to strangle the third parties and stop them from existing.
The reason is easy enough - you don't have control over ads in those side apps, hence they don't contribute to the revenue. Why should you invest in features that don't contribute to your KPIs? How are you going to get that quarterly bonus?
With the third-party apps gone, the rest of the users will be forced to use the main client app and get all the latest features. And ads.
A sense of a timeline
The algorithmic timeline is great in some ways. Your best friends' pictures are always on top. The first picture is guaranteed to capture your attention and a like. One of the success metrics for the timeline team (I'm sure) is that the top post must always get your like or comment. Because the second post is an ad, and you'll want to scroll to it after the success of the first post.
The algorithmic timeline is also a perfect tool for creating echo chambers. To guarantee your likes and satisfaction, you will only be shown posts based on your previous likes and current desires, which means posts less relevant to your current mood will be pushed out of your timeline. People you follow become merely a number in your followers count.
In the current political climate of deliberate division, I feel like the algorithmic timeline might be the single most powerful feature for bringing extremely awful people to power.
I also like chronological timelines. I like how in Google Photos I can just scroll to June 2017 and pretty quickly find what I'm looking for. Things will probably become more complicated once I'm old enough to lose track of time, but hopefully it will take a few more years for the social media apps to soften my brain.
As of right now, you can still access my account online on https://www.instagram.com/alexsavinme/ . As soon as you arrive, you will be prompted to log in by a not insignificant banner at the bottom of the page. If you dismiss the banner and start scrolling through the pictures, you will be stopped by a proper overlay forcing you to either login or sign up. There is no way of dismissing that.
So much for the open access.
"This is all great" - you might say. "But there is literally no competition right now. Well, maybe TikTok."
Lately I've been following the Open Web movement, and it has a few options in the form of open source, decentralised, anyone-can-host-them apps. Decentralisation and openness are probably the two things that matter most to me.
One of the direct competitors to Instagram is Pixelfed - https://pixelfed.org/ It is a web app which anyone can host. The clever twist is that instances can communicate with each other using a standard called ActivityPub. This way the network is decentralised, but you can still follow users from other instances, and everyone can like and comment on each other's photos.
You can post stories too.
At the time of writing there are no native apps for Pixelfed, but the Web version works beautifully.
You can follow me here: https://pixelfed.social/alexsavin
If you decide to join Pixelfed, here are all the available instances: https://the-federation.info/pixelfed
Price: free (if you join one of the hosted instances)
Micro.blog is not a direct competitor to Instagram - it's more like a slow Twitter sort of service, which also falls back nicely to Open Web standards. It does what it says on the tin - a service that lets you post small blog posts. You can follow friends too.
Here's mine: https://alexsavin.micro.blog
It looks just like a homepage, and there is an RSS feed to subscribe to if you are not a Micro.blog user but would still like to keep an eye on new posts.
Yes, RSS is alive and kicking, and there are still amazing clients like NewsBlur for subscribing to feeds and reading the latest updates. In fact, 95% of all podcasts are distributed via good old RSS. There is even an argument about whether you can call a podcast a podcast if it is not available via RSS.
Micro.blog allows you to post pictures too. And it has something called emoji tags - you use an appropriate emoji to signal the type of content in a post. This in turn enables discovery of photos (and books, and movie recommendations, and music) that users have posted on the platform.
I mostly use Micro.blog as my Twitter replacement + open bookmarks registry. The community is nice too. You can fave posts, but also you get thoughtful comments and some of the greatest discussions with strangers on the Internet.
There is a native app for iOS and MacOS too.
Price: $5 / month. This gets me the hosted version of the service, so I don't have to host it myself. And, hopefully, it helps the creator of the service keep it up.
Flickr is still around - now saved by SmugMug, which is trying to figure out what to do with it. I use it as my main full-res photo storage for all my film scans. There is no limit on how big a single photo can be - or if there is, it's incredibly high.
There are still some unique communities on Flickr - one of the highlights would be the stereoscopic photo communities, and anaglyph in particular. If you have a pair of red/cyan glasses handy, you can discover tons of 3D photos on Flickr for your personal enjoyment.
My personal Flickr (been going since 2008): https://www.flickr.com/photos/karismafilms/
Price: £4.50 / month
I should probably mention that every single service mentioned above allows public access to your public posts and photos for everyone with a link, with no need for registration or SMS. This is quite literally the fundamental principle of the Web as it was invented and intended to be used. It is also how the Internet was different from a bunch of other networks that existed long before it - everyone is invited, anyone can learn HTML and post their own webpage in no time. You could also view the source code of anyone's webpage, learn a trick or two, and use it on your own page.
These times are mostly gone now. Viewing page source on the Web is mostly reserved for software developers.
What is your release pipeline? It probably looks something like this. You write some code, add unit tests, maybe an integration test. Run it on your machine, then push to a CI and see if the build is green. Maybe test it on staging if you have one. Then - deploy to prod.
What usually happens next is that real-world users break your app. Not deliberately, of course - there's no premeditated evil intention. There are a few biases we have as software developers, and one of them is believing that if a feature is well specced and unit tested with 100% code coverage, it'll work just fine in production.
Except, of course, it won't. The real world will always have more edge cases than you can spec in your JIRA ticket.
Your freshly released feature might work just fine until it hits one of those cases when input data is not properly validated, or even manually entered directly into a database. In a rush. With typos. And then you'll have an unhandled exception, carefully captured and reported to Sentry. I hope you are not writing software for space rockets.
"But wait," you'll say. "Isn't this the usual circle of life? You write code, it breaks, you report a bug and fix it."
It totally is, and in most cases there's absolutely nothing wrong with it. Most of us are never trusted with writing software for space rockets, nuclear reactors or financial instruments that automatically trade millions of pounds. That is, of course, until you find yourself in charge of a team building a space rocket, with a JIRA ticket (sized as a medium) titled "MVP of a rocket, must not explode".
Shadow testing is a technique that can help you with that.
It's not going to help you launch the first rocket though. For that, you'll have to rely on your usual set of tests. But once you have a regularly scheduled rocket in place, it'll help you update that rocket in a safer, less explosive way.
Hello, my name is Alex and I'm in charge of Revenue Assurance tech at one of the fastest-growing energy suppliers in the UK. We have over 1 million members right now, and significant cash flow.
Traditionally, the energy business works like this: you burn some electricity, and then pay for it. The bill is generated monthly, and that's when you're expected to cover your energy usage with cash. In the UK we also have this thing called Direct Debit - you agree for a company to charge your bank account directly, regardless of whether there are sufficient funds. In an ideal situation, everyone pays for the things they buy or use. However, once your business grows, you'll start having something called revenue at risk.
Revenue at risk is literally what it says on the tin. Some of the hard-earned money that belongs to you, and that you are entitled to receive, is not coming your way. Parts of it can be delayed, and some of it will never find its way to your bank account. This is where Revenue Assurance comes in. They think through the reasons why the cash doesn't flow and find ways to make it do so. In an ethical, user-friendly way, of course.
For example, we have a set of cronjobs that automatically detect failed payments, match them to a customer, check their circumstances and decide whether we want to send a reminder. This is not a happy message to receive in the morning, but it is something that could potentially prevent you from going even further into debt. It is also not a message you would want to receive by mistake.
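To make this concrete, here's a minimal sketch of what such a reminder job could look like. All the names and types here (FailedPayment, Customer, shouldRemind) are invented for illustration - they are not the actual implementation:

```typescript
// Hypothetical sketch of a daily payment-reminder job.
// Every name here is illustrative, not real production code.

interface FailedPayment {
  accountId: string;
  amount: number;
}

interface Customer {
  accountId: string;
  email: string;
  onPaymentPlan: boolean;
  remindersSent: number;
}

// Check the customer's circumstances before deciding to remind them.
function shouldRemind(customer: Customer): boolean {
  // Skip customers already on a payment plan, and cap reminder volume.
  return !customer.onPaymentPlan && customer.remindersSent < 3;
}

// Match failed payments to customers and collect who should be emailed.
function buildReminders(
  payments: FailedPayment[],
  customers: Map<string, Customer>
): string[] {
  const toEmail: string[] = [];
  for (const payment of payments) {
    const customer = customers.get(payment.accountId);
    if (customer && shouldRemind(customer)) {
      toEmail.push(customer.email);
    }
  }
  return toEmail;
}
```

The real logic is of course far more involved, but the shape is the same: detect, match, check circumstances, decide.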
"But hey," you'll say. "Just copy the whole database into a staging one, and test as much as you want."
This was sort of OK until very recently, when GDPR was introduced. Even before, copying sensitive customer data left and right was worrisome - but now it is completely illegal.
You could, however, construct a version of staging data derived from production data - names and addresses obfuscated, with non-sensitive data replicating the current state of production. This is useful for some use cases, but not very useful to us, as we intend to send emails and other forms of comms. Besides, there's also a replication delay between prod and staging data - about a week if you're lucky, and over a month if you're not.
What is the worst possible outcome?
This might be the most important question you have to ask yourself when releasing software. If everything goes horribly wrong, what sort of harm can it do? Is it something we can contain? What are the rollback procedures?
Depending on your answers, you can decide what sort of testing you need. Facebook's famous mantra used to be "move fast and break things" (as long as you can quickly fix them). The worst possible outcome for an early Facebook was that someone's status update would go missing. On the other side of the spectrum are banks, transportation and space tech. The potential for harm is huge, and rolling back is tricky or impossible.
Where are you on this spectrum? For me, it is somewhere in the middle.
A traditional approach
At Bulb, we're pretty good at code quality. We do peer reviews. We have automated CI builds. Unit tests are a must, and a code coverage tool will post its report to a pull request. We have a staging environment, everything runs in Docker containers, and what you test is exactly what gets shipped to production. This gets us very far on the scale of confidence when releasing to prod.
But not to 100%.
There are a few things that can go wrong which are very difficult to predict while still in the staging environment.
So what is the fine line between staging and prod?
- Production data consistency. Although the schema is well defined, the actual data can be unpredictable, and there is no way of knowing all the edge cases beforehand. Things have improved a lot, though, since we moved to TypeScript and translated data typings into the code.
- Performance issues. Our staging environment is a replica of a production one, but it is not exactly 1:1. Certain operations that are fast in staging can be very slow in prod. Timing also matters, and while things can be fast at some hours of the day, they can become slow at other hours.
- Integration with 3rd party services. Almost every service nowadays has a development environment where you can test things, but again - in a real prod situation things can behave differently, and exceptions can be thrown when you least expect them.
- Integration with internal services. This is something that can be tested with a good integration test in a staging environment. However, integration tests are bad at discovering edge cases, and again - ideally you want to run against production data to get as close to the real thing as possible.
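Static TypeScript typings only help at compile time, though - a common hedge against messy production data is to also validate rows at runtime. A minimal sketch, with an invented MeterReading shape purely for illustration:

```typescript
// Illustrative runtime validation of production rows. The MeterReading
// shape and field names are assumptions, not a real schema.

interface MeterReading {
  accountId: string;
  reading: number;
  takenAt: string; // ISO date string
}

// User-defined type guard: checks an untyped row at runtime.
function isMeterReading(row: unknown): row is MeterReading {
  if (typeof row !== "object" || row === null) return false;
  const r = row as Record<string, unknown>;
  return (
    typeof r.accountId === "string" &&
    typeof r.reading === "number" &&
    Number.isFinite(r.reading) &&
    typeof r.takenAt === "string" &&
    !Number.isNaN(Date.parse(r.takenAt))
  );
}

// Partition raw rows into valid readings and rejects to investigate.
function validateRows(rows: unknown[]): {
  valid: MeterReading[];
  rejected: unknown[];
} {
  const valid: MeterReading[] = [];
  const rejected: unknown[] = [];
  for (const row of rows) {
    if (isMeterReading(row)) valid.push(row);
    else rejected.push(row);
  }
  return { valid, rejected };
}
```

The rejected pile is gold: each entry is an edge case the schema promised couldn't exist.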
None of this might be a problem for you. The question you should be constantly asking is - what is good enough? How much confidence is enough confidence? And what is the plan when things go wrong?
(the plan is: roll back the deployment and the data, of course, but that's a whole other topic)
At some point in my job, I was tasked with implementing an automated system for reminding our customers about missed payments. The initial version was fairly simple; it was tested internally on staging and then on production with a group of brave beta testers. All was good, and it was successfully released.
Sometime later an improvement had to be released. By now this job was running against millions of customer records, daily. How do you test for regressions? How do you make sure existing customers are not affected?
To make things more interesting, there are multiple stages a given customer progresses through. At any moment you have thousands of cases at different stages of the process, and you want customers to never notice any transition to a newer version of the process.
My first idea was to look at the production data we had gathered so far, extrapolate use cases and write a lot of unit tests. This could work. But I would also have to maintain that huge set of tests and constantly be on the lookout for new cases - which are extremely tricky to identify.
My second idea was to release the change to production and see if anything breaks. But in a safe way.
We are talking classic reliability engineering here, but before things hit the fan. Imagine a way to throw features into production where, if anything breaks, only you would know about it.
A feature is released in shadow mode. It is there, in production, running side by side with the real thing.
Step 1: Your shadow instance receives live data. It processes the data and logs everything. In the end, it logs the result and doesn't actually mutate the state of the system.
Step 2: Your master instance receives the same live data. It does the same logging of all the decisions and actions.
Outcome: now you have two sets of logs - shadow and master. They were fed the same input data. Did they behave as expected and produce the same result? We can find out by simply diffing the logs.
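The diffing step above can be sketched in a few lines. This is an illustrative toy, not our actual tooling:

```typescript
// Toy line-by-line diff of master vs shadow logs. In practice you'd
// reach for a real diff tool, but the idea is just this simple.

interface LogDiff {
  line: number;
  master: string;
  shadow: string;
}

function diffLogs(master: string[], shadow: string[]): LogDiff[] {
  const diffs: LogDiff[] = [];
  const length = Math.max(master.length, shadow.length);
  for (let i = 0; i < length; i++) {
    const m = master[i] ?? "<missing>";
    const s = shadow[i] ?? "<missing>";
    // Any mismatch is a potential regression in the candidate.
    if (m !== s) diffs.push({ line: i + 1, master: m, shadow: s });
  }
  return diffs;
}
```

An empty result means the candidate behaved exactly like the master against the same production inputs.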
Shadow testing is not a new thing - big companies with business-critical software use it all the time. Tesla uses it to test autopilot features in its cars around the world. They push candidate updates to a selected group of cars without users ever knowing about it.
A candidate runs in shadow mode alongside the master, receives inputs from the cameras and sensors in the real world and makes autopilot decisions, which are logged and sent back to Tesla's servers for post-evaluation. The only alternative would be learning from actual car crashes. But that's, you know, hard to roll back.
Remember the part where a candidate instance in shadow mode is not supposed to modify the state of the system? This is the real key to shadow testing.
If you zoom out a little from your app, you can see a larger picture where data comes into the app, and actions happen as a result. Some of the data could be persisted in a database. Emails could be dispatched. Phone calls could be made. A webpage could be rendered. What we want is for our app to be able to pretend to do all these things without actually doing any of them.
Once you have identified all the ways your app can modify the state of the larger system, you'll want to implement a special flag. When this flag is enabled, the app runs in a harmless mode where it receives production data, processes it and logs what it would do (instead of doing those things). In our app, we called it toothless mode. When enabled, the app becomes completely harmless and simply logs its intentions to change the state of the system.
You might want to rethink your approach to logging. Logs are the key to shadow testing, and they should represent an accurate sequence of actions, as well as intentions to modify state - aka the results of the run. Every intended state change should be logged, as this is how you will detect regressions in the behaviour of your app.
Once you have toothless mode and logs in place, you are ready for shadow testing. In our case, we run our apps as cronjobs once a day, so a single run has a clear beginning and end. If your app is a service which is always up, you can run a shadow instance side by side. Depending on how actions are triggered in your app, you might have to think about routing the trigger to both instances simultaneously. For example, if your service is triggered by an HTTP request, you want that request to land on both the shadow and master instances, with the same inputs.
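Fanning a trigger out to both instances can be as simple as wrapping two handlers - with the caveat that a failing shadow must never be able to break the master path. A hypothetical sketch:

```typescript
// Illustrative fan-out of a single trigger to master and shadow.

type Handler = (payload: string) => void;

function fanOut(master: Handler, shadow: Handler): Handler {
  return (payload: string) => {
    // The master handles the request as usual.
    master(payload);
    // The shadow gets the exact same input, but its failures are
    // swallowed: a broken candidate must not affect production.
    try {
      shadow(payload);
    } catch {
      // report to monitoring instead of propagating
    }
  };
}
```

With real HTTP traffic you'd more likely do this at the proxy or service-mesh level (Istio calls it traffic mirroring), but the principle is the same.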
It is entirely possible that your app modifies the system's state in a way that affects the shadow instance. For example, during the run your app requests data from a remote source, but that same data is also modified as one of the outcomes. This is not an ideal situation, and an immutable approach is always preferred - in the ideal world, your app is like a pure function: it takes inputs, produces outputs and never mutates the input values. If that's not the case, you probably want to run the shadow instance first, collect the logs, and only then run the master instance.
Here's a side-by-side comparison of the master and candidate logs running in production against the same input data. If they match 100%, there are no regressions or changes in behaviour.
Collecting logs should be a simple case of tailing everything into a file. Some systems will also offer auto-saving logs into a cloud storage bucket, which can be handy if you have huge logs and a single run takes a significant amount of time. Once you have your log files, you'll want to make a visual diff. VSCode is pretty good at this.
The sequence of events is important in shadow testing. A regression can express itself not only through bad output but also through a different sequence of actions performed. If you're using concurrency in your code (think promises), you will find that the sequence of data processing is not guaranteed. Your diff will reflect this with chunks of logs shuffled around. Simply put, if your code uses concurrency, directly diffing the logs is not a good option.
Instead, you'd want sorted logs.
This is also the first step towards structured logs. You'll have to think a bit about how you want your logs to be sorted. Most importantly, you'll have to get rid of timestamps in your logs, as they generally confuse sorting.
We use account ids. They are unique, always consistent and related to the account being processed. Once sorted, you can easily spot regressions, even though the sequence of events is now shuffled. Once you have detected a regression, you can switch back to the original log and restore the real sequence of events.
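A sketch of that normalisation step - stripping timestamps and sorting by account id. The log line format here is an assumption made for illustration:

```typescript
// Illustrative log normalisation for diffing concurrent runs.
// Assumes lines shaped like:
//   "2020-01-02T10:00:00Z account=A1 intent=charge"

function normaliseLogs(lines: string[]): string[] {
  return (
    lines
      // Drop a leading ISO timestamp, if present, so it can't
      // produce spurious diffs or affect sort order.
      .map((line) => line.replace(/^\d{4}-\d{2}-\d{2}T[\d:.]+Z?\s*/, ""))
      // Sort by the embedded account id so shadow and master logs
      // line up deterministically despite concurrency.
      .sort((a, b) => {
        const idA = a.match(/account=(\S+)/)?.[1] ?? "";
        const idB = b.match(/account=(\S+)/)?.[1] ?? "";
        return idA.localeCompare(idB) || a.localeCompare(b);
      })
  );
}
```

Run both log files through this before diffing, and concurrency-induced shuffling disappears.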
VSCode is a pretty good tool for starters. It has a built-in diff tool which will let you easily compare two files. You can also sort all the logs in case there was a concurrency issue and simple diff is not working out for you.
Things are easy when you have two separate files from the shadow and master runs. At the moment we have an arrangement where logs are automatically uploaded into a Google Storage bucket. Basically, all logs are dumped into the same file, and a new file is started every hour. A single run can span multiple files, and the same file can contain traces from both shadow and master runs.
Originally this wasn't a problem, as you could retrieve logs from a k8s deployment after it had finished running. However, there is a limit on how many logs can be stored, and once you hit it you will only get the last ~10k lines. Google's recommendation is to set up log streaming into a storage bucket, which is precisely what we did.
Now, it is possible to manually sort out logs from the JSON log files. It is also not a particularly exciting task, especially if you have to do it over and over again. So we implemented shadow test tools - a collection of scripts that can parse a group of files, pick out the logs for both shadow and master runs, separate them and structure them for easier comparison.
Another thing they can do is automatically handle expected differences. If you expect a difference in behaviour between shadow and master runs, you can describe it to the diff tool, and it will exclude it from the final diff.
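The expected-differences filter can be sketched as a list of patterns applied to the diff output. Again, this is an illustrative toy, not our actual tool:

```typescript
// Illustrative filter for expected differences: any mismatch matching
// a known pattern is dropped before the final report.

interface Mismatch {
  master: string;
  shadow: string;
}

function filterExpected(diffs: Mismatch[], expected: RegExp[]): Mismatch[] {
  return diffs.filter(
    (d) => !expected.some((re) => re.test(d.master) || re.test(d.shadow))
  );
}
```

If the candidate deliberately changes, say, a version string in its logs, you register a pattern for it and only genuine regressions survive the filter.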
The tools are currently quite specific to our needs, but we might open source them at some point, if they turn out to be useful beyond the boundaries of our team.
In my mind, the ideal scenario would be having a shadow test run as part of the CI build. This could be one of those optional steps in the new CircleCI flows where you can let it run a shadow test when needed.
Running shadow tests is time-consuming. In our case, a single run can take up to an hour, after which you need a human to fetch the resulting logs, generate a diff and look for regressions. This can be automated further - you initiate the test, it runs in the background, generates a diff and pings you personally on Slack if regressions are detected. Pats you on the shoulder if not.
Also watch Tesla's presentation from their AI day - specifically on how they use shadow mode to deploy increments to their autonomous driving model: https://youtu.be/Ucp0TTmvqOE?t=7892
Traffic shadowing with Open Diffy https://github.com/opendiffy/diffy
Traffic mirroring, also known as traffic shadowing, provides a powerful way to bring changes to production at the lowest possible risk. The mirror sends a copy of real-time traffic to the mirroring service. Mirrored traffic goes outside of the critical request path of the main services.
Alibaba and traffic comparison https://www.alibabacloud.com/blog/traffic-management-with-istio-3-traffic-comparison-analysis-based-on-istio_594545
Testing in production - the safe way https://medium.com/@copyconstruct/testing-in-production-the-safe-way-18ca102d0ef1
Current state of things
I wonder if my blog publishing pipeline still works, as the last blog post was about a couple of years ago. Time flies.
Since then I've started a micro blog: http://alexsavin.micro.blog/. This is effectively my replacement for Twitter, as it still maintains an open Web ideology and is a nice place to be. Do subscribe, and let me know if you also have a micro blog - I'd be happy to follow you.
Ok now it is time to press publish and see if this ends up on my blog.
Reactive Conf 2017 takeaways - Datomic, Logux and CSS with Elm
Some of my highlights and notes from the recent conference in Bratislava. 3 days, 2 tracks, lots of cake and some Angular.
@swannodette on Datomic
David Nolen of Cognitect and Clojure fame gave a good overview of the issues we experience today when implementing mid to large scale apps, before explaining how the Datomic database might solve some of them. Some of my thoughts here are inspired by his talk.
If you have a choice between investing in tests and investing in a simpler system, the latter is always better. Fewer moving parts mean fewer things to break and less time spent debugging issues. You still have to invest time into designing simpler systems though. Making large and complicated systems is easy and straightforward in the short term - this is what we're all doing, after all.
PLOP, aka place-oriented programming - a term Rich Hickey used in his classic talk to describe our traditional approach to mutable programming. It was originally required by the limitations of old computers with little memory available. Things have changed quite a bit since then.
Conventional databases in a system are effectively huge mutable variables, with all the applicable consequences. Debugging large systems is hard because it is impossible to replicate the exact state they were in when the issue occurred. We figured this out quite some time ago for React - if the state of the app is immutable, things become much easier to control. For the most part, however, we haven't gone beyond the apps themselves - our systems are still full of mutable parts, making a developer's life hard and miserable. Datomic is effectively an immutable database, like a huge Redux store.
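A toy illustration of that idea - this is not Datomic's actual API, just the shape of it: instead of overwriting a value in place, you append timestamped facts to a log, and any state, current or historical, is derived by replaying that log:

```javascript
// Toy append-only fact log, Datomic-style (not the real Datomic API).
// Each fact records entity, attribute, value and a transaction number.
const log = [];
let tx = 0;

function assertFact(entity, attribute, value) {
  log.push({ entity, attribute, value, tx: ++tx });
}

// Derive the state "as of" a given transaction by replaying the log.
function asOf(txLimit) {
  const state = {};
  for (const fact of log) {
    if (fact.tx > txLimit) break;
    state[fact.entity] = { ...state[fact.entity], [fact.attribute]: fact.value };
  }
  return state;
}

assertFact("user-1", "name", "Ada");
assertFact("user-1", "city", "London");
assertFact("user-1", "city", "Bratislava");

console.log(asOf(2)["user-1"].city); // → "London" (the past is still queryable)
console.log(asOf(3)["user-1"].city); // → "Bratislava" (current state)
```

Nothing is ever destroyed - the "update" at tx 3 is just another fact, which is what makes debugging a historical state possible.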
I've tried setting up Datomic in the past, though, and it was not an easy task. It is also not free, although they do have a free tier. It was also only useful for Clojure apps. Things are getting better, however - a Datomic JS client is coming, and you'll soon be able to start a Datomic instance from the AWS Marketplace in a couple of clicks. Even today people manage to run it for their Node.js apps, but that does require some setup.
- A couple of true classics - Rich Hickey's Value of Values and Hammock Driven Development
- The Out of the Tar Pit paper
- How to Solve It, a book by mathematician George Pólya describing methods of problem-solving
Focal from Grammarly
A neat FRP UI library from Grammarly, used by them in their web and desktop apps. Written in TypeScript.
Jest as a platform by Rogelio Guzman
Apparently, Jest is becoming a full-fledged platform. The good news is that this gives us some neat customisation options, as well as the ability to peek inside Jest itself.
As of today, most of our unit test setups are quite a mess of libraries and conventions. Sometimes it is Jest. Sometimes it is Mocha + Sinon. Add a bunch of extra tools for better mocking, reporting results and linting the code. What Jest is trying to do is let you pick parts of itself and swap them for other tools you prefer. Will we ever end up with a perfect testing setup that works with all apps? Probably not. But you could have conditions in that setup.
A basic lifecycle of a Jest run consists of
- Finding all the relevant files
- Running tests
- Reporting results
Jest contains quite a few tools for achieving all of that. What you can do today is pick the tools you need, leave out the ones you don't, and replace some default parts with alternatives. You can swap the test runner to Mocha, or even write your own.
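In config terms, the swapping happens through options like `testMatch`, `testRunner` and `reporters` in Jest's configuration. The module paths below are made up for illustration - each would point at something you wrote or installed:

```javascript
// jest.config.js — a sketch of swapping out parts of Jest's default pipeline.
module.exports = {
  // step 1: how test files are found
  testMatch: ['**/__tests__/**/*.test.js'],
  // step 2: replace the default test runner with your own (hypothetical path)
  testRunner: './my-custom-runner.js',
  // step 3: add or replace result reporters (hypothetical Slack reporter)
  reporters: ['default', './my-slack-reporter.js'],
};
```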
Jest used to be a highly opinionated test framework that made choices for you, but they are opening the hood now for everyone to tinker with.
Sean Larkin on Webpack and OSS community
I missed the two-hour workshop on Webpack but managed to attend an AMA with Sean Larkin. The good stuff:
- There is a SurviveJS Webpack book, free to use.
- A fascinating insight into how the community influences feature-picking decisions and the overall development of the project. Also, how to deal with frustrated members of the community - apparently, you have to show them love <3. In the words of Sean, "The best weapons against trolls is show that you care". The general idea is transparency of the backlog. Another important influencing factor is potential sponsors of the OSS project. A company might decide to sponsor your OSS so they could get preferential treatment, which is totally fair.
- OSS is done best when done full time, but that's not really news - and that's why sponsors are important.
Richard Feldman on CSS as byte code
This talk might've also been titled "Why you should use Elm for CSS", but we'll get there in a moment.
"CSS was never meant to be used for creating user interfaces"
Not really. Back in the 1990s, HTML was conceived as a content-only language, but users wanted to make the content look pretty. So around 1996 CSS started to happen, so that users could write it in the browser and format the content the way they wanted it to look. Yep, CSS was meant to be written by users. This explains my long-standing question of why there is default CSS in each browser - because that is where all of the CSS was supposed to live by design.
Things are rarely used the way they were originally designed. Well, most of the time, but not always. Today the Web is the biggest user interface delivery platform.
We're quite used to JS transpilers as of 2017. Yes, we have a bunch of browsers with different implementations of new ECMAScript features, but that is why we have handy tools that make the shiny modern code we write compatible with every outdated browser out there. Most of this is done automatically today. Compiled JS is effectively today's bytecode.
You could draw some parallels in the CSS world - there are PostCSS, LESS and SASS, and CSS3 is sort of happening. But vertical positioning of content is still a tricky thing (unless you're all flexbox, that is). There's room for more effective tools offering more powerful features and transpiling it all into browser-compatible CSS. Hence Elm-CSS. It doesn't just build on top of existing CSS conventions, maintaining the old mindset of "oh well, it's still CSS" - it tries to invent a new UI building language, evolved from CSS but not inheriting any of its bad parts. You also get the full benefits of this being written in Elm - all the CSS-in-JS ideas work here too, but the code looks much prettier.
Igor Minar and Evan You on client side app optimisations
Make the client-side app as light and fast as you can, they say. The art of optimising frontend bundles is slowly turning into a dark art - and both Igor and Evan did a great job of demystifying it. Most of the job is done by Webpack anyway.
Today there are handy tools that let you peek into the compiled Webpack bundle and see what's really going on in there. Source-map-explorer got a special recommendation; I've tried it, and it totally deserves its reputation. You need a certain understanding of your code's needs to use it effectively - but even deep diving into an unknown codebase might give you some insights. The trick is to look at the way modules are imported and question the need to import the whole module rather than only the necessary parts.
There's also a flip side - when writing a module, make sure it is, well, modular. Allow clients to pick what they need instead of dumping everything into a single bundle.
Webpack is capable of concatenating modules, but not by default. For that, you have to enable the ModuleConcatenationPlugin.
Code splitting is still a good idea. My general argument against it used to be the cost of extra HTTP requests on slow mobile networks - but this is where HTTP/2 comes in handy, and you can totally use it today. Code splitting is very effective for single page apps where your landing page does not necessarily need the rest of the app straight away - yet we still almost always ship everything at once, letting our users wait for the whole thing to load before anything meaningful can happen.
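The core mechanic of code splitting - load a chunk only when first needed, then reuse it - can be sketched without Webpack at all. Here `loader` stands in for a dynamic `import('./charts.js')`, which is what Webpack would turn into a separate chunk:

```javascript
// Sketch of on-demand loading: the heavy module is only fetched when the
// user first needs it, and the resulting promise is cached for reuse.
function lazy(loader) {
  let cached = null;
  return function load() {
    if (!cached) cached = loader(); // first call triggers the fetch
    return cached;                  // later calls reuse the same promise
  };
}

// Example with a stubbed loader that counts how many times it actually runs.
let fetches = 0;
const loadCharts = lazy(async () => {
  fetches += 1;
  return { render: () => "chart rendered" };
});

// Two calls, one actual fetch, same module object both times.
Promise.all([loadCharts(), loadCharts()]).then(([a, b]) => {
  console.log(fetches, a === b); // → 1 true
});
```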
Honorable mention - Webpack Dashboard from Formidable. "Now when you run your dev server, you basically work at NASA".
Prop based testing by Gave Scholz
When testing things we need test data. More often than not, the data is hardcoded. Sometimes (if we feel generous enough) we might write a generator. The idea of randomised property testing is to use generators that understand the types of objects and can generate unique objects within the given type every time.
Why would this possibly be a good idea? It effectively introduces entropy into your otherwise perfectly isolated test environment - and that's exactly the point. To design a fail-proof system you have to define its boundaries, and let the robots test those boundaries properly. An average unit test exercises a very narrow scenario with a single given value out of sometimes millions of possible values. Sometimes it is possible to test all of them on every build. Sometimes not. Randomisation is a middle ground.
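A hand-rolled sketch of the technique - real libraries like fast-check or JSVerify do this properly, with input shrinking on failure. The idea is just: generate many random inputs of a given type and assert a property that must hold for all of them:

```javascript
// Minimal property-based testing sketch (no shrinking, unlike real libraries).
function randomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Run a property against many generated samples; report the first failing input.
function checkProperty(generate, property, runs = 200) {
  for (let i = 0; i < runs; i++) {
    const input = generate();
    if (!property(input)) return { failed: true, input };
  }
  return { failed: false };
}

// Property: reversing an array twice gives back the original array.
const genArray = () =>
  Array.from({ length: randomInt(0, 20) }, () => randomInt(-1000, 1000));
const doubleReverse = (xs) =>
  JSON.stringify([...xs].reverse().reverse()) === JSON.stringify(xs);

console.log(checkProperty(genArray, doubleReverse).failed); // → false
```

Instead of one hardcoded array, every build probes 200 fresh points inside the boundary you defined.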
Andrey Sitnik on Logux in production
We're notoriously bad at handling errors when it comes to the client side. Well, maybe the server side too. But mostly the client side. The forgiving nature of an app in the browser is that once a user is frustrated, they can always hit the reload button.
This was not always the case - a native app has the option of hanging helplessly and hoping the user knows how to kill a stuck app. Or throwing the whole OS into a BSOD state.
But I digress.
Single page apps easily get into a stuck state because they rely on a good network connection. Which is rarely the case. Andrey gave an example of the internet in China, where packet loss can reach 40%. It's just the way it is; there is no way around it. Network connectivity is actually bad in most parts of the world - Africa, New Zealand and Bethnal Green are among these places. You have to face this fact and address it as a web developer.
Logux is a solution to synchronise app state with a server in a reliable, fault-tolerant way. You could sort of compare it to Apollo and Relay Modern in a way - but it is much more minimalistic, relies on WebSockets and doesn't force you to use GraphQL or wrap your React components into containers. Its API is Redux compatible. It allows you to define app actions and have central app state, parts of which are automatically synced to a server side.
The secret sauce is The Log (hence, LOGux). The log is timestamped and allows you to figure out the ultimate state of the app at any given time. Logux also implements its own timestamping, since you can't really rely on client clocks, or, in some cases, even server clocks. All it is, in the end, is a log of actions coming from either client or server. Once the actions are received and reduced into the app state, all of that is rendered and presented to the user.
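A toy version of that idea - this is the shape of the approach, not Logux's real API: actions from client and server land in one log, get a total order from their logical timestamps, and the app state is just a reduction over the ordered log:

```javascript
// Toy action log à la Logux (illustrative, not the real Logux API).
// [time, node] pairs give a total order even when wall clocks disagree.
function sortByTime(log) {
  return [...log].sort(
    (a, b) => a.meta.time - b.meta.time || a.meta.node.localeCompare(b.meta.node)
  );
}

// A Redux-compatible reducer turns ordered actions into app state.
function reducer(state, action) {
  switch (action.type) {
    case 'rename':
      return { ...state, name: action.name };
    default:
      return state;
  }
}

function currentState(log) {
  return sortByTime(log).reduce(reducer, {});
}

// Actions arrive out of order over a flaky network; the log sorts it out.
const log = [
  { type: 'rename', name: 'final', meta: { time: 2, node: 'server' } },
  { type: 'rename', name: 'draft', meta: { time: 1, node: 'client' } },
];
console.log(currentState(log).name); // → "final"
```

Because the state is derived from the log rather than mutated in place, a late-arriving action just slots into its ordered position and the state is recomputed consistently.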
Logux has grown to version 0.2 since its announcement at the React London conference in March 2017 and is used in production today by Evil Martians.
- I did a vlog on the conference - check it out