"All the News

That's Fit to Print"

Probably Danny

TYPESCRIPT EDITION

Weather: Rain, mild temperatures today, tonight. Showers tomorrow. Temp. range: today 42-35. Thurs. 36-32. Full U.S. report on Page 20.

ARCHIVE

This is a collection of my small posts on the internet, generally in reverse chronological order.

<Fixing Pyenv: a short journal>

This is a record of a developer making a small contribution to a big open-source project. It is not a tutorial, nor is it advice. It is a description of what happened and what I thought.

I was setting up my fresh MacBook for a freelance task involving Python. I had been using pyenv to manage Python versions on Linux, so I wasn’t expecting anything to go wrong. I confidently entered “pyenv install 3” in the terminal, only to see it fail.

Confused, I started searching the internet for a fix. There was a variety of suggestions, including setting an environment variable, uninstalling & reinstalling a specific Homebrew package, and patching the pyenv bash script. I didn’t even know pyenv was written in bash until then.

I kept digging because most of the solutions were temporary workarounds that would break again in the future. After more than an hour, I came across a GitHub issue about pyenv-MacPorts incompatibility. Voilà! That had to be it, because I had MacPorts instead of Homebrew on my Mac.

To my surprise, the issue was marked as resolved. There was a merged pull request adding support for MacPorts. How could I still be hitting the issue when it was already fixed?

Spending hours on dev environment setup was not fun, but I had been through much worse as a long-time Linux user. At least I was not at a dead end in this rabbit hole. I started reading the code changes from the pull request.

Although I knew the fundamentals of C/C++ compilers and linkers, it took a lot of Googling to understand exactly what the original bash script did and how the PR changed its behavior. I also tried running some commands from the script manually to see what they did.

I eventually made an interesting discovery. Running "port -q location readline" produced a filename on my machine, but the bash script was clearly expecting a directory name from that command. I tried changing the script to use the path I thought was correct. Pyenv worked after the patch!

So I went ahead and opened a pull request with this patch. I didn’t think it was a perfect solution, but I expected the project maintainer to understand what I was trying to solve and help me shape the PR into an acceptable contribution.

You can find the PR here: d3y.link/pyenv-macports-pr

And you can see me making mistakes: I broke test cases and failed to pay attention to some details. The maintainer @native-api was super patient throughout the 6 iterations it took until we could merge the PR.

At the end of the day, I am really happy that I could make pyenv support my particular setup. It is rare to find an issue that’s affecting many people && fixable by me && not fixed yet.

Random bash knowledge of the day

FOO="bar${FOO:+ $FOO}" # "bar"

FOO="${FOO:+$FOO }baz" # "bar baz"

This idiom is for when you want to append or prepend a word to a variable but don't want to add an extra space if the variable is empty.

JS equivalent would be something like this:

FOO = "bar" + (FOO ? ` ${FOO}` : "");

FOO = (FOO ? `${FOO} ` : "") + "baz";

There are more operators besides :+, which you can find here: d3y.link/bash-param-sub

Just something I came across while troubleshooting an open source script. The file was full of this syntax 😅

Just spotted that the React Miami talks are now on YouTube, and there’s a ton of great content to dive into 🍿

The first video to pop up on my feed was David Khourshid's "Goodbye, useState." The talk summarizes the current best practices in React state management.

Although I agree with all of David's suggestions, it is a little sad that the majority of the solutions came down to choosing a good third-party library.

There's no one solution to rule them all. We are in this world full of projects built by people who had to solve their particular pain point. And we stitch those together to build our websites.

On one hand, I'm glad that we haven't run out of things to solve. We are well into the era of innovation and heroism. On the other hand, it is so painful and frustrating!

It's been 30 years since the internet became popular. Here's to 30 more years of chaos and creativity.

d3y.link/react-miami-yt

Job interviews are a barren ground for feedback, so my mock interview with [REDACTED] this week was really valuable to me. He helped me look at interviews from the employer’s point of view.

We did a live-coding interview together, and honestly, I was pretty comfortable with this format: 1 hour to render a list of NYC subway stations and arrival times, API provided. After an hour of talking through my thought process and coding, I completed 3 out of 4 requirements.

I felt pretty good about how I did. [REDACTED] is a very friendly person, so I could be myself the whole time and write as much code as I wanted. I wouldn’t have known how to sell myself better if he hadn’t shared his thoughts.

The main point of his feedback was to be aware of how the interviewer fills out the rubric and picks up “signals” during the interview. The employer wants to justify their hiring decision with solid evidence, so it is our job to provide as many reasons as possible to support that decision.

And the simplest quantifiable criterion is “did the candidate complete the task.” It is a simple checkbox, and we want it checked. That’s why we need to prepare our dev environments beforehand and waste no time during the interview.

Another thing the interviewer looks for is the candidate’s level of experience. In my case, I’m trying to get a senior role, and I should have conveyed how experienced I am to the interviewer. [REDACTED] shared some common topics that can raise expectations.

Being aware of the production environment and all the things that can go wrong is a very straightforward sign of seniority. Think flaky APIs, unwanted latency, server crashes, traffic surges. Point out the scenarios that come to mind and briefly suggest how to mitigate each risk.

For product-focused roles, identifying a few UX improvement points can send a strong signal to the interviewer. Is there any friction in the user flow? Would the UI look nice on the user’s device? Touch on those topics.

Finally, live-coding is a great opportunity to showcase your soft skills. We can show the interviewer how we react to errors, collaborate through the troubleshooting process, handle cross-functional communications, and defend our decisions.

Although live-coding can never be very realistic, it is a tool to observe the candidates. We should be aware of how we are evaluated and act accordingly. I don’t think I’ll be able to hit all the marks in a real interview, but this gave me so many levers to pull during the interview. I feel more resourceful.

For everyone preparing for a technical interview, I hope this helps you nail your next interview!

[The name of the mock interviewer has been redacted as requested]

One day, a LinkedIn post about a book popped into my feed. On Writing Well by William Zinsser. Who talks about writing well in this era of AI frenzy?

Now that I have finished the book, I can say it didn’t magically turn me into a star author. It did, however, make me rethink what a good piece of writing is.

I used to hate writing in school. The assignments were never about an interesting topic and always needed to be in a specific format. So the writing was this painful act of squeezing words out of my already depleted brain. ChatGPT would have been a great help there.

It all changed when I started posting about my passion: software and the people involved—sorry if you expected me to say Zinsser changed my life. At first, I just wanted to share what I was excited about, and then I could see my posts slowly affecting other developers. It got me going.

Writing was fun but not easy. Remember, I never trained myself in school. Technical topics were particularly tricky. They got dry too easily. Simply listing out the lessons or source code could not entertain anyone.

The blogging platform I started on helped me disguise the dryness with elegant formatting. Code highlighting, diagrams, and italics were there for me to add clarity to my otherwise boring blog posts.

Then I began posting on LinkedIn, where there’s no rich formatting. I could not insert images in the middle of a post or syntax-highlight my code snippets. It didn’t even have italics! And the worst part is, it only showed the first couple of lines of each post unless the reader chose to read more.

It forced me to work on my style. I learned how to start a piece with a good hook. I learned how to structure my writing with a nice cadence. I learned how to pay attention to the rhythm of my sentences. Overall, I put a lot of effort into writing stuff I would love to read myself.

Despite how much I admire Zinsser’s style, I probably won’t reach his level in my lifetime, since my passion is software and his was writing. I won’t be able to pick my words as carefully or structure my sentences as flawlessly as he did.

And that is okay. His book showed me what good writing is about. “Writing well means believing in your writing and believing in yourself.” Not pretending to be someone else and showing the reader the idea that matters. That’s definitely what I can strive for in my writing.

Adding uptime monitoring to my self-hosted services.

This will notify me when my home lab is affected by a power outage or a server process crash.

https://d3y.link/status

Fellow web developers, it's time to take our resumes beyond the bounds of MS Word and Google Docs. How does authoring a resume in HTML and CSS sound to you?

Here's an HTML resume template you can use today: d3y.link/resume-template

I consider myself a fairly fluent web developer but not really proficient in word processors. I generally make better websites than Word documents. Learning how to right-align dates in Word and copy-pasting styles over and over again was—mildly put—agonizing.

These petty problems are all straightforward to solve with CSS.

So I did it. I wrote a resume in HTML and CSS. It looks nice in browsers, with a box shadow and dark mode support. There are a few `@media print` queries for easy PDF export.

Feel free to copy the code and tailor it to your needs. Instructions included in the template repo 😘

My personal URL shortener d3y.link is live! Big thanks to Yujin Kim for building it for me 😄

LinkedIn replaces any URL longer than 26 characters with a randomly generated lnkd.in link, so users have no idea what they are clicking into.

This made me want my own URL shortener so I could share more meaningful links with you. For example, you can check out the source code of the URL shortener at d3y.link/source

This was my first time asking someone to build personal software for me. I got a working product delivered last month and had a lot of fun building on top of it. Building the foundation always has the most friction, so diving right into working software felt good. I would do it again as long as I can afford to hire more devs.

Next up, I'm gonna set up a basic personal website and start writing more long-form articles there (evertpot.com is my inspo). Can't wait to share links to those articles here :)

Economies of scale are amazing. People a million times richer than me are still using the same smartphones and apps as I do.

Today’s hiring process in tech frustrates many people. And I have a suggestion for a fix. Hear me out, and let me know if you think this is viable or not.

Let’s start from the beginning of the process. Employers can accept free-form applications rather than formalized application forms and resumes. I mean free-form as in sales material: videos, posters, case studies, or testimonials. Candidates can submit any material that highlights their virtues the best.

The employer can then pick a handful of candidates and move on to the final interview right away. Test their skills, check their claims, and find a cultural fit.

That’s it—significantly shorter than our current practices. Let’s discuss some pros & cons.

The highlight of this approach is its efficiency. Because there are only two stages, the hiring timeline becomes orders of magnitude shorter than the industry standard. Candidates can invest in building good sales material once and reuse it for multiple applications. Compared to how job candidates waste hours filling out application forms for each role, this is a far less frustrating experience.

The lack of objectivity can be seen as a downside of this approach. The standardized application forms and interview stages do make it easier to quantify each candidate.

Yet I believe objectivity has a very poor return on investment, and companies are investing too much in building objective hiring systems. Employers should be able to find great candidates even with the short process. Instead of sorting for the best-scoring candidate, employers can discover great candidates who are perfectly capable of taking the role.

Standardized processes are not only costly, they also strip flexibility from each hire. Cost efficiency leads to better flexibility and lower stakes for mistakes. Like any process in a healthy org, hiring can benefit from a leaner approach.

Quick thought: maybe TypeScript could have achieved the performance benefits without the interface syntax if it had a utility type for expressing hierarchy.

Something like,

type X = Extends<TParent1, TParent2, { id: string }>;

type Y = Extends<X, { id: number }>;

👆 the compiler can see the type hierarchy and show a type error here.

Type X is a parent of Y, and {id: number} is not assignable to X. Thus, the type error.

Of course, this is missing the declaration merging feature, but I think it was a bad idea to support global interfaces and namespaces to begin with.

I would have much preferred to have just one syntax for defining types, and not have this perpetual types vs interfaces debate.

Found a link to a nice collection of Toronto tech groups posted on a wall at the ATProto meetup yesterday.

https://lnkd.in/gBKUqfuC

These ones pique my interest 👀

* Side Project Social: https://lu.ma/wmu4li6t

* Tech Pizza Mondays: https://lu.ma/8mnuq4hy

The first quarter of 2025 made me rethink how to make a bigger impact at work. As a software craftsman, my goal has been making a greater impact with my work: empowering my peers to reach their full potential and delivering more profit for my employer.

Before this year, I thought I could make a progressively greater impact by growing my skills and putting in the hard work. I learned how to write better software from the masters at work and in my communities. I believed that my code would yield better results as I grew.

And this year, I had an easy win. I made huge savings for my company with little effort: I cut our infrastructure cost in half across the board by auditing our usage patterns. This was the greatest monetary impact I had made in my career, yet what shocked me was that it didn't take years of experience to achieve. Danny with 1 year of experience could probably have done it. This experience changed how I look at developer impact.

In essence, a software developer is a problem solver. What we produce is solutions to the problems in the world. So the simplest way of making a big impact is to solve a big problem. How well we solve it definitely matters, but it is not directly related to the scale of the impact we make.

And we can't always pick which problem to solve at work. First, knowing how to identify a big problem is a skill on its own. Second, we are often hired to solve someone else's problem. Problems come to us, not the other way around.

It doesn't mean we should rely on pure luck tho. What we can do is be ready for the legendary challenge: growing into a person who can tackle big tasks and letting other people know what we are capable of.

Or you can launch a startup and solve your own big problems idk. You do you, I'll stick with coding for a little while.

We have the speaker lineup for our next event!

- Robert So: Agents

- Di Wu: Kubernetes

- Danny Kim: The simplest way of writing CSS in React

- Phil Welsh: Redesigning the Design System

- Sukhveen Anand: GraphQL and Federation

We are at full capacity, so in case you cannot attend the event, please cancel your RSVP to make room for the waitlisted people. It is difficult to find a venue for meetup events, and we would love to maximize the networking opportunities for the attendees.

Sometimes I think it's incredible how we don't have an ultimate notes app or an ultimate todo app. Those are like the simplest ideas ever, but they're still very hard to bring to perfection.

I'm trying to test how viable inline styling is with React. Inline styling is not very popular, but it doesn't look too bad so far. Let me share my progress.

There are so many ways to style a React component nowadays.

Tailwind, the hot & shiny

<div className="p-4" />

Styled components, de facto standard from the CRA era

const Container = styled.div`
  padding: 16px;
`;

Vanilla classes, an oldie but goodie

<div className="container" />

.container {
  padding: 16px;
}

And finally, our tribal taboo, inline styles.

<div style={{ padding: "16px" }} />

The web development community has a general consensus to avoid inline styling. As a result, it is very rare to find a project exclusively using inline styling.

Despite this general rejection of the idea, I have been wondering if it is viable to use CSS without tooling. Tools such as postprocessors or bundler plugins seem less necessary thanks to recent advancements in CSS standards.

The more I thought about this topic, the more sense it made to mix vanilla classes + inline styles.

1. This approach is really straightforward. No tooling, no magic. If you know how to write CSS, you can read the code written this way and understand how it works.

2. Corollary of the first point. If you are proficient in CSS, this approach lets you use 100% of your skills right away.

3. Theming & style reuse is trivial. You can either use CSS custom properties or JS constants in inline styles.

4. Most of the styles can be colocated. In my opinion, it is much quicker to understand code when styles are part of HTML/JSX. It is distracting to scan back & forth between styles & JSX when those are separate.
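As a minimal sketch of point 3 (the theme names and values here are made up, not from any real project), shared JS constants can back inline styles without any tooling:

```javascript
// Hypothetical theme constants reused across inline styles (no tooling needed)
const theme = { spacingMd: 16, radiusSm: 8, textMuted: "#666" };

// Style objects are derived from the theme and passed to style={...} in JSX.
// Changing a theme value updates every style object that references it.
const cardStyle = {
  padding: `${theme.spacingMd}px`,
  borderRadius: `${theme.radiusSm}px`,
};

const captionStyle = { color: theme.textMuted };
```

The same effect works with CSS custom properties instead, if you prefer to keep the values in a stylesheet.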

That's enough of the good stuff. Before I switch to inline styles full time, I would like to address my remaining concerns:

1. Does it have acceptable performance? From my quick test, nothing was immediately slow, but I will be more confident with a thorough benchmark.

2. Does it really improve developer experience? I've yet to write any complicated styles (media queries, pseudo-elements, specificity puzzles, etc.). We'll see.

I'll share when I make more progress on this.

#CSS #WebDevelopment #React #Frontend

--Eulogy for Danbi--

After difficult years in the States, I moved into my parents’ place in Korea. And there she was, a cute grumpy furball. A four-year-old Cavalier just as anti-social as I was. She was real picky about who got to be her friend, and I was lucky to be one.

Danbi’s favorite was my dad, but she spent long hours alone most days. Dad was often away from home for months at a time, dispatched by his employer all across the globe. Mom coached student debate teams until late. I, unemployed, could spend time with Danbi, but only until I moved to Toronto.

Two years after I left, she retired with my dad and moved to a cozy farm near a stream. There she could spend every day with her favorite person, doing her favorite job.

Fetching a bouncy rubber ball was her passion. She shined the brightest when she was flying across the field to get that ball in her mouth. A car crash once ruined her rear legs, but she survived to fetch balls again. Nothing could stand between her and her prey (ball).

But all things come to an end. Danbi peacefully passed away this week. I couldn’t share her last days on earth, but she was there during my darkest times. She always stayed strong and positively influenced her people even when she must have felt lonely herself. Danbi knew what it meant to welcome someone and made me feel at home every time. She has my eternal gratitude.

Here are all the pictures of Danbi I have. It would be great if you could take a moment to look at them so that more people remember her 🙏

https://lnkd.in/geuJDhmi

My LinkedIn feed got too spammy, so I went through it and tried blocking a few dozen profiles I didn't like and connecting with a few dozen I liked.

LinkedIn slapped me on my wrist for not being authentic 🥹

Well I'm back now and trying to think of another way of clearing this mess.

Is there a limit on how many people I can block on LinkedIn?

I'm working on something the LinkedIn community will be excited about.

3 weeks ago, I asked about how we can make database schema changes easier.

https://lnkd.in/gXjxZmyw

I'm still searching for the answer, but I came across an interesting approach at TorontoJS Slack today. Thought I'd share 😃

"Event sourcing" means saving data as a stream of events instead of saving the current state. With event sourcing, the current state is calculated from the past events. For example, a user account can go through a sign-up event, a name-change event, and an upgrade event. The current user state is the combination of all those past events.

https://lnkd.in/gNJ_Smdu

Because the current state solely depends on how we interpret the past events, event sourcing can be very flexible. A database without any need for migrations! It almost sounds like a cure-all in theory.
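To make the idea concrete, here's a minimal sketch of my own (the event shapes and field names are made up, not from any real system): the current state is just a fold over the past events.

```javascript
// Event sourcing in miniature: state is derived by replaying past events.
// Event shapes here are hypothetical, purely to illustrate a projection.
const events = [
  { type: "signed-up", email: "dan@example.com", name: "Dan" },
  { type: "name-changed", name: "Danny" },
  { type: "upgraded", plan: "pro" },
];

// A "projection" interprets the event stream into the current state.
function project(events) {
  return events.reduce((user, event) => {
    switch (event.type) {
      case "signed-up":
        return { email: event.email, name: event.name, plan: "free" };
      case "name-changed":
        return { ...user, name: event.name };
      case "upgraded":
        return { ...user, plan: event.plan };
      default:
        return user; // unknown events are ignored
    }
  }, null);
}

const user = project(events);
// user → { email: "dan@example.com", name: "Danny", plan: "pro" }
```

Changing the schema then means changing `project`, not migrating stored rows — which is exactly where the flexibility (and the backward-compatibility burden) comes from.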

In practice, though, event sourcing can be the ultimate backward-compatibility hell. We need to consider every version of the data types and relations we ever had in the system when we calculate the current state from past events. Failing to do so can produce a wrong state.

I was also worried about the speed of inferring the current state from past events. Thankfully, the calculation (or projection, as they call it) is well optimized. There are several mature database engines purpose-built for event sourcing, such as Datomic.

It is a deep rabbit hole I feel like I'll be digging for a while.

Me: I have a great idea 💡

My programmer brain: Let's start from building a SaaS platform for that 😉 👉

😬

Interesting to see there's only 1 post about #FreeOurFeed on LinkedIn.

Although I don't really agree with the manifesto, I thought it was at least newsworthy. Maybe all the people who care about social network feeds & algorithms have already left this platform 😅

I started using Mastodon recently, and it is surprising how clean the feed can get when you don't see ads or people trying to game the algorithm. So I can't blame anyone leaving LinkedIn after all...

What are some ways to make data schema changes easier? I can't be the only one who is afraid of changing a data schema. This is not a how-to-do-things post btw. This is a what-do-i-do post. Genuinely asking.

Changing schema (or ERD, data structures, types, or whatever you call it) is a necessary part of the product lifecycle. We build a piece of software, find a better way of doing things, and make changes. Schema is at the center of it.

Because the most interesting part of software is usually how we handle data, schema changes can affect a LOT of code. And editing a lot of code is not fun. It is scary, especially in big codebases.

So I thought, if there are some tactics we can use to make data schema changes bearable, they'll greatly improve developer efficiency. I know I spend quite a bit of time worrying about future db migrations when I could just be shipping features.

So... what do I do? or what do you do?

#help

Firebase pricing is insanity. $1/GB egress??

Came across this gem from 2010, and I found it fascinating to read how type systems and testing guide us toward 'correct' software in different ways.

https://lnkd.in/gkxDsg2a

"Yeah duh, type checking and testing make your code better. Who doesn't know that?" No, no, hear me out. The fascinating part is how they are different.

Testing sets the upper bound of correctness. A correct program never fails a unit test. The problem is, an incorrect program doesn't always result in a failed test case. A program can pass an entire test suite and still have a bug. Because most programs can take an infinite number of inputs, it is impossible to test all of them.
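To make that concrete, here's a contrived example of my own (not from the linked article): a buggy function can pass every test we happened to write.

```javascript
// A deliberately buggy max: it always returns the first argument.
function buggyMax(a, b) {
  return a >= b ? a : a; // bug: the second branch should be `b`
}

// This tiny "test suite" passes, because every case happens to have
// the correct answer in the first position.
console.assert(buggyMax(2, 1) === 2); // passes
console.assert(buggyMax(5, 5) === 5); // passes

// Yet the function is wrong: buggyMax(1, 2) returns 1, not 2.
// No finite test suite is guaranteed to catch this for all inputs.
```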

A type system, on the other hand, sets the lower bound of correctness. A type system can prove certain properties of a program. For example, it can prove a program never divides a number by zero, dereferences a null pointer, or leaks memory. It can prove all sorts of nasty things never happen in our program. However, it cannot determine whether the program will run as expected.

That means both have fundamental limitations. Neither the Rust compiler nor 100% test coverage can guarantee a flawless program. Each has its own strength, and you might want to choose one that fits your strategy better.

Toronto folks, are you down for a board game night this Friday?

I'm hosting one at Snakes & Lattes College.

RSVP: https://lnkd.in/gWRrq6-W

If you're wondering why I'm posting this here,

95% of my social circle's on LinkedIn. Don't judge 🥹

Services that charge based on usage, without a monthly plan:

- Database: CockroachDB by Cockroach Labs

- Email: ZeptoMail by Zoho

- Payment: Stripe

- Hosting/deployment: Fly.io

- Auth: Don't use a service, roll your own.

All of them scale down to $0 when you don't use them. I'm not saying monthly subscriptions are wrong. A company can charge whatever it wants to. Doesn't mean I have to like it.

For me, it doesn't matter how generous the free tier is. I will pick usage-based pricing over a monthly plan any day of the week.

#saas #IndieDev #NoB2BThanks

React 19 stable is available. It was a long wait 😄

I published an npm package "little-ioc" this week. It's been downloaded 200+ times, and I have no idea why!

As the name implies, it is a teeny-tiny inversion-of-control library. Let me share how I got to this point.

npm link: https://lnkd.in/gQfWWhXP

The beginning was Lazar Nikolov's YouTube video about introducing inversion of control (IOC for short) to a Next.js project. IOC is a great way of reducing code coupling and improving the testability of a codebase. In the past, I tried to introduce IOC to my team's project without luck, because I couldn't find a suitable library at the time. I really wanted to know how Lazar added IOC to his code, so I couldn't resist clicking on the video.

YT link: https://lnkd.in/gz6AbPDz

In his video, he shared how he tried an IOC library called Inversify, then switched to another one called ioctopus. Inversify didn't work on Next.js: it relies on a type-reflection language feature that fails in a fullstack environment like Next. ioctopus, on the other hand, has a much simpler design and works fine on Next.

That got me thinking: it is great that Lazar could use a much simpler tool and still satisfy all his use cases. Can we go even simpler? How much simpler can we make an IOC library?

So I thought about the requirements. Inversion of control is basically 2 parts:

1. Each piece of code exposes its dependencies as parameters.

2. A container fills in those dependencies for the code to run.

There is no reason to pull in advanced language features or arcane magic for this. You CAN build an intricate dependency resolution algorithm with some magic (e.g. find a class matching the required interface), but you don't have to. It's just icing.
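Those two parts can be sketched in a few lines. To be clear, this is my own toy illustration of the idea, not little-ioc's actual code or API:

```javascript
// Part 1: code exposes its dependencies as parameters.
function makeGreeter({ logger }) {
  return (name) => logger(`Hello, ${name}!`);
}

// Part 2: a container fills in those dependencies.
function createContainer() {
  const registry = new Map();
  return {
    register(key, factory) {
      registry.set(key, factory);
    },
    resolve(key) {
      const factory = registry.get(key);
      if (!factory) throw new Error(`Unknown dependency: ${key}`);
      return factory(this); // factories may resolve their own dependencies
    },
  };
}

const container = createContainer();
const logs = [];
container.register("logger", () => (msg) => logs.push(msg));
container.register("greeter", (c) =>
  makeGreeter({ logger: c.resolve("logger") })
);

container.resolve("greeter")("Danny");
// logs is now ["Hello, Danny!"]
```

No decorators, no reflection — just plain functions and a Map. Swapping the logger for a real one (or a test spy, as above) needs no change to `makeGreeter`.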

After this revelation, I went ahead and built little-ioc. The result is so small you can probably read the entire documentation in 2 minutes and the entire source code in 10 minutes, if you skip the type annotations.

I really like the simplicity of it, but I am curious how other devs would think about this little library. If you have tried inversion-of-control or dependency-injection before, please take a look and let me know what you think!

If you haven't tried IOC yet, I recommend you watch Lazar's video. It is super insightful 😃

#software #npm #javascript #GoSmallOrGoHome

Here's how to compress a folder with the tar package. Is there a way to do this without a dependency?

Had a blast at FITC Web Unleashed ✌

Bekah's talk on the open source crisis was particularly inspiring, because I admire how passionate developers have driven the growth of the industry. Lowering the barrier to collaboration is my mission, and learning about OSS maintainers' motivations and struggles was eye-opening.

The venue was spacious but casually intimate, putting everyone within arm's reach. It was so strange to stand next to the superstars of the web world.

Anyone attending VueConf Toronto this year? I live across the street from the venue & know many great restaurants nearby. Please reach out & we can hang out 😄

Also, they are trying to make the conference more inclusive by offering diversity tickets. The criteria seem wide enough (women, LGBTQ, disability, ethnicity, financial issues, etc.), so I recommend reaching out to the team if the ticket cost is the only thing standing between you and VueConf. Doesn't hurt to ask.

What do you consider when you are building an internal tool? This week, I am building a couple pages of admin UI to reduce handoffs and optimize operations.

My team noticed there are a few day-to-day operations that require developer intervention, and we decided to streamline them so that anyone on the team can perform the full task without needing to grab a dev.

The parts that are blocked by a dev are usually very trivial tasks, such as updating a single database field or adding some static data to the codebase. Easy enough for a developer, not comfortable for others.

I love building this type of tool, but I also wonder if it is worth it. It will take a good part of this week for me to build the admin UI. Will this admin UI save my team DAYS' worth of man-hours? I am not sure. We are a small team, which means the time saved has a tiny multiplier (single digit).

On the other hand, I am confident this will improve everyone's focus. Developers will get fewer requests for small changes throughout the day. Everyone else can perform tasks more quickly because they don't require dev intervention.

Also, the code I write today is something the team has to maintain over time. A custom UI provides a bespoke experience, but there are general tools like Retool or Airtable that might be sufficient for the use case. In my experience, these no-code tools are faster to set up but harder to maintain once they get more entangled with the product.

Deciding what's the best use of my time is always a difficult problem. There are so many tradeoffs.

Should you write your JS code in CommonJS (CJS) or ES Module (ESM)?

The dual standard for the JS module system has been going on for almost a decade, but there is no clear answer yet. Please look at the image attached. It shows ESM/CJS interoperability on Node.js.

Given this, application authors (as in, library users) are motivated to write their code in ESM, because ESM code can import both CJS & ESM.

Library authors, on the other hand, are motivated to write their code in CJS because both CJS & ESM code can import CJS.

Special cases:

* If your code is meant to be imported from a browser, there is no choice. You must use ESM.

* Most bundlers can handle a mix of CJS & ESM in source code. And the final bundle is often neither CJS nor ESM, because it is a single file.

* The TypeScript compiler can convert ESM syntax into CJS. This is why almost all TS code looks like ESM unless you have the "verbatimModuleSyntax" config enabled.

* Despite the compatibility issues, many library authors are shipping ESM-only libraries to encourage the JS community to move toward an ESM-only future.

If you are writing an internal library for your company monorepo, going with CJS feels like the path of least resistance. If you are publishing your library on NPM, you can go down the easy CJS path, or push for the ESM future by sacrificing compatibility.

Though it seems like Node will achieve full interoperability in a few years. Which means we will most likely live in the dual-standard world for another decade.
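If you do publish to NPM and want to serve both worlds, the common mechanism is a conditional "exports" map in package.json. A hedged sketch (the package name and file paths are made up for illustration):

```json
{
  "name": "my-lib",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    }
  }
}
```

Node picks the "import" entry when the consumer is ESM and "require" when it is CJS. Conditions are matched in order, which is why "types" conventionally comes first.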

#javascript #nodejs #pain

Here’s a task many developers have to perform more than once in their career: collecting a substantial amount of data, manipulating it, and sending it somewhere in a specific format. AKA data plumbing 🔧

There’s something weirdly satisfying about this task. If it is just a few megabytes of data, we can load the entire dataset into RAM, and the whole task takes a couple of seconds. However, data plumbing often hits memory & time constraints. We want to finish the task without running out of memory AND within a reasonable time span. It becomes a puzzle!

I had to perform such a task recently, and I would like to share some techniques / tools I used. My team was migrating our marketing emails from SendGrid to beehiiv. My job was to export each user’s data (email, display name, age group, profession, etc.) and import it into beehiiv. The total data size was in the 100GB range. In other words, it didn’t fit in my 16GB of RAM.

Treating the data as a stream is the essence of solving this type of problem. We build a pipeline that can handle a single item at a time, then we push individual items down the pipeline. This approach uses RAM only for the items currently inside the pipeline, not the entire dataset.
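The principle can be sketched without any libraries, using plain generators (this is an illustration of the idea, not the actual migration code):

```typescript
// Each stage consumes one item at a time and yields one item at a time,
// so memory stays bounded by the items in flight, not the dataset size.
function* parseRows(lines: Iterable<string>): Generator<string[]> {
  for (const line of lines) yield line.split(",");
}

function* toUsers(
  rows: Iterable<string[]>
): Generator<{ email: string; name: string }> {
  for (const [email, name] of rows) yield { email, name };
}

// Compose the stages into a pipeline and pull items through it.
const lines = ["a@example.com,Ann", "b@example.com,Bob"];
const users = [...toUsers(parseRows(lines))];
console.log(users);
```

Swap the in-memory array for a file stream and the same shape scales to data that doesn't fit in RAM.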

I wrote the majority of the final pipeline with RxJS. The library provides a set of building blocks for a data pipeline: think map, filter, split, and merge pieces for data streams.

In my use case, I also had to deal with CSV files that didn’t fit in RAM. The “csv” package (not the best name for SEO) on npm can turn a CSV file into a stream (or vice versa). It has great synergy with RxJS!

Finally, I needed to index my output by user email. I chose SQLite for this because storing everything as a JS object would require too much RAM, and using any other database would require a server.

Annnd I think that’s all I used for this job. You should probably add some error handling, logging, and performance optimization, but this is the gist of how I approach this.

Of course there are so many ways to tackle data plumbing, and JS might not even be the best tool for this. I hope you take this post as an interesting case study 😉

#rxjs #sqlite #csv #softwaredevelopment

Where do you store your project secrets?

Our company code lives inside a private repo, and most of our secrets are stored in that repo as plaintext. I know that sounds terrible, but it didn't matter for my team's use case. Each of our developers has full access to the entire IT infrastructure.

That's changing this week. We are onboarding an agency, and we do not plan to share all our API keys with them yet. There is a difference in the level of trust between the internal team & the external one.

So we are looking for a place to keep our secrets! 🔐

I'm seeing a lot of companies providing the SDKs/clients for their services as open source.

It's great because this allows consumers to patch bugs locally or, better, fix them upstream.

I had to touch the GrowthBook and Frigade repos this week. It was so much less friction than communicating with a sales rep. 🚀

Watch your step when you are wiring up an automatic deployment pipeline! (aka CI/CD)

I ran into a major footgun today. My team uses Turborepo, which can identify task dependencies and run them in the most efficient order.

So I configured the tasks like this:

* "deploy" needs to run after "test" and "build-production"

* "build-production" needs to run after "get-prod-environment-variables"

Very clear. Nothing could possibly go wrong.

Until I set it off to run on prod.

It "successfully" ran and kicked thousands of users out of our service!

The problem was I also had another set of configurations:

* "test" depends on "build-test"

* "build-test" depends on "get-test-environment-variables"

You can see that when you run all dependent tasks of "deploy", there is a race condition between the "get-prod-environment-variables" task and the "get-test-environment-variables" task.

This could result in the "deploy" task running with the test environment variables, and that's indeed what kicked out our users.
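For reference, the dependency graph above would look roughly like this in turbo.json (task names taken from the post; the team's actual config isn't shown, and the "pipeline" key is the Turborepo v1 spelling):

```json
{
  "pipeline": {
    "deploy": { "dependsOn": ["test", "build-production"] },
    "build-production": { "dependsOn": ["get-prod-environment-variables"] },
    "test": { "dependsOn": ["build-test"] },
    "build-test": { "dependsOn": ["get-test-environment-variables"] }
  }
}
```

With this shape, both get-*-environment-variables tasks run in the same graph. If they write to a shared location such as one .env file (an assumption, but consistent with the race described), whichever finishes last wins.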

My team was fortunate to have a quick rollback system available, so the incident experienced by users lasted only a few minutes. It took a while for the team to actually identify & fix the Turborepo config though.

I really appreciate the team members who immediately joined the all-hands-on-deck situation and helped me out of this issue. I recognize this is not good for the team's mental health & productivity, so I hope to keep improving the reliability of our system.

#cicd #devops #firefighting

Quick little TypeScript exercise

Yesterday on TorontoJS Slack, I saw Christopher 🇨🇦 Naismith and Marco Campos discussing how spread/destructuring syntax could be helpful for TS. Because such a feature doesn't exist yet, I thought it would be fun to write a utility type for spreading types.
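For context, a minimal version of such a utility might look like this (the name Spread and the details are my own sketch, not necessarily what we wrote that day):

```typescript
// Spread<A, B> models `{ ...a, ...b }` at the type level:
// keys of B override any overlapping keys of A.
type Spread<A, B> = Omit<A, keyof B> & B;

type Defaults = { retries: number; verbose: boolean };
type Overrides = { retries: string };

// `retries` is overridden to string, `verbose` survives from Defaults.
const merged: Spread<Defaults, Overrides> = { retries: "3", verbose: true };
console.log(merged);
```

This is the same shape many codebases hand-roll; a production version would also need to handle optional keys and unions more carefully.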

Consider using the word "event" when you are naming a variable/parameter.

<Anecdote>

Naming is difficult for programmers, even with free AI chatbots that can spit out gibberish on demand! Yesterday, I had to name a global state that tells UI components when a todo task is dragged.

I named this state "draggedTask". It held the type of the dragged task and its ID so that the components could read the details from the backend.

It was all fine until I had to name the task details fetched from the backend. The name "draggedTask" was already taken... 😅 🤦‍♂️

In general, adding the following affixes to a variable name doesn't really improve clarity: data, info, manager, details, my. (For god's sake, don't call your class MyDetailsDataInfoManager)

So I didn't want to name the details "draggedTaskDetails" because I felt like "draggedTask" described the variable better without being longer.

Hence, I wanted to improve the name of the global state. Calling it "draggedTask" was misleading because it only had the type & the ID. "draggedTaskId" or "draggedTaskType" didn't provide the full picture, and "draggedTaskIdAndType" was too verbose.

I was explaining this hardship to someone (who wasn't an AI) and suddenly realized that the purpose of this state is to let components know when a task has been dragged.

And we have a name for that in the programming world: event!

Now the global state is called "draggedTaskEvent", and I am one happy programmer.

Just wanted to share this because using a global state, context, etc for communication between components is a widely used pattern. Hope this helps you write more satisfying names in the future :)

</Anecdote>

Lately, I’ve met meeting organizers who were confused about why participants were passive, bored, and generally disengaged in their meetings.

So I asked the exceptional conversationalists around me: why do some conversations go wrong?

The answers I got fell into 3 categories:

* Failure to perceive others.

* Failure to perceive oneself.

* Bad topic.

Which makes sense because those are the 3 basic elements of a conversation: others, oneself, and topic. A good conversation satisfies all 3.

Failure to read others is probably the most common cause. During a conversation, people constantly express their feelings verbally & non-verbally. It is necessary to adapt according to their reactions. If they are confused, go simpler. If they are overwhelmed, pause. If they want to share, let them.

Some of us are not as sensitive as others. (Are you the rational type who can’t take the fuss? 😛) At one extreme, people on the autism spectrum naturally find this harder. It might sound irrational, but your favorite person to speak with is most likely a great listener who pays attention to you, not the person with the strongest logic.

So if you are the less sensitive kind, you would need to be intentional about observing others during a conversation.

Being aware of how we are acting is important as well. For example, we might not notice we are talking in a monotone when our mind is preoccupied with how others are doing. We would notice something is wrong by reading others, yet be unable to adjust accordingly.

Some people start talking faster & with fewer pauses when they sense others are losing interest. This almost always makes the situation worse. You know, when someone goes on and on for 10 minutes about a thing you don’t understand, anything that comes after (e.g. ”any questions?” or a quick joke 🤦‍♂️) doesn’t help. Knowing that your group is bored is only helpful when you can take an appropriate action to amend it.

Occasionally, it is the topic. It happens. The topic on the table is not something that interests your group. If that is the case, keep the meeting short or change the topic. However, make sure the other 2 elements are satisfied before concluding that it is a bad topic.

That said, not all types of communication require engagement. Making a strong impression matters more in a keynote, and delivering information matters more in a lecture.

When you DO want engagement tho, please remember it is a combination of others, oneself, and topic.

#communicationskills #meeting #meetup #engagement

Yesterday, Matt Pocock posted how to use "React.forwardRef" with generic components. It is arcane magic, and he didn’t go in depth about why it works.

Original post: https://lnkd.in/dh2FtYEe

So I spent some time digging! It seems like there are certain conditions where type params can survive an inference.

I couldn’t find any reference about this behavior, so let me know if you have any context on this magic. It's a powerful hack, and I wish I knew how it actually works. 🧙‍♂️

#typescript #react #whydoesthiswork

Discipline is not a replacement for a tool and vice versa. They are complementary.

You don't remove seat belts from a car even if the driver is a professional. Likewise, you cannot put a toddler in the driver's seat even if the car has cruise control. For the best result, you need a decent driver and some assistive devices.

Although the tools-vs-discipline argument is not limited to software, I often think about it when I see arguments around TypeScript. TS catches many potential errors, but not all. Experienced developers can find type errors by other means, but not as easily or as fast as an automated type checker. All in all, TS is useful but cannot fix a "bad programmer".

Now that LifeAt is expanding as a productivity tool, this topic lingers in my head.


I hosted code club & mob programming events almost every week in 2023! A LinkedIn post cannot fit all the good projects we worked on, so here is a page summarizing the highlights.

Thanks for sharing your code with the community and growing together.

https://lnkd.in/gn9_vVuj

#2023 #mobprogramming #happynewyear

Merged in this year's (hopefully) last bugfix before code freeze. It was another bizarre case, and I think you'll find the story entertaining.

A while ago, my team added 'windowing' to our list UI as a performance optimization. All of our team members and many users enjoyed the faster app. To our surprise, some users started reporting that the app suddenly lagged on their devices.

Theoretically, the app should have only gotten faster. And we could not reproduce the lag no matter how hard we tried. So we started talking to the affected users.

We immediately noticed a pattern: everyone suffering the lag was on Windows. This explained why we couldn't reproduce it on our side. All our team members were using company-issued MacBooks!

So we pulled out our personal Windows devices and started testing on those. And just like that, we could reproduce the lag. It was painfully slow on my Windows machine.

You know, 80% of debugging is figuring out how to reproduce the issue reliably. We figured that out, and it was encouraging.

Upon closer inspection, I could see the app was entering an infinite render loop on Windows, although the cause of the loop was still unclear.

So we tested a ton more. After hours of testing different conditions, someone found an interesting case: the lag disappeared when the app was on an external monitor!

It didn't make any sense, but it was a critical discovery. We soon figured out the lag only occurred when display scaling was set to something other than 100% in Windows system settings.

Once we learned what triggered the lag, the only thing left was to make it not trigger the lag.

I replaced the initial 'windowing' logic with alternative code that is less vulnerable to infinite loops. It was a foundational change touching many files. It removed the lag... but came with a few new UI glitches.

At this point, I had been knee-deep in this performance issue for days and was running out of mental capacity. It is a horrible feeling when you need to put off everything planned for the sprint because of a long-running emergency. And it was painful to tell the team that my fix wasn't strictly positive. It was a mixed bag, and the team had to decide which option was less bad: letting the Windows laptop users suffer the lag vs. keeping the rest of our users glitch-free.

Eventually, another engineer saved the day by helping me iron out the most noticeable glitches from my fix. We merged the fix, and the customer reports about the lag stopped.

Hope it was an entertaining story to read!

Oh, and the story won't be complete if I don't mention the names of the heroes.

Rorrie (https://lnkd.in/g-pU4UG7) discovered that the lag only happened on laptop screens and not on external monitors.

Ashika (https://lnkd.in/gCP6e-7r) debugged the major glitches from my alternative windowing code.

December is a cruel month. Companies pause hiring and resize their teams to meet the bottom line. Low days make you wanna give up.

It makes me sad to see bright, kindhearted people consider quitting. They have more than enough to succeed in this industry. I don’t want them to give up now, because I would love to see them thrive here and reach their full potential.

At the same time, it’s hard for me to say "never give up." I know all too well what they have endured so far. It feels brutal to push them forward despite my strong faith in their ability to make it through.

It is such a dilemma.

Let’s end this on a high note because it is a holiday season.

const hip_hip_array = ["hip", "hip"]

const array_of_sunshine = ["☀️"]

Be happy, fam

I had an interesting debugging experience yesterday that I want to share. TL;DR: triggering React Router navigation directly inside a render function can break the app. Wrap navigation in a useEffect.

It began with a bizarre bug on the frontend. The app consistently crashed at the end of the user onboarding flow. My first move was to inspect the error call stack. Most of the time, just reading the error message & knowing where it happened reveals the solution.

This one was different. The call stack was pointing at an internal React hook from @frigade/react, but that package had been working fine before my changes, and I didn’t change how we use it. Deeper investigation was required.

So I began bisecting my changes. The biggest change I introduced was replacing a React context with @preact/signals-react for performance optimization. I tried turning off all global state updates. The app still crashed at the end of onboarding.

It seemed clear that the React context & its provider were required to avoid the error. Our code was relying on rendering more frequently than necessary, hence it broke when I optimized it.

So I started sweeping all components from the original error stack for render irregularities: conditional hooks, state updates after unmount, … and state updates in the middle of a render.

Yes, React Router’s navigation is based on client state, so redirecting in the middle of a render can leave the app in a bad state. Putting the redirect inside a useEffect solved the issue immediately.

Fixing this bug was hard because the error message didn’t have ANYTHING related to React Router. I was lucky to find the real cause at all.

#debugging #react

What constitutes developer growth? Decades of field experience? Reading a lot of books? YouTube tutorials? 🌱

Most devs—including seniors—end up writing okay software; software that works with some flaws. Why is that? Why can’t everyone write exceptional software? I needed to find out because I’m interested in helping other developers improve their skills.

Developers stop getting better at some point. Consider a fresh junior dev at a new job. It takes 6-18 months to get used to the work. After that ramp-up period, every day becomes the same old. The learning stagnates.

Then what? They might get promoted, or move on to a new job due to the volatile market. It takes 6-18 months again to get back on track.

Then it repeats. This pattern is wasteful & slow. It stagnates developers.

Growth comes from facing the next level of challenges. Not just new. NEXT. The continuation of your previous challenges.

For example, learning a shiny new library doesn’t always make you sharper. Likewise, experiencing the same crisis multiple times doesn’t make you tougher. We get distracted because it feels good to learn novel tech & avert crises.

Instead, be intentional about writing code a bit better. Take time to think about how an exceptional developer would do it. Ask them, if you happen to know one.

It is unfortunate that the coding skills of many devs plateau early in their careers. But that also means you can write better software than most by making small improvements regularly.

Some might point out there are other venues for career growth. Leadership, management, business, etc.

I agree. But the argument stays the same. If you want to be excellent at something, you constantly need to expose yourself to the next challenges.

Hope it didn't sound too intimidating! Just wanted to share what I believe constitutes growth and what doesn't. Please please please don't burn yourself out.

#developer #growth #career

I spent a whole week fixing a performance issue at work! It was a high-impact task, so there was a sense of achievement and pressure at the same time.

It was also a good case of the Engineering Method, so I decided to share the details with my network :)

#engineering #performanceoptimization #pusheen

Quick CSS debugging tips that saved me today:

First tip - Freezing the page.

1. Open browser devtools.

2. Open the source tab.

3. Click on the "Pause script execution" button, or Ctrl + \

4. Your page is now paused. No script runs while you try to inspect element styles.

Second tip - Reveal invisible elements.

1. Open browser devtools.

2. Open the elements tab.

3. Click the "New Style Rule" button under "Styles". It is a little plus icon.

4. Add this line

* { outline: 1px solid red !important }

5. Now you can see the outline of all elements. Great for detecting what's overflowing a container.

#css #debugging

Do you use an ORM to access your database? How do you feel about using raw SQL? I gave it a go, and it was nicer than I expected!

#orm #backend

TypeScript can get very complicated, and this complexity frequently gets underplayed in online communities. On pretty much any post complaining about how complicated TS can be, I see these responses:

- "I guess you haven't worked in a large codebase."

- "You don't care about code quality."

- "You just need to learn more about Typescript."

No matter how good one is with TypeScript, achieving 100% type safety is a challenge. There are things TypeScript can't express (e.g. the absence of a key, narrowing a generic type parameter, etc.), and trying to express these purely at the type level results in convoluted code. There are two ways to avoid the mess.

1. We can loosen up the type checking with "@ts-expect-error", "any", "unknown", or "as". Pretty much throwing our hands up, and declaring that fixing the type error is not worth our time. It is important to isolate the unsafe zones when taking this approach.

2. We can add runtime checks to simplify the problem. Zod, the library, is like a little cheat code for this. We need to understand that this is a heavy abstraction over types, though. The source code of Zod is full of large classes and advanced type arithmetic.
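To illustrate approach #2 without pulling in Zod, a hand-rolled runtime check can do the same narrowing on a small scale (a minimal sketch; Zod generalizes and scales this idea):

```typescript
type User = { email: string };

// A user-defined type guard: the runtime check lets the compiler
// narrow `unknown` to `User` with no type-level gymnastics.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as { email?: unknown }).email === "string"
  );
}

const input: unknown = JSON.parse('{"email":"a@example.com"}');
const email = isUser(input) ? input.email : null;
console.log(email);
```

The tradeoff is the usual one: the check costs a little at runtime, but the types that depend on it stay simple.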

TypeScript can express most things in a concise, elegant way. On the other hand, there are not-so-rare cases that fall outside what TS can express easily. We shouldn't pretend those cases aren't there. TypeScript comes with the cost of authoring the types and provides enhanced developer experience in return.

#typescript

I've been pondering the tradeoffs between dynamically typed and statically typed languages lately. This talk provided me with a lot of insight!

TL;DR of the talk: dynamically typed languages solved common pain points of the popular statically typed languages in the 90s. We realized that those pain points are not inherent to static typing, so newer statically typed languages emerged with those issues mitigated.

It has been a mystery to me why TypeScript is more pleasant to use than the other statically typed languages I have tried (Java, C#, C++). All of these languages have static types, yet the developer experience varies a lot.

That led me to think that maybe the underlying dynamic language -- JavaScript -- is the source of joy. Not the static type checker. I thought, maybe we will get the same level of joy as Typescript if we use JavaScript "right". So I have been researching the right ways to use dynamically typed languages.

Richard Feldman's talk showed me a new perspective. The following characteristics have a very high impact on the developer experience, but they are not inherent to either static or dynamic typing:

- Fast feedback

- Helpful errors

- Concise code

So I decided to focus less on the static/dynamic dichotomy, and focus on those characteristics.

TypeScript is a "gradually typed language": statically typed and dynamically typed code can coexist in one program. The IDE provides instant feedback (the red squiggles) while you edit, and you can still run TypeScript code even if there are some type errors. As a result, you get both static & runtime feedback instantly!

Before the gradually typed languages (TS, Hack, Sorbet), we had to choose between the two: good static feedback vs fast runtime feedback. Having access to both is an innovation in my opinion.

Richard Feldman listed some downsides of gradual typing so that we don't hastily conclude that gradual typing is the future.

1. Dynamic typing has a runtime performance cost.

2. A type system becomes complex if it is retrofitted onto a dynamic language.

I personally think the performance cost is a good tradeoff for many use cases. However, a complex type system is a tougher pill to swallow.

No one likes fighting a type checker/compiler. The more complex a type system is, the more often you have to fight it, and that degrades the developer experience. This has led a group of people to advocate for a pragmatic use of TypeScript: loosen up type checking in areas where it is difficult to satisfy the compiler.

In conclusion, I am planning to learn a few modern statically typed languages. I am curious if any language can provide fast feedback + simpler type system.

#typescript #developerexperience #techdiscussion

https://lnkd.in/gQ7PXDMH

"Why use TypeScript when JavaScript is perfectly fine?" I often hear this question from frustrated devs who are forced to use TS against their will. Here is some context on why people started using TS. This post won't make you love TS, but it might make you less angry about it.

"TS: Because web is awesome but JS could use some help."

The web is an amazing platform. Unlike Android or iOS, users don't need to install each web app before using it. There's no review process either. Web devs can publish anything, anytime, immediately, without worrying about app store policy.

The problem is, JS used to be the only programming language that could run in a browser. And JS used to suck a lot more circa 2012. Out of this discomfort, people started building new programming languages that transpile to JS. CoffeeScript, Dart, and TS are all from this era, and TS won the web language war because it was an easier sell. Other languages looked completely alien to JS devs, while TS looked just like JS with some special syntax sprinkled on top. JS interop was the least painful in TS, too.

But JS got a lot better after 2012. Is it okay to write a web app in JS in 2023? Absolutely, but the classic tradeoffs between static and dynamic typing remain. If you want dynamic, JS is a solid choice. If you want static, TS is one of the easier sells.

It's just that more companies desire the advantages of static typing (i.e. better tooling) and are willing to accept the cost (i.e. longer build times, boilerplate code). Today, I don't think JS is inferior, as implied by some TS users.

Now that WebAssembly has opened the floodgates of language options--the good old C++ joined the chat--we'll need to answer "why XYZ when we have JS" more often. TS is not the end.

#typescript #javascript #webdevelopment

Sometimes a JSON file is too heavy to parse in memory. However, you're in luck if you only need a small part of the file at a time: there is a library for streaming JSON called "stream-json".

This technique was very useful when I had to dry-run a migration on a backup of our production database. The backup file was multiple gigabytes: small enough to fit on my hard drive, but too big for JSON.parse.

Here's the gist of the code I wrote for processing that backup file.

const fs = require("node:fs")
const { parser } = require("stream-json")
const { pick } = require("stream-json/filters/Pick")
const { streamArray } = require("stream-json/streamers/StreamArray")

async function main() {
  const userStream = fs
    .createReadStream("backup.json")
    .pipe(parser())
    .pipe(pick({ filter: "users" }))
    .pipe(streamArray())

  // streamArray emits { key, value } pairs; `value` is a single user record.
  for await (const { value } of userStream) {
    // process one user at a time here
  }
}

main()

Not too bad, eh?

#spacecomplexity #optimization

I think the web is a really compelling choice for building mobile apps today! With just HTML + browser APIs, you can let users use the camera for image attachments and share via the native share menu.

And much much more. For a full list, check out this reference: https://lnkd.in/gp2aTprX (operating system integration section is the real gem)

For a demo, check out this simple codepen: https://lnkd.in/gGF8hwkP (open from your mobile device)

#web #mobile #pwa

Quick SEO tip: if you have the same content accessible from multiple URLs, make sure **canonical URL** is provided.
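Concretely, the tag is a single line in each page's <head> pointing at the preferred URL (the URL below is a placeholder):

```html
<link rel="canonical" href="https://example.com/preferred-page-url" />
```

Every duplicate URL should carry the same canonical, so search engines consolidate them into one result.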

https://lnkd.in/gKc9xTBH

After my team set up a reverse proxy to combine a Webflow site with a React app, the Google Search Console dashboard started complaining about duplicate content. Now that we have <link rel="canonical"> inserted on all Webflow pages, we'll see whether it resolves the issue.

#seo #web

I configured reverse proxy at work last week, and wrote a blog post to share what I learned during the process!

https://lnkd.in/d5YAJPRN

My team has been using iframes to embed our Webflow (no-code site builder) pages in our React app. Our landing page was embedded this way as well, and search engines didn't like that 😔

We decided to use a reverse proxy to stitch together the Webflow pages with our React app, and learned that it allows us to combine a LOT of services under the same domain.

The result is pretty cool, but the DX wasn't great before I made the local dev environment use the same reverse proxy configuration as prod. In the post, I shared how I made the local environment consistent with the prod config 👍

#web #webflow #webpack #vercel #react #network

Here's a great event for femme developers in Toronto area!

Awesome opportunity to get connected with industry experts & take quality workshops.

Wed Oct 4th 6pm-9pm

RSVP now: https://lnkd.in/dUfN-3Vm

Spots are limited! (Don't forget to fill out the confirmation form after you get an email from us.)

#meetup #toronto #software #womenintech

Date & Time. It's the topic that makes many developers' blood boil. What's the best way to store timestamps? If you just thought "ISO 8601, duh," read on. (Come along too if you didn't think that)

I had a long conversation with the bright minds from TorontoJS about this topic. The final verdict was "it depends" (sigh). Let me elaborate:

Decades ago, computer scientists standardized how to record a point in time. Unix time is the number of seconds elapsed since 1970-01-01 12 AM UTC. For instance, as of writing, 1695345084 seconds have passed since that epoch. The beauty of Unix time is that it represents an exact moment regardless of timezone or local calendar system.

ISO 8601 is a more human-readable format, but the concept of using absolute time is the same. It looks like this: "2023-09-22T01:11:24Z". For computers, both Unix time & ISO 8601 are easy to store, sort, and manipulate. It is no surprise that they have remained the most popular ways to store timestamps for so long.
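The two formats are trivially interchangeable in JS (using the post's example instant):

```typescript
// Unix time is in seconds; JS Date works in milliseconds.
const unixSeconds = 1695345084;
const iso = new Date(unixSeconds * 1000).toISOString();
console.log(iso); // "2023-09-22T01:11:24.000Z"

// And back: milliseconds → seconds.
const roundTrip = Math.floor(Date.parse(iso) / 1000);
console.log(roundTrip); // 1695345084
```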

Then we can save everything in ISO 8601, end of story!

NO.

Regrettably, humans have invented something called time zones... and they change all the time. For most people, this ever changing chaotic system is what they use day to day. That's the date & time everyone's clocks display. What a shame.

Then what about this? We save times as ISO 8601 with an offset!

I live in the Eastern time zone, which observes daylight saving time (DST): 5 hours behind UTC in winter, 4 in summer. So I might store my time like "2023-09-22T01:11:24-04:00".

Also NO. The offset for a timezone can change. Brazil dropped DST back in 2019, so its offsets differ from those in earlier years.

Let's say you planned to meet your buddy in Sao Paulo at 2 PM on September 22nd, 2023. Back when Brazil was observing DST, the Sao Paulo offset was -2 hours, so that would have been stored as "2023-09-22T16:00:00Z" in UTC. Now that the Brazilian government no longer observes DST, the offset is -3 hours, and "2023-09-22T16:00:00Z" means 1 PM.

Do you see the problem here? If you have stored your schedule in the absolutely truthful ISO 8601 format, you will show up at the wrong hour. Your friend will get angry. You can argue that's unreasonable, but ISO 8601 does not match the socially accepted concept of datetime.

To avoid confusing ordinary citizens, schedules should be stored as "wall clock time", aka "Sep 22 2023 14:00, America/Sao_Paulo". Yes, that implies the represented time is not absolute. It is at the whim of government bodies.

Because the list of timezones and their corresponding UTC offsets is constantly changing, there is an organization that maintains it. Timezones are represented by compound names like "America/Sao_Paulo" or "America/Argentina/Buenos_Aires". This naming format is known as IANA timezone names, or Olson strings.
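In JS, those IANA names plug straight into Intl, which converts an absolute instant into wall clock time for a given zone (a quick sketch):

```typescript
// Render one UTC instant as Sao Paulo wall clock time (currently UTC-3, no DST).
const formatter = new Intl.DateTimeFormat("en-US", {
  timeZone: "America/Sao_Paulo",
  hour: "2-digit",
  minute: "2-digit",
  hour12: false,
});
const wallClock = formatter.format(new Date("2023-09-22T18:00:00Z"));
console.log(wallClock); // "15:00"
```

The runtime's copy of the IANA database does the offset lookup, which is exactly why you want to store the zone name rather than a frozen offset.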

This doesn't mean it is wrong to use Unix time. There are many use cases where absolute timestamps are more valuable than wall clock time. Recording financial transaction logs is a good example.

Hence, the verdict remains: "it depends".

Hey, are you tired?

I meet a lot of people around me who are feeling tired in their lives. It's the feeling that you are never at your 100%. Knowing that you are not achieving much every day. Things are piling up (conceptually & physically) faster than you can handle. The idea of waking up tomorrow doesn't sound too appealing.

And it seems like everyone is susceptible to this. I see people struggle while job searching, taking care of family, writing a doctoral thesis, prepping for university, freelancing, working full-time, or, well, doing anything.

Regardless of what the goal is, it is really, really hard to start working on even the simplest task when the energy is depleted. I've experienced it during my 18-month job search, and a few other times in my life. It feels kind of embarrassing that I can't get anything done, and that stress just drains more energy from myself.

In the end, I either do things that need 0 motivation (video games, eating, drinking, watching stuff), or find distractions that feel productive (learning C++ when you're supposed to find a civil engineering job).

I'm not a therapist or a psychologist, so I don't feel entitled to give you any advice. But for those of you who are curious, here's what I do when I start getting tired:

* I try to exercise regularly, like 30 minutes a day. And lord, I hate working out. I only started doing cardio when I found out about my blood issue, but I cannot deny the positive impact. It really helped me cope with stress better and increase my daily mental energy capacity.

* I avoid "sedative" food. Can't explain why, but some foods make me sleepy. A good portion of starchy rice for lunch will make me drowsy for the rest of the day.

* I pick up a hobby. Believe it or not, I never thought of having a hobby until earlier this year! One day, I had a stunning sourdough bread in Seattle, and got this urge to make one myself in Toronto. So I started baking, and my friends have been super supportive. It made me happy and I started looking forward to what I'll bake each weekend.

* I avoid being in a "do-or-die" situation. Funny because I've been working at startup companies for 3 years now. Ironically, my productivity hit rock bottom when I was thinking there was no future if I failed. I learned that if I want to be gritty, I shall not fear failures. In order to deal with failures, a failure shouldn't mean the end of me.

* I try to be more near-sighted. I live in the moment. Big objectives can paralyze me. My inner engineer will start drawing a Gantt chart for that giant deliverable, and it's no good for me. First, I often don't have the right knowledge to plan out properly anyways. Also, it feels bad whenever I don't hit the milestones of that poor plan.

Maybe now you can see why I tell you this is not advice. Some of my decisions go right against the standard formula for success out there.

Just know that you're not the only one who's feeling tired. That's all.

No hashtags; just for my network.

Today, I officially conclude my contract role and start as a full-time employee at LifeAt.

Getting a work permit as an immigrant in Canada working for an American startup was certainly not an easy process. I know there are many international students and professionals abroad who want to build their careers in Canada, and I'd like to share my experience here, hoping to support those on a similar journey.

Upon completing my civil engineering studies in New York, I found myself very underprepared for the job market. After months of unsuccessful job hunting and seeking a sponsor for an American green card, I felt overwhelmed and exhausted by the process. Eventually, I gave up.

Back to Korea I went, and there I started my career as a software developer. When the world was hit by the pandemic, I seized the opportunity of remote work and moved to Toronto with the status of an international student. I lived as a full-time developer AND full-time student for a while.

Then I got an offer from a Canadian company. They decided to sponsor my work permit, and it took a little over 6 months to get it issued. I worked as a part-time employee in the meanwhile.

This experience was eye opening to me: it was the first time I ever had someone doing the immigration paperwork on my behalf. All the uncertainty and overwhelming paperwork that used to burn me out got reduced to a manageable level. It didn't make the process any quicker, but I could go about my day without constantly worrying about my visa. This experience starkly contrasted with my post-university days, when I drained all my energy on depressing thoughts alone.

Later, I transitioned to my current role in the USA while still living in Toronto, introducing new immigration complexities. My employer was willing to take care of the immigration process this time too, so I could witness the power of delegation again.

To those facing similar hurdles, I strongly recommend exploring legal/immigration services. While not free, the value they provide was immeasurable to me. Don't hesitate to consult an immigration lawyer; even a brief conversation can make a world of difference. At first appointments, they usually assess your current situation and give you clear options with associated costs.

Furthermore, if your immigration status is a barrier to your next career step, consider suggesting platforms like Rippling or Deel to your (prospective) employer. These HR services specialize in legalities, including immigration, and can ease the process for both employees and employers.

The current job market is rough, and having to deal with immigration doesn't make it easier. It is too easy to burn out. That's why it is important to retain one's energy.

Hope this post will help someone save time & effort on their journey.

Quick Prisma tip: use a direct connection when you run "prisma migrate deploy". Although PgBouncer helps us build highly available systems, its transaction pooling mode is not compatible with the prepared statements Prisma uses.

The back story for those who are interested:

I found out about this incompatibility when I was trying to deploy an app using Neon & Fly.io. Here's the link to the failed GitHub Action: https://lnkd.in/gFZ_iaf4

It says 'db error: ERROR: prepared statement "s3" already exists\n'

It didn't make much sense to me at first because the deploy action only failed occasionally. So I started digging through the internet and learned that it is related to connection pooling. Then I came across this documentation from Prisma: https://lnkd.in/gZNnqXFF

The doc describes the cause of the issue & mentions that using direct connection (via DIRECT_URL env var) is a workaround. I pushed the fix, and hopefully it will remove the intermittent deployment failures!
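For reference, the relevant schema change looks roughly like this (the `directUrl` field and the env var names follow Prisma's documented convention; the pooled and direct URLs themselves come from your Postgres provider):

```prisma
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL") // pooled (PgBouncer) connection for the app
  directUrl = env("DIRECT_URL")   // direct connection used by prisma migrate
}
```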

With the recent trend of distributed, highly available Postgres, I thought it would be worth sharing my experience here.

#debugging

There has been recent news about a Canadian coding bootcamp winding down its program, so I’d like to discuss sustainability in the tech industry's training programs.

To begin, let's ask the obvious question: why are bootcamps struggling currently? It's because the demand for junior developers has been slowing down. This is a major issue, as we can't consider our training programs sustainable if their quality depends heavily on market conditions.

Considering the supply and demand dynamics, aspiring developers evaluate the cost of education against potential salary prospects and job opportunities. Unfortunately, in the current market conditions, the job market for junior developers seems uncertain, making the cost of education appear relatively expensive.

The major component of education costs is instructor salaries, and decreasing them would likely compromise the quality of education. In the software engineering field, skilled programmers are attracted to industry work due to the profession's high-paying nature. Maintaining a reasonable balance in compensation is crucial to prevent a shortage of qualified educators. The dilemma applies to all paid curricula that are not self-serve (e.g. universities & bootcamps).

This is why we need a completely different approach: one that doesn't require paid educators. But jeez, how can we do it for free?

I think we can achieve a scalable & sustainable training through community effort. No, I’m not talking about a professional society where you need to drink wine wearing suits & ties — gosh doesn’t that sound suffocating. I’m referring to our rich open source communities & tech user groups. We already have many experienced developers willing to assist beginners.

The challenge is figuring out how to encourage interaction across all experience levels. We don’t want senior engineers forming a gated society, nor do we want to recreate volunteer-based bootcamps. Sustainability is only achievable when the activity provides value for all experience levels. It can’t last long if we rely only on people’s generosity or passion.

I believe public mob programming can be an answer to all this: a diverse group gathering to solve a small technical task together. Mob programming is conversational, so there is plenty of time for everyone to fill in their gaps in the context of the task. This can be fascinating to novices and seasoned developers alike, because everyone has a very different way of looking at a codebase.

By making this free-flowing stream of ideas readily accessible to everyone, we can foster a holistic growth of the developer community. Expensive private education should not be a barrier for aspiring developers to enter the industry.

Are you interested in advancing the tech industry for the better? Leave a comment or find me on TorontoJS Slack! I host monthly online mob programming events, and looking for venues to sponsor our in-person events.

Have you ever tried debugging your frontend in a production environment? Last week, my team encountered a peculiar bug that occurred exclusively in the production environment, and I would like to share the techniques we used to identify its cause.

1. Adding Scoped Client-Side Logging

Most JavaScript developers are familiar with the usefulness of console.log for debugging purposes. Strategically placing log messages can provide valuable insights into the application's behavior. However, scattering debug logs throughout the codebase can confuse both users and developers. By conditionally enabling client-side logging that is relevant to the specific issue, we can get the benefits without compromising user experience (UX) or developer experience (DX). Here's an example utility function:

```
function logDebug(scope, ...message) {
  const sessionScope = sessionStorage.getItem("debug_scope");
  if (scope === sessionScope) {
    console.debug(...message);
  }
}
```

With this function, we can freely add logs without worrying about them appearing in other users' consoles. The messages will only be logged when the "debug_scope" in the session storage matches the intended scope.

2. Adding Source Maps

Many modern web apps are delivered as a JavaScript bundle, which is virtually indecipherable to the human eye. This is where source maps come into play, allowing our tools to map parts of the bundle back to the original source code. Reading stack traces or setting breakpoints becomes feasible when the source map is available.

However, shipping source maps to production is not recommended, as it would essentially expose the entire frontend code to the world. Fortunately, browser devtools enable us to load source maps after the fact.

https://lnkd.in/gj-Tg4Qx

It's worth noting that we should always use the latest version of the source map. To achieve this, my team made the deployment pipeline generate the source map as a build artifact, so anyone on the team can download the latest one. If you feel extra fancy, you can publish your source map behind an authentication layer, although manual handling sufficed for our use case.

These techniques highlight the inherent challenge of client-side debugging: the necessity of concealing certain information from the public. While working within this constraint can certainly be more challenging, we've demonstrated that there are techniques to overcome it. Hopefully this post will help you debug your next production issue!

P.S. After extensive logging, my team managed to identify the cause of the bug. It turned out to be the order of some asynchronous operations in our app. The network latency in the production environment brought the issue to the surface.

Thank you Sami Xie Yuri Yang Sammy Lam Kaoru Tsumita Tehseen Chaudhry Jen Chan for taking care of TorontoJS In-Person Code Club today!

We worked on 6 projects over 3 hours with around 30 people, had a ton of discussions, faced the naughtiest bugs, and bounced ideas nonstop. It was a great event, and it could not have happened without the contributions from our amazing team of volunteers & the event sponsor Super.com

Please reach out to TorontoJS if your org is interested in hosting these lively tech events. We always have exciting ideas awaiting venues to take place in!

https://lnkd.in/gxFy7c-i

#thankyou #seeyounexttime

I was planning to write about the great people behind open source communities today, but as I curated my list, I noticed something important. Let's talk about diversity in open source communities instead.

The world of open source software suffers from the same lack of diversity as its commercial counterpart. At first, this seemed strange to me since open source communities are supposed to be "open" to everyone. But as I delved deeper, I realized that we need to examine how new contributors join projects and remain engaged to find answers.

Just like in any job, an onboarding process is necessary when starting to contribute to an impactful open source project. Without support from existing community members, it is nearly impossible to make a meaningful contribution in the long term. Implicit bias can skew the entry into the community if there is no systematic effort to encourage diversity. It "feels" easier to invite people from one's own circle.

This is why open source communities should focus on lowering the barrier to entry. Communities should be welcoming enough that even complete strangers can have a pleasant first experience. Building accessible and easy-to-read documentation is a good place to start. Then, having a quick point of contact like a Discord server can be helpful in case one needs to ask questions. Once a new member makes a contribution, acknowledging it is a great way to form a welcoming environment. Enforcing a code of conduct that forbids elitism or toxic comments is also crucial.

I know this may sound like a lot of work, and open source contributors are already overburdened. However, fostering an open culture can keep the communities sustainable in a number of ways:

- By lowering the barrier to entry and providing support for new contributors, open source communities can attract new contributors from diverse backgrounds, increasing the pool of talent available to the community and ensuring that the community remains vibrant and dynamic.

- More energy within the community will encourage existing members to stay longer as contributors/maintainers. Burnout or turnover will decrease.

- As more people feel safe to voice their opinions, this will drive innovation with new ideas and different perspectives.

Because I love & admire what the open source movement has achieved so far, I've been proactive in assisting other developers to start contributing to open source. Also, I believe that public interaction among a diverse group of developers is the ideal way to achieve holistic growth in the software world. I'll still share my list of amazing open source figures you can follow on Twitter, but I sincerely hope the list will become more diverse over time.

#opensource #software

How do you tackle big initiatives at work? The kind that spans weeks or months. I'd like to share my story, but I'm also curious how other engineering teams handle daunting tasks.

Recently, my team at LifeAt decided to migrate our TypeScript project to strict mode. We knew it couldn't be done in a single pull request, so we set up a tracking system to monitor the number of type errors in our project and measure progress. Our goal was to eventually develop with strict mode enabled and achieve zero type errors.
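For flavor, here is a minimal sketch of the counting half of such a tracker (ours was wired into CI; the function name and exact setup here are illustrative). It counts diagnostics in the output of `tsc --noEmit --strict`:

```typescript
// Count type errors in tsc's output so the number can be charted over time.
// tsc diagnostics look like: "src/foo.ts(12,5): error TS2322: Type ..."
function countTypeErrors(tscOutput: string): number {
  return tscOutput
    .split("\n")
    .filter((line) => /error TS\d+/.test(line))
    .length;
}
```

Run it over the captured output of `npx tsc --noEmit --strict 2>&1` in CI and post the number somewhere visible; watching it fall toward zero is the whole trick.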

Using this tracking system had a significant impact on our team's behavior and work process. By seeing a consistent decrease in type error count, we were motivated to continue working towards the end goal. Additionally, the system fostered collaboration by providing a clear goal for everyone to work towards and a way to measure progress.

One of the key takeaways from this experience is the importance of using data to inform decisions and stay motivated. A visible, steadily decreasing error count gave everyone a shared goal and a way to measure progress, which made it easier to stay invested and ensured the whole team was pulling in the same direction.

Last week, I found out the Storybook community started working on almost the same initiative (https://lnkd.in/gBbKYgQv). This made me curious about what strategies other engineering teams use to tackle big initiatives, such as migrating to a new UI component library, adding test coverage to a legacy module, or connecting an existing codebase to a newly acquired company's code.

Were there any approaches that worked particularly well for your team? Any pitfalls that need to be avoided? Please share your experience!

#softwaredevelopment

Software trends can be insightful and distracting at the same time. The frontend dev community seems to move at the speed of light, and there's always an insane amount of heated discussion across it. I would get overwhelmed by the noise if I tried to follow *everything*.

What I do to cut down the noise is thinking about the “why”. This approach helps me get the most out of these discussions. Why do people argue for/against TailwindCSS, signals, server components, Rust tooling, web components? Sometimes, their arguments don’t resonate with me, so I move on. Other times, I might lack the knowledge to fully comprehend the argument, but I don't worry about that for the time being. Instead, I focus on the things that matter most to me.

Here are some examples. Do you care about the size of the JS bundle you ship to your users? Everyone agrees a smaller bundle is better, but everyone feels a different level of excitement about the topic. How about having consistent styling across multiple projects? And what about the time it takes to hydrate your React app? Same story. Some things matter to you more than others. Focus on what actually feels like an important improvement to you, and follow the forerunners in that area.

#jsfatigue #sustainability #software

Have you ever thought about hosting group programming sessions, but didn't know where to start? As someone who has been hosting weekly group programming sessions within the TorontoJS community for the past six months, I've learned a lot and I'm excited to share my experiences with you.

It all started with a small group of people trying to do something for Hacktoberfest 2022. However, because the initial members were mostly beginners, it was difficult to contribute to major open source projects. So I spent a lot of time curating beginner-friendly open source issues that the group could work on together. Every weekend, we met on Zoom to work through the curated issues. We explored the open source codebases, attempted error reproduction, and discussed potential fixes together. Although we couldn't complete the Hacktoberfest challenge in the end, I could see that the members were enjoying this type of activity a lot.

So, I continued hosting weekly Zoom meetings after Hacktoberfest, and started incorporating other types of code challenges too such as Advent of Code, personal projects, and community tools. Through trial and error, I found some patterns in group programming with different types of challenges.

Leetcode or Hackerrank style challenges are very approachable, so usually, there is a fierce exchange of ideas when we work on this type. However, it is less rewarding because all the challenges are already solved by someone else. They are practice problems and nothing more.

Working on major open source issues (e.g. Docusaurus, Faker, Storybook) is the polar opposite of Leetcode. They are much more impactful while requiring more contextual knowledge. There is usually more interest in attempting this type of challenge, but there is less discussion because fewer people feel confident enough to make suggestions during the session. There is a higher chance of "getting it wrong." Therefore, it is important to do enough research about the open source issue and the repository before bringing it to the group. Only after explaining the purpose of the repo, its general structure, and what needs to be fixed can the group meaningfully start collaborating on the issue. Despite the heavy prep work, there is still a good chance of failing to engage or finish the task. I'm searching for ways to mitigate such risks because it is incredibly satisfying to do something impactful with other people.

Finally, personal projects fall somewhere between Leetcode and major open source contributions. They are usually less impactful to the world but require less prep work at the same time. Also, personal projects draw the highest level of engagement. There is at least one person who is heavily invested in the project (the author), and the scope of the projects is smaller than big open source repositories. The biggest obstacle in this type of challenge is that most people feel uncomfortable leading a discussion around their own project. The session becomes a live streaming of solo development if not facilitated correctly.

Here's a trick that I found effective in such situations: have a volunteer share their screen and ask the project author to guide the volunteer. If you are familiar with the driver-navigator pattern, this is similar except there is a whole mob participating in the conversation as well. This setup works wonders because it lifts some cognitive load off of the project author (the navigator). The author doesn't need to explain and type at the same time. Also, having a driver who lacks context is great because the navigator can see exactly where the driver gets blocked. Without a driver, many project authors struggle to guess where the knowledge gaps of the mob are. Conversation between the driver and navigator informs the entire mob, thus it leads to high engagement. Here is some of the code we were able to write with this approach: Evert Pot's city builder project, PaulBH's Matrix digital rain animation project.

Does group programming sound tempting to you now? If so, I hope my findings about the 3 types of challenges and their characteristics help you in planning your next event! And if you're still not convinced that group programming is a valuable activity, I encourage you to join us at TorontoJS and spectate one of our sessions. I believe that group programming is one of the most effective ways of growing as a developer, regardless of your experience level. So don't be afraid to give it a try and see the benefits for yourself!

* Special thanks to TorontoJS volunteers for facilitating the group programming events - Sherry Yang Jen Chan Yuri Yang Marco Campos Chris West Kaoru T. Sami Xie Divish Ram Sammy Lam Ken Beaudin Zeinab F. Aaron Thomas Matt Jackson

Can you tell what's wrong with the code below?

```
const liveStreaming = useMemo(() => new LiveStreamingClient(
  props.userId,
  props.roomId,
), [props.userId, props.roomId]);

useEffect(() => {
  liveStreaming.join();
  // cleanup
  return () => liveStreaming.leave();
}, [liveStreaming]);
```

There is a subtle race condition if you look closely, which bit me at work last week. The "join" method & the "leave" method above are asynchronous, so there is a possibility where "leave" is called before "join" completes.

How can we make sure we don't "leave" before "join" completes? The answer is by making "leave" wait until "join" is done (duh).

Sorry, let me elaborate: we can use a mutex to lock the live streaming client during the "join" operation. This will make the "leave" operation wait until the live streaming client gets released by the "join" operation. Here's how the code would look with a mutex:

```
// mutex can come from the async-mutex package: const mutex = new Mutex();
useEffect(() => {
  mutex.runExclusive(() => liveStreaming.join());
  // cleanup
  return () => mutex.runExclusive(() => liveStreaming.leave());
}, [liveStreaming]);
```

Much safer & still readable!

If you want to check out how you can implement a mutex for yourself, please take a look at my blog post below. If you want something off-the-shelf, there is a package called async-mutex on npm.
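For a taste of what's inside such a package, here is a minimal promise-chaining mutex. This is a sketch of the core idea only, not the implementation from my blog post or from async-mutex (both handle more cases):

```typescript
// Each caller queues behind the previous task's promise, so exclusive
// sections run strictly one at a time, in call order.
class SimpleMutex {
  private last: Promise<unknown> = Promise.resolve();

  runExclusive<T>(task: () => T | Promise<T>): Promise<T> {
    const run = this.last.then(() => task());
    // Keep the chain alive even if a task throws/rejects.
    this.last = run.catch(() => undefined);
    return run;
  }
}
```

With this, a "leave" queued during cleanup cannot start until the pending "join" has settled.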

#typescript #react #computerscience

https://lnkd.in/dgpY5FzV

There's no way I could spend 590 USD on a TypeScript course, but I'd like to share some valuable resources that helped me level up my TS game.

The official TypeScript Handbook is actually really well written. It is targeted at programmers who haven't used TS before, so it's a good place to start if you don't mind a little bit of reading. I still go back to this handbook a lot to remind me of specific syntax.

https://lnkd.in/gqxc-QNz

Once you get a little comfortable using TS, "Effective TypeScript" by Dan Vanderkam is a good read. It dives deep into the characteristics of the TS type system and covers a lot of the language's edge cases. You'll develop a good intuition around TS by reading this book. I find myself reaching for this one quite often when I do code reviews.

https://lnkd.in/gJDRSF3y

Matt Pocock is one of the wizards behind XState's typegen magic, and he offers so many learning materials for free on his YouTube channel & the Total TypeScript website. The videos are top quality, and he covers a lot of cutting-edge topics too. I recommend following him if TS is a significant part of your career.

Lastly, jcalz is a TS guru on Stack Overflow who has indirectly saved me countless times when I was searching for answers. He is exceptional at explaining things, and you can tell he deeply understands the pain of the OP. You'll probably come across his answers a lot when you google TS questions.

https://lnkd.in/g4Y4qyJu

That's it. I'm not against buying a premium course if you can, but don't feel discouraged if it doesn't fit your budget. Have a good night!

#learning #typescript

It requires an incredible amount of dedication to develop & maintain a high-impact open-source library.

Being able to work with a diligent maintainer like Hyo Chan Jang was the best part of dooboolab while I was there. If you're using react-native-iap package in your project, please consider supporting them on Open Collective.

https://lnkd.in/gi7K59TE

Have you heard of the term "invariant"? It is an incredibly useful concept for simplifying complex error handling.

It is well known that error handling is essential in building robust software, but what is less known is how to do it correctly. Robust error handling is not about making our application impossible to crash. Instead, it is about making sure that expected input produces expected output. And yes, even edge cases fall under the category of expected input.

Let me elaborate a bit more. We can prevent most crashes by wrapping the entire application in a big try-catch block and adding a console.error in the catch block. But how much does this approach help make our application robust? Not much, because we do not know what kind of exception will occur and how to handle it.

Java attempted to solve this problem by including all possible exceptions that can be thrown in function signatures. Thanks to this language feature, Java developers could know which exceptions can occur at any given function, but that did not mean they knew how to handle every single exception case. The essence of the problem is that developers cannot know how to deal with all exceptions, and making the compiler more powerful does not solve the issue.

Static analysis tools are powerful these days, and compilers are excellent at asking what-if questions. "Hey, what if this variable is null? You didn't handle that!" We often shoo away the compiler with a little bit of guilt by wrapping our code in a try-catch block.

```
try {
  // ...
} catch (e) {
  console.error(e);
}
```

We can do better by telling the compiler, "I assume this will never happen. You can crash the application if it does." This way, we can focus on handling errors that we expect to happen. You might think it is crazy to provide an app so many opportunities to crash, but it is much safer than pretending to know how to deal with those edge cases.

First, an immediate crash is much easier to debug than subtle behaviors or a series of logs. Second, in case our assumptions are invalidated, our automated tests have a higher chance of finding the issue if it throws an exception.

So, what does "invariant" mean after all? In computer science, it refers to a truth that does not change. In this context, we can use it as a technique to define the boundaries of our assumptions to the compiler. It is a simple technique that you can apply right away, so give it a try!

```
// typescript example
function invariant(
  condition: unknown,
  message: string,
): asserts condition {
  if (!condition) throw new Error(message);
}

// usage:
invariant(somevar != null, "somevar should always be non-null");
```

#invariant #softwaredevelopment #errorhandling

Exciting news - I'm planning to overhaul the desktop application I wrote four years ago! While reviewing the old source code, I noticed a few things. First, my code was far from clean; it looked like cryptic dark magic summoned from hell. Second, test code is a much better way of explaining how an app should work than comments or readme files. Despite the not-so-well-organized state of my app, I could clearly understand my intent by reading the corresponding test code. Lesson learned: write tests. Your future self will thank you.

Now, onto the plan. The new app will be a full-stack web app built using:

* React

* Express

* Typescript

* Prisma

* Vite

* ESBuild

* pnpm workspaces (monorepo)

* Turborepo

* CockroachDB

* Vercel? (TBD)

I intentionally avoided shiny new technologies like SolidJS or Remix because I want this to be a contributor-friendly project. Firebase and the like are not considered for the same reason.

The goal of this project is to prove that it is possible to achieve high maintainability and good DX without using bleeding-edge tools. It will involve a lot of integration tests, so stay tuned!

#webdevelopment #reactjs #expressjs #testing #prisma #pnpm #cockroachdb

https://lnkd.in/dDjjisV5

Expressing complex business logic as easy-to-read, test-friendly code is not a trivial task. Take a look at Reactive Architecture and learn how you can apply it to your code!

TL;DR: Split the code into event sources and handlers.

#software #architecture #functionalprogramming

https://lnkd.in/gZRpFkMj

Have you ever had a headache from TypeScript generics? Because TS libraries use generics heavily for flexibility, library authors like Tanner Linsley are always pushing the limits of TS generics' capabilities.

In his recent tweet, he suggested making generics a first-class type. I imagine something like this:

```
type And<A, B> = A & B;
type Or<A, B> = A | B;

// Magic! TS can't do this yet.
type Apply<TypeOperator, A, B> = TypeOperator<A, B>;

// usage 1
type X = Apply<Or, number, string>; // type: number | string

// usage 2
type AndNumber<A> = Apply<And, A, number>;
type Y = AndNumber<string>; // type: string & number
```

I have no doubt that this will expand the possibilities for the language, but I have concerns too.

1. Although it is easier than C++ templates, TS generic type errors are hard to comprehend. The error messages will only get more difficult as the power of generics grows.

2. I expect tsc will get slower because it cannot make as many assumptions about each type. The type checker is not that fast as of now, so I'm not sure the tradeoff will be worth it.

A good part is that we won't need to worry about the impact on runtime performance thanks to type erasure!

#typescript #generics

https://lnkd.in/gBe3BsHz

Most test frameworks provide a way to "mock" a dependency. This is great because it lets you write better test code, but it can quickly get verbose in Jest if the module to mock is tightly coupled with the code under test.

Here's an example from Jest documentation:

```
jest.mock('../foo-bar-baz', () => {
  const originalModule = jest.requireActual('../foo-bar-baz');

  return {
    __esModule: true,
    ...originalModule,
    default: jest.fn(() => 'mocked baz'),
    foo: 'mocked foo',
  };
});
```

That's a lot of code to replace just two properties of a module. You can see the `jest.mock` call will get longer if

- There are more properties to mock.

- Each fake value is more complex.

This is a hassle because oftentimes we don't care what the fake object does. We just want to avoid calling the actual dependency (a payment module, db access, etc.) while not breaking the code under test.

Wouldn't it be nice if we could do this instead?

```js
jest.mock('../foo-bar-baz', () => mockEverything());
```

I figured out a solution with TorontoJS folks at my weekly mob programming session. Blog post incoming. Stay tuned!
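Until the blog post lands, here is a minimal sketch of one way such a helper could work: a Proxy that fabricates a stub for every property access. The name `mockEverything` and the details are my assumptions, not necessarily the solution from the session.

```typescript
// Hypothetical sketch (not necessarily the actual solution): a Proxy that
// fabricates a no-op stub for every property access, caching each stub so
// repeated accesses return the same reference.
function mockEverything(): Record<string | symbol, unknown> {
  const cache = new Map<string | symbol, unknown>();
  return new Proxy({} as Record<string | symbol, unknown>, {
    get(_target, prop) {
      if (prop === "__esModule") return true; // keep ESM interop happy
      if (!cache.has(prop)) cache.set(prop, () => undefined);
      return cache.get(prop);
    },
  });
}
```

Because the Proxy answers every property lookup, the code under test can call any export of the mocked module without blowing up, and without us enumerating the exports by hand.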

#testing #jest #proxy #javascript

I recently worked on a feature which suffered from bad scope creep. Through that experience, I learned that some scope creep is really hard to prevent. There are landmines that can't be detected beforehand. In my case, it was the limitations of Electron and Spotify's app review process that increased the scope of my task. No preventive measure would have stopped the task from getting bigger than I initially expected.

What we can do, however, is to clearly define "what we want", "what we do not want", and "allocated resources" during the planning phase. This definition can guide us when we need to make decisions with new information along the way. Some examples:

1. "Oh snap, I just found out that we also need X, Y, and Z to complete this task." -> We can still finish everything within allocated time. Report the findings and continue on.

2. "The new UI doesn't feel good to use. I know exactly how to improve it" -> The UI improvement is on our do-not-want list. Do it in the next iteration.

Predicting how long a task will take is impossible, but it is trivial to decide what you want and how much you would spend on that "what". So I'll try to focus more on those decisions in my sprint planning.

#project #planning #software #agile

Today I learned that I can define an infinitely deep recursive type in Typescript with "interface".

Let's begin with what doesn't work.

```
type Box<T> = {value: T};

type X = Box<X>; // Error: Type alias 'X' circularly references itself
```

Ok, that looks reasonable. Type X is Box<Box<Box<... so it cannot be initialized that way. BUT, we can achieve the impossible by using interface.

```
interface IBox<T> {value: T}

type Y = IBox<Y>; // OK
```

Did that blow your mind? This works because Typescript defers interface initialization while it eagerly initializes type aliases. This is one of many subtle differences between type and interface in TS.

Next time you're dealing with infinitely deep types, consider sneaking in an interface in the middle to defer the initialization process!

Playground link in the comment section.

#typescript #interface #recursion

This was my first impression of Tailwind CSS as well. "Why would anyone want to write inline CSS?" Although I'm not a fan of Tailwind myself, I think it has nudged the community in the right direction in two ways.

1. Design consistency through utility classes: for a while, the standard way to enforce the visual consistency of our apps was through component libraries plus a theme. The problem: our apps get tightly coupled with those component libraries, and dealing with their breaking changes is a total nightmare. Coupling with a CSS framework is much less of a problem.

2. Not everything needs a name. Sometimes you just want to apply some styles, and it can't be just me who ends up writing ".button-wrapper" or "StyledDropDown". IMO we don't need to worry too much about splitting up markup files into smaller pieces. Indirections and aliases can actually harm the readability of markup files. We just need to give good names to the reusable bits, and that's exactly what Tailwind CSS does.

Although I don't feel comfortable with Tailwind CSS's tooling yet (or lack thereof), I have to acknowledge the good parts.

Wouldn't it be great if we could have a smooth conversation on GitHub pull requests? My team started using Axolo this week, and I think this is as good as it gets.

Axolo is a product that syncs each PR to its corresponding Slack channel, and it feels great to do the everyday PR interactions on Slack!

What's the average turnaround time for PR reviews in your org? In my personal experience, it's really rare for one person to review a PR more than once a day. That means one back & forth in PR takes at least one day. This makes it unbearably inefficient to ask quick questions as PR comments.

So we maintain a double conversation: a fast conversation on Slack and a slow conversation on GitHub. Because we know GitHub conversations are slow, we tend to make each code review as complete as we can. 10+ comments arrive at the same time, then we reply back with another 10+ comments.

I think the ideal review process should be incremental. The initial review should confirm the general direction is correct, then subsequent reviews should have a smaller scope than the previous review. Also, code review should be bi-directional with a healthy amount of discussion. All of the above is difficult using GitHub alone.

Syncing GitHub comments with Slack unlocks a path to this ideal review process. Multiple incremental reviews, quick questions, and heated debates can all happen in the context of a PR.

Although the experience has been great, there are some rough edges too. Source code is not visible on Slack, smart links to PRs/issues/commits are missing, and reviewers are not automatically added to the Slack mirror until they leave a comment themselves.

Also, I'm a bit concerned about the amount of distraction this can cause: the same type of concern teams had when shifting from email to instant messaging. I'm optimistic about Axolo, but we'll need to see how it turns out in the long term.

#github #slack #communication #codereview

ESLint is a great tool, but I have a slightly different take: it should not slow you down even from the beginning. If one adds a bunch of ESLint presets (the "recommended" rule sets), they end up wasting time trying to satisfy the linter.

I think many developers are initially annoyed by a linter because they enable too many rules that they don't care about. Imagine you have an assistant who complains endlessly about things you don't give a flock about. Wouldn't that drive you crazy? So why would you configure your linter to become that annoying assistant? A linter is much more useful when it only reports the mistakes that you know you need to fix.

It came out ranty, but I really recommend a linter to any developer. Just wanted to touch on the source of that initial annoyance.

Today, I listened to Chris and Andrew's first podcast episode and found it to be incredibly relatable for those looking to secure their first developer role. Out of all the advice given, I believe the most important takeaway was to "not worry about your first post."

When you first start sharing your work or opinions, it's normal for almost no one to care. It's okay to share your unpolished work at this stage because people are busy and have incredibly short attention spans. Don't worry about the quality of your work until you have an audience with expectations. As Austin Kleon stated, "You'll never get that freedom back again once people start paying attention."

P.S. Thanks for mentioning my weekly group programming gatherings in the episode!

#steallikeanartist

Thank you everyone for showing up this evening! Hope it was a fun chat. There was one question that I feel like I should have answered better, so let me unwrap it here. The question was "how should I plan my personal projects?"

The person who asked this question was concerned about making poor decisions during the planning phase, which might result in undesirable architecture later on. The problem is that no technique can make us make good decisions 100% (or even 50%) of the time. This means no matter how elegant the planning process is, our initial plans will have a lot of flaws.

So the point of good software engineering is less about increasing the chance of good decisions. It's more about designing our code to be easily fixable when we realize that we made a mistake. The authors of "The Pragmatic Programmer" even argue that every software design principle can be derived from ETC (easier to change).

This is the reason why I suggest people worry less about the planning itself. Start from a minimal codebase (or "tracer bullet" to borrow the term from the Pragmatic Programmer) then add more requirements & constraints on top of it. Eventually, we are bound to hit the point where we feel like it's too difficult to introduce a significant change.

It is really important to reflect on our codebase at this dead end because it is a precious example of bad design. Looking at thousands of good examples doesn't make us better developers if we can't tell them apart from bad ones. By writing & reflecting on bad code repeatedly, we grow our intuition to recognize bad designs. And when we learn how to avoid those bad patterns, our range of how far we can go without hitting that dead end increases.

Focus more on flexibility than on correctness. We make mistakes. Quite a lot. There's no way around it. That's why we need to account for mistakes when we code. This is a hard concept to embrace because we want all our projects to be successful. We try to put them on a fail-proof path from the beginning, but all projects are destined to stall at the builder's limit. The best we can do is to gain experience and expand our range.

Ok, that's what I wanted to say but couldn't deliver it well during our chat. Have a good night and find me on the TorontoJS Slack channel if you want to connect!

The Pragmatic Programmer: https://bit.ly/3X5xq1a

Software Design for Flexibility: https://bit.ly/3ItARL1

Recursive types are powerful in Typescript, but it is essential to handle the base case, just like we do in recursive functions. Recursion without a base case risks an infinite loop. Typescript recursions are a bit trickier than regular recursions because of the peculiarity of the `any` type.

Let’s take a look at linked list as an example.

```
type ListNode<
  Value,
  NextNode extends ListNode<any, any> | null
> = {value: Value, next: NextNode};
```

This list is great because we can put various types into this list, and the type is inferred correctly!

```
function listNode<
  Value,
  NextNode extends ListNode<any, any> | null
>(value: Value, next: NextNode) {
  return {value, next};
}

const myList = listNode(
  123, // number
  listNode(
    "hello", // string
    null
  )
); // type: ListNode<number, ListNode<string, null>>
```

Okay, now let’s take this a step further. How about the `Tail` type that infers the last node of a given linked list type?

```
type Tail<TNode extends ListNode<any, any>> =
  TNode extends ListNode<any, null>
    ? TNode["value"]
    : Tail<TNode["next"]>

type MyTail = Tail<typeof myList> // type: string
```

Awesome! Everything seems to be working fine. But it breaks when we pass in `any` as a type parameter.

```
type UhOh = Tail<any>;
```

This produces the error “Type instantiation is excessively deep and possibly infinite”. Let’s step through our `Tail` type and see why this happens. `Tail` first checks whether `TNode` extends `ListNode<any, null>`. Because `any` both satisfies and fails the condition at the same time (I know, weird), the result is `any["value"] | Tail<any["next"]>`. That’s equivalent to `any | Tail<any>`. Do you see the infinite loop? `Tail<any>` resolves to `any | Tail<any>`, which is `any | any | Tail<any>`…

Here’s a magic trick to solve this issue. We can detect if the type parameter is `any` and terminate the recursion early.

```
type Tail<TNode extends ListNode<any, any>> =
  IsAny<TNode> extends true
    ? any
    : TNode extends ListNode<any, null>
      ? TNode["value"]
      : Tail<TNode["next"]>
```

`Tail<any>` no longer results in an infinite loop. Finally, this is the secret sauce:

```
type IsAny<T> = unknown extends T & "EXTRA" ? true : false;
```

That’s all. It works because `T & "EXTRA"` stays `any` only when `T` is `any`; for every other `T`, the intersection is a narrow type that `unknown` cannot extend. Stay type-safe everyone!

Code: http://bit.ly/3WUAOf8

#typescript #recursion

Have you been using useCallback to optimize component rendering? Have you ever thought it is inconvenient to wrap all your event handlers in that hook and remember to list all the dependencies every time?

Yes, me too. And I was so happy when I came across this RFC for "useEvent" hook. It's like useCallback but without the dependency array! useEvent is not part of React yet, but many people already use it in their code. Notably, @docusaurus/theme-common has an implementation of useEvent (that's how I found out about useEvent in the first place).

The useEvent hook is not all about convenience, though. Its practical benefits are more important. The purpose of useEvent is to improve on useCallback patterns because useCallback's memoization gets invalidated far too often. Take this component for example:

```
function MyForm({ userId }) {
  const [message, setMessage] = useState();

  const handleClick = useCallback(() => {
    sendRequest(userId, message);
  }, [userId, message]);

  // ...
}
```

Here, the handleClick memoization gets invalidated whenever the userId prop or the message state changes. You can see that useCallback's optimization gets less and less effective as the dependency list grows. By using useEvent, we can ensure that handleClick's reference never changes, resulting in a much more effective optimization:

```
const handleClick = useEvent(() => sendRequest(userId, message));
```

There are still discussions around the drawbacks of useEvent hook, but I think it is a very nice piece of utility to add to your React codebase. The example implementation is tiny, so please take a look if it interests you.

https://github.com/reactjs/rfcs/pull/220
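The core trick behind useEvent can be illustrated without React: keep the latest handler in a mutable ref and hand out a stable wrapper that reads the ref at call time. Here is a framework-free sketch of that idea (the names are mine, not from the RFC):

```typescript
// Framework-free sketch of the idea behind useEvent: the wrapper's identity
// never changes, but it always delegates to the most recently set handler.
function makeStableEvent<A extends unknown[], R>(initial: (...args: A) => R) {
  const ref = { current: initial };
  const stable = (...args: A): R => ref.current(...args);
  return {
    stable, // pass this around; its reference is constant
    update(next: (...args: A) => R) {
      ref.current = next; // in React, this would happen on every render
    },
  };
}
```

In the real hook, the update step runs in a layout effect on each render, which is how the stable callback always sees the latest props and state.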

There are 2 types of problems in this world. The first kind is something you know that you can solve given enough time & effort. The other kind is something you have no idea how to tackle.

Type 1 problems are what trigger the "flow" experience. You know the rules, you have constraints, and you make familiar moves that fit the situation. Occasionally, you make mistakes but learn immediate lessons from them because you know exactly what went wrong. You can stay in this state for hours without noticing.

Type 2 problem is a big time sink. You can spend weeks on this type of problem without making much progress. It can be frustrating. It can burn you. In my experience, I rarely succeed in solving type 2 problems and learn very little during the process.

When I was younger, I thought type 2 problems were required to improve my skills. You know, "go beyond your limits." That was what my teachers told me. That was how heroes made breakthroughs in movies and mangas. So I didn't back down from challenges. I blindly accepted frustration, anxiety, and blockers as part of my growth.

Then I realized most of my growth came from solving many type 1 problems. The problems that lie right at my limit. There's nothing heroic about solving type 1 problems. I just show up every day, doing mostly the same thing, maybe a bit better than yesterday. It is plebeian, but this is the mechanism of growth.

Does this mean we should ONLY attempt type 1 problems? The answer is not that simple. We are not good at measuring our capabilities (check out the Dunning-Kruger effect), not good at measuring the difficulty of a problem (ask any project manager), and most importantly, there can be things more important than personal growth.

For people who want to prioritize growth: I suggest you be conscious about the type of problems you are dealing with now. Seek help or downshift your gear if you feel like you are stuck in a type 2 problem. Growth needs dedication but doesn't need to be distressing.

#flow #growth

Today's my first day at LifeAt. I am so excited!

I love it when a hiring process involves a take-home assessment (no sarcasm). It is the only type of interview where I feel like I'm displaying my full potential.

As long as the assessment is

* relevant to the role

* not timeboxed (or reasonably timeboxed)

* actually reviewed by someone

I enjoy working on it.

It's like experiencing a game with <10-hour playtime. It's right inside the Goldilocks zone in terms of difficulty. Some assessments give you a sneak peek of the technical challenges of the employer, and that's interesting too.

Pair programming & system design interviews are next on my preference scale because they make me more anxious. I always leave these interviews thinking there was not enough time to show what I can do.

There's no doubt that assessments are time-consuming. Sometimes, you get a poorly designed assessment. I know many developers dislike take-home for these reasons.

But I hope companies will continue utilizing take-home assessments for hiring.

What do you consider when you build a multi-step form for a web app? Last week, I had to make one for a take-home assessment, and these are the things I considered:

* If a user closes the form in the middle, do I want to persist the progress? Where should I store this progress? In the database? In browser storage? What if the form contains sensitive personal information? How should I restore the saved progress? What should the database schema look like? How frequently will that schema change?

* What should I do if the steps are not fixed? There could be conditional steps or dynamically added steps. How do I express such state in JS/TS?

* When should I validate the form? Should I block the user from proceeding to the next step if their inputs are invalid? If so, should I save the invalid input?

Answers to these questions depend on many factors, but I can say this for sure. Such complex UI definitely belongs in a test harness.
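On the "steps are not fixed" question, one way to express such state in TS is a discriminated union of steps plus a cursor. A minimal sketch, with step names invented for illustration:

```typescript
// Hypothetical sketch: dynamic form steps as a discriminated union, so
// conditional steps are explicit in the type. Step names are made up.
type Step =
  | { kind: "account"; email?: string }
  | { kind: "company"; name?: string } // conditional: business users only
  | { kind: "review" };

type FormState = {
  steps: Step[]; // recomputed from answers so far; can grow or shrink
  index: number; // current position in the flow
};

function nextStep(state: FormState): FormState {
  // Clamp so we never advance past the final step.
  const index = Math.min(state.index + 1, state.steps.length - 1);
  return { ...state, index };
}
```

Because `steps` is plain data, conditional steps can be added or removed by recomputing the array after each answer, and the test harness can assert on the whole flow without rendering any UI.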

#javascript #webdevelopment #ui

A month ago, I posted about making React Query query keys safe.

https://lnkd.in/gV_Gzi_i

I have an update to support cancellable requests. This modified approach allows React Query to safely cancel unnecessary requests. You can learn more about AbortController and cancellable fetch here:

https://lnkd.in/gy5MUM-9

#react #typescript

I'm still searching for my next role, but I'd like to express my gratitude to the amazing people who have been supporting me through the journey.

Thanks to everyone who gave me a referral, a piece of advice, or kind words. It means a lot to me!

I learned something about Typescript enums today. To iterate over an enum, you can use Object.values(), which infers the type correctly!

Example:

```
enum COLOR { RED = "red", BLUE = "blue" }

const colors = Object.values(COLOR); // type: COLOR[], i.e. ("red" | "blue")[]
```

The difficulty of iteration was what stopped me from using enum for a long time. However, I still don't think enum is more attractive than literal unions. That's because the compiled JS is somewhat jarring. When you compile the COLOR enum above, you get this blurb:

```
var COLOR;
(function (COLOR) {
    COLOR["RED"] = "red";
    COLOR["BLUE"] = "blue";
})(COLOR || (COLOR = {}));
```

I don't like it when Typescript syntax generates hard-to-understand JS code. So here's the approach I have been using until today. The compiled JS is very straightforward in this case:

```
const ALL_COLORS = ["red", "blue"] as const;

type Color = (typeof ALL_COLORS)[number];
```

Ok, it is nice to have a solution that is closer to JS syntax. But the problem with my approach is that it is tedious to refactor. Everything is a literal, so you would need to fix the type errors one by one if you want to change "red" to "Red" for some reason. (Note that I rarely needed to do such refactoring)

Then I came across this part of the Typescript handbook today:

https://lnkd.in/gg6djAsW

In short, you can define an enum-like object like this:

```
const color = { RED: "red", BLUE: "blue" } as const;

type Color = typeof color[keyof typeof color];
```

This is nice because it is a solution that generates a straightforward compilation result (with a literal object) while allowing easy refactoring. The only eyesore is the verbose Color type declaration. You can write a terse generic type to address that.

```
type InferEnum<T> = T[keyof T];
```

Here's how it looks now:

```
type Color = InferEnum<typeof color>;
```

Nice!

#typescript

When their monolith gets too difficult to manage, people look at what other companies are doing and get attracted to microservices. Microservices look amazing from a developer's perspective because a developer only needs to focus on their own independent service. The deployment pipeline takes care of the complicated dependency tree. The specs are tested automatically, so it's easy to detect any incompatibility between versions/services. All these lead to better focus & productivity.

The problem is that this is just half of the story. One does not simply adopt microservices. It's not like you decide to go microservices on Monday and start using it on Tuesday. Microservices architecture is just an abstraction layer on top, so it means you need to spend more time building a more generalized solution to your current issues. And it is common sense that solving a general problem is more difficult than a specialized one.

I'm not saying microservices are bad. It is nice to have a general solution because you don't need to come up with a bespoke solution for each case. However, operating microservices is a difficult problem that most likely needs a dedicated team of engineers. As the original post pointed out, it makes sense for big techs to make that expense because they are dealing with 1000+ moving parts.

Adopting a microservices architecture to manage 5 components? That's a case of premature optimization, in my opinion.

#microservices #optimization #softwareengineering

These are very respectable opinions about Typescript, and the post includes many common concerns of new TS users. As a TS fanboy, I'd like to describe what developers gain by choosing TS over JS, and which of these concerns can be mitigated.

The best part of TS is that we can statically prove certain parts of our codebase (as opposed to dynamic proofs like unit tests). This opens up a lot of opportunity for static tooling (like refactoring and autocomplete), although the current set of tools still has a long way to go. These days, TS tooling is usable out of the box but can be improved with some tweaking IMO.

Having "type safe areas" in our code can help us focus better because there is generally less number of edge cases we need to consider within these safe areas. Notably, undefined and null values take much less mental capacity with TS. A common strategy is to expand type coverage until we can isolate the "danger zone" into a small number of modules. We can then unfold our null paranoia within that small danger zone.

As mentioned in the original post, not everything needs rigorous proof. We can write a perfectly good function with some gaps in type safety. In such cases, we can carefully use type assertions or "any" when appropriate. In fact, it is common to see such assertions in popular TS libraries. We can't do that in many compiled languages, so I consider this a strength of TS.

The concern about slow prototyping is also valid. The essence of TS is about ensuring type safety, which is not required for prototypes. According to Andrew Hunt, prototypes are throwaway projects for testing concepts. So it makes sense to choose JS for the prototyping phase. However, prototypes are not something we keep building on top of. After the concept is proven, we discard the prototype and start fresh. This is when we gauge long-term tradeoffs between JS and TS.

As with any other static analysis tool, TypeScript cannot be perfect. So it is beneficial to know its capabilities & limits.

#typescript

"Don't wait until you know who you are to get started." - Austin Kleon

What would you do if you were the world's most amazing dev? Would you build a killer app? Would you make a new programming language? Would you help other devs? The best part is that you can get started today. Don't wait.

I'm trying to build a Prisma generator that outputs a Remix dashboard. Stay tuned!

#prisma #reactjs #remix #webdevelopment

Before I became a professional software developer, I was a hobbyist programmer for more than 5 years. I hesitated for a long time to get into tech because I thought getting paid would kill my passion. Despite my concerns, I'm still here, still passionate about what I do. Let me share how I did it:

First of all, my concern was not groundless (although I could not explain it back then). External rewards can actually dampen your intrinsic drive. This is called the Sawyer Effect, named after a famous passage from Mark Twain:

"Work consists of whatever a body is obliged to do, and that Play consists of whatever a body is not obliged to do."

I was wondering how I could retain my passion even after two years of cash compensation, and here's my hypothesis: I have never considered my salary as a reward for the code I wrote. For me, salary is how my employers express their appreciation of my presence. I think they pay me because they want me to live a good life, like patrons of a renaissance artist. It might sound strange, or narcissistic, but it is an effective way to disconnect the reward from my everyday tasks.

I "think" this mindset is what allowed me to keep my passion. It might not make any scientific sense, but it works for me.

Drive (Daniel Pink): https://lnkd.in/gWNYSqZU

Predictably Irrational (Dan Ariely): https://lnkd.in/gcp6BJxc

I just finished reading The Nature of Software Development (Ron Jeffries). I also watched a Let's Play video of Software Inc. on YouTube this week. There is a stark contrast between how these two depict software development, so it was fun looking at the tech world through two different lenses.

As a software developer, I hadn't had a chance to look at a project from the manager's perspective. So watching someone playing Software Inc. was an enlightening experience. Within the video game, the player was a founder of a tech company and was able to manage every aspect of the company. To develop a new product, the player only needed to select features to include and set the "level of tech". After assigning a few teams to the project, the player got the finished product delivered on the deadline.

The entire mechanism was so simple, and that simplicity was mind-blowing. Of course, this was an exemplary waterfall project. Wasn't waterfall a monstrosity that everyone wanted to kill? The process looked too good in that video game.

It's no secret that there is a metric ton of literature explaining why this utopian waterfall does not work in real life. In short, a piece of software is too complex to design upfront & we cannot predict how users will think of a product before it is delivered.

Now that real-world version sounds impossible to manage, doesn't it? Imagine Software Inc. where you can't choose any product features and your teams are always late to release.

The Nature of Software Development suggests an alternative way of management: Instead of focusing on what to build, managers can focus on the values to deliver & how much resource to allocate. The company will have an end result on the deadline which delivers the desired values. There are additional details to ensure that there will indeed be a functional product in the end, but this is the gist of "the natural way".

I still think I would prefer Software Inc.'s simplified waterfall if I were a manager. It's almost like a software vending machine. Too bad that we can't fulfill the management's fantasy.

The Nature of Software Development: https://lnkd.in/gkHGjfkq

Software Inc.: https://lnkd.in/gnE9hzZH

#software #agile #book #videogames

Naming things is difficult, but it gets to another level when you are in a non-English-speaking region.

To start, you would be lucky if every name could be translated into English. Sometimes it is impossible to translate a word without losing its meaning. Legal terms are a good example because they are a use case where ambiguity is unacceptable.

Even if you can translate everything, there's no guarantee that everyone in your team understands English. And I have seen people trying to use the Roman alphabet to express Korean pronunciations. Translated or not, trying to express Korean words with the Roman alphabet hurts readability.

How about naming things with the Korean alphabet (Hangul)? That would improve readability for sure, wouldn't it? Yes, it would be more readable, but it brings up more challenges.

First of all, there are multiple ways to serialize Hangul into bytes. EUC-KR was a popular encoding back in the days when operating systems didn't have Unicode support. And because each Hangul syllable is composed of multiple Hangul letters (e.g. 산 = ㅅ + ㅏ + ㄴ), Unicode has two separate blocks: one for precomposed syllables and one for composable letters (jamo).

That is why it is difficult to use Hangul in identifier names. You need to ensure all components (database, client, server, etc.) in your system use the same encoding & composition scheme. Imagine debugging variable names that look the same but point to different values!
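The two representations really do compare unequal even though they render identically. A quick illustration in Node.js:

```typescript
// "산" as one precomposed code point vs. a sequence of combining jamo.
const precomposed: string = "\uC0B0"; // U+C0B0 HANGUL SYLLABLE SAN
const composable: string = "\u1109\u1161\u11AB"; // ㅅ + ㅏ + ㄴ as combining jamo

console.log(precomposed === composable); // false: different code point sequences
console.log(precomposed === composable.normalize("NFC")); // true after normalization
```

Unless every component normalizes consistently (e.g. to NFC), two identifiers that look like 산 can be different strings.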

Furthermore, there is no uppercase or lowercase in Hangul, meaning that you cannot use common naming conventions like camel case or pascal case.

As far as I know, the majority of Korean programmers only use the Roman alphabet for coding to avoid all those headaches mentioned above. At the end of the day, programmers are not linguists, and we just want to build things in peace.

What I love about teamwork is that I can focus on my task while leaning on teammates to take care of other parts.

When I work on my own side projects, I usually need to be concerned about absolutely every aspect of each project. The concerns range from overall software design down to nitty gritty best practices. There is a lot to keep track of, and it slows me down. Although I enjoy challenging my mental capacity, it is just not sustainable to do more than a couple of projects in full-awareness mode.

Yesterday, I started working on a front-end feature for Collective Focus's community fridge web app, and I was amazed by how much focus I could gain from not worrying about the rest of the system. I still needed some context about the project to understand my task, but it was not part of the puzzle when I was typing out my implementation.

If working solo is a 5x5 Rubik's cube (because I am concerned about the consequences of each move I make), teamwork is like solving a jigsaw puzzle.

#collaboration #productivity

React Query users, how do you ensure you're using the correct query key every time?

React Query manages query cache based on query keys, and developers can use anything as a query key as long as it is serializable. If you are using the same query in two different places, you need to make sure you're using the same query key in both places. React Query does not warn you when the keys do not match. It will just cache the same query twice. Depending on the use case, this behavior can make maintenance harder.

Here's my attempt at solving that problem. The "Query" higher-order function takes a query identifier + a fetch function and returns arguments for the "useQuery" function. Instead of repeating the same query key every time, developers can now import a predefined "Query" and spread it inside "useQuery".
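A minimal sketch of the idea (the helper name "createQuery" and the "todos" query below are my own placeholders, not the actual code from the post):

```typescript
// Hypothetical helper: bind a query key to its fetch function in one place,
// so call sites can never pair the wrong key with a query.
type QueryDef<T> = {
  queryKey: readonly unknown[];
  queryFn: () => Promise<T>;
};

function createQuery<T>(
  queryKey: readonly unknown[],
  queryFn: () => Promise<T>
): QueryDef<T> {
  return { queryKey, queryFn };
}

// Defined once in a shared module:
const todosQuery = createQuery(["todos"], async () => [
  { id: 1, title: "Buy milk" },
]);

// Every call site spreads the same definition, so the key can never drift:
//   const { data } = useQuery({ ...todosQuery });
```

The key and the fetcher travel together, so a mismatched key becomes impossible rather than merely unlikely.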

What do you think?

#react #typescript #functionalprogramming


My 2nd PR for the community fridge web app was merged yesterday!

https://lnkd.in/g58f-xhM

I'd like to share my experience so that people who are curious about open-source workflow can get a sneak peek.

The story goes back to this July, when I first met Hamaad Chughtai at a local tech talk, where he told me about his recent contribution to the community fridge project & civic tech. After my contract was terminated earlier last month, I was looking for things to do with my free time, so I reached out to Hamaad and asked if I could join the project. He happily connected me with Bernard and Jean Paul Torre from Collective Focus, who were managing the development process.

They showed me around the codebase and gave me plenty of context during the onboarding voice chat, which helped me gain insight into the status of the project and the people involved. Then I spent a whole weekend setting up my local dev environment because I had no prior experience with the project's tech stack (AWS Lambda, SAM, DynamoDB).

Once I was confident in my setup, I picked up my first ticket from their issue tracker: writing a seeding script for DynamoDB. After writing the script, I opened my first PR and pinged Jean on Discord for a review. It was a simple PR, and we merged it after just a couple of rounds of feedback.

https://lnkd.in/gBsj9mRT

My second ticket was more complicated than the first because it involved integrating S3 (a new piece of infrastructure) in addition to a new API handler. I put effort into making sure the S3 integration worked seamlessly in the local environment so that developers would have a pleasant experience. Over the 11 days of review, Jean provided helpful feedback and suggestions until we could merge the PR.

Now the team is ready to deploy the first version of the app, and I'm excited to see it go live!

If you want to contribute to an open-source project, I strongly recommend reaching out to people involved in that project first. It makes your job significantly easier. If you are interested in the community fridge project in particular, message me or directly reach out to Jean.

#opensource #civictech #aws #collaboration

For me, the biggest pain point of Styled Components was coming up with the component names. It didn't feel right to name a <div> with a margin <StyledDiv> or <DivWithMargin>. Should I call it <StyledDiv2> when I have another styled div? Should I rename <DivWithMargin> when I need to add more styles to it?

These questions often interrupted my focus, so here are some alternatives I've tried:

* Built-in inline styles: this works without any additional setup but is limited in functionality.

* CSS props from emotion.sh: allows me to do interpolations and theming on top of inline styles.

* tailwindcss.com: allows me to write an abbreviated format of inline CSS. I think of it as keyboard shortcuts for CSS.

* Sprinkles from vanilla-extract.style: similar to Tailwind CSS, but it has strong TypeScript support.
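To illustrate the difference (the component names below are hypothetical), the styled-components approach forces a name on every styled element, while the inline-style and css-prop approaches let styles stay anonymous values:

```typescript
// With styled-components, every styled element needs a name:
//   const StyledDiv = styled.div`margin: 16px;`;
//   <StyledDiv>...</StyledDiv>
//
// With inline styles (or emotion's css prop), the styles are just values,
// so there is nothing to name:
const withMargin = { margin: "16px" };
//   <div style={withMargin}>...</div>
console.log(withMargin.margin); // "16px"
```

When the style is a value rather than a component, the naming question simply disappears.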

I love how fast developers are making progress on this front!

#react #css #webdevelopment