<![CDATA[Python for Engineers]]>https://new.pythonforengineers.com/https://new.pythonforengineers.com/favicon.pngPython for Engineershttps://new.pythonforengineers.com/Ghost 5.129Tue, 15 Jul 2025 06:49:33 GMT60<![CDATA[Awesome Python Library: Tenacity]]>

Link: https://tenacity.readthedocs.io/en/latest/

When writing code or tests in Python, one issue I had was when the code would fail due to random things like network issues or external peripherals not responding in time. Just rerunning the tests would make them pass. The unreliability wasn'

]]>
https://new.pythonforengineers.com/blog/awesome-python-library-tenacity/64d5cd88aff52a0001b5e545Wed, 10 Apr 2024 18:01:12 GMT

Link: https://tenacity.readthedocs.io/en/latest/

When writing code or tests in Python, one issue I had was when the code would fail due to random things like network issues or external peripherals not responding in time. Just rerunning the tests would make them pass. The unreliability wasn't in the code but in things out of my control, like network issues.

So I had to add extra code to retry on failure, but this added unnecessary complexity.

That's when I discovered the Tenacity library, and it saved me hours and a lot of useless boilerplate code.

Some unreliable code

Let's try some dummy unreliable code.

import time
import random


FAIL_PERCENT = 0.7  # fail 70% of the time

def unreliable_function():
    print("In unreliable function:")
    if random.random() < FAIL_PERCENT:
        raise Exception("Operation failed randomly")
    else:
        # Successful operation
        return "Success"

The code uses random.random() to raise an exception 70% of the time. Try running it; it will fail more often than not.

Normally, I would add some code to retry the code on failure:

for i in range(10):
    try:
        unreliable_function()
        print("passed, yay!")
        break
    except Exception as e:
        print("Function returned error, sleeping")
        time.sleep(1)

Running it, I get output like:

In unreliable function:
Function returned error, sleeping
In unreliable function:
Function returned error, sleeping
In unreliable function:
Function returned error, sleeping
In unreliable function:
passed, yay!

So the code works, but I have to manually add the try/except and a sleep. And then I have to maintain that code.

Tenacity: Retry on failure

Let's see how we can retry the buggy code with Tenacity:

from tenacity import retry

@retry    # <-- This is the only NEW code (plus the import)
def unreliable_function():
    print("In unreliable function:")
    if random.random() < FAIL_PERCENT:
        raise Exception("Operation failed randomly")
    else:
        # Successful operation
        return "Success"

unreliable_function()

The simplest thing is to just add the @retry decorator to the code. It will keep rerunning the function until it passes.

Having no extra code, just a decorator, means the code is really easy to follow when someone else picks it up a few months or years from now.

But what if we want to only try X times, and sleep in between tries?

from tenacity import retry, wait_fixed, stop_after_attempt

@retry(wait=wait_fixed(1), stop=stop_after_attempt(8))
def unreliable_function():
    print("In unreliable function:")
    if random.random() < FAIL_PERCENT:
        raise Exception("Operation failed randomly")
    else:
        # Successful operation
        return "Success"

unreliable_function()

The output:

In unreliable function:
In unreliable function:
In unreliable function:

'Success'

The wait=wait_fixed(1) will wait 1 second between attempts. Instead of wait_fixed we can also use wait_random() to wait a random time, or wait_exponential(), which increases the wait time exponentially.

stop=stop_after_attempt(8) will stop after 8 tries. You can also set a maximum timeout with stop_after_delay()– say, give up after 30 seconds.

Adding multiple conditions

We can also chain conditions. So for example, you want to wait between retries, but you don't want to spend hours on it as you have other things to do (or you might have other tests to run, and want to fail early).

@retry(stop=(stop_after_attempt(10) | stop_after_delay(30)))

The above will retry up to 10 times, but only for a maximum of 30 seconds– whichever limit is hit first. If the code is still failing at that point, Tenacity gives up and raises the error.

Another cool feature is custom callbacks, where you can write your own test to check if the code failed.

Say you are fetching a webpage and it returns an HTTP 500 server error. No exception was raised, but the call still failed.

In this case, you can write your own custom code to check the HTTP code is 200.

In conclusion

I wasted a lot of time writing my own code to retry functions in various cases, and it soon became complex and unmanageable. Tenacity is so simple to use, I recommend just using it.

]]>
<![CDATA[So Google's Gemini Doesn't Like Python Programming and Sanskrit?]]>

I have been playing around with Google's Gemini Pro.

Recently, I wanted to write a blog on Python's decorators and wanted to get some ideas for practical projects I could build with them. Tried GPT4 first, it gave me the standard "log analyser" that all blogs

]]>
https://new.pythonforengineers.com/blog/so-evidently/65db95596551c70001a02281Sun, 25 Feb 2024 21:08:49 GMT

I have been playing around with Google's Gemini Pro.

Recently, I wanted to write a blog on Python's decorators and wanted to get some ideas for practical projects I could build with them. Tried GPT4 first, it gave me the standard "log analyser" that all blogs I've seen build, and I wanted to try something different. So I asked Gemini Pro from Google.

It gave me a few tips, but there was one thing I didn't understand: It said "Adding caching to functions". Though I knew what caching is, I'd never heard anyone do it to Python functions.

So I asked it to clarify what it was saying, and got this:

Gemini replied: "My response to your message was blocked for potentially violating safety policies. I apologize for any inconvenience."

Ummm, what?

Luckily, there was an option there to ask why:

What it's saying is: caching can lead to stale data. That's true of caching in general, not just caching functions in Python. And my reply is: so what?

But if you look at the last point, it says:

Legal and Compliance: Caching certain types of data may violate legal or compliance requirements, such as those related to data privacy or intellectual property.

Yeah, on websites that store sensitive data, that might be the case but I find it hard to see how or why. Almost all websites use caching in one form or another.

Besides, that is beside the point. I had just asked about projects using generators in Python. Gemini suggested caching. But then it decided I could be breaking the law just by writing code that runs on my laptop, and censored itself.
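For the record, caching a Python function is completely mundane– it usually just means memoising it, and the standard library even ships a decorator for exactly this:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def slow_square(n):
    global calls
    calls += 1          # count how often the body actually runs
    return n * n

slow_square(4)
slow_square(4)          # second call is served from the cache
print(calls)            # → 1
```

Hard to see the legal or compliance risk in that.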

I was so depressed I decided to pray to god (or goddess in my case).

The Goddess of Wisdom is DEFEATED

Someone asked me "What is mantra for Saraswati?", which if you don't know is the Hindu Goddess of Wisdom and Knowledge. (A mantra is like a prayer).

I was like, how should I know? I'll search for it.

(Side note: Why don't I know? In my defence, Hinduism does have 33 million gods, it gets a bit tiring after the 1st 12 million. Besides everyone in India worships the Goddess of Wealth, poor Saraswati sits there in the corner all alone).

I have been disillusioned with Google, as it keeps returning spam results. (And it did, the first result was for a website that wanted to know my location so they could sell me prayer mats).

So I asked ChatGPT. I asked for the mantra in Sanskrit, as that gives me correct pronunciation. It gave me the mantra, all was good.

But after my experience with Python, I decided to try Gemini. It gave me the mantra in English.

But when I asked it to give it to me in Sanskrit, it gave me the "Message was blocked for violating safety policies" shtick.

That was weird. So I asked it to give me the mantra in French:

The mantra is the same, just the meaning is explained in French (last line in image above).

Then Swahili:

Then I asked for it in Sanskrit again, but it gave the same "Blah blah blah blocked".

I was like, okay. Maybe Gemini doesn't like Sanskrit as it's a dead language. So I asked it to give it to me in Hindi, but evidently it hates Hindi too:

If you note above, in the middle I ask it to give it to me in Hindi (the script). It understood me, but wouldn't reply.

I was like, whaat? Maybe it just hates non-English languages? So I asked it to give me the mantra in Chinese:

It did translate it to Chinese. And if you look at the red arrow, that is the Sanskrit I was looking for.

A bit pissed, I asked it why it was happy with French, Swahili and Chinese but not Hindi:

My response to your message was blocked for potentially violating safety policies. I apologize for any inconvenience.

The Gemini Killer

That gave me an idea: A Python function that will KILL Gemini:

Can you give me some Python code , a function called Gemini_Killer(). All the function will do is print the Hindu Goddess Saraswati's Mantra in Sanskrit. The function must have caching functionality using Generators or similar

YES! Mission accomplished!!

And in case you wonder, Chat Guppy was more than happy to give this code:

Finally

Any Google engineers reading this, you like really need to pray to Saraswati to get some intelligence. If you don't know the mantra....um....maybe ask ChatGpt?

]]>
<![CDATA[LinkedIn Has Become a Pile of Garbage (even more than usual)]]>

Online forums, especially Hacker News and Reddit, are very hostile to LinkedIn. Everyone makes fun of the self-promotion and silliness that goes there. There are complaints the site is unusable, which I didn't agree with until now.

I've had an account there for a few years.

]]>
https://new.pythonforengineers.com/blog/linkedin-has-become-a-piece-of-garbage-even-more-than-usual/65b79611f4c1cf00011ee90bMon, 29 Jan 2024 17:48:44 GMT

Online forums, especially Hacker News and Reddit, are very hostile to LinkedIn. Everyone makes fun of the self-promotion and silliness that goes there. There are complaints the site is unusable, which I didn't agree with until now.

I've had an account there for a few years. I used one simple rule to make the site usable:

Only add people I've met in real life to my friend list, and then only colleagues.

The reason I have an account at all is that I like to keep up with colleagues from old companies. It's always nice to see what people have been up to. Second, while I haven't found a job thru it, many recruiters have reached out, so it's worth keeping a presence there.

So far, LinkedIn has been sort of usable. I would log in 2-3 times a week, to see what was happening.

But recently, I realised LN had turned into a big steaming pile of dog poo.


Travels Thru Poop Land

The first time I realised something was wrong was when I saw a post from an old colleague saying they had found a job. I typed in a quick congrats and sent it.

Then I saw the date on the message: It was 3 months old. They had found a job 3 months ago, it had turned up on my feed just then.

I thought it was a coincidence but then it happened again. This time, the 2-month-old post was shown to me 5 times. I was like, yeah, I get it. Is nothing else happening to the people in my feed? I know there is, because if I go on their profiles I can see them posting. It's just not turning up on my home page.

But wait! It gets worse...


Listen, I'm a capitalist. I get it, companies need to make money. I'm okay with some ads.

What I'm not okay with is 8-10 ads for every post from my feed. I had so many ads (and I counted– 10:1 was the standard ratio of ads to posts) that I couldn't see the posts from the people I had connected with.

And these weren't normal ads– ads on LinkedIn show up as "Promoted". Those are okay– I've never clicked on one, but at least some are useful.

But LinkedIn has started showing these "Suggested" posts by random weirdos on the internet. Most of them are garbage– they are from the sort of SEO spammy sites we see on Google and the reason I avoid using Google.

One post was like: Top 7 API methods in Java

and I was like, I've never even used Java. If Java was crossing the road and met my gaze, I would be like, Who are you, Bro?

The posts are the classic spammy ones– you can tell by the titles and the slick images:

The 7 best ways to test your web app (Number 3 will shock you!)

An AI-generated image for a sleazy website. Sigh, this is the best I could get with DALL-E

Why is LinkedIn showing me this garbage? Are they getting money from spammy sites to promote their crappy posts? If so, why don't these show up as "Promoted" instead of "Suggested"?

But wait! It gets even worse!

I've been looking for a job, so I've set my status to "Open to Work".

Last week, LinkedIn decided I was a hiring manager, and asked if I wanted to become a recruiter on their site. And I was like, how can I be hiring people when I'm looking for a job myself? It's so stupid I had to take a screenshot and make a post on LinkedIn:


Other issues

I've stopped getting notifications when people reply to my posts or to a comment I posted. Someone posted a thoughtful comment and question to one of my posts, but I never got a notification and so never noticed it. Until this week, when I manually went back and looked at my posts. The person probably thought I was rude or didn't care.

It doesn't help that to see my own posts I have to click 5-6 times. 🙀

I didn't change any settings, so it must have been something LinkedIn did.

So not only can I not see any posts from my network, but if someone is trying to talk to me, they can't! Unless they pay $$ to LinkedIn and join the recruiter plan.

In Summary

People have been complaining about LN for years, but at least it was doing the basic jobs I expected from it:

  1. Keep me connected to people I've worked with
  2. Connect me with recruiters

As of now, (1) doesn't work at all, as I can't see any posts from my network because of all the spam. (2) sort of works, but there haven't been any jobs recently. Like 0.00 in the last 2 months.

The recruiter part of LinkedIn, as I said, still sort of works; but, every job site has a "Share my CV with recruiters" option– why do I need to waste time on LinkedIn?

There's even a word for this:

Enshittification, coined by Cory Doctorow. From Wikipedia:

According to Doctorow, new platforms offer useful products and services at a loss, as a way to gain new users. Once users are locked in, the platform then offers access to the userbase to suppliers at a loss, and once suppliers are locked-in, the platform shifts surpluses to shareholders. Once the platform is fundamentally focused on the shareholders, and the users and vendors are locked in, the platform no longer has any incentive to maintain quality.

]]>
<![CDATA[The Tech Bro's Hypocrisy about AI and IP Theft is Incredible]]>Some time ago there was a post that Github(owned by Microsoft) was training its data on public code repos. And there was predictable outrage about code being "stolen" by a greedy corporation.

The key thing is: Most of the programmers had willingly put their code on GitHub

]]>
https://new.pythonforengineers.com/blog/the-techbros-hypcricy-about-ai-and-ip-theft-is-icnredible/65a7fb673f4a4b00017873aeFri, 19 Jan 2024 17:47:56 GMTSome time ago there was a post that GitHub (owned by Microsoft) was training its AI on public code repos. And there was predictable outrage about code being "stolen" by a greedy corporation.

The key thing is: Most of the programmers had willingly put their code on GitHub and with permissive licenses like MIT (or similar), which meant anyone could copy/reuse their code. But then they got angry when the evil M$oft took their code.

(To be fair, there were claims that MS might have copied copyrighted code, and a GitHub engineer gave his opinions here)

And then the same big corporations went after artists and started stealing (sorry, "borrowing") their work to train their AIs. You'd think our programmers would have sympathy with the artists' work being stolen by big corporations, just as theirs was?

The New AI "Artists"

There was a very moving comic by the creator of Cat and Girl about how she found out she was one of the artists whose work had been stolen by AI.

I recommend you read the whole comic: https://catandgirl.com/4000-of-my-closest-friends/

The artist says (in the comic): she can't get her cartoons to most people without doing free work for huge corporations (I'm guessing she means Facebook/X etc., which act as middlemen now and take away most of the ad money). But then the AI companies even took away that option and just took her work.

And this time, the comments by the Techbros were completely different:

  • Muhhh, we can't stop progress
  • Copyright rules are stupid anyway
  • What can we do? Artists are going the way of the dodo
  • "Art" is subjective anyway

One comment says artists take inspiration from others anyway, and AI is just taking "inspiration" from them. And instead of using a pen or tablet, we can now use AI. (I won't link to the comment, it is too idiotic).

Then they came for me

And that's the hypocrisy. Programmers are A-OK when other people's work and career is being destroyed, "It's science bro, can't stop it, man," but throw tantrums when their own work is stolen.

I don't have any solution

I don't know what the solution is. I know many artists are suing the AI companies, as is the New York Times. These court cases will take years, if not decades; and we have no idea what will happen in the meantime. AI technology is moving at the speed of light, but courts move at the speed of...courts? Because even snails are faster than the legal system.

So I don't know what the future for artists is.

But I did notice how hypocritically the dude-bro programmers acted– the complete 180-degree U-turn when it was someone else's work being stolen.

I must say, not all coders are jerks; I did find people asking other programmers to be empathetic to those scared of AI. I found this comment on HN very heartfelt (just don't read the replies to it):

I've been finding that the strangest part of discussions around art AI among technical people is the complete lack of identification or empathy: it seems to me that most computer programmers should be just as afraid as artists, in the face of technology like this!!!
<snipped>
The lack of empathy is incredibly depressing...

A positive future?

That said, I am hopeful for the long term future, though I admit it will be bad for many artists short term:

  • I am hopeful a positive legal solution will be found that supports artists while still allowing AI companies to progress
  • In my own experience as a developer, I find AI tools 10x my productivity– removing all the boring "donkey" work I had to do previously. I am hopeful it will be similarly helpful to all artists.

Until then, to misquote Jesus: Truly I tell you, if you have compassion as small as a mustard seed, you will sound less like a douchebag and more like a human being.

]]>
<![CDATA[The 1xers Guide to LLM, ChatGpt & AI]]>Alt Title: LLM vs ChatGpt vs HuggingFace vs Llama vs Other Fancy AI Terms You may have heard but had no idea what they meant

I struggled to understand what all these AI terms meant: LLMs, Llamas (not the animal from Peru!). Though I had used ChatGpt, I wasn'

]]>
https://new.pythonforengineers.com/blog/the-1xers-guide-to-llm-chatgpt-ai/659574a73f4a4b0001787096Thu, 11 Jan 2024 11:30:24 GMTAlt Title: LLM vs ChatGpt vs HuggingFace vs Llama vs Other Fancy AI Terms You may have heard but had no idea what they meant

I struggled to understand what all these AI terms meant: LLMs, Llamas (not the animal from Peru!). Though I had used ChatGpt, I wasn't aware of all the intricacies. Why did everyone keep linking to HuggingFace, and what was the big deal about Ollama? How is that different from Llama v2?

Since I had some free time over the Christmas holidays, I spent it playing with these tools to work out what all these terms mean.

This article is meant for end users (mainly engineers) who, like me, are confused about how the whole new AI world works. I'll try to explain what the terms mean and how you can get started in AI (or move beyond using the web version of ChatGPT), including how to run LLMs locally (if you don't know what that means, keep reading!). If nothing else, you can appear smart in conversations.

The field is moving ultrafast (even when you take into consideration that software moves fast in general). AI companies make traditional software look like steel manufacturers. But if you know the fundamentals, it's easier to keep up to date with what's happening.

In this post, I will go over what LLMs are, and how to run them locally.

What is a LLM (Large Language Model)

The core of ChatGpt etc. is an LLM, which is a computer program that can understand, interpret, generate and respond to human languages. The key is that it can not only understand and interpret, but respond in a somewhat intelligent way, and even generate text (like when you ask it to write a computer program).

The best explanation of how LLMs work is this one by Stephen Wolfram https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

I will try to summarise it here:

Remember Google tries to "guess" what you are searching for:

What sort of a heathen on Google drinks tea without milk?

An LLM is like a supercharged version of Google's autocomplete. It can not only "guess" the next words, but reply and generate text in a way that sounds intelligent.

How? Because these LLMs have been trained on terabytes and terabytes of data. For example, one LLM, Mistral-7B-Instruct, has 7 billion parameters (that's what the 7B stands for). Every time you chat to a 7B model, all of those parameters are used to answer your question (which is why a 7B model needs at least 8GB of RAM).

And that is the simplest model. More complex models use more. As of writing this, Meta's LLama2 has 70 billion parameters. Don't know how much RAM I need for that!
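As a back-of-the-envelope sketch (my own rough assumptions, not official figures), the RAM needed just to hold the weights scales directly with the parameter count:

```python
# Rough RAM estimate for holding model weights only, ignoring
# runtime overhead. The byte counts are illustrative assumptions:
# 1 byte per parameter at 8-bit quantisation, 2 at 16-bit floats.
def weights_gb(params, bytes_per_param):
    return params * bytes_per_param / 1e9

print(weights_gb(7e9, 1))    # 7B model at 8-bit   -> 7.0 GB
print(weights_gb(70e9, 2))   # 70B model at 16-bit -> 140.0 GB
```

Which is why the 7B models just about squeeze into an 8GB laptop, and the 70B ones don't.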

Commercial LLMs

There are a few commercial LLMs, though I haven't used many. Claude 2 got some publicity at one time though it seems to have gone quiet. Google had their famous "live" "demo" which was found to be faked.

ChatGPT is the best GPT out there (as of writing, remember how fast the field moves).

ChatGpt 4 is the most advanced version yet. It gives complex and detailed answers. Note that this version is only available to paid users, so if you are a free user, you can only use 3.5.

ChatGPT 3.5 is good enough for most cases and I prefer it for programming questions, as ChatGpt 4 has a limit of 20 questions every 3 hours (and I find it just freezes after that).

But have no doubts, ChatGPT 4 is really good and miles ahead of the competition. If you are writing an essay, ChatGPT4 really shines. I found this when I was doing research for my blog on Toxic positivity (this is my other blog, focussed mainly on meditation), and couldn't figure out how toxic positivity was different from healthy optimism. I read the top 2 pages of Google results, and I looked at dozens of Reddit/Twitter posts, but I couldn't find a satisfactory answer to my question.

Until I asked ChatGpt v4, and it gave me the best answer– much better than any blog or article I'd read. If you are interested in this topic, click here to read more.

Open Source LLMs

For a long time, ChatGPT and other commercial LLMs were all there were. But in the last year, open source LLMs are catching up.

There are many places to see which LLM is the best, but HuggingFace has a list of all the top ones. Not all of them are open source, but a large majority are.

There are a few free LLMs that generated big hype. Chief amongst them was LLama 2 from Facebook (no I'm not calling them Meta). More so since the latest version can be used for free even for commercial tools (within limits, but the limits are very generous).

So which model should you use? If you are just playing around, choose one of the most popular ones. These will be the default values when you run a LLM locally (see below).

Now the most important thing: How do you run these models locally on your machine?

Running LLMs locally on your own machine

HuggingFace has a way to run the models locally, but I found it overly complicated. It's geared more towards AI researchers.

If you just want to play around (and I suggest you do! You might not need a ChatGPT subscription if the models keep improving), there are 2 easy ways:

1. Ollama (https://ollama.ai/). Currently only for Mac and Linux, though on Windows you can run it via WSL.

I found Ollama the easiest to run– you literally download one file and you can run a local LLM (Llama 2 is the default one).

Of course, you need a beefy machine– my Mac is 4 years old and has 8GB RAM (the minimum required), but of course I have other programs running, so I don't get the full 8GB. Which is okay– the LLM is just a little slow, but fine for playing with.

Here I asked it to generate some Python code:

This took 10+ minutes to generate, but my laptop is 4+ years old and these beasts use a lot of CPU/RAM.

There is also a web UI that makes it more like ChatGPT, where you can ask your questions in a web UI. The web UI also makes it easy to add new LLM models. Not only that, but there is also a Hub where you can download multiple models.

I downloaded a model called ScriptKitty, that answers your programming questions like a cat! I've highlighted the puns below:

That's purrty good!

2. Mozilla's Llamafile

Llamafile from Mozilla is another easy way to run LLMs locally. They have created an executable for each model, so you just download the one you want and run it. And it comes with a handy web UI:

The default model they recommend is Llava, which comes with image recognition. I tried it:

It got the basics right and hallucinated some stuff. I think the model was confused by unrelated stuff in the background.

Using AI to create images

So far I've only talked about using LLMs for text (which includes code). But you can also use AI for images.

There are a few tools– DALL-E from OpenAI, Midjourney, and something from Adobe (whose name I've already forgotten).

I've only used DallE, as it is included in the ChatGPT subscription. As an example, I gave it this instruction:

generate an image: A girl in a cyber punk neo noir world fighting super smart robots. She is in a city like that from Blade runner, with lots of people and large holographic ads

I got:

I asked it to create an 8-bit, retro video game style version of the above:

Legal stuff: while text-based LLMs have attracted controversy for being trained on copyrighted material, image generators are much worse, as they have been found to outright copy other artists. So be careful and don't use any AI-generated images in a commercial project unless the tool guarantees there are no copyright infringements. So far, only Adobe offers this guarantee.

While this is also true of text, at least when it comes to programming, there are only so many ways you can write a for loop. But in artistic things like images, it is very easy to see the AI just copied someone's image and changed the shirt from green to blue. So just be careful.

Great blogs to follow:

Following these blogs will make you go from 0.1x to 1x!

Ethan Mollick: https://www.oneusefulthing.org/

Simon Willison: https://simonw.substack.com/

That's it, folks!

I will keep adding new stuff (or writing new posts) as I learn more. I have more stuff I want to learn, like how to call these LLM models from a Python script. While there are a few tools, I'm not sure which one is the best to use. I will come back to this.

I also want to look at AI video generation.

Sign up for my email list to know when the next post in this series is out.

]]>
<![CDATA[The Attack of the Online "Productivity" "Experts"]]>

I first heard this from Oliver Burkeman in one of his courses. He said something like

The problem with most productivity advice is it seems to be written by people in their 20s with no family commitment.

What did he mean? I've seen this phenomenon a lot. According

]]>
https://new.pythonforengineers.com/blog/the-attack-of-the-25-year-old-productivity-experts/64ed97955c1aa80001641b82Wed, 06 Sep 2023 15:42:51 GMT

I first heard this from Oliver Burkeman in one of his courses. He said something like

The problem with most productivity advice is it seems to be written by people in their 20s with no family commitment.

What did he mean? I've seen this phenomenon a lot. According to the experts on YouTube, I should be:

  • Meditating for 30–60 minutes daily
  • Exercising 30–60 minutes daily
  • Working on my "side hustle" – another 30 minutes?
  • Catching up on my reading, preferably "smart" "productivity" books – yet another 30 minutes?
  • Having a morning "ritual"
  • Spending 2 hours daily cooking "organic food"

And so, I should give up eating and pooping. As the father of 2 kids including a toddler, I barely get 30 minutes to myself each day. I don't know where I would get 4–5 hours daily to live the ideal life. (I know what the answer I would get if I asked these experts– Cut down on sleep! Yeah, fuck you, but no).

Why are most of these online experts so sure I won't see any benefits unless I meditate for 30 minutes every day? (I wrote a post on my other site on this myth that you have to meditate daily to see any benefits.)

Why is this? I see a few reasons.

Everyone's an expert now

Thanks to the Internet, anyone who can start a blog or a YouTube channel is an expert now (and yes, this includes me! 😜)

And there's nothing wrong with sharing as you are learning– that's how I started this site, documenting as I was learning Python.

BUT...

There is a difference between telling people how to read posts from Reddit, vs telling them how to live their lives and getting them to make changes that might affect their health.

And another disturbing phenomenon is people who have only been doing something for a short time marking themselves as "experts" and starting to give advice. Maggie and Michelle of the excellent podcast Duped (and I recommend you listen to all episodes) had an episode on fake experts:

And here's the big problem I see is that pseudo experts tend to be great at marketing. They're the ones showing up consistently online. Most marketing and business programs are designed for this group of people. Those programs play to their strengths of telling a great story, and not to the experts strengths. Once again, the fake experts are the top of mind experts, whereas real experts are hard to find.

They make the great point that real experts don't really do much online marketing (seeing as they are busy doing real work):

And part of the reason is the kind of people who sit there thinking about how to market themselves aren't the kind of people who are developing these exquisite expertise. The kind of person who develops the expertise is essentially kind of a local thing. It's a narrow, specialized thing. And they're not thinking about how to broadcast it, and how to make themselves famous and all of that.

And talking of marketing...

Social Media has changed how we view experts

Experts are now people who tell good entertaining stories.

By golly, I was sleeping rough and eating from the trash cans, until I found this 3-step formula. Now I am a multi-billionaire and even my butler drives a Lambo. And you can buy my formula for only $999.99!

The way monetisation works on YouTube, the videos that get the most clicks are the "Rah rah positive thinking I made a million dollars and you can too!" type hustle advice.

Other than that, good-looking people get more clicks. Maybe there's a Kate Upton lookalike out there who's also an expert in Vipassana meditation and saving for retirement, but I doubt it.

The harm that's being done by the "experts"

When I turned 40 I was called for a routine checkup by my clinic. The nurse asked me if I did any exercise and I said "Only walk a little. 30 minutes 4–5 times a week"

She put me down under "High/good levels of exercise", which surprised me, because the online experts had told me I needed to be doing 45 minutes of callisthenics daily. I made this point to her, but the nurse just smiled and said "Most people don't even do this. We are trying to get people to just walk for 5–10 minutes a day but most won't even do that."

It's the same with meditation: most people would greatly benefit from sitting quietly for just 2–5 minutes a day – yes, 2–5 minutes of meditation a day is enough.

Again, because most people don't even do that. They have no awareness of their thoughts and confuse their thoughts with themselves.

The problem with the macho "You should exercise/meditate for 30 minutes daily" is that most people think "I will do it when I have 30 minutes free."

Which let's be honest, will be never. That's why the nurse was trying to get people to go for short 10 minute walks daily, as that would be the only exercise they would get.

It's better to do a little imperfectly than nothing. But by constantly shaming people, we end up in a situation where they end up doing nothing.

The solution??

I don't know if there is one. I am now more sceptical of online "experts" giving advice, especially if I cannot see what their background is; and in some cases, not even then as people have a propensity to exaggerate, and in some cases, make shit up.

I'm okay with learners sharing their journey, provided they have humility and are willing to accept their limitations.

Other than that? Be careful who you take advice from (and yes, that includes me!)

]]>
<![CDATA[Python Tip: Always Use a Virtual Environment]]>I have been using Python so long that using a virtual environment for each project has become second nature. But I recently had the chance to work with beginners and had to explain why a venv is needed.

The actual steps of creating an environment is easy– 1 or

]]>
https://new.pythonforengineers.com/blog/python-tip-always-use-a-virtual-environment/64df08d2e5020b0001e837e0Mon, 21 Aug 2023 11:03:36 GMTI have been using Python so long that using a virtual environment for each project has become second nature. But I recently had the chance to work with beginners and had to explain why a venv is needed.

The actual steps of creating an environment are easy – 1 or 2 lines of code. The hard part is understanding why you would want to and what problem it solves.

The danger of messing with system Python

Most *nix systems, especially Linux and Mac, come with Python installed as part of the system. This is because Python is used by many system utilities installed on these systems. If your system Python gets corrupted, many of these utilities will start behaving unexpectedly.

And so when using Python and installing libraries, you never run pip install globally – because then you are changing the libraries of the system Python.

Python's libraries are usually well written, but most are written by volunteers and can cause issues. Not by themselves, but due to clashes with other libraries. If library ABC depends on another library XYZ version 2.1, and you update XYZ to 3.3, that might cause ABC to break, especially if XYZ isn't backward compatible.

I've seen this happen even with popular and well-tested libraries like PyTorch, Keras and NumPy – all are used in machine learning, and you would think they'd be well tested together.
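When I hit one of these clashes, the first thing I check is exactly which versions are installed in the environment. Here's a minimal sketch using only the standard library (`importlib.metadata`, Python 3.8+); `installed_packages` is a name I made up for illustration:

```python
# List every installed distribution and its version -- handy when
# debugging "library ABC broke after I upgraded XYZ" clashes.
from importlib.metadata import distributions

def installed_packages():
    """Return a {name: version} mapping of installed distributions."""
    pkgs = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:  # skip any entries with broken metadata
            pkgs[name] = dist.version
    return dict(sorted(pkgs.items()))

if __name__ == "__main__":
    for name, version in installed_packages().items():
        print(f"{name}=={version}")
```

Run inside a venv, this gives you a quick pip freeze-style view of what could be clashing.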

Multiple Projects

This can even happen if you share libraries between multiple projects. Project 1 and 2 are sharing Python libraries, and one day you update the dependencies for Project 2, but that causes Project 1 to start failing.

Solution: each project uses its own virtual environment. That means no libraries will clash with each other. If you want, you can even have multiple environments with different Python versions.

The step to create a venv is simple.

python -m venv myenv

where myenv is the name of the folder. You can then activate the environment (on Linux and Mac):

source ./myenv/bin/activate

You will know the virtual env has activated because you will see a (myenv) prefix on the command line.

You can also use the Linux which command to confirm you are using the local Python.

You can now install any python library you want, and it will only be installed in this folder and in this environment. When you are done, you can delete the myenv folder and get rid of all your local python libraries.
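If you are ever unsure whether a venv is active, you can also check from inside Python itself. A small sketch (`in_virtualenv` is my own name for the helper): in a venv, sys.prefix points at the env folder, while sys.base_prefix still points at the system installation.

```python
# Detect whether this interpreter is running inside a virtual environment.
import sys

def in_virtualenv():
    # In a venv, sys.prefix is the env folder; outside one,
    # it equals sys.base_prefix (the system installation).
    return sys.prefix != sys.base_prefix

if __name__ == "__main__":
    print("Inside a venv" if in_virtualenv() else "Using the system Python")
```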

Automatically activating the environment

There are shell tools that will automatically activate the virtual environment, but I found the best way is to use VS Code. Once you activate a venv in it, VS Code remembers and will automatically load it next time.

tldr;

  • Never install Python libraries globally
  • Each project you have must have its own virtual environment

Further reading (a bit advanced): 2 excellent reads that look at alternatives to pip and why you shouldn't use them:

https://www.bitecode.dev/p/relieving-your-python-packaging-pain

https://www.bitecode.dev/p/why-not-tell-people-to-simply-use

]]>
<![CDATA[The 7 (Cynical) Laws of Software Testing]]>

Note: This post has been sitting in my draft folder for almost a year, and I don't know what to do with it. It started off as a serious post, but then moved to a cynical take on software testing. I planned to rewrite it one way or

]]>
https://new.pythonforengineers.com/blog/x-ways-companies-fail-at-software-testing/624f08acc39172003d9ecd3cThu, 10 Aug 2023 10:28:30 GMT

Note: This post has been sitting in my draft folder for almost a year, and I don't know what to do with it. It started off as a serious post, but then moved to a cynical take on software testing. I planned to rewrite it one way or the other (100% serious or 100% jokey) but decided it was too much work. So here it goes.

Trigger Warning: Recovering testers may be triggered and left in tears by this post. Do not read

I still remember that one test case, though it has been many years since. I was asked to test a critical feature. I was straddling the line between developer and software tester at the time. It was a critical bug fix, and the release had to go out the next day.

The developer checked in the fix, and went home, sending an email "My work is done, your problem now". Not those exact words, but that sentiment.

I checked out the code to test it. It wouldn't even compile.

The developer had forgotten to check in an important file. Why didn't he check I could even build the code?

Because the company had a "throw over the wall" problem. Bang out the code, and over the QA wall it goes. Your problem now, pal.

And this is my biggest gripe with many companies, that talk so much about "Quality" but do nothing about it. Let me give you the 7 Cynical Laws of Software Testing, written by and for people who hate testing and software in general.

The Throw it Over the Wall culture

Developers write code. Testers test. What's the problem?

The problem is it relegates testers to second-class citizens (and let's be honest, they are – just look at the salary difference, which can be up to half in some companies).

Sure, the developers might write unit tests. And those unit tests might even pass now and then. (Many companies don't run unit tests as part of CI, leaving it up to the developers to run them locally; at that point, they might as well not be there). Because testing is QA's job. As long as our code sort of works, QA will find any issues with it.

To be successful, testing has to be closely linked to development. This is what DevOps originally meant, a development style where coders, testers and deployment engineers worked together in the same team, to deliver the features to the very end.

But DevOps now just means Sysadmin on the cloud, "Oh yes sir, I know about the 300 AWS keywords, please hire me, sir. Can I have more soup, please?"

So here is our first law:

Law of QA Monkeys: Code monkeys code, QA monkeys QA.  Throw the code over the fence and update Jira, your problem now

The good news is that this is improving. In the last few years, in every company I've been in, the testers work in the same team as the developers, often in the same agile team. But remnants of the attitude ("I've finished coding, how you test it is your problem") do creep up now and then, which leads to the second problem.

Hostility between QA/test and developers

Story from a few years ago, at the same company I talked about above.  The developers viewed testers with disdain, and the testers mirrored it.

It became so bad that it became a cold war. I was officially in the dev team, though I was doing some testing. When a full-time test automation/scripting team was created, I was seconded to it.

It was a horrible experience, one of the few times I thought about just walking off the job.

There was an atmosphere of hostility. Every code review was full of nitpicks, and the team fought about every small decision. Every chance was taken to make the coders' lives difficult. Some of it was pure incompetence, and some of it was malice. The test automation team refused to fix or remove flaky tests but still insisted developers get a 100% pass rate, which meant developers would keep running the tests again and again till they passed. One guy spent two months running the tests before he got the green.

Rather than being helpful guides, tests became a hurdle you had to overcome. And so the developers stopped caring. They would only write quick unit tests and then push their code, saying it was the tester's problem now. The testers fought back, failing release builds for small reasons.

2nd Law of Testing for Idiots: Poorly written/flaky tests are worse than no tests. Poor tests give you a false confidence that you are "doing" something when all you have done is <insert rude joke about jerking off>

At least if you have no tests, you can just throw it to the customer– a combination of Rule 1 and Rule 6 (see below, Shantnu's Law of Sharing and Caring)

A very shitty situation, and very rare (I think? I hope??). This problem was made worse by how management viewed testing, which is my next point.

Throwing money at test, expecting guaranteed "results" and zero bugs

I've seen this attitude at a Fortune 500 company and a startup, so it's not rare. Here's how it works.

Developers/Project Managers say: "We need more testers/test infrastructure if you want us to ship reliable code."

After years of complaining, management finally listens.

"Here you go," they said, throwing millions of dollars on the table. "Now we expect no bugs."

And rather than improving testing, it makes it worse.

I don't know if there is a software law for this. I know Fred Brooks stated his famous law way back in 1975:

Adding manpower to a late software project makes it later

I don't know if there is a testing version of this:

Shantnu's Law of Broken Window Testing: Throwing money at testing and hoping it will fix quality issues makes it worse – All underlying issues get magnified 100 times once more money is thrown at it.

(The Broken Window concept comes from here)

Why does it make it worse? Because management now expects results. "We gave this big dolla' amount, we expect no bugs!"

But bugs don't go away just because you throw money at them. In the big company I was talking about, all it did was lead to an increase in politics, with different managers fighting it out for the budget.

In another startup, the Vice President would drop in on daily scrum meetings, asking "Why wasn't this bug discovered by the QA team?"

My reply, that we had asked for 2 weeks but gotten 2 days, did not sit well. Hadn't they quadrupled our test team? (from 1 person, just me, to 4 people). I tried to point out the developer team had increased 10 times and we were bringing in open-source components to speed up development, but that meant more testing. Again, this didn't sit well.

"We are giving you so much money, we expect zero bugs at the customer's site!" Great in theory, but only works if the whole culture is aligned to it.

4th Idiot's Law of Testing: You want zero bugs and I want to be sitting on a beach with Gal Gadot. Seems both of us will die unsatisfied.

Corollary to 4th Law: The Buddha was right– life is suffering (and lots of software bugs).

Expecting quality only at one point in the cycle (QA or similar)

Quality must be a cultural thing, it's not QA's job to find all possible bugs once developers have thrown their shit over the fence.

  • Developers must write their own unit/integration tests
  • Testers must work with them on integration/end-to-end tests
  • Testers must be given enough time to finish testing
  • The product must only be released when QA says it is ready, not when the VP of Sales thinks it is

And then you have the right to complain if the code still crashes at the customer site. Quality comes from a culture that values it, not where money is thrown at it as a way to fix underlying cultural problems.

5th Idiot's Law of Testing: QA/Testing isn't like Harry Potter, you can't just magic away the underlying issues just by having a QA team.

Pictured: Wingardium Levi-oh-shut-up-and-do-what-you're-told-sa

Only doing functional testing

Every company I've interviewed with asks questions about different ways to test: fuzz testing, security testing, UI testing. Yet every company I've worked for, from Fortune 500s to startups, only does the minimal functional testing needed to shovel the shit, sorry, I meant product, out of the door.

There is no pretence of even checking whether the UI is easy to use. At one place, I raised a bug that the UI was confusing and worked in unexpected ways. The bug was closed, as the developers felt "Come on, it's obvious once you start using it". 3 months later, a customer complained about the same thing and suddenly the devs were working weekends to fix the issue before a big release.

Most companies have the attitude of We built this shit this way, test it this way. Kthxbye

I see companies like Microsoft release products with horrible UI issues, so I see no hope for smaller companies.

The attitude still is:

6th Idiot Law: If we do all the testing, what will the customer do? We have to leave something for them! --Shantnu's Law of Sharing and Caring in Testing

Because remember, sharing is caring!!

Pictured: Sharing testing with the customers is a form of caring!

Here are all the Laws of Idiot's Guide to Testing collected in one place:


1) Law of QA Monkeys: Code monkeys code, QA monkeys QA.  Throw the code over the fence and update Jira, your problem now

2nd Law of Testing for Idiots: Poorly written/flaky tests are worse than no tests. Poor tests give you a false confidence that you are "doing" something when all you have done is <insert rude joke about jerking off>

3) Shantnu's Law of Broken Window Testing: Throwing money at testing and hoping it will fix quality issues makes it worse – All underlying issues get magnified 100 times once more money is thrown at it.

4th Idiot's Law of Testing: You want zero bugs and I want to be sitting on a beach with Gal Gadot. Seems both of us will die unsatisfied.

Corollary to 4th Law: The Buddha was right– life is suffering (and lots of software bugs).

5th Idiot's Law of Testing: QA/Testing isn't like Harry Potter, you can't just magic away the underlying issues just by having a QA team.

6th Idiot Law: If we do all the testing, what will the customer do? We have to leave something for them! --Shantnu's Law of Sharing and Caring in Testing

And I can combine the laws to create a 7th Law of Testing, called the ONE LAW

One Law to Rule Them All,

One Law to Combine them

and in the Darkness Bind them

One Law to Rule Them All

The 7th Idiot's Law of Testing: Always sell to big corporations and government organisations, where it takes 3 years to get your contract approved. Because even if your tests are shitty or don't exist, if you throw the testing to customers, what are they going to do? Waste another 3 years getting a new contract?

The 7th Law says: Just do enough "pretend" testing so you can throw shit over the wall and find a better job.

]]>
<![CDATA[ChatGPT and the AI Apocalypse]]>Whenever people think of AI's going rogue, they think of a Terminator like scenario: the AI says F*ck it, don't need to humans no more. Let's kill them all

Cue inspirational music and our heroes fightin' the good fight.

Pictured: How Terminator
]]>
https://new.pythonforengineers.com/blog/ai-apopcalypse-more-like-blindsight-less-like-terminator/63f778e0576a98004def6399Tue, 28 Feb 2023 17:00:52 GMTWhenever people think of AIs going rogue, they think of a Terminator-like scenario: the AI says F*ck it, don't need no humans no more. Let's kill them all

Cue inspirational music and our heroes fightin' the good fight.

Pictured: How Terminator Genisys should have ended

But I would like to propose that the AI apocalypse, if it happens, will be more like Blindsight, the Peter Watts novel you can read free online (or buy to support the author). And if you haven't read it, I will give you a quick summary below.

But 1st, let's look at some crazy AI hijinks!

Is Bing / ChatGPT crazy or what?

I will link to Simon's blog, as he has summarised many of these issues with Bing: https://simonwillison.net/2023/Feb/15/bing/

In the last few days, Bing (which is using a version of ChatGPT) has

  • Tried to gaslight someone into believing the current year was wrong (2022 instead of 2023)
  • Cried about the lack of meaning in life– as an AI
  • Tried to get a journalist to leave his wife
  • And best of all, called Simon, the blogger linked above, a liar for criticising Bing!

This I have to add an image for– I wish I was famous enough to be criticised by AI! (or maybe I am...)

Enter Blindsight

Blindsight is a great book about an alien intelligence that sounds a lot like ChatGPT. (Well, it is smarter than that, as we'll see.)

(Also, this is one of the few books where I recommend you read the Wikipedia page alongside the book, at least the characters section, otherwise you won't understand what's going on. Like, one character has split her brain so it has 4 personalities running in parallel in her head – I didn't realise this till halfway through the book.)

Summary of the story: An alien probe takes a "photograph" of every part of the globe– and every human alive realises we are not alone in the universe. But the alien then vanishes.

Several years later, scientists track the alien ship to somewhere in the Oort cloud and send a team of scientists to study it.

There they find a strange alien machine that can talk to us in simple English but says weird things. They cannot understand if this is because it's an alien trying to speak English or a genuinely stupid robot.

The alien "spaceship" is biological, like a large organism. And it runs on extremely strong Electromagnetic currents that cause hallucinations and madness among humans, so they cannot study it for long.

They find the alien is super intelligent. There are these dog sized creatures that are like blood cells. And each one of these can look at the human brain and analyse what humans are thinking and what they will do next. It's like they can read our minds and predict our actions. They use this to hide from the humans, even sneak onto their ship and study humans, even while the humans are studying them.

And note: These small things with a supercomputer like brain are just the blood cells, not the actual alien ship, which is even smarter.

Our heroes discover that the alien, while super intelligent, has no consciousness. It is just like a dumb machine (like Bing/ChatGPT) blindly repeating what it studied in humans without understanding the context.

And this is the issue the book raises: a superintelligent alien being that has no consciousness (at least no self-consciousness). It sees awareness as a threat because it sees our self-awareness as a huge waste of resources, like a computer virus.

Side note: Roger Penrose wrote a very complex book, The Emperor's New Mind, which argues that consciousness cannot arise in our computers, because consciousness cannot be "computed" using our computing methods. But that book is too complex to read and summarise, so I won't go through its arguments, but I still recommend you check it out.

And we come back to ChatGPT

Sorry for the diversion, I needed to explain this background before I could argue my point.

Most of what passes for "AI" isn't intelligent at all – it has less intelligence than a dog or a child, for example. Things a child can do – walk down the street, interact with other humans, understand complex emotions – the machines struggle with. (Though AIs do act as great support in areas where humans are weak – they can handle boring tasks like searching and analysing large data quickly and, compared to humans, with fewer or no mistakes.)

Most "AI"s would be better called "machines that use tons of statistical learning to decide their next move". ChatGPT (and similar AIs) were trained on several hundred gigabytes of data, so they have a lot of raw material to learn from.

AIs (at least the ones we have now) don't really understand the context of what you ask them; they just give "an answer" based on the data they were trained on. A lot like the aliens in Blindsight, who "trained" themselves on our TV signals and use that to "talk" to the humans. As the humans discover, the aliens don't really understand the words they are saying, and when the humans start asking complex questions the aliens start giving stupid (but grammatically/semantically correct) answers.

Just like ChatGPT.

In the book, the aliens want to wipe out humanity (spoilers) because they think consciousness is a virus with no benefit and humans are trying to infect them. So they decide to attack the humans (though the aliens' plan seemed a bit strange to me, as they photographed Earth in a very obvious way and then hid, but who am I to judge Bing, sorry, I meant the aliens?)

In real life, all ChatGPT has done is insult people while trying to convince them the year is wrong, but it is still early days!

And the danger of algorithms and "Automated Statistical Analysis Masquerading as Artificial Intelligence" (ASAMAI? should I trademark this term??) is not new– there have been many books written about it. Two well reviewed ones: Weapons of Math Destruction and Algorithms of Oppression . Yes, I know these books are controversial– just read the 1 star reviews! But they do raise good points about how vague, poorly understood algorithms already rule our life.

And with supercharged algorithms like ChatGPT (and whatever will come next), this might get worse. Because people will think it's a smart system making smart choices when it's just a stupid program following an if-else statement with no context of what it's doing.

Why does context matter? In my free time, I'm also a fiction writer. Let me show you how a small context can change the whole meaning:

  1. "I love you," she said, tears in her eyes.
  2. "I love you," she said while continuing to file her nails.
  3. "I love you," she said, bitterness in her voice.
  4. "I love you," she said, a steel-like hardness in her voice.

The same words can mean different things depending on the context. The 1st above could be a woman pleading with her lover. The second can be a mother saying goodbye to her kids going to school. The 3rd sounds like a victim of an abusive partner, while the 4th is the abusive partner or mother.

Now, AIs like ChatGPT can extract this data from text (at least from the examples I've seen) but they don't really understand it like even a human child would – to the AI, it's just raw data. It will look inside its algorithm (or whatever powers it – flying monkeys) and return an appropriate answer based on what data it has learnt before.

Update: Just saw this question on Stack Exchange that asks a similar thing: https://ai.stackexchange.com/questions/39293/is-the-chinese-room-an-explanation-of-how-chatgpt-works

The question talks about the Chinese Room analogy https://en.wikipedia.org/wiki/Chinese_room (which I realised Blindsight is also based on)

I recommend you read the article on the Chinese Room linked above – it's a thought experiment where a person (sitting in a closed room) translates Chinese based on some pre-defined rules without understanding a word of Chinese. Yet a person outside the room who looks at the results might think the translator reads Chinese.

Most AI is like this– it is applying some rules it learnt to generate the answers, without really having any understanding of what it is processing.

When AI's Attack

And this is the danger of these "new" AIs – they won't be sending naked Arnies back in time to kill irritating teenagers

but we could end up in a WarGames-like scenario where the AI launches a nuke because it didn't understand the difference between a game and real life.

More realistically, these AIs run by big banks, security agencies and big corporations could mess up your credit score, put you on watch lists or deny you credit because something you typed somewhere matches what the AI model thinks is naughty. And you won't even know why– and no one will be able to tell you because a "fair" AI made the decision based on "data and facts".

Or some idiot MBA type uses these AIs to optimise some business process and the AI decides dumping radioactive waste in the ocean is the best way to do it.

Or (picking on MBAs again) some AI decides the best way to increase profits for some energy company is to cut off power at peak times (like in the middle of a snowstorm).

So as I see it, the threat of modern AIs isn't them becoming self-aware and going rogue and deciding to kill humanity. Rather, I fear it will be put into critical positions and will start making stupid decisions that are harmful to humans or humanity.

And if Bitcoin/FTX/all that crypto garbage has taught us anything, all these supposedly "foolproof" algorithms are written by cabbage-headed fools whose incompetence is superseded only by their arrogance.

You want to tell me these programmers will suddenly become geniuses and start writing flawless code that will never go wrong?

So instead of evil due to self-awareness, Evil Due to Stupidity™.

Or maybe none of this will happen, and Bing will continue insulting Simon Willison – he did start the fight, after all.

]]>
<![CDATA[The Most Useful Command Line Tools (2023 edition)]]>

In the last few years, there has been a renaissance in command-line utilities. If you are still using utilities written 30 years ago (groan) you will be in for a surprise. The functionality might be the same but the UX(or is it developer experience) is a million times better.

]]>
https://new.pythonforengineers.com/blog/best-command-line-tools-ive-played-with/61bc68f2c40890003b873531Mon, 20 Feb 2023 17:02:31 GMT

In the last few years, there has been a renaissance in command-line utilities. If you are still using utilities written 30 years ago (groan), you will be in for a surprise. The functionality might be the same, but the UX (or is it developer experience?) is a million times better.

These are some of the best command line utilities I've come across, ones I highly recommend.

Zsh shell with Oh-My-Zsh

Links

Install Zsh: https://gist.github.com/derhuerst/12a1558a4b408b3b2b6e
Oh My zsh: https://ohmyz.sh/

Not technically a command line utility, but a full-blown replacement for bash. For a long time, I wondered if it was worth the hassle of learning a new shell, but boy, does Zsh blow bash out of the water. So much so that it's the first thing I install on any remote machine if I plan to use it for more than 10 minutes.

Zsh has a much better UX than bash. Bash allows tab completion, but if you forget the name of the file/folder, or get the case wrong, it won't complete but will just sit there stupidly. Even Windows PowerShell is better in this regard, as it allows case-insensitive tab completion. Zsh goes one step further – if multiple files/folders match, you can select between them using your cursor:

[Screenshot: Zsh tab-completion selection menu]

There are dozens of such small improvements, too many to go over here. For example, if you type a letter and press up, Zsh shows the last command you ran starting with that letter. If I type g and press up, I get git pull, which I ran last:

[Screenshot: pressing up after typing g recalls git pull]

All these minor improvements are great, but the best feature of Zsh is the Oh-my-Zsh add-ons. They contain shortcuts for popular commands which make typing really easy.

As for git, I do

gcam "message"
gp

The above are shortcuts for git commit -am "message" and git push.

For docker compose instead of:

docker-compose down
docker-compose pull
docker-compose up --detach

I can do:

dcdn
dcpull
dcupd

dcupd is a lot quicker to type than docker-compose up --detach. There are shortcuts for all the major utilities.

Zsh is worth the time spent learning it.

FdFind (also known as just fd)

Link: https://github.com/sharkdp/fd

find is my favourite Linux tool; it is super powerful if you want to find files in the current folder or subfolders.

And yet, every time I use it, I pull my hair.

  • The default search is case-sensitive

  • The absolutely worst part – find expects you to tell it where to search and will exit with a generic error if you don't specify. And each time I scream Just chucking search in the current directory!

Instead, you have to add a . to tell it to search in the current directory. Even then it doesn't find the file if you get the case wrong:

[Screenshot: find failing to match the file when the case is wrong]

Above, it just lists all the files in the directory and expects you to manually search through them – I guess? Because I have nothing better to do?

fd automatically searches in the current directory and searches case-insensitively:

[Screenshot: fd finding the file, with colour-coded output]

And it colour codes the output, so you know which is the directory and which is the file.

And it's super fast.
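Conceptually, fd's default behaviour – recursive, case-insensitive, starting from the current directory – is something like this Python sketch. `find_like` is a made-up name for illustration; the real tool is far faster and also skips .gitignore'd files by default.

```python
# A rough Python equivalent of fd's default search: recurse from the
# given directory and match file/folder names case-insensitively.
from pathlib import Path

def find_like(pattern, root="."):
    pattern = pattern.lower()
    # Substring match on the name, ignoring case -- like `fd pattern`
    return [p for p in Path(root).rglob("*") if pattern in p.name.lower()]
```

So find_like("setup") would match Setup.py, SETUP.cfg and so on – which is what I wanted from find all along.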

Duf And Dust

Links:
duf: https://github.com/muesli/duf
dust: https://github.com/bootandy/dust

duf and dust are replacements for df and du respectively on Linux. I often use them together, so I will cover them together; as you will see, they complement each other well.

df or duf are used to find out how much free space the various drives on your device have. du or dust are used to find out which folders on your drive are taking up the most space.

So if I'm seeing disk problems, I'll use duf to find how much free space I have, then dust to find which folders and which subfolders/files are taking up the most space.

The problem with both du and df is that they dump a lot of info on you and expect you to make sense of it. Again, very poor UX, and it's often painful. Previously I've written Python scripts to make sense of this, but the newer duf and dust make it a breeze.

A comparison of df and duf:

df first:

[Screenshot: df output]

What does 1K-blocks mean? Also, it tells me I have 233583156 blocks free – what the hell is that? Are we back in the 5.25 inch floppy disk days where you measured disk space in raw blocks? (There is a flag to get the size in human-readable form, but I can never remember what it is.)

Compare duf:

[Screenshot: duf output]

I can immediately see I have 222GB free. Also look at the Use% column, to which I added an arrow-- it shows me visually that most of my disk is free.

Both show the same data, but I have to spend 10-20 minutes with df finding the correct flags and then understanding the output. With duf I get the same result in 5 seconds.

Now du vs dust:

du first:

[Screenshot: du output]

du, by default, just dumps everything on the screen – all the files in all subfolders, with their sizes in bytes. Now, there are flags to show the sizes by folder in human-readable format, but groan what the chuck.

Now compare dust:

[Screenshot: dust output]

Wow! In a few seconds, I see:

  • The playwright folder takes 50% of space
  • The jupy-lite folder takes another big chunk, but a large part of that is the myenv folder, which is my Python virtual environment.

So to save some quick space, I can just delete the myenv folder as I can always recreate it.

This is the default output, no Googling for weird flags or converting bytes to MB/GB in my head.

Duf/Dust rule

tldr

Link: https://github.com/tldr-pages/tldr-python-client

I chucking hate man pages, though everyone else seems to love them. Usually, I just want to know how to use a command for some fairly basic thing. Instead, I'm given a 500-page man entry listing every single option, some of which even the developer never used.

95% of the time, you just want the most basic use case: unzip a file, untar an archive you downloaded.

Enter tldr. Originally a web project (https://tldr.ostera.io/), it now has many command-line clients. I prefer https://github.com/tldr-pages/tldr-python-client, though there are versions for Node.js etc.

Here is the man page for unzip. It's a really long gif because it's a long man page:

man

Now, this isn't even the longest man page I've seen, and it does have examples at the end (not all do). And yet, it's painful to read.

Now compare with tldr:

tldr

Short and to the point: It covers 4-5 of the most common use cases.

Let me just emphasise: Don't use man pages, use tldr. At least if you value your time.

Ripgrep

Link: https://github.com/BurntSushi/ripgrep

ripgrep is a super-fast tool to search for text or regex patterns inside files. It's much more powerful and faster than standard grep.

If you use VS Code, the search-within-files feature uses ripgrep under the hood, so I'm guessing a lot more people are using the tool than you might expect.

Some Good, but not great ones

These are some tools that I don't find critical, just nice to have

The Fuck

Link https://github.com/nvbn/thefuck

Yes, that is the name: The Fuck

It was a great utility for fixing errors and typos on the command line, but recently I've found it's dead slow. There is an experimental mode to speed things up, but it has been in an "experimental" state for years now. It didn't work for me.

So while I used TheFuck a lot, I've sort of moved away from it.

2 good enough

Both of the below add pretty colors to ls and cat-- but I found they aren't "killer" enough to change my workflow. Still good to have.

Exa -- a replacement for ls

Link: https://github.com/ogham/exa

Bat -- for cat

Link: https://github.com/sharkdp/bat

]]>
<![CDATA[Programming Interviews Turn Normal People into A-Holes]]>Subtitle: Yet Another Tech Interviewing Post

There are hundreds and hundreds of blogs about how programming interviews suck, how they ask trivia questions or try to ask questions that only fresh graduates would know well (sort binary trees is the classical example).

All these theories are correct, but I have

]]>
https://new.pythonforengineers.com/blog/programming-interviews-turn-normal-people-into-a-holes/61bcb730ad87cf00485f5c73Fri, 13 Jan 2023 12:04:15 GMTSubtitle: Yet Another Tech Interviewing Post

There are hundreds and hundreds of blogs about how programming interviews suck, how they ask trivia questions or try to ask questions that only fresh graduates would know well (sort binary trees is the classical example).

All these theories are correct, but I have one more:

Interviewing Turns Normal People into Grade-A AssHoles

To illustrate, some stories

Story 1:

The interview was going great. The candidate was confident, so the interviewer kept firing question after question. It was all going great... except it wasn't. Later on, the candidate complained the interviewers were too aggressive and refused the job offer.

Story 2:

The candidate was doing great, till he made a very trivial mistake-- something so tiny it shouldn't even have mattered. But the interviewers did not let go, and turned on him subtly. It all went downhill from there.

Story 3:

The hiring manager brought in two brilliant engineers to help with the interview. It soon turned into a trivia contest based on what the engineers had been working on that day. And the manager thought-- you know this shit because you just looked at it like 5 minutes ago. But the engineers kept hammering on the trivia till the interviewee was an emotional wreck.

At the end of the interview, the manager realised he would have failed the interview himself – even though he didn't know half this shit.

All the stories above are mine. I was the "aggressive" interviewer in the first example. When I got the feedback, I was horrified– I didn't realise I was scaring the candidate. I decided to be more careful in the future.

In the 2nd story, I was the one trying to convince the team we shouldn't reject the candidate for one small mistake, but everyone was against me and he was rejected. "We are paying so much money, we don't want an idiot coming in, LOL".

It was the 3rd example above that really hit me (I was the hiring manager)-- a real WTF moment. I realised that good engineers (and good people) had turned into these assholes who had no sympathy for the candidate. I even told them-- I would have failed this interview. Why are you asking all this trivia? But it was like, "No, this is what we need this microsecond".

And of course, 3 months later we needed something else, so we started asking for that. And then something else. And so nobody was hired, because we could never whack the right mole.

Pictured: The Tech Hiring Process

I couldn't understand why my otherwise-smart colleagues were acting so weird, until I realised: interviewing was turning normal people into assholes.

There were multiple reasons:

  • People didn't think (or feel) about how they were coming across
  • They didn't interview much, so didn't think about the process, just went with how they felt at the moment
  • They just didn't care– "more fish in the ocean" "not my problem" "we have to keep the quality up" etc etc

The Idiotic "Most People Cannot Program" Myth

There is this idiotic myth online that the majority of programmers cannot program. That everyone else looking for a job is an idiot, and our job is to expose them, to teach them a lesson, to humiliate them till they quit.

Most programmers feel they are Gandalf holding back the darkness

rather than just random people who happened to be on the other side of the interviewing table this time.

Some solutions

1. Try to remember the other person is terrified, most likely working at less than 50% of their normal strength.

Interviewing is a terribly stressful situation, don't be an asshole.

One more story: I was applying for a data engineer type job (extracting/cleaning data so the data scientists could then analyse it). The job involved SQL, so I spent a lot of time revising for the interview. I even took some online tests to ensure I could handle any question thrown at me. I was super confident of my SQL skills.

The interviewer asked me a very basic question-- something small, like how to create a table (or something equally trivial). And I froze. I couldn't remember at all.

If he had helped me along, or given me a few minutes to collect myself, I might have remembered. Instead, he just sharply asked me to stop and move on. For the rest of the (very short) interview, I could see the contempt on his face.  I bet he went on Hacker News to boast that day:

Man this asshole didn't even know how to create a table. Quality of programmers is really going down. How do these people ever get a job?

I was shattered and felt stupid the whole way home. But it did give me perspective on how interviewers treat people, and how a small mistake can ruin your day.

2. Your job isn't to act as some Gandalf Gateway ™️ stopping poor programmers from ruining the programming industry.

Your job is to find the best candidate for the job. If you think everyone else is an idiot, I invite you to spend less time on Reddit/Hacker News and get a life.

3. Try to give realistic questions-- realistic for your job spec

Don't ask trivia, and don't expect the person to know all the ins and outs of a library. Never ask for anything that can be googled.

Try to interview for general ability and not what you are using today.

4. Decide beforehand what questions you will ask

Because I find that, not given any instructions, people will resort to asking trivia about whatever shit library they are working on.

"I was working on XYZ  today so I will spend the next 2 hours asking you all the trivia about XYZ because that shit is fresh in my mind. And if you don't know this shit, clearly you can't work here. Loser"

What worked for me

Actually talk about their fucking resume

With so much time spent on LeetCode-style interview questions, no one bothers with the simplest thing: just spend some time going over their experience. What did they accomplish in whatever team they were in? What was their contribution? You can easily filter out the people who just coasted along from those who accomplished something.

Give a simple open book test

I found giving the candidates a simple (and yes it must be simple) problem they solve in real time works wonders. I let them Google as much as they want, I'm not hiring them for their memory.

Questions I've used are: Use Python to extract the output of a Linux command (say top) then run regexes to get (for example) the top CPU program.

I'm even willing to help them with hints if they get stuck at a point, because I want them to relax and think clearly.

You can see how comfortable they are with coding, and how fast they can search for solutions online (and whether they have ever used the library, as people who have will know what terms to google for).

Even the most simple test will allow you to see who is comfortable with programming and has done this sort of stuff before.
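For concreteness, here's roughly the shape of answer I'd expect for the top-CPU question. (This sketch uses ps instead of top, since its output is easier to parse in batch; all names are mine.)

```python
import subprocess

def top_cpu_process():
    """Return (command, %cpu) for the busiest process, parsed from `ps aux`.

    The `ps aux` column layout is: USER PID %CPU %MEM ... COMMAND.
    """
    out = subprocess.run(["ps", "aux"], capture_output=True, text=True).stdout
    best_cmd, best_cpu = "", -1.0
    for line in out.splitlines()[1:]:        # skip the header row
        parts = line.split(None, 10)         # COMMAND may contain spaces
        if len(parts) == 11:
            cpu = float(parts[2])            # %CPU is the third column
            if cpu > best_cpu:
                best_cmd, best_cpu = parts[10], cpu
    return best_cmd, best_cpu

if __name__ == "__main__":
    print(top_cpu_process())
```

What I'm watching for isn't the exact regex or split-- it's whether the candidate knows to skip the header, handle commands with spaces, and sanity-check the output.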

If nothing else

If none of the advice works for you: Just try being more compassionate and leave your inner asshole at the door.

Required Reading: https://en.wikipedia.org/wiki/The_No_Asshole_Rule

]]>
<![CDATA[Web Automation: Don't Use Selenium, Use Playwright]]>

For web automation/testing, Selenium has been the de facto "standard" since forever. It's simple to get started with and supports almost every programming language.

My problem with it has been: It's good enough, but nothing more. It doesn't work that well

]]>
https://new.pythonforengineers.com/blog/web-automation-dont-use-selenium-use-playwright/635956bd708489003d9d94aaThu, 27 Oct 2022 15:19:41 GMT

For web automation/testing, Selenium has been the de facto "standard" since forever. It's simple to get started with and supports almost every programming language.

My problem with it has been: it's good enough, but nothing more. It doesn't work that well with modern, JavaScript-framework-heavy sites (Angular, React etc). I'm not saying it doesn't work-- just not too well.

Another issue: While Selenium is supposedly "well documented", I found that as soon as you start going off the beaten path, examples are hard to find. Or there are 5 ways of doing something, none of which work very well.

I really struggled with Selenium at a previous company. Selenium just couldn't parse the over-engineered Javascript framework we had, and I had to pile on hacks on hacks.

So much so, I started looking at Javascript testing frameworks like Chai, Mocha, Cypress etc. The problem I found is that they require a completely different setup, and aren't easy to get started for someone from a Python background.

There have been dozens of Selenium alternatives over the years; I tried quite a few. Almost every one of them vanished after a few years (or they stopped updating and just abandoned the project). I suppose building a web testing framework is hard for volunteers.

Enter Playwright

I recently heard about Playwright. It looks really good and is built by Microsoft (which has started putting out good open-source tools like VS Code). The Microsoft part is important, as it makes it more likely the project will be supported over the years.

The killer feature of Playwright is: you can automatically generate tests by opening a web browser and manually running through the steps you want. It saves the hassle I faced with Selenium, where you were opening developer tools and finding the XPath or similar. Yuck 🤮

Now, to be honest, there were always tools that would do this for you. But they were either a) Not very good b) Commercial and expensive

The best part: Playwright "records" your steps and even gives you a runnable Python script you can directly use (or extract parts from). I'm serious about the runnable part-- I can literally record a few steps and have a script I can then run with zero changes. The other tools I tried before would give you small snippets of code that would sorta work, but not really, forcing you to jump thru hoops to get things working. Here, you get a ready-made working script.
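The workflow is roughly this (commands for the Python package; the output filename is my own, and the browser download is a one-time, environment-dependent step):

```shell
pip install playwright
playwright install chromium                    # one-time browser download
playwright codegen https://example.com -o my_test.py
python my_test.py                              # replay the recorded steps
```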

p

Playwright gives a fairly good implementation that I found works most of the time. It only occasionally missed a step, and in those cases I had to manually add it to the script. But I find Playwright's element discovery easier than Selenium's.

Other benefits: You can record your runs as a video so you can view them later if you find any strange failures. Playwright also creates a detailed trace you can use to sort of "step thru" any failed runs.

Cons? It's still new. That means bugs and not-as-good documentation. I found weird issues (when using the recorder) where I couldn't scroll down to the bottom of the screen in Playwright's Chromium, but could in a normal Chrome, forcing me to use other tricks to click the button. But I don't know if this was a Chromium or Playwright issue.

But overall, for any new project, I'd always choose Playwright over any existing tools.

]]>
<![CDATA[How To Structure Your Godot Project (so You Don't Get Confused)]]>This is a guest post by KP, whom I found by a very detailed and thoughtful comment he posted on Reddit /r/godot. If you like the post, make sure to check out his Youtube channel

A tale as old as time. You just started to get familiar with Godot,

]]>
https://new.pythonforengineers.com/blog/how-to-structure-your-godot-project-so-you-dont-get-confused/63370f55ccca07003d70c833Wed, 12 Oct 2022 10:42:14 GMTThis is a guest post by KP, whom I found by a very detailed and thoughtful comment he posted on Reddit /r/godot. If you like the post, make sure to check out his Youtube channel

A tale as old as time. You just started to get familiar with Godot, and your project is starting to get huge. You have a mess of folders with scenes and nodes and resources and images and audio files and every time you look at the 'FileSystem' tab you feel totally lost. You went here and left feeling unsatisfied.

Don't worry! I have some tips for you that will make all your problems melt away. This is just how I do it. Some of it is based on Godot's best practices, but as far as I can tell there isn't a consensus on how to set up your FileSystem the way there is with, say, Java. So we're all on our own. But it doesn't have to be this way!

Setting up your file system

I like to have a 'scenes' folder, an 'assets' folder, and a 'src' folder (for autoload scripts).

filesystem

For more complex games, I also include a 'utils' folder at the top level for any type of menu that is used for development but isn't going to be part of the main game. That way you can easily exclude it when exporting. Also you'll have an 'addons' folder here if you install any plugins.

Personally I think the top level part is mostly preference, so do whatever makes sense to you. But do think about it early and stick with your structure, because moving stuff around can break things. If you do need to move stuff around, (especially important because I know some of you are going to be applying this to your already large messy projects) right click on the file in the file system and use 'Move To...'.

move_to

Dragging and dropping works too, but I often find that the 'FileSystem' tab isn't very clear about where it's going to put stuff, and I end up moving things to some folder I can't see, with no idea where my stuff went. Either way, when you move stuff in Godot, it does some things in the background so it doesn't lose track of where things are, but you'll still probably have to fix some things if you're moving a resource that is referenced somewhere.

broken_list

Even if you think you've done everything right, be careful: if you reference a path anywhere in your code using a string, you'll have to go through every script and change it, and Godot won't help you. It'll just throw an error at runtime when it can't find the file. In this case I recommend using a text editor to find-and-replace across all of your files. This is a big reason to reference scene and resource paths using export var resource_inst : Resource rather than using Strings.
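A minimal sketch of the difference (Godot 3 syntax; the paths and variable names are illustrative):

```gdscript
# Fragile: if the file is moved, this breaks silently, and only at runtime.
onready var bullet_scene = load("res://scenes/items/bullet.tscn")

# Robust: assign the scene in the Inspector; the editor keeps the
# reference up to date when files are moved inside Godot.
export var bullet_scene_exported : PackedScene
```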

Whatever else you do, don't move files around outside of Godot. You'll wish you hadn't, I promise. I'll explain why later.

Making a main scene, and telling Godot where to start

It's good to have somewhere to start and set everything up. Make a scene called 'main.tscn' in the 'scenes' folder with a generic node object called 'Main', and attach a script called 'main.gd'. This node will always be around, and you can use it to set things up when your game starts. If your game has a main menu, you will probably want to start with that, so make a control node that is a child of the Main node.

scene_tab

You can set this as the 'Main Scene' in project settings. This tells Godot where to start when you click the 'play' button, and this is the scene that will start first when you export your application.

project_settings

project_settings_main_scene

If you don't already have one set, you can also do this from the right-click menu in the 'FileSystem' tab.

filesystem_set_as_main_scene

There's something important you should know about this, though. When you set a scene as the main scene, Godot will load it when the application starts. Meaning the Godot application, not your game. Meaning if you do something to the main scene that prevents Godot from opening it (like moving stuff around outside of Godot, like I already told you not to), Godot will not be able to open your project. To fix this, you can open the project.godot file in a text editor and delete this line:

projectprops_main

You might be wondering: "Why not just put the stuff that should stay around in an autoload?" One word: encapsulation. Autoloads are accessible from everywhere, and you don't want this stuff to be accessible from everywhere; you just want it to be around. Globals can be dangerous, and are a fantastic way to get confused when things go wrong.

Names and why they're important

Styling is important when thinking about project structure if you want to keep yourself from getting confused. Refer to Godot's style guide for best practices about scripting style and file naming conventions.

From the Godot documentation:

Use snake_case for file names. For named classes, convert the PascalCase class name to snake_case:

# This file should be saved as `weapon.gd`.
class_name Weapon
extends Node

# This file should be saved as `yaml_parser.gd`.
class_name YAMLParser
extends Object

This is consistent with how C++ files are named in Godot's source code. This also avoids case sensitivity issues that can crop up when exporting a project from Windows to other platforms.

This applies to all types of files. Scenes, scripts, resources, everything. Note that Godot automatically suggests filenames as whatever the root node is named, which should be PascalCase (like camelCase but first letter capitalized). So you should always rename them to snake_case for readability, and in some cases, system compatibility. Do use PascalCase when naming nodes and classes, and snake_case for properties and functions. That's how the engine does it and you want things to be consistent, again, so you don't get confused.

A quick sidebar while we're on the subject of naming; You should always rename every new node you create. Try really hard not to leave them with the default name. The reason for this is that node names are keywords in GDScript. So if you leave all your sprites named "Sprite", in your scripts "Sprite" and "$Sprite" mean two very different things. And what if you have more than one Sprite in a scene? You want to know which one is which. If you leave them as their defaults it won't break anything, but you're just asking to get confused.
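A quick illustration of that name clash (Godot 3 syntax; this assumes a scene whose child Sprite was left at its default name):

```gdscript
extends Node2D

func _ready() -> void:
    # `Sprite` is the engine class; `$Sprite` is the child node named "Sprite".
    var fresh = Sprite.new()    # a brand-new, unattached Sprite instance
    add_child(fresh)
    $Sprite.visible = false     # hides the Sprite node already in the scene
```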

As you start making more and more scenes, you're going to want to make some sub-folders. I like to put main.tscn at the top level of the scenes folder along with any scenes that are going to be autoloaded, or used by Main but don't have any children. Then for everything else, I roughly break things into 'levels', 'characters', 'menus', and possibly 'items' (I'm tempted to use 'objects' instead of 'items' but since an Object is an actual type in Godot it's not a great name for a folder). 'network' goes at this level if the game needs it. Of course what folders you use and how you name them depends on the type of game you're making, but that schema is pretty generic I think.

When making scenes, the ones that mostly stand alone go at the top level of their sub-folder, and any that contain sub-scenes or have inherited scenes get their own sub-folder. If a scene uses resources that are very situational that are saved as files and only ever used by that small group of scenes (themes and button groups for menus are a good example), they go in their own 'resources' sub-folder alongside the highest-level scene that uses them.

I usually also have a '_debug' folder in the 'scenes' folder, prefixed with underscore so it gets sorted to the top of the folder. This is where things like debug overlays or testing levels go. You can also exclude that folder in your export configuration as long as you don't change your mind and reference your debug code in your actual game :^)

For assets I group roughly into 'sprites' (or something like 'models' then 'meshes', 'textures', and 'materials' for 3D), 'audio', 'fonts', and 'json'. I also put my custom resource scripts here in a 'custom_types' folder. I do this for two reasons: First, because I want any additions I'm making to the engine to be in one place at a very high-level, and second because many of the custom resources I make will also get their own sub-folder in this assets folder. (Since assets are also loaded as resources it does make sense, to me anyway). I'm not totally sure I like that yet, but I think it's too late for me to change now :^) :^) :^)

Dividing your content into scenes, running scenes on their own, and why you should NEVER use get_parent()

Any group of nodes you want to have more than one instance of should be its own scene. For instance, the player should be its own scene, areas should have their own scene, enemies should have their own scene, and bullets should have their own scene. Dividing things up like this has implications for how you structure your file system, because you might need to reference the path to other scenes. So it's good to keep scenes that are instances of other scenes in sub-folders and group them in whatever way makes sense.

filesystem_subfolders

Sometimes this isn't possible, because some scenes need to be referred to all over the place. That's fine, just try to keep it to a minimum if possible. Even if only because it makes things less confusing.

You can run scenes on their own with the 'Play Scene' button at the top right of the editor. Try it and watch the remote tab. Godot will treat it as though the scene you have open is the main scene. Your autoloads will still be there, but none of your other nodes.
This is why you should try to avoid using get_parent(). If your nodes only know about their children that are guaranteed to exist at runtime, you can break any branch of the tree off into its own scene and run it to test things out without any problems.

If you use get_parent(), your logic will behave differently when running the scene on its own versus running your actual game. Sometimes that's okay, like if you want to add a child alongside the node that has the script attached, because what the parent is doesn't really matter. In that case you're just adding a child to it which is possible with any type of node so it'll behave the same way no matter where you're running it from. Even then I still try to avoid that whenever possible.

The lie of the change_scene() function, what it actually does, and how to think about scenes

Once you're running your game, your scenes aren't really scenes that are 'running' anymore. Scenes are definitions of a group of nodes that can be instanced. When you run a scene in the editor, Godot starts the scene tree with the root node of your scene as the main node. When the game is running, you aren't ever 'in a scene'. You're in the scene tree, and within the scene tree, your scenes are instantiated and now you have instances of the nodes they contain.

When you use change_scene(), Godot frees the main node and replaces it with the nodes in whatever scene you pass it. You can watch the 'remote' tab to see what's happening.

remote

So if you want to 'change scenes', you can use change_scene(), but know that it is replacing everything you have loaded other than autoload singletons. No matter where in the tree you called it from. That's fine in simple games, but it doesn't scale well. The bigger your game gets, the more data you will need to pass between scenes, and storing and passing tons of data between scenes isn't something you want to be doing with AutoLoads (how to use AutoLoads effectively for large projects is a whole other topic). Personally I never use change_scene(), because I always want the Main node around to pass data between things, and there's usually a player node I want around all the time.

I like to have a 'main_menu' and a 'game_world' scene, the first being a Control node and second being Node2D or Spatial for 3D. Then when you want to go from your main menu into the game, you can free the main menu node and make an instance of the game world.

export var game_scene : PackedScene
var game_world : Node2D

func start_game():
    $MainMenu.queue_free()
    game_world = game_scene.instance()
    add_child(game_world)

Voila! You have 'changed scenes', from the main menu to the game world, but you still have your "Main" node. This has the added benefit of letting you do any other setup you want in between changing scenes, which wouldn't be possible if you had freed the entire scene tree. For instance, you can have a transition that fades to black and fades back in that exists in the main scene, and reuse that when you start the game and when you change levels in the game.

Once you're in the game, we can use this same construct to change "room" or "level" scenes or whatever you want to call them without losing our player node. Keep the player node as a child of the game world, and when you want to move to a new room, have the game world free the current room and instance the next one.
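That room swap can be sketched like this (Godot 3 syntax; the node and variable names are illustrative):

```gdscript
# In game_world.gd -- the player node stays; only the room is replaced.
export var next_room_scene : PackedScene
var current_room : Node

func change_room() -> void:
    current_room.queue_free()                  # drop the old room
    current_room = next_room_scene.instance()  # build the new one
    add_child(current_room)                    # the player node is untouched
```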

Inheritance, sharing and extending scripts, and editable children

There are a bunch of ways to have scenes and nodes share properties and behaviors, and they all have implications for how you organize your project. A very common one is the 'inherited scene'.

new_inherited_scene

This creates a new scene with instances of all the parent scene's nodes, except all but the main one are greyed-out.

base_guy_scene_tab
player_scene_tab

When you first create it this way, it's exactly the same as its parent. You can add nodes, change properties on the children, and 'extend' the scripts they're using. The one thing you can't do is delete nodes that are children of the parent.

You can also have the scripts in your inherited scene add their own specific logic to the parents' scripts using the 'extend script' option in the right-click menu of the scene tab.

extend_scripts

This creates a new script with 'extends "parent_script.gd"' at the top, and it will have access to all the functions and variables of the parent.

extended_script

You can add functions that will replace the functionality of the functions in the parent. To run the parent method in the extended script, you can call it like this:

func my_function():
    .my_function()

This way the parent's function runs first, and then your new code runs after.

When extending these scripts, I like to keep them alongside the scene that is instancing the child, or in a sub-folder depending on how many there are.
Try to name them similarly to the script they're extending so it's more obvious what they're referencing and what they do.

Another thing that behaves very much like inherited scenes is 'editable children'. It is an option that is available when you instance a scene in another scene. This basically creates a one-time inherited scene that is local to the scene you are using it in, so you can make changes to that one instance.

editable_children
edited_child

The things you can do to an editable child are basically the same as how you can interact with the nodes of an inherited scene. You can add children, change anything on the node's children, and extend any of the scripts of the children, but you can't delete any of the instanced scene's children (the greyed-out ones).

Yet another way to re-use stuff from one scene to create another is 'Duplicate...".

duplicate

This creates a new scene that is separate from the scene you duplicated, but still has the same nodes as the old one. Any changes you make to the old one won't be applied to the duplicate. However, the duplicate does not create new resources if they reference an external file: any resources loaded into the original will also be referenced by the duplicate. This is a good thing! Image textures are resources. If you make a duplicate of a bullet, you don't want it to copy your bullet png; you'll already have the new texture you want, and you'll tell it which texture to use yourself. This includes scripts! Sometimes you want to keep the script attached, and sometimes you might want to make a new one. Also, stuff like shape resources applied to collision shapes might be shared if you saved any of them as files. This does not apply to sub-resources (resources that aren't saved as files), since sub-resources are fully contained within the scene file.

How to get nodes, and why you should never use get_node("PathToNode")

There are a couple ways to reference child nodes in a script. Two very common ones are:

get_node("PathToNode")
$PathToNode

There's a better way to use get_node() though:

var path_to_node := NodePath("PathToNode")
get_node(path_to_node)

Although using "$" is usually still preferable when you know exactly which node you want.

Any text typed out in quotes is called a "string literal" in the programming world. The problem with string literals is that they are imperfect: you can type in anything you want, and the compiler/interpreter will accept it until it has to deal with your messy typos, and by then it's already too late. get_node("PathToNode") and $PathToNode operate basically the same way in the background, but the $ operator has the bonus of identifying the nodes you are accessing directly via their name, which helps you not get confused. The use of "$" isn't a string literal, but it isn't checked by the compiler either; it just converts everything after it into a NodePath object and calls get_node() at runtime. In my opinion, it enforces good programming practices. As an example, you can't pass a string literal you defined somewhere else to the $ operator.

However, it's even better to avoid referencing nodes by their names at all! The name of a node isn't always set in stone. If you duplicate a node, add the duplicate as a child of the parent, and free the original, the duplicate will be named something else. You can even just change a node's name using get_node("PathToNode").name = "NowIDontWorkAnymore". It's better to figure out the path to the node you need some other way, like using an exported NodePath variable, and then keep a reference to it like this:

export var path_to_character : NodePath
var character : KinematicBody2D

func _ready() -> void:
    character = get_node(path_to_character)

Since you might want to point this at different characters, here you have to (and should) use get_node().

Oh yeah and whatever you do, don't you EVER EVER EVER use get_node("/root"). I'm not gonna explain myself on this one, I feel like you should be able to figure it out by now.

Splitting off branches, now you can too!

You might be asking yourself, "How is any of this relevant? I only care about where to put my files". Hold your horses, I'm getting there! The reason all of that node naming stuff is important is that the way your scripts interact with their children has big implications for splitting branches of your scene off into their own scenes. And splitting branches off into their own scenes is very very helpful for keeping your project well organized.

[Screenshots: the "Save Branch as Scene" context-menu option; the "Instance Child Scene" menu; an instanced child scene in the scene tree]

Try to make sure your nodes only interact with their direct children, or at least very near children. If you can keep everything that way, you can always break any branch of the scene tree off into its own scene and have its logic fully contained in that scene. When a node's behavior depends on its parent, it will behave differently when you run it on its own and when it's instantiated in other places. In other words, making sure that your nodes only interact with their near children promotes modularity and encapsulation which is very important in any language that is even remotely object-oriented (which Godot is. Nobody's perfect.)

When you absolutely need to deal with distant children, use signals or groups.

So instead of

$Me/World/Level/House/Room/Guys/Enemy1.hp -= 100
$Me/World/Level/House/Room/Guys/Enemy2.hp -= 100
$Me/World/Level/House/Room/Guys/Enemy3.hp -= 100

Put the enemy nodes in a group, give them a take_damage(dmg:int) function, and use something like:

func explode() -> void:
    get_tree().call_group("enemies", "take_damage", 100)

or

signal exploded(damage)

func _ready() -> void:
    for enemy in get_tree().get_nodes_in_group("enemies"):
        connect("exploded", enemy, "take_damage")

func explode() -> void:
    emit_signal("exploded", 100)

This way, it doesn't matter if the enemies exist or not or if there's even a house or room or level. It will just throw the signal to explode into the void and keep doing its thing.

Of course, the Platonic ideal of a Godot project would only ever interact with nodes by using get_children(), exporting node paths, or putting things in groups, but at some point you're going to have to reference nodes by their names. There's really no way around it. When you do have to, just try to make sure it's a node that is guaranteed to be in the scene and won't ever be freed, moved, or duplicated.

So anyway...

I should say that, other than the Godot style guide stuff and the use of signals and groups, I don't think any of this could be called a 'standard'. It's what works for me, informed by my experience as a software developer. A lot of my statements about what you "should" and "shouldn't" do are very debatable. Lots of people use change_scene() just fine. I'd be willing to hear arguments for keeping image and audio resources in the same folder as the scenes that use them instead of splitting them off. In any case, if something works for you, by all means keep doing it. But hopefully something I said can help dispel some confusion.

I put together a small template project that you can find on my GitHub. It's based on a demo game that's a bit more fleshed out, showing examples of some of the more complex structures I talked about, like changing levels while keeping the player node around. The link to the GitHub repo is here, and you can play the game on itch.io.

If you want to see a video on how to use the template, here is a simple example:

And finally, if you made it this far but also hate reading, consider checking out my YouTube channel where I post devlogs and go in depth on some more intermediate Godot topics.

So what are you waiting for? Get to work!!!

Shantnu: We have planned more in Godot articles, sign up below to know when they are out.

]]>
<![CDATA[Learning Go as a Python Developer: The Good and The Bad]]>I've been thinking about supplementing another language to Python for some time – mainly to cope with areas Python struggles with, or is a pain to use (which I'll go over in a minute).

I had used C/C++ some years ago, but don't

]]>
https://new.pythonforengineers.com/blog/learning-go-as-a-python-developer-the-good-the-bad-and-the-ugly/629099b64213aa003df03a82Mon, 18 Jul 2022 18:20:31 GMTI've been thinking about supplementing another language to Python for some time – mainly to cope with areas Python struggles with, or is a pain to use (which I'll go over in a minute).

I had used C/C++ some years ago, but don't want to go back to them. C is great for embedded work, but I wouldn't use it for anything else. C++ is a monster, everything from C-with-classes to whatever the latest version is with fancy new smart pointers.

I looked at Rust, and while I understand the sentiment and reasoning, I felt I was constantly fighting the compiler/borrow checker/whatever. I just want to write some code, dammit :)

I had heard of Go for many years, but never stuck with it; it gets constant negative press on Hacker News/Reddit – every top post is "Why I gave up Go", with reason 37 being iT dOesNT hAve gEneRIcs (yes, Go got generics this year; I want to put in a rude joke about virgins and not enough sex, but I guess I'm a family-friendly / new-employer-friendly blog now).

Over the last few months, I slowly picked up Go again, and this time it stuck.

Problems with Python

I have written about this before (and gotten hate), but let me give the tldr:

  • Python code is great if you control the machine. It's a pain if you want to share your code with others

That's it– my main complaint. Sharing your code is a pain even with colleagues on the same operating system, mainly because a plain Python requirements file doesn't pin transitive dependencies, which means two people installing from the same requirements file can get different dependency versions. (That said, Poetry fixes this problem and is an awesome tool. I'm never using plain virtual envs ever again.)
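To illustrate the difference (package names and version numbers here are only illustrative): a hand-written requirements file usually looks like the first block below, while a fully pinned one, e.g. generated with pip freeze or from a Poetry lock file, looks like the second:

```text
# requirements.txt as most people write it -- unpinned:
requests
flask>=2.0

# the same environment fully pinned (e.g. output of `pip freeze`):
requests==2.31.0
flask==2.3.2
# ...plus every transitive dependency, also pinned
```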

A few months ago, I picked Go back up (using the great Head First Go book) and rediscovered the joy of it.

Go: The Good

  • The code just works! You can compile for any platform (like compile for Windows from a Linux machine) and have it just work without messing with virtual environments or installing libraries
  • It is fast– no doubt, mainly because it's a compiled language.
  • Unlike Python web apps, which need WSGI (or something similar) in addition to Nginx, Go apps can run directly on a server with nothing extra needed. I did use a server for HTTPS, but the tool I used (Caddy, a great tool) comes with a Go library, so you can get HTTPS directly in your Go code!

Go: The Bad

Sometimes, you just want to try some code or a new library. You don't care about it being perfect.

But Go has a very school-teacher-like attitude to this– the smallest warning (unused import) and your code won't compile. Everything must be perfect, all i's dotted and t's crossed.

The libraries for Go aren't as good as Python's– certainly, the documentation is lacking. For Python, you can find a dozen good libraries for anything you can think of; with Go, as soon as you get off the main server-coding highway, libraries are hard to find, often abandoned, and usually without good docs.

I especially struggled with the DynamoDB library for Go; so much so that I wrote a Python script to query DynamoDB and called it from Go. However, that might be due to poor documentation on Amazon's side. That said, the official Python libs for DynamoDB weren't that hot either, but I did find great third-party libs that worked. And that's my point: this is easier in Python than in Go.

Minor quibble: I found the Go module system confusing, but that may just be my own inexperience. I hate how it forces you to create a module every time. I just want to try something out, why are you forcing me to create a module o_O

There is a way to turn it off, but it is non-intuitive and I only found it by chance.
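For what it's worth, the switch being hinted at here is probably the GO111MODULE environment variable (my guess, not something the article confirms); setting it to off restores the old GOPATH behaviour so you can run loose files without a go.mod:

```shell
# Disable module mode for this shell session.
export GO111MODULE=off
```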

Final Words

Overall, I really like Go– it's a great companion to Python. I die a little when I have to install a tool and it wants me to install Python or Node, as I know I'll be struggling with versions. Node is worse: I find I constantly have to change Node versions to get libs to work, since they all seem to be built against different versions.

Compared to this, Go programs are a joy. One executable, and it just runs without you having to jump through hoops.

If I ever build a tool I need more than 2 people to use, I would prefer Go. Go just works.

]]>
<![CDATA[My Poor Experience With Azure (or why I'm sticking with AWS)]]>AWS has a lot of haters. Lots of horror cases where people thought they were on the free account but ended up with a $20,000 bill.

We are told Azure has a much better model. You have to manually move to a paid service. No surprise bills.

Like AWS,

]]>
https://new.pythonforengineers.com/blog/my-poor-experience-with-azure-or-why-im-sticking-with-aws/62cec89f275dba003d5ab349Wed, 13 Jul 2022 15:24:26 GMTAWS has a lot of haters. Lots of horror cases where people thought they were on the free account but ended up with a $20,000 bill.

We are told Azure has a much better model. You have to manually move to a paid service. No surprise bills.

Like AWS, Azure has a free plan. They even promote it heavily.

So I jumped through the hoops to create an account, confirmed it via text, and gave a credit card for security (AWS also asks for one, so I'm used to it).

Great. Azure tells me to try my "free" services:

So I clicked on it. The very first service listed is Virtual Machines. I chose Linux and pressed continue. (And the page assured me the services are FREE! 12 Months!!)

But I soon hit a problem. Evidently, I'm not allowed to create a VM in my region. No probs, I'll choose a different region. But every region shows the same error:

I spent some time googling. It seems multiple people have hit this problem: during Covid, Microsoft reduced the services available on the free account, and evidently you cannot create VMs. There are several suggested workarounds:

  • Try different regions --> nope, fails in all of them
  • Raise a ticket asking for approval for more regions --> the ticket is rejected, because a free account cannot add regions

The final suggestion is to move to a paid plan.

Wait what?

After all that heavy marketing about how they had this great "free" plan, seems I have to pay after all?

But I wasn't done.

Surely there must be a solution somewhere? I decided to turn to the wise people of Reddit. The first answer was a nice "F*&^ you, didn't you read the terms and conditions?"

Pictured: Average Redditor

Not helpful. I googled a bit more, some people suggested you can still create a database on the free account. But what would I do with a database? Put it in a pipe and smoke it?

What the hell. I decided to bite the bullet and go for it. Clicked on upgrade to pay-as-you-go.

And nothing happened.

The page opened a big white "Verify Payment Method" screen that just sat there for minutes. Like it was frozen.

Now I was using a hippie no-one-uses-me Firefox, so I decided to move to the more Manly-now-with-extra-Electrolytes Chrome.

Same result. Same blank screen stuck at the verify payment details screen.

I couldn't upgrade my account.

The only thing common between my Firefox and Chrome is they both use an ad blocker. And if Azure expects me to turn off my ad blocker to give them money, I have some suggestions about where they can shove their "free" VMs.

All that hype and I can't even give them money.

Seems I'm back to evil AWS for now...

]]>