Set Up a Django Server with Apache Virtual Host and Python Virtual Environment

Hello and welcome back to another episode of Continuous Improvement, the podcast where we explore tips, tricks, and strategies to help you improve your skills and workflows. I'm your host, Victor, and today we're going to dive into a step-by-step guide on setting up a Django application for production.

But before we begin, a quick reminder to subscribe to our podcast and follow us on social media to stay updated with the latest episodes. Alright, let's jump right in!

Setting up a Django application for production can be a bit daunting, but don't worry, I've got you covered. I've broken down the process into simple steps to make it easier for you to follow along. So let's get started.

Step one, assuming you already have your CentOS or Ubuntu instance running and Python installed, create a folder for your project and set the appropriate permissions. You can do this by running the following commands:

sudo mkdir /opt/yourpath/projects
sudo chown $USER /opt/yourpath/projects

Step two, if you haven't already initialized your project, you can do so by installing Django and starting your project with the following command:

python -m pip install Django
django-admin startproject APPNAME /opt/yourpath/projects/APPNAME

Remember to replace 'APPNAME' with your desired project name.

Moving on to step three, by default, the Django development server runs on port 8000. You can start the server with the command:

python manage.py runserver

Great! Now that we have our project set up, let's prepare it for production. Step four involves editing the 'settings.py' file with a few configurations. Set 'DEBUG' to False, 'ALLOWED_HOSTS' to ['*'], 'STATIC_URL' to '/static/', and 'STATIC_ROOT' to the appropriate directory path.
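Taken together, the relevant portion of settings.py might look like the following sketch. The BASE_DIR-relative static path is an assumption; point STATIC_ROOT wherever Apache should serve collected files from.

```python
import os

# Production-oriented settings.py values (a sketch; adjust for your project).
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

DEBUG = False                  # never run production with DEBUG enabled
ALLOWED_HOSTS = ['*']          # tighten this to your real domain once known
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')  # collectstatic writes here
```

With these values in place, the collectstatic command in step five knows where to copy the files.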

After making these changes, it's time for step five. We need to collect and build the static files of our Django project. Run the following command in your project's directory:

python manage.py collectstatic --noinput

The next step, step six, is to serve your web application via the Apache web server. Assuming you have Apache2 installed, enable virtual hosts for your project and create a virtual host configuration file. You can create the file by running:

touch /opt/yourpath/apache2/conf/vhosts/project-vhost.conf

In this file, you'll need to specify the WSGIDaemonProcess for your Django application. Make sure to replace all instances of 'APPNAME' with your actual project name.
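As a rough sketch (assuming mod_wsgi is installed; the ServerName and every path here are placeholders to adapt), project-vhost.conf might contain something like:

```apache
<VirtualHost *:80>
    ServerName example.com
    WSGIDaemonProcess APPNAME python-path=/opt/yourpath/projects/APPNAME
    WSGIProcessGroup APPNAME
    WSGIScriptAlias / /opt/yourpath/projects/APPNAME/APPNAME/wsgi.py
    Alias /static/ /opt/yourpath/projects/APPNAME/static/
    <Directory /opt/yourpath/projects/APPNAME>
        Require all granted
    </Directory>
</VirtualHost>
```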

To enable HTTPS, you can create another virtual host configuration file with a similar content structure. Use the command:

touch /opt/yourpath/apache2/conf/vhosts/project-https-vhost.conf

Now that we've updated the configurations, it's time for step seven - restarting the Apache server. This will allow your Django site to become operational.

Lastly, in step eight, we'll isolate Python dependencies within a virtual environment to avoid any version conflicts or dependency issues. Inside your project directory, run the following commands:

pip install virtualenv
virtualenv venv
source venv/bin/activate

This creates a folder named 'venv' that contains all your Python executables. Any subsequent 'pip install' commands will affect only this folder.

Once you've activated the virtual environment, go back and edit 'project-vhost.conf' and 'project-https-vhost.conf'. Update the 'python-home' path in both files to point to the 'venv' folder, like this:

WSGIDaemonProcess APPNAME python-home=/opt/yourpath/projects/APPNAME/venv python-path=/opt/yourpath/projects/APPNAME

Make sure to replace 'APPNAME' with your project name.

And that's it! You've successfully set up your Django application for production. Now, all that's left is to navigate to your public IP address and you should see your Django page up and running.

If you encounter any issues along the way, remember to check the Apache server error log for troubleshooting. You can do this by running:

tail /opt/yourpath/apache2/logs/error_log

That wraps up today's episode on setting up a Django application for production. I hope you found this guide helpful and that it saves you time and effort in the future.

Remember, continuous improvement is all about learning, growing, and finding better ways to do things. If you have any questions or topics you'd like me to cover in future episodes, feel free to reach out to me on social media. Thank you for tuning in to Continuous Improvement, and until next time, keep improving!

Stop Downloading Apps for Everything

Hello and welcome to "Continuous Improvement," the podcast where we explore ways to improve ourselves and the world around us. I'm your host, Victor, and in today's episode, we'll be diving into a topic that affects both developers and users alike - the growing irrelevance of mobile apps.

It's no secret that apps have become an integral part of our lives. We rely on them for various tasks, from ordering food to tracking our workouts. But have you ever stopped to think about the effectiveness of those apps?

According to a recent blog post I came across, apps are becoming increasingly irrelevant. Why download and clutter your device with ineffective software when your browser can serve as an adequate substitute? It seems the primary benefit of using apps lies in data gathering and ad serving, which favor tech giants rather than us, the end-users.

As a user, it's difficult to determine which app is superior and genuinely offers value. The post suggests that the competency of the programmer behind an app plays a significant role in its effectiveness. Skilled programmers invest time in refining their work - constructing test cases, simplifying complex problems, and thinking deeply about the subject at hand.

On the other hand, less skilled developers may lack the necessary will or talent to achieve anything significant. This leads to a sea of apps where quality is questionable at best.

So, what happens when a developer makes an error? Well, the repercussions are often minor. At worst, they might receive a poor rating on the app store, but for the most part, there are no lasting consequences. Some developers might patch the flaws, while others may introduce new ones. It's a constant cycle of trial and error, leaving us, the users, with unreliable software and little control over its quality.

But the problem doesn't solely lie with poor developers. Even talented programmers find themselves trapped in an increasingly irrational industry. They lack the time to specify requirements or plan thoroughly, and are under immense pressure to churn out new features constantly. All this to keep up with the industry's demand for constant innovation.

Creating robust, well-tested software that handles all possible states is a challenge in itself. And to top it off, developers have to deal with constant interruptions, like Slack messages and progress report meetings, which only add to their struggles. Companies often hire experienced full-stack engineers to complete the work, only to find that it's prohibitively expensive.

Users, on the other hand, are largely unaware of these challenges. They simply want reliable and efficient apps, but are often burdened with spaghetti code and countless defects. It's no wonder that many talented developers transition to project management roles, seeking less stress, higher income, and more predictable hours. The ambition to transform the world through better software wanes.

But who can we blame for this dysfunctional software engineering culture? The post argues that tech behemoths like Apple, Google, Facebook, and Amazon have shaped the industry with their platforms and policies. As we spend more time on their platforms, they become more successful, further encouraging a race to the bottom in terms of software quality.

So, what's the solution? The author advises software developers, like myself and many of you listening, to opt out of this race. True professionals should take pride in their carefully crafted work and refuse to be treated like mere code monkeys.

And that brings us to the end of today's episode of "Continuous Improvement." I hope you found this discussion thought-provoking, whether you're a developer or a user of mobile apps.

Remember, the power for change lies in the hands of those who care about the quality of software we use every day. Let's strive for better, more reliable apps that truly serve us.

Join me next time as we delve into another topic of continuous improvement. Until then, take care and keep striving for excellence.

Install Ubuntu 20.04 LTS on MacBook Pro 14,1

Hello and welcome to "Continuous Improvement," the podcast that helps you embrace change and optimize your life. I'm your host, Victor, and in today's episode, we'll be discussing Ubuntu 20.04 and how to install it on a MacBook Pro 14,1 model.

Ubuntu 20.04 has just been released, and I couldn't wait to give it a try. In this episode, I'll be sharing what works, what doesn't, and how to work around those issues. So, let's dive in!

To begin, you'll want to download a copy of the Ubuntu 20.04 ISO image from ubuntu.com. Once you have the image, use Etcher to create a bootable USB drive. Don't worry, I'll include the download links in the show notes.

Now, when you start booting from the USB drive, you might notice that the trackpad doesn't work. But don't worry, there's a workaround. You can either use an external mouse or continue the installation via keyboard. We'll fix the driver issue later.

Once you're in the Ubuntu operating system, you'll find that the keyboard with backlight, screen display and graphics card, WiFi connectivity, USB ports, and battery all work out of the box. That's great!

But, there are a few things that don't work by default. Let's go through them and their workarounds:

  • Speakers: Audio doesn't work out of the box. As a workaround, you can use external headphones or HDMI audio through an external monitor. If you prefer fixing it with a driver, you can find the instructions and the driver in the show notes.

  • Trackpad: As mentioned earlier, you can use an external mouse during installation. But if you want to use the trackpad, you can install the driver by following the link provided.

  • Bluetooth: If you're facing Bluetooth issues, don't worry. There's a driver available that will solve the problem. Just follow the instructions in the show notes.

  • Camera: Another issue you might face is with the camera. But fret not, there's a driver available to fix it. You can find all the details in the show notes as well.

Once you've resolved these issues, you can make additional customizations to enhance your Ubuntu experience. For example, switching to dark mode, displaying battery percentage, installing GNOME Tweaks, Ubuntu restricted extras, and the Atom editor. Again, the instructions for these customizations will be in the show notes.

One final tip I have for you is to disable the trackpad while typing. This can greatly improve your typing experience. You'll find the command to do so in the show notes.

Remember, while Ubuntu offers endless customization options, it's essential to proceed with caution. Enthusiasm can sometimes lead to crashing your system if you're not careful. If you have any questions or need help, feel free to reach out to me. I'm here to assist you on your Ubuntu journey.

That's it for today's episode of "Continuous Improvement." I hope you found this information helpful. Embrace change, optimize your life, and keep striving for continuous improvement. I'm Victor, your host, signing off. See you next time!

Handling Browser Close Events with JavaScript

Welcome back to another episode of "Continuous Improvement". I'm your host, Victor, and today we'll be discussing an important aspect of user experience on the web - preventing accidental page exits. Have you ever been in a situation where you were filling out a form or making a payment, and accidentally closed your browser, losing all your progress? Well, we have a solution for you.

In this episode, we'll dive deep into the implementation of a confirmation dialog using the beforeunload event in JavaScript. This will help you prompt users with a warning before they close their browsers, ensuring they are aware of their unsaved changes. So, let's get started!

[BACKGROUND FADES]

First, let's take a look at what the confirmation dialog actually looks like. In different browsers, it can vary slightly in appearance. In Chrome, for example, it may look like this. [DESCRIBING CHROME DIALOG]

And in Firefox, it may appear slightly different. [DESCRIBING FIREFOX DIALOG]

[BACKGROUND FADES]

So, how can you implement this dialog on your web page? It's actually quite simple. Just add the following code to your JavaScript file:

window.addEventListener('beforeunload', (event) => {
  // Cancel the event as specified by the standard.
  event.preventDefault();
  // Chrome requires returnValue to be set.
  event.returnValue = '';
});

By adding this event listener, you're informing the browser to trigger the confirmation dialog when the user attempts to close the browser, refresh the page, or click the back button. The event.preventDefault() cancels the event, ensuring the dialog is shown, and the event.returnValue = '' satisfies Chrome's requirements.

[BACKGROUND FADES]

It's important to note that the beforeunload event will only trigger if the user has interacted with the page in some way. If they haven't, the event won't activate. Once you've implemented this functionality, the confirmation dialog will keep users from inadvertently leaving the page without saving their changes or completing their transaction.

[BACKGROUND FADES]

But what if you want to remove the confirmation dialog at some point? Maybe after the user has saved the form or completed the payment. Well, you can easily do that too. Just use the following code:

window.removeEventListener('beforeunload', callback);

This line of code will remove the event listener, so the confirmation dialog no longer appears when attempting to leave the page. Note that callback must be a reference to the very same function you passed to addEventListener; an anonymous inline function or arrow function cannot be removed this way.
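To make that concrete, here is a sketch using a named handler. In the browser, target would simply be window; the EventTarget fallback is only there so the snippet also runs outside a browser.

```javascript
// Keep a reference to the handler so it can be removed later;
// an anonymous inline function cannot be unregistered.
const target = typeof window !== 'undefined' ? window : new EventTarget();

function confirmExit(event) {
  event.preventDefault();   // standard way to request the dialog
  event.returnValue = '';   // Chrome additionally requires this
}

target.addEventListener('beforeunload', confirmExit);

// ...later, once the form is saved or the payment completes:
target.removeEventListener('beforeunload', confirmExit);
```

Because confirmExit is a named reference, the same function object can be passed to both addEventListener and removeEventListener.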

[BACKGROUND FADES]

It's worth mentioning that the purpose of this confirmation dialog is to remind users to save their changes before leaving, and it doesn't provide any way to determine whether the user chose to stay or leave the page. So keep that in mind while implementing it in your project.

[BACKGROUND FADES]

And that's a wrap for today's episode of "Continuous Improvement". We hope you found this topic helpful in enhancing user experience on your website. Remember, implementing a confirmation dialog using the beforeunload event can prevent users from accidentally closing their browsers and losing their progress. For more detailed information and additional resources, you can check out the MDN Web Docs.

Thank you for listening to "Continuous Improvement". I'm Victor, your host, and I'll catch you in the next episode. Until then, happy coding!

Import npm Modules into AWS Lambda Function

Welcome to "Continuous Improvement," the podcast where we explore the world of software development and discuss ways to enhance our skills. I'm your host, Victor, and in today's episode, we'll be diving into the topic of deploying Node.js Lambda functions on Amazon Web Services (AWS).

Have you ever found yourself in a situation where you wanted to import a third-party library, like lodash, into your Node.js Lambda function on AWS? In today's blog post, we learned that the online editor on AWS doesn't provide a straightforward way to achieve this. But fear not! There's a workaround that I'd like to share with you.

The first step is to create a folder on your local machine and copy the index.js file into it. Once that's done, open up your terminal and navigate to the folder you just created.

In the terminal, run npm init. This command initializes your project and generates a package.json file. This file will keep track of all the dependencies required for your Lambda function.

Next, we need to install the third-party library, in this case, lodash. Run npm install lodash --save in the terminal. This will add lodash as a dependency and update the package.json file accordingly.

Now that we have the library installed, let's use it in our index.js file. Add the following line of code at the beginning of the file:

    let _ = require('lodash');

With lodash successfully imported into our Lambda function, it's time to prepare for deployment. To do this, we need to zip the entire folder, including the node_modules directory. In the terminal, use the zip -r function.zip . command to accomplish this.

The final step is to deploy the zip file to AWS using the AWS CLI tool. In your terminal, type aws lambda update-function-code --function-name yourFunctionName --zip-file fileb://function.zip. Here, make sure to replace yourFunctionName with the actual name of your function.

If everything goes smoothly, the deployment should be successful, and you'll see a confirmation message in the terminal, indicating that the update was completed.

That's it! You've now learned how to deploy a Node.js Lambda function with third-party dependencies on AWS. Remember, continuous improvement is essential in the ever-evolving world of software development.

Thank you for joining me on this episode of "Continuous Improvement." I hope you found the information helpful. If you have any questions or suggestions, feel free to reach out. Don't forget to subscribe to our podcast for more exciting topics and stay tuned for the next episode.

Fix WordPress Plugin Installation Permission Issue

Hello and welcome to "Continuous Improvement," the podcast where we explore solutions to the everyday challenges we encounter in our digital lives. I'm your host, Victor.

In today's episode, we're going to delve into a common issue faced by WordPress users when installing plugins. Have you ever come across the error message "Installation failed: Download failed. Destination directory for file streaming does not exist or is not writable"? Well, fear not, because we have the solution for you.

Recently, while working on my WordPress website, I encountered this very issue. After some investigation, I discovered that the problem lay within the permissions of the content folder. This happened because I had been editing files as a superuser, using the "sudo su" command, while the installation required write access for the "ec2-user."

So, let's get to the solution. Assuming you are setting up on AWS EC2 instances and logged in as the "ec2-user," and assuming that your WordPress installation is located in the "/var/www" path, you'll need to execute the following command to change the ownership:

Open up your terminal and type:

sudo chown -R ec2-user:apache /var/www

This command changes the ownership of the WordPress directory to the "ec2-user" and the "apache" group. After executing this command, you should now be able to successfully install your desired plugin.

And there you have it! A simple solution to a common WordPress installation problem. By changing the ownership of the directory, we ensure that the correct user has the necessary write permissions to complete the plugin installation.

Remember, continuous improvement is all about finding solutions to the obstacles we encounter along our digital journeys. If you have any questions or suggestions for future topics, please feel free to reach out to us.

Thank you for tuning in to this episode of "Continuous Improvement." I hope you found the solution to the WordPress plugin installation error helpful. Until next time, keep improving!

Fix WordPress with All Pages Returning 404 Not Found

Welcome to another episode of "Continuous Improvement," the podcast where we delve into common issues and their solutions for web development. I'm your host, Victor. In today's episode, we'll be discussing a perplexing problem with WordPress and how to resolve it. So let's dive right in.

A while ago, I stumbled upon a strange error with WordPress. While the homepage loaded fine, all the other pages returned a dreaded "Not Found" error, claiming that the requested URL was not found on the server. Puzzled, I began investigating the issue, suspecting it might have to do with the .htaccess file. However, after spending hours troubleshooting, I realized that the problem lay elsewhere.

To fix this particular issue, I discovered that the solution lies in editing the httpd.conf file. You can open it with the following command:

sudo vi /etc/httpd/conf/httpd.conf

This command will open the configuration file in the built-in text editor, "vi." Once opened, search for the section that starts with:

<Directory "/var/www/html">

In this section, you'll need to modify the configuration from AllowOverride None to:

AllowOverride All

This change allows the server to inherit the .htaccess settings, ensuring that all the pages render correctly. Once you've made the alteration, it's time to restart the server. You can do this by executing the command:

sudo systemctl restart httpd

And voila! After performing these steps, all your WordPress pages should now load without any issues.
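For reference, the edited section of httpd.conf ends up looking roughly like this (the Options line is illustrative; keep whatever your file already contains):

```apache
<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    # was: AllowOverride None
    AllowOverride All
    Require all granted
</Directory>
```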

And there you have it! The solution to the perplexing issue of WordPress pages returning a "Not Found" error. By modifying the httpd.conf file and changing the configuration from AllowOverride None to AllowOverride All, we allow the server to read the .htaccess file and enable the proper rendering of all pages.

I hope you found this episode of "Continuous Improvement" insightful. Stay tuned for more troubleshooting tips and web development solutions. If you have any questions or suggestions for future episodes, feel free to reach out. Until then, keep improving and happy coding!

Fixing Endless Redirection with HTTPS Settings in WordPress When Using AWS Load Balancer

Hello, and welcome to another episode of Continuous Improvement. I'm your host, Victor. Today, we're going to discuss a common issue that many WordPress users face when setting up their websites on AWS EC2 instances. Specifically, we'll dive into the problem of endless redirection loops and how to fix them. So, let's get started!

Imagine this. You set up your WordPress blog on two AWS EC2 instances located in different availability zones. To manage the traffic, you wisely configure an Elastic Load Balancer (ELB) to redirect all HTTP requests to HTTPS. However, you encounter a roadblock when your requests keep looping endlessly, resulting in an error stating "too many redirections." Frustrating, right?

Luckily, there's a simple solution to this problem. Let me walk you through it step by step. Here's what you need to do.

First, open up your wp-config.php file. This file holds important configurations for your WordPress website. Look for the following lines of code:

define('WP_SITEURL', 'https://' . $_SERVER['HTTP_HOST'] . '/');
define('WP_HOME', 'https://' . $_SERVER['HTTP_HOST'] . '/');

You'll notice that these lines specify the WP_SITEURL and WP_HOME values using HTTPS. While this seems like the correct approach, it actually creates the endless redirection loop that we're trying to solve.

So, here's the fix. Add the following line to your wp-config.php file:

$_SERVER['HTTPS'] = 'on';

By adding this line, you're explicitly telling WordPress that HTTPS is enabled. This bypasses the endless redirection loop issue and ensures a smooth user experience.
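Putting the pieces together, the relevant excerpt of wp-config.php looks like this. The commented X-Forwarded-Proto variant is an optional refinement for load balancers that terminate TLS, not part of the original steps:

```php
<?php
// wp-config.php (excerpt): TLS terminates at the ELB, so the request
// reaches the instance as plain HTTP and WordPress must be told otherwise.
$_SERVER['HTTPS'] = 'on';

// Optional variant: only force HTTPS when the ELB says the client used it.
// if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
//     && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
//     $_SERVER['HTTPS'] = 'on';
// }

define('WP_SITEURL', 'https://' . $_SERVER['HTTP_HOST'] . '/');
define('WP_HOME', 'https://' . $_SERVER['HTTP_HOST'] . '/');
```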

And there you have it! A simple solution to a potentially frustrating problem. Configurations like these can be challenging to troubleshoot, and you might spend hours searching for a solution. But with this fix, you'll save valuable time and get your website up and running smoothly.

I hope you found this episode helpful. If you have any other WordPress-related issues or questions, feel free to reach out to me on our podcast's website or social media channels. Remember, continuous improvement is all about learning, growing, and overcoming challenges one step at a time.

Thank you for listening to Continuous Improvement. I'm your host, Victor, signing off. Stay curious, keep improving, and until next time!

Installing PHP 7.2 Instead of PHP 5.4 on Amazon Linux 2

Hello everyone, and welcome to another episode of Continuous Improvement, the podcast where we explore different solutions to common problems in the tech world. I'm your host, Victor. In today's episode, we'll be talking about a common issue that many of us face when setting up an Amazon Linux 2 AMI server and trying to install PHP.

And we're back. So, imagine this: you've just launched a brand new Amazon Linux 2 AMI server, and you're excited to get started. But as you try to install PHP using the usual command yum install php, there seems to be a problem. The version it installs is PHP 5.4.16, and you realize that the latest version is PHP 7.2. What do you do?

Well, fear not! There is a solution to this dilemma. You can enable PHP 7.2 via Amazon Linux Extras. Let me walk you through it.

First, open up your terminal and run the following command with sudo privileges:

sudo amazon-linux-extras enable php7.2

This command will enable PHP 7.2 through the Amazon Linux Extras. Once it's enabled, we can proceed with the installation.

But hold on, there's one more step before we install PHP 7.2. We need to clean the metadata. Run the following command:

sudo yum clean metadata

This will ensure that we have the most up-to-date information for the installation process. Now, we can finally install PHP 7.2 along with some additional packages that we'll need. Run this command:

sudo yum install php-cli php-pdo php-fpm php-json php-mysqlnd

And that's it! You've successfully installed PHP 7.2 on your Amazon Linux 2 AMI server. To double-check, run php -v in your terminal, and you should see something like this:

PHP 7.2.28 (cli) (built: Mar 2 2020 19:38:11) ( NTS )

And there you have it, a step-by-step solution to upgrade your PHP version on an Amazon Linux 2 AMI server. Remember, keeping your software up to date is essential for security and performance reasons.

Thank you for tuning in to this episode of Continuous Improvement. If you found this information helpful, please consider subscribing to our podcast and leaving us a review. If you have any suggestions for future topics or questions, feel free to reach out to us on our website or social media channels.

Until next time, I'm Victor, your host, signing off.

Reading Large Files Using Node.js

Hello everyone and welcome back to another episode of Continuous Improvement. I'm your host, Victor, and today we're going to tackle a common problem that many of us face when dealing with large datasets: reading massive log files.

Recently, I had to analyze a massive dataset consisting of log files and when I tried opening it in Excel, my trusty laptop simply froze. Frustrating, right? Luckily, I found a solution using Node.js that I want to share with you today.

So, let's dive into the problem. Imagine you have a small file and you want to read its content using a script. You might use something like this:

[Script excerpt: fs.readFile]

This script works perfectly fine for small files. However, when it comes to large files, you might encounter an error like the following:

[Script error excerpt]

Ouch, that's definitely not what we want. But fear not, there's a solution! Instead of using fs.readFile, we can leverage Node.js's native readline library to tackle larger files.

Here's how it works:

[Script excerpt: readline.createInterface]

With this approach, we create a readline interface for our large file, setting the input as the file stream and the output as the standard output. Then, we can use the line event to process each line of the file, and the close event to know when the entire file has been read.

By processing the file line by line, you can perform various operations like parsing the lines into JSON or incrementing a counter. And once the file has been completely read, you can display the final results.

And there you have it! Using this Node.js readline approach, you can now process massive datasets without running into buffer errors. If you want to dive deeper, I highly recommend checking out the official documentation for the Node.js Readline API.

I hope you found this solution helpful. Remember, continuous improvement is all about finding better ways to solve problems and streamline our processes.

Thank you for tuning in to Continuous Improvement. I'm Victor, your host, and I'll see you in the next episode. Keep improving!