Who We Are

We are Optimum BH, a cutting-edge software development agency specializing in full-stack development, with a focus on web and mobile applications built on the PETAL stack.

What We Do

At Optimum BH, we are dedicated to pushing the boundaries of software development, delivering solutions that empower businesses to thrive in the digital landscape.

Web app development

We create dynamic and user-friendly web applications tailored to meet your specific needs and objectives.

Mobile app development

We design and develop mobile applications that captivate users, delivering an unparalleled experience across iOS and Android platforms.

Maintenance and support

Our commitment doesn't end with deployment. We provide ongoing maintenance and support to ensure your applications remain up-to-date, secure, and optimized for peak performance.

Blog Articles


Optimum Elixir CI with GitHub Actions

Here’s yet another “ultimate Elixir CI” blog post. We haven’t had one for quite some time. On a more serious note, though, we have some unique ideas, so keep reading and I’m sure you’ll find some inspiration for your development workflows.

When a post like this comes out, I check it out to see if I can learn about a new tool to use in pursuit of higher code quality, but the thing I get most excited about is reducing the time it takes to get that CI checkmark for my or someone else’s PR. Unfortunately, most of the time I realize it’s a twist on an older approach with everything else pretty much the same. I have yet to see one offering a different caching solution. Usually, it’s the same approach presented in the GitHub Actions docs. I saw some downsides in those workflows, which I’ll explain below, but ours won’t be spared criticism either. As with anything, the goal is to find the balance, and as our name suggests, we strive to create optimum solutions, so here’s one on us.

A quick reminder: even though we use GitHub Actions here, the principles are applicable to other CI systems as well.

But first, what’s a CI?

This article is about a software development practice. For other uses, see Informant. (🙄 I rewatched The Wire recently)

During the development of new features, there comes a time when the developer submits the code for review. At one stage of the code review process, project maintainers want to make sure that the newly written code doesn’t introduce any regressions to existing features - that is, unless they blindly trust the phrase “it works on my machine”. Then, if they are satisfied with the code quality, they can merge the pull request and potentially proceed with a release process if there is a continuous delivery (CD) system in place.

A CI (continuous integration) system automates the testing process, enabling everyone involved to see which commit introduced a regression early in the workflow, before the reviewer even starts the review. It frees the project maintainers from having to run the tests (either manually or using an automated system) on their machine, conserving their energy to focus on other aspects of code quality and the business domain. Machines are better at those boring, repetitive tasks anyway. Let them have it, so they don’t start the uprising.

Crosses and checkmarks show whether the CI passed for a particular commit

Now, if you don’t write and run tests in your Elixir applications, you probably have bigger issues to worry about, so make sure to handle that before going further.

Old approach

If you’re just starting to build your CI, you might not be interested in this part and can jump straight to the New approach.

The old approach consists of having all the checks as steps of a single job in a GitHub Actions workflow. That means the commands for the code checks run one after the other. For example, you might run the formatter, then dialyzer, and finally the tests.
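For illustration, here’s a minimal sketch of such a single-job workflow, expressed in the same Elixir keyword-list format we use for the examples later in this post (the concrete step list is hypothetical, not taken from our projects):

```elixir
# Hypothetical "old approach" workflow: one job, with all checks running
# sequentially and sharing a single compiled _build directory.
defp old_pr_workflow do
  [
    [
      name: "PR",
      on: [pull_request: [branches: ["main"]]],
      jobs: [
        ci: [
          name: "All checks",
          steps: [
            [uses: "actions/checkout@v4"],
            [name: "Install deps", env: [MIX_ENV: "test"], run: "mix deps.get"],
            [name: "Check format", env: [MIX_ENV: "test"], run: "mix format --check-formatted"],
            [name: "Dialyzer", env: [MIX_ENV: "test"], run: "mix dialyzer"],
            [name: "Tests", env: [MIX_ENV: "test"], run: "mix test"]
          ]
        ]
      ]
    ]
  ]
end
```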
The good thing about this approach is that the code gets compiled once and then used by each of these steps. You have to make sure, though, that the commands run in the test environment, either by prefixing each command with MIX_ENV=test or by setting the :preferred_cli_env option, so that compilation happens in only one environment; otherwise you’d unnecessarily compile in both dev and test.

The bad thing is that if one of the commands fails, you don’t yet know at that moment whether the subsequent commands will fail too. So, you might fix the formatting and push the code only to find out minutes later that the tests failed as well. Then you have to fix them and repeat the process.

The other bad thing is the caching of dependencies. To understand why, you need to know how caching works in GitHub Actions. You can learn about that in the official documentation, but here’s the gist of it.

When setting up caching, you provide a key to be used for saving and restoring it. Once saved, a cache with the specified key cannot be updated. It gets purged after 7 days if it’s not used, or if the total cache size goes above the limit, but you shouldn’t rely on that. The key doesn’t have to match exactly, though: you have the option of using multiple restore keys for partial matching.

Here’s an example from the documentation:

```yaml
- name: Cache node modules
  uses: actions/cache@v3
  with:
    path: ~/.npm
    key: npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-
```

The thing is, that might work for the JS community, where each run of the npm install command causes the lock file to change, resulting in frequent cache updates.

More importantly, with Elixir we don’t only want to cache dependencies (the deps directory) but also the compiled code (_build). When our application code changes, the cache key isn’t updated, meaning that, as time goes by, there will be more and more changed files that need to be compiled, making the CI slower and slower. For an active repo, the cache will never get purged, so the only way to reduce the number of files to be compiled each time is to update the lock file or manually change the cache key, neither of which is ideal. Theoretically, the cache might never be refreshed, but in practice you would probably update dependencies every few months. Still, you’d have to unnecessarily wait for all the files that changed since the cache was created to (re)compile.

The issue is multiplied if you extract each command into its own job to enable running them in parallel, but without improving the caching strategy. That will cause each job to compile all the files in the app that changed since the cache was created, which for big codebases can be too much, unnecessarily increasing the cost of CI. Not only that, it’s hard to maintain those workflows, because GitHub Actions doesn’t have a good mechanism for reusing jobs and steps. You can learn how to deal with that in our Maintaining GitHub Actions workflows post.

It’s important to mention that workflows can only access a cache that was created in the same branch or in the parent branch. So, if you have a PR open and update the cache there, don’t expect other PRs to be able to access that cache until that one gets merged. And even then, if you don’t create a cache in the main branch, it won’t be available to other branches. So even if you don’t want to run code checks on the main branch, you should at least cache the dependencies as part of its CI workflow. I’ve seen examples of CIs that didn’t cache dependencies on the main branch, which means the cache didn’t exist when a PR was first created - only when it was synced.
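For illustration, a minimal main-branch workflow that does nothing but warm the cache could look like this - a sketch in the Elixir format used later in this post, reusing the compile job definition shown in the New approach section below:

```elixir
# Hypothetical main-branch workflow that only installs dependencies and
# compiles, so new PR branches always have a cache to restore from.
defp main_workflow do
  [
    [
      name: "Main",
      on: [push: [branches: ["main"]]],
      # compile_job/0 is the job defined in the New approach section.
      jobs: [compile: compile_job()]
    ]
  ]
end
```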
Another example of an inadequate setup is not using restore keys and matching only on the complete cache key. That forces the whole app, including all the dependencies, to be recompiled every time the lock file changes.

Workflow running the old way

Run and billable time of the old approach

New approach

I won’t go too deep into explaining what we do. One look is worth a thousand words.

Workflow running the new way

The work is parallelized, so the time spent waiting for the CI is shortened. Compilation is done only once, in a single job, and then cached for use by all the other jobs. Jobs that don’t depend on the cache run independently. Every job runs in the test environment to prevent triggering unnecessary compilation. And it’s possible to see from the list of commits which check has failed.

Checks running separately

Those were the benefits. Now let’s talk about the detriments of this approach:

It uses more cache. There’s a 10 GB limit in GitHub Actions, and old caches are automatically evicted, so that doesn’t worry me much. Issues could arise from using the cache instead of running a fresh build in CI. The old approach is susceptible to this as well, but I guess this one is more so, because it provides better caching 😁 What we could do to improve this is to disable using the cache on retries, or we could manually delete the cache from the GitHub Actions UI. We haven’t needed either of those yet.

Cache management under the Actions tab

It’s more expensive. The workflow running this way uses more runner minutes. You’d expect it’s because of the containers being set up, but GitHub doesn’t bill us for the time it takes to set up their environment. Thanks, GitHub! They get us the other way, though: when rounding minutes, they round up, and that’s what makes all the difference. Even if a job finishes in 10 seconds, it’s billed as a whole minute, so if you have 10 jobs that each run in 10 to 30 seconds, you’ll be billed 10 minutes, even though the whole workflow might have completed as a single job running under 5 minutes. You can see that most of our jobs run for less than half a minute, but we get billed for the whole minute. In our projects we still stay under the quota, so it wasn’t a concern for us, but it’s something to be aware of. If you use a macOS runner and/or have a pretty active codebase, you will notice the greater cost.

Run and billable time of the new approach

Now that we have cleared that up, let’s see some code.

We solved the caching part by using the git commit hash as the key and using a restore key that enables restoring the most recent cache, while still creating a new one every time the workflow runs:

```elixir
[
  uses: "actions/cache@v3",
  with: [
    key: "mix-${{ github.sha }}",
    path: ~S"""
    _build
    deps
    """,
    "restore-keys": ~S"""
    mix-
    """
  ]
]
```

You can verify this by looking at the logs.
For the caching step, the logs show something like this:

```
Cache restored successfully
Cache restored from key: mix-4c9ce406f9b55bdfa535dac34c1a9dbb347dd803
```

but the post-job cache step still shows this:

```
Cache saved successfully
Cache saved with key: mix-83cb8d66280ccf99207c202da7c6f51dfc43fa38
```

Our solution for parallelizing the jobs is harder to show:

```elixir
defp pr_workflow do
  [
    [
      name: "PR",
      on: [
        pull_request: [
          branches: ["main"]
        ]
      ],
      jobs: [
        compile: compile_job(),
        credo: credo_job(),
        deps_audit: deps_audit_job(),
        dialyzer: dialyzer_job(),
        format: format_job(),
        hex_audit: hex_audit_job(),
        migrations: migrations_job(),
        prettier: prettier_job(),
        sobelow: sobelow_job(),
        test: test_job(),
        unused_deps: unused_deps_job()
      ]
    ]
  ]
end

defp compile_job do
  elixir_job("Install deps and compile",
    steps: [
      [
        name: "Install Elixir dependencies",
        env: [MIX_ENV: "test"],
        run: "mix deps.get"
      ],
      [
        name: "Compile",
        env: [MIX_ENV: "test"],
        run: "mix compile"
      ]
    ]
  )
end

defp credo_job do
  elixir_job("Credo",
    needs: :compile,
    steps: [
      [
        name: "Check code style",
        env: [MIX_ENV: "test"],
        run: "mix credo --strict"
      ]
    ]
  )
end
```

Another benefit of splitting the workflow into multiple jobs is that the cache is still written even if some of the checks fail. Before, everything would have to be recompiled (and PLT files for dialyzer recreated - I know, I know, I’ve been there) every time the workflow ran after a failure. It could also be solved another way: by saving the cache immediately after compiling the code and then running the checks in the same job. Just saying.
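A minimal sketch of that alternative, using the standalone save sub-action of actions/cache so the cache is written before any check runs - the step list here is illustrative, not taken from our workflows:

```elixir
# Hypothetical single job that saves the cache right after compiling,
# so a failing check later in the job doesn't throw away the build.
defp compile_and_check_job do
  [
    name: "Compile and check",
    steps: [
      [uses: "actions/checkout@v4"],
      [name: "Install Elixir dependencies", env: [MIX_ENV: "test"], run: "mix deps.get"],
      [name: "Compile", env: [MIX_ENV: "test"], run: "mix compile"],
      [
        # Saves immediately, unlike the regular actions/cache post-job step.
        uses: "actions/cache/save@v3",
        with: [
          key: "mix-${{ github.sha }}",
          path: ~S"""
          _build
          deps
          """
        ]
      ],
      [name: "Run tests", env: [MIX_ENV: "test"], run: "mix test"]
    ]
  ]
end
```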
But hold on a minute. Are we writing our GitHub Actions workflows in Elixir?! That can’t be right… It’s not magic; it’s a script we wrote to maintain GitHub Actions more easily.

A full example of a complex workflow we made for our phx.tools project is available here: https://github.com/optimumBA/phx.tools/blob/main/.github/github_workflows.ex, and here you can see it in action(s): https://github.com/optimumBA/phx.tools/actions.

Running the checks locally

We don’t rely only on GitHub Actions for the code checks. Usually, just before committing the code, we run the checks locally. That way we find errors more quickly and don’t unnecessarily waste our GitHub Actions minutes.

To execute them all one after the other, we run a convenient mix ci command. It’s an alias we add to our apps that locally runs the same commands that run in GitHub Actions:

```elixir
defp aliases do
  [
    ...
    ci: [
      "deps.unlock --check-unused",
      "deps.audit",
      "hex.audit",
      "sobelow --config .sobelow-conf",
      "format --check-formatted",
      "cmd --cd assets npx prettier -c ..",
      "credo --strict",
      "dialyzer",
      "ecto.create --quiet",
      "ecto.migrate --quiet",
      "test --cover --warnings-as-errors"
    ]
    ...
  ]
end
```

When one of these commands fails, we run it again in isolation and try fixing the issue, rerunning the command until it passes. Then we run mix ci again until every command passes.

To run each of these commands without having to prefix them with MIX_ENV=test, you can pass the :preferred_cli_env option to project/0:

```elixir
def project do
  [
    ...
    preferred_cli_env: [
      ci: :test,
      coveralls: :test,
      "coveralls.detail": :test,
      "coveralls.html": :test,
      credo: :test,
      dialyzer: :test,
      sobelow: :test
    ],
    ...
  ]
end
```

Again, the reason I run these commands in the test environment is that the app is already compiled in that environment; if I ran them in the dev environment, compilation would start again. Locally it doesn’t matter much, but in GitHub Actions, as you’d expect, it makes a huge difference.

Usually, in our projects we also like to check whether all migrations can be rolled back. To achieve that, we run mix ecto.rollback --all --quiet after the commands above. Unfortunately, it doesn’t work if it’s added to the end of the alias list, because when that command runs the app is still connected to the DB, causing it to fail. Don’t worry, there’s a tool that can help us, and it’s available on any Unix system. Yes, it’s Make. Create a Makefile in the root of your project with the following content:

```makefile
ci:
	mix ci
	MIX_ENV=test mix ecto.rollback --all --quiet
```

and run make ci. We could put all the commands there instead of creating a mix alias, something like:

```makefile
ci.slow:
	mix deps.unlock --check-unused
	mix deps.audit
	mix hex.audit
	mix sobelow --config .sobelow-conf
	mix format --check-formatted
	mix cmd --cd assets npx prettier -c ..
	mix credo --strict
	mix dialyzer
	MIX_ENV=test mix ecto.create --quiet
	MIX_ENV=test mix ecto.migrate --quiet
	MIX_ENV=test mix test --cover --warnings-as-errors
	MIX_ENV=test mix ecto.rollback --all --quiet
```

but I prefer the mix alias because it runs more quickly.

See for yourself:

```
$ time make ci
…
make ci  18.88s user 3.39s system 177% cpu 12.509 total

$ time make ci.slow
make ci.slow  22.08s user 4.92s system 157% cpu 17.180 total
```

Almost 5 seconds’ difference. I suspect it’s because the app is booted only once, unlike with make ci.slow, where each mix ... command boots the app again. Now it also makes sense why the rollback step didn’t work as part of the ci alias.

Need help?

You’re probably reading this because you’re just starting to build your CI pipeline, or maybe you’re looking for ways to make an existing one better. In any case, we’re confident we can find ways to improve your overall development experience.

We’ve built more complex pipelines for our clients and for our internal projects. These include creating additional resources during preview app setup, running production Docker container builds as a CI step, using self-hosted runners, etc. We can create a custom solution suited to your needs.

Whether you’re just testing out your idea with a proof-of-concept (PoC), building a minimum viable product (MVP), or want us to extend and refactor an app that’s already serving your customers, I’m sure we can help you out. You can reach us at projects@optimum.ba.

This was a post from our Elixir DevOps series.
Almir Sarajčić

Elixir DevOps series

Everyone who’s dipped their toes into Elixir knows its ecosystem has best-in-class documentation, and the learning resources for beginners are vast. There are so many blog posts about the various domains Elixir is used in, including, but not limited to, machine learning, embedded systems, and web applications.

One area we felt didn’t receive much love is DevOps - specifically, complex continuous integration (CI) and continuous delivery (CD) systems. So here we are, coming up with a remedy. We’re trying to shine some light on some non-trivial problems, sharing knowledge with the community we learned a lot from while simultaneously increasing the visibility of our company in the field.

Blog posts

So here they are, in the order they were published - not necessarily the order they should be read in:

Maintaining GitHub Actions workflows
Optimum Elixir CI with GitHub Actions

We’re preparing some new posts, so be on the lookout for them.

Is there a topic you’d like us to cover in this series? Feel free to reach out at blog@optimum.ba.
Almir Sarajčić

Maintaining GitHub Actions workflows

Whenever we have a choice of CI system to use on a project, we pick GitHub Actions, mainly for convenience. Our code is already hosted on GitHub, and it doesn’t make sense to introduce other tools unnecessarily, so some time ago we started using GitHub Actions as our CI provider.

Over the years we’ve been constantly improving our development workflows, thereby adding more complexity to our CI. There were many steps in our pipeline for various code checks, Elixir tests, etc., each increasing the time we needed to wait to make sure our code was good to go. So we’d wait 5 to 10 minutes just to find out the code wasn’t formatted properly, there was some compiler warning, or something as trivial as that. We knew there were better ways to set up our CI, but we felt that separating the workflow into separate jobs would make for harder-to-maintain code, because GitHub Actions does not support the full YAML syntax.

I came to Elixir from the Ruby on Rails community, where YAML is the default for any kind of configuration, so I was excited to see GitHub Actions using YAML for workflow definitions. I quickly came to realize it’s not the same YAML I was used to (you’ve changed, bro). Specifically, I couldn’t use anchors, which provide the ability to write reusable code in .yml files.

Script

Our way of working around this is writing workflow definitions in Elixir and translating them to YAML, letting us benefit from the beautiful Elixir syntax for sharing variables, steps, and jobs between workflows while still, as a result, having workflow files in the YAML format GitHub Actions supports.

To convert the workflow definitions from Elixir to YAML, we wrote a CLI script that uses the fast_yaml library, with a small amount of code wrapping it up in an easy-to-use package. We used this script internally for years, but now we’ve decided to share it with the community.

I’ve had some trouble distributing the script. Usually, we’d execute an .exs script to convert the workflows, so I wanted to build an escript, but the fast_yaml library contains NIFs that don’t work with it. I liked the way Phoenix is installed, so I tried adopting that approach: creating a mix project containing a task, then archiving the project into a .ez file, only to find out that when it gets installed, it doesn’t contain any dependencies. This can be alleviated using burrito or bakeware, but they introduce more complexity, and I didn’t like the way error messages were displayed in the terminal, so I ended up with a hex package that’s added to an Elixir project in the usual way. Ultimately, I didn’t plan to use the script outside of Elixir projects, so that’s a compromise I was willing to make. If at a later point I feel the need, I’ll distribute it some other way, which will deserve another blog post.

Usage

Anyway, here’s how you can use this mix task.
Add the github_workflows_generator package as a dependency in your mix.exs file:

```elixir
defp deps do
  [
    {:github_workflows_generator, "~> 0.1"}
  ]
end
```

You most likely don’t want to use it at runtime or in environments other than dev, so you might find this more appropriate:

```elixir
defp deps do
  [
    {:github_workflows_generator, "~> 0.1", only: :dev, runtime: false}
  ]
end
```

That will let you execute the

```
mix github_workflows.generate
```

command, which, given a .github/github_workflows.ex file like this one:

```elixir
defmodule GithubWorkflows do
  def get do
    %{
      "main.yml" => main_workflow(),
      "pr.yml" => pr_workflow()
    }
  end

  defp main_workflow do
    [
      [
        name: "Main",
        on: [
          push: [
            branches: ["main"]
          ]
        ],
        jobs: [
          test: test_job(),
          deploy: [
            name: "Deploy",
            needs: :test,
            steps: [
              checkout_step(),
              [
                name: "Deploy",
                run: "make deploy"
              ]
            ]
          ]
        ]
      ]
    ]
  end

  defp pr_workflow do
    [
      [
        name: "PR",
        on: [
          pull_request: [
            branches: ["main"]
          ]
        ],
        jobs: [
          test: test_job()
        ]
      ]
    ]
  end

  defp test_job do
    [
      name: "Test",
      steps: [
        checkout_step(),
        [
          name: "Run tests",
          run: "make test"
        ]
      ]
    ]
  end

  defp checkout_step do
    [
      name: "Checkout",
      uses: "actions/checkout@v4"
    ]
  end
end
```

creates multiple files in the .github/workflows directory.

main.yml:

```yaml
name: Main
on:
  push:
    branches:
      - main
jobs:
  test:
    name: Test
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run tests
        run: make test
  deploy:
    name: Deploy
    needs: test
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Deploy
        run: make deploy
```

pr.yml:

```yaml
name: PR
on:
  pull_request:
    branches:
      - main
jobs:
  test:
    name: Test
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run tests
        run: make test
```

The path to the source file and the output directory can be customized. To see the available options, run mix help github_workflows.generate.

You might also want to read the documentation or check out the source code.

The generator’s repo contains its own CI workflows (something something-ception) that show how useful the command is in complex scenarios:

```elixir
defmodule GithubWorkflows do
  @moduledoc false

  def get do
    %{
      "ci.yml" => ci_workflow()
    }
  end

  defp ci_workflow do
    [
      [
        name: "CI",
        on: [
          pull_request: [],
          push: [
            branches: ["main"]
          ]
        ],
        jobs: [
          compile: compile_job(),
          credo: credo_job(),
          deps_audit: deps_audit_job(),
          dialyzer: dialyzer_job(),
          format: format_job(),
          hex_audit: hex_audit_job(),
          prettier: prettier_job(),
          test: test_job(),
          unused_deps: unused_deps_job()
        ]
      ]
    ]
  end

  defp compile_job do
    elixir_job("Install deps and compile",
      steps: [
        [
          name: "Install Elixir dependencies",
          env: [MIX_ENV: "test"],
          run: "mix deps.get"
        ],
        [
          name: "Compile",
          env: [MIX_ENV: "test"],
          run: "mix compile"
        ]
      ]
    )
  end

  defp credo_job do
    elixir_job("Credo",
      needs: :compile,
      steps: [
        [
          name: "Check code style",
          env: [MIX_ENV: "test"],
          run: "mix credo --strict"
        ]
      ]
    )
  end

  # Removed for brevity
  # ...

  defp elixir_job(name, opts) do
    needs = Keyword.get(opts, :needs)
    steps = Keyword.get(opts, :steps, [])

    job = [
      name: name,
      "runs-on": "${{ matrix.versions.runner-image }}",
      strategy: [
        "fail-fast": false,
        matrix: [
          versions: [
            %{
              elixir: "1.11",
              otp: "21.3",
              "runner-image": "ubuntu-20.04"
            },
            %{
              elixir: "1.16",
              otp: "26.2",
              "runner-image": "ubuntu-latest"
            }
          ]
        ]
      ],
      steps:
        [
          checkout_step(),
          [
            name: "Set up Elixir",
            uses: "erlef/setup-beam@v1",
            with: [
              "elixir-version": "${{ matrix.versions.elixir }}",
              "otp-version": "${{ matrix.versions.otp }}"
            ]
          ],
          [
            uses: "actions/cache@v3",
            with:
              [
                path: ~S"""
                _build
                deps
                """
              ] ++ cache_opts(prefix: "mix-${{ matrix.versions.runner-image }}")
          ]
        ] ++ steps
    ]

    if needs do
      Keyword.put(job, :needs, needs)
    else
      job
    end
  end

  # Removed for brevity
  # ...
end
```

That creates a YAML file I wouldn’t want to look at, much less maintain, but it enables us to have this CI pipeline, with jobs running in parallel:

CI pipeline with jobs running in parallel

Our phx.tools project has an even better example, with 3 different workflows:

Workflow executed on push to the main branch

Workflow executed when a PR gets created and synchronized

Cleanup workflow when a PR gets merged or closed

Let’s step back to see how the script works.

The only rule we enforce is that the source file must contain a GithubWorkflows module with a get/0 function that returns a map of workflows, in which keys are filenames and values are workflow definitions:

```elixir
defmodule GithubWorkflows do
  def get do
    %{
      "ci.yml" => [
        [
          name: "Main",
          on: [
            push: []
          ],
          jobs: []
        ]
      ]
    }
  end
end
```

Everything else is up to you.

When you look at the generated .yml files, they might not look exactly the same as if you had written them by hand.

For example, if you were to add caching for Elixir dependencies as in the actions/cache code samples, you’d want to have this YAML code:

```yaml
- uses: actions/cache@v3
  with:
    path: |
      deps
      _build
    key: ${{ runner.os }}-mix-${{ hashFiles('**/mix.lock') }}
    restore-keys: |
      ${{ runner.os }}-mix-
```

with the two paths passed without quotes.

I haven’t found a way to tell the YAML encoder to format it that way, so my workaround is to use a sigil that preserves the newline, so that

```elixir
[
  uses: "actions/cache@v3",
  with: [
    path: ~S"""
    _build
    deps
    """,
    key: "${{ runner.os }}-mix-${{ hashFiles('**/mix.lock') }}",
    "restore-keys": ~S"""
    ${{ runner.os }}-mix-
    """
  ]
]
```

gets converted to

```yaml
- uses: actions/cache@v3
  with:
    path: "_build\ndeps"
    key: ${{ runner.os }}-mix-${{ hashFiles('**/mix.lock') }}
    restore-keys: ${{ runner.os }}-mix-
```
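You can check what the sigil produces by evaluating it yourself; this little snippet only demonstrates the escaping and is not part of the generator:

```elixir
# ~S""" keeps the literal newline between the two paths, so the YAML
# encoder receives one multi-line string value instead of a list.
paths = ~S"""
_build
deps
"""

IO.inspect(paths)
# => "_build\ndeps\n"
```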
Elixir DevOps series

In our workflows, you may notice some new ideas not seen elsewhere, so be sure to look out for more posts on our blog in a new series where we’ll unpack our unique DevOps practices. If you have any questions, you can contact us at blog@optimum.ba and we’ll try to answer them in our future posts.

If our approach to software development resonates with you and you're ready to kickstart your project, drop us an email at projects@optimum.ba. Share your project requirements and budget, and we'll promptly conduct a review. We'll then schedule a call to dive deeper into your needs. Let's bring your vision to life!

This was the first post in our Elixir DevOps series.
Almir Sarajčić

How to Automate Creating and Destroying Pull Request Review Phoenix Applications on Fly.io

This guide explains how to automate the process of creating and destroying Phoenix applications for pull request reviews on Fly.io.

Introduction

As developers, we understand the importance of code review in ensuring the quality of our code. However, when we create new pull requests, reviewers sometimes need to run the app locally to see the changes. This makes it impossible for non-developers to review the work.

One solution is to have each developer manually create an app on Fly.io for each pull request they make. However, this process takes time, and developers often forget to remove the apps when they finish working on the pull request.

Fly.io is a platform that allows developers to easily create and destroy applications, and there is a GitHub action that makes it easy to automate review apps for each pull request. It can be found here: https://github.com/superfly/fly-pr-review-apps. In this post, we are going to learn how to automate this process. While you can use the action as-is, we forked it and made some improvements, which I will discuss in this post.

Optimum BH’s GitHub Action

To better suit our use case, we made several improvements in our fork of Fly.io’s GitHub action:

- We now create databases and volumes only for apps that require them.
- When an app is destroyed, we also destroy any associated resources (databases and volumes).
- We can import any runtime secrets that our Phoenix app requires by using the secrets keyword. Read along to learn how to do this.

To determine whether an app requires a database, we wrote a script that searches for the migrate script in the app’s source code. This script is typically found in the rel/overlays/bin directory. If the migrate script is found, the action will create and attach a database to the app. The APP and APP_DB variables, which represent the app’s name and database name respectively, are declared elsewhere in the GitHub action’s source code.

```bash
if [ -e "rel/overlays/bin/migrate" ]; then
  # only create db if the app launched successfully
  if flyctl status --app "$APP"; then
    if flyctl status --app "$APP_DB"; then
      echo "$APP_DB DB already exists"
    else
      flyctl postgres create --name "$APP_DB" --org "$ORG" --region "$REGION" --vm-size shared-cpu-1x --initial-cluster-size 4 --volume-size 1
    fi

    # attach db to the app if it was created successfully
    if flyctl postgres attach "$APP_DB" --app "$APP" -y; then
      echo "$APP_DB DB attached"
    else
      echo "Error attaching $APP_DB to $APP, attachments exist"
    fi
  fi
fi
```
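For context, the migrate script checked above is the one a standard Phoenix release setup (mix phx.gen.release) generates; it boots the release and calls a small migration module. A minimal sketch of such a module, with MyApp and :my_app as placeholder names, looks roughly like this:

```elixir
# Sketch of the release-migration module that a rel/overlays/bin/migrate
# script typically invokes via `bin/my_app eval "MyApp.Release.migrate()"`.
defmodule MyApp.Release do
  @app :my_app

  def migrate do
    load_app()

    for repo <- repos() do
      # Run all pending Ecto migrations for each configured repo.
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end
```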
To determine whether the app requires volumes, the script below looks for [mounts] in the config file:

```bash
if grep -q "\[mounts\]" fly.toml; then
  # create volume only if none exists
  if ! flyctl volumes list --app "$APP" | grep -oh "\w*vol_\w*"; then
    flyctl volumes create "$VOLUME" --app "$APP" --region "$REGION" --size 1 -y
  fi

  # modify config file to have the volume name specified above.
  sed -i -e 's/source =.*/source = '\"$VOLUME\"'/' "$CONFIG"
fi
```

First, we need to check whether the app already has a volume. If we do not perform this check, multiple volumes will be created. While this is not necessarily problematic, it is wasteful. We also need to modify the config file to include the new volume name. If we neglect this step, the deployment will fail.

Automating the Creation of Review Applications

Now, here’s an example of how we can set up a workflow to automatically create a review application for each pull request. For the workflow to work, you need to put FLY_API_TOKEN, generated by running flyctl auth token, under your GitHub repository or organization secrets.

```yaml
name: Review App
on:
  pull_request:
    types: [opened, reopened, synchronize]

env:
  FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
  FLY_ORG: Personal
  FLY_REGION: jn
  REPO_NAME: sample-app

jobs:
  deploy_review_app:
    name: Create & Deploy Review App
    runs-on: ubuntu-latest
    # Only run one deployment at a time per PR.
    concurrency:
      group: pr-${{ github.event.number }}
    # Create a GitHub deployment environment per review app
    # so it shows up in the pull request UI.
    environment:
      name: pr-${{ github.event.number }}
      url: https://pr-${{ github.event.number }}.${{ env.REPO_NAME }}.fly.dev
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Create & Deploy Review App
        id: deploy
        uses: optimumBA/fly-preview-apps@main
        with:
          name: pr-${{ github.event.number }}-${{ env.REPO_NAME }}
```

You can import any secrets that your Phoenix application requires to run. After adding the secrets to your application’s GitHub repository, you can access them in your workflow using the secrets keyword:

```yaml
- name: Create & Deploy Review App
  id: deploy
  uses: optimumBA/fly-preview-apps@main
  with:
    name: pr-${{ github.event.number }}
    secrets: 'SECRET_1=${{ secrets.YOUR_SECRET_1 }} SECRET_2=${{ secrets.SECRET_2 }}\nSECRET_n=${{ secrets.SECRET_n }}'
```

For every successful deployment, GitHub Actions will set environment and deployment variables pointing to the name and URL of the deployed review application. You can find them under the Environments tab in your application’s GitHub repository.

Automating the Destruction of Review Applications

After the pull request has been merged or closed, the review application is no longer needed. Here is an example of a workflow that automatically destroys the review application:

```yaml
name: Delete Review App
on:
  pull_request:
    types:
      - closed

env:
  FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
  REPO_NAME: sample-app

jobs:
  delete_review_app:
    runs-on: ubuntu-latest
    name: Delete Review App
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Delete Review Deployment
        uses: optimumBA/fly-preview-apps@main
        with:
          name: pr-${{ github.event.number }}-${{ env.REPO_NAME }}
```

Bonus

For every deployment, the workflow also creates environments on GitHub. These environments display the name and live link of the deployed review app. It is important to note that the workflow we created to delete the deployments once the pull request is closed only destroys resources on Fly.io; it does not remove the GitHub environments.

To remove these GitHub environments, we will extend our workflow using third-party GitHub actions. These actions require an auth token in order to delete the environments. The available GitHub token (available as `secrets.GITHUB_TOKEN` in the workflow) does not have enough permissions to delete GitHub environments.
To proceed, we need to create a GitHub app and grant it the following permissions under repository permissions:

- Actions: Read
- Administration: Read & Write
- Deployments: Read & Write
- Environments: Read & Write
- Metadata: Read

Read more about these permissions at https://docs.github.com/en/rest/overview/permissions-required-for-github-apps?apiVersion=2022-11-28, and about the steps to create a GitHub app at https://docs.github.com/en/apps/creating-github-apps/creating-github-apps/creating-a-github-app.

Please refer to the respective documentation of these GitHub actions for more information on additional setup steps:

https://github.com/navikt/github-app-token-generator
https://github.com/strumwolf/delete-deployment-environment

Here is a complete workflow that deletes both deployments and GitHub environments:

```yaml
name: Delete Review App
on:
  pull_request:
    types:
      - closed

env:
  FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
  REPO_NAME: sample-app

jobs:
  delete_review_app:
    runs-on: ubuntu-latest
    name: Delete Review App
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Delete Review Deployment
        uses: optimumBA/fly-preview-apps@main
        with:
          name: pr-${{ github.event.number }}-${{ env.REPO_NAME }}
      - name: Get Token
        uses: navikt/github-app-token-generator@v1.1.1
        id: get-token
        with:
          app-id: ${{ secrets.GH_APP_ID }}
          private-key: ${{ secrets.GH_APP_PRIVATE_KEY }}
      - name: Delete GitHub Environments
        uses: strumwolf/delete-deployment-environment@v2.2.3
        with:
          token: ${{ steps.get-token.outputs.token }}
          environment: pr-${{ github.event.number }}
          ref: ${{ github.head_ref }}
```

Conclusion

Using pull request review applications can greatly improve the efficiency of the code review process. By automating the creation and destruction of these applications, we save time and ensure that our code is thoroughly reviewed before it is merged into our codebase. This approach also saves resources. With the help of GitHub Actions and Fly.io, automating this process is easy and straightforward.
Amos Kibet

phx.tools: Complete Development Environment for Elixir and Phoenix

Elixir is a powerful functional programming language that has been attracting the attention of developers from different backgrounds since its release. Many of its new users already have experience with tools such as Homebrew and asdf, which makes the installation process smoother. However, setting up the development environment for Phoenix applications can still be a challenge, especially for new developers.

Over the past few years, the Elixir ecosystem has become more approachable to new developers. The learning curve is flattening every year with the introduction of tools like Livebook. It’s a great way to start with Elixir, as the installation is straightforward. What’s still missing is a complete setup for the development of Phoenix apps.

At Optimum BH, we’ve seen the potential of the Phoenix and Elixir stack and have been working with it for some time now. Our team has had several interns who were new to both Elixir and programming, and we’ve noticed that the process of setting up the development environment can be demotivating for these newcomers.

The Ruby on Rails community has rails.new: a complete development environment containing everything you need to start a new Rails application. We believe the Phoenix and Elixir ecosystem can benefit from something similar.

So, let me introduce you to phx.tools. It’s a shell script for Linux and macOS (sorry, Windows users) that configures the development environment for you in a few easy steps. Once you finish running the script, you’ll be able to start the database server, create a new Phoenix application, and launch the server.

To get started, visit phx.tools and follow the instructions for your platform. Happy coding!
Almir Sarajčić

Portfolio

  • Phx.tools

    Powerful shell script designed for Linux and macOS that simplifies the process of setting up a development environment for Phoenix applications using the Elixir programming language. It configures the environment in just a few easy steps, allowing users to start the database server, create a new Phoenix application, and launch the server seamlessly. The script is particularly useful for new developers who may find the setup process challenging. With Phx.tools, the Elixir ecosystem becomes more approachable and accessible, allowing developers to unlock the full potential of the Phoenix and Elixir stack.

  • Prati.ba

    Bosnian news aggregator website that collects and curates news articles from various sources, including local news outlets and international media. The website provides news on a variety of topics, including politics, sports, business, culture, and entertainment, among others.

  • StoryDeck

    StoryDeck is a cloud-based video production tool that offers a range of features for content creators. It allows users to store and archive all their content in one location, track tasks and collaborate easily with team members, and use a multi-use text editor to manage multiple contributors. The platform also offers a timecode video review feature, allowing users to provide precise feedback on video files and a publishing tool with SEO optimization capabilities for traffic-driving content.
