I have been using Github workflows for a while now, and there is just so much to learn about them. A few things I learned the hard way, either due to a lack of documentation or because they were unobvious, so I decided to compile them into this post and present them in a way that is easy to understand. My style of writing is to defer some of the details to the inline comments within the code, so be sure to go through those as well. You can find the full code examples here.
1 Workflow Dispatch with Inputs
The workflow_dispatch event lets you trigger a workflow manually. It has applications like running a deployment or release workflow on demand, and it's also a great way to test workflows while developing them. What is perhaps not well-known is that you can specify several types of inputs, such as dropdowns, checkboxes, and text fields. I found this very useful when I wanted to deploy to different environments. The example below demonstrates several input types:
name: Workflow Dispatch with Inputs

on:
  push:
    branches: [main]
  workflow_dispatch:
    inputs:
      # Arguments
      artifact_name:
        type: string # Text field
        description: 'Artifact identifier'
        required: true
      data_source:
        type: choice # Dropdown selection
        description: 'Select data source'
        default: 'sample'
        options:
          - sample
          - test
          - production
      send_notification:
        type: boolean # Checkbox
        description: 'Send email notification'
        default: true
In this example, the workflow is triggered on every push to the main branch, but it can also be triggered manually with the workflow_dispatch event. To distinguish between the two within the code logic, we can add a configuration step that uses the github.event_name context variable in a conditional expression like:
jobs:
  configure:
    runs-on: ubuntu-latest
    steps:
      - name: Resolve Inputs
        id: configure
        run: |
          data_source="${{ github.event_name == 'push' && 'production' || inputs.data_source }}"
          # ... other logic ...
Here, if the workflow is triggered by a push event, it'll use production as the data source. But if it's triggered manually, it'll use the value provided in the data_source input field.
We can also write this more concisely as:
data_source="${{ inputs.data_source || 'production' }}"
This works because if inputs.data_source is not provided, it will be null, which evaluates to false. The logical OR operator (||) then returns the right-hand-side value, which is production in this case. Some may argue this is less readable, so I suppose it's a matter of preference; I personally prefer the second version.
Manual workflow runs all share the same generic title (the workflow name), which can be confusing when multiple users trigger the same workflow from different branches or with different inputs. You can customize the title using the run-name keyword for better visibility. I think the following line should be added to every workflow by default:
run-name: >
  ${{ github.event_name == 'workflow_dispatch'
      && format('{0} from "{1}"', github.workflow, github.ref_name) || '' }}
This will set the title to include the branch name, but only if the workflow is triggered manually. For any other trigger events (e.g. push), it’ll use Github’s default run name. You can customize the format string to include workflow inputs or other context variables as needed.
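For example, here is a hypothetical variation that also surfaces the data_source input from the earlier dispatch example in the title (the input name is just an assumption carried over from that example):

run-name: >
  ${{ github.event_name == 'workflow_dispatch'
      && format('{0} from "{1}" (source: {2})', github.workflow, github.ref_name, inputs.data_source)
      || '' }}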
2 Reusable Workflows
Reusable workflows are workflows that can be called from other workflows. They can accept inputs similar to the workflow_dispatch inputs, and can provide outputs that can be used in the calling workflow. This is useful for sharing common logic across multiple workflows, such as a release process that does testing, building, and creating artifacts. With a good configuration setup, such a workflow can be reused for PR testing, deployment, and even manual releases.
This is a minimal example with a single input and output:
name: Reusable Workflow Example

on:
  # `workflow_call` makes the workflow callable from other workflows
  workflow_call:
    inputs:
      run_tests:
        type: boolean
        description: 'Run tests before building'
        required: true
    # Need to explicitly define the outputs here
    outputs:
      example_output:
        description: 'An example output from the reusable workflow'
        value: ${{ jobs.job1-example.outputs.example_output }} # Reference the output from the job
  # In most cases, it's also useful to manually trigger these types of workflows
  # Unfortunately, we need to duplicate the input definitions here for the dispatch trigger
  workflow_dispatch:
    inputs:
      run_tests:
        type: boolean
        description: 'Run tests before building'
        required: true

jobs:
  job1-example:
    runs-on: ubuntu-latest
    # Output of the job is defined here, so it can be referenced as an output of the workflow
    outputs:
      # The job output itself is defined in one of the steps (so many references, I know)
      example_output: ${{ steps.example_step.outputs.hello_message }}
    steps:
      - name: Example Step
        id: example_step
        run: |
          echo "This is an example step"
          echo "hello_message=Hello from reusable workflow!" >> $GITHUB_OUTPUT
To call this workflow from another workflow, we can use the uses keyword and pass the inputs:
name: Main Workflow

on:
  push:
    branches: [main]

jobs:
  call-reusable-workflow:
    uses: ./.github/workflows/example2_reusable_workflow.yaml # Path to the reusable workflow
    with: # Specify the inputs (if any) (also, why is this not called `inputs`?!)
      run_tests: true
    secrets: inherit # Inherit secrets from the calling workflow if needed

  print-outputs:
    needs: call-reusable-workflow
    runs-on: ubuntu-latest
    steps:
      - name: Print the outputs
        run: |
          echo "Output from reusable workflow: ${{ needs.call-reusable-workflow.outputs.example_output }}"
3 Github Environments
Github Environments are a way to define different deployment environments within a repository and can be used to set environment-specific secrets, protection rules, etc.
- Instead of having repository-wide secrets, we can have environment-specific secrets. Most of the time you would want to use the same secret name across different environments, so you can access them using the same syntax in the workflow.
- The syntax to access an environment secret is ${{ secrets.SECRET_NAME }}, which is no different from the repository secrets. However, the value of the secret is determined by the environment the job is running in at runtime.
- We can also define environment "variables" in the same settings page where we define the secrets. The only difference between a variable and a secret is that the variable is not encrypted and you can see its value in the settings page. The syntax to access an environment variable is ${{ vars.VARIABLE_NAME }}.
Here is an example of a job that deploys to an environment, where production is a predefined environment in the repo settings:
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production # Specify the environment here
    steps:
      - name: Deploy to Production
        # This is just an example. Don't actually echo secrets in real workflows!
        # But Github will most certainly mask the secrets in the logs
        run: |
          echo "Deploying to production with secret ${{ secrets.TEST_SECRET }}"
          echo "Environment variable XYZ: ${{ vars.XYZ }}"
4 Dynamic Parallel Jobs
The matrix strategy is a powerful feature for executing multiple jobs in parallel with different configurations, with use cases like testing across multiple versions of a language or operating system, building multiple Docker images, deploying to several environments, etc.
The examples you can find in the official documentation are usually simple, such as:
job1:
  # Define the matrix parameters
  strategy:
    matrix:
      python-version: ['3.9', '3.10', '3.11']
      os: [ubuntu-latest, windows-latest, macos-latest]
      environment: [staging, production]
  # We can access individual parameters using dot notation
  name: Test on ${{ matrix.os }} with Python ${{ matrix.python-version }}
  runs-on: ${{ matrix.os }}
  environment: ${{ matrix.environment }}
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
The way a matrix works is that it creates a job for every combination of the parameters specified, resulting in a total of 3 (python-version) x 3 (os) x 2 (environment) = 18 jobs. While this has its own applications, sometimes we need to specify a complex configuration for each individual job that is not symmetrically derived from the cartesian product of the parameters. One example would be constructing the matrix dynamically based on user inputs, or on the branch the workflow is triggered from, which brings me to the next point.
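As a quick aside before going dynamic: for mildly asymmetric cases, the matrix include keyword can attach extra keys to specific combinations without any scripting. A minimal sketch (the extra_flag key is a hypothetical configuration value):

strategy:
  matrix:
    os: [ubuntu-latest, windows-latest]
    python-version: ['3.10', '3.11']
    # `include` extends matching combinations with extra keys
    include:
      - os: windows-latest
        python-version: '3.11'
        extra_flag: '--verbose' # hypothetical: only this combination gets the flag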
To construct a matrix dynamically in a configuration step, we can create an array of JSON objects where each object represents one job configuration. Let's consider an example where we want to deploy a service to different regions depending on the input parameters:
name: Multi-Region Deployment

on:
  push:
    branches: [main]
  workflow_dispatch:
    inputs:
      include_eu:
        type: boolean
        description: 'Deploy to EU regions for GDPR compliance'
        default: false
      deployment_scope:
        type: choice
        description: 'Deployment scope'
        default: 'regional'
        options:
          - regional
          - global

jobs:
  configure-regions:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.configure.outputs.matrix }}
    steps:
      - name: Build Regional Matrix
        id: configure
        run: |
          set -euxo pipefail

          # Base regions for all deployments
          # The `regions` variable will be a JSON array of job configurations
          regions=()
          regions+=('{"region": "us-east-1", "instance_type": "t3.medium"}')
          regions+=('{"region": "us-west-2", "instance_type": "t3.medium"}')

          # Add EU regions if needed
          # Production always includes EU (when on main branch)
          if [[ "${{ inputs.include_eu }}" == "true" || \
                "${{ github.ref }}" == "refs/heads/main" ]]; then
            # Append EU region configs
            regions+=('{"region": "eu-west-1", "instance_type": "t3.large", "compliance": "gdpr"}')
            regions+=('{"region": "eu-central-1", "instance_type": "t3.large", "compliance": "gdpr"}')
          fi

          # Add Asia-Pacific for global rollouts
          if [[ "${{ inputs.deployment_scope }}" == "global" ]]; then
            regions+=('{"region": "ap-southeast-1", "instance_type": "t3.small"}')
          fi

          # Output as JSON array
          echo "matrix=$(IFS=,; echo "[${regions[*]}]")" >> $GITHUB_OUTPUT

  deploy-regions:
    needs: configure-regions
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # `fromJSON` is a Github function to convert a JSON string to an object
        service: ${{ fromJSON(needs.configure-regions.outputs.matrix) }}
        environment: [development, staging] # This is fixed in this example
    environment: ${{ matrix.environment }}
    steps:
      - name: Deploy to Region
        run: |
          echo "Deploying to ${{ matrix.service.region }} with ${{ matrix.service.instance_type }}"
          # EU regions need special compliance handling
          if [[ "${{ matrix.service.compliance }}" == "gdpr" ]]; then
            echo "Applying GDPR compliance configurations"
          fi
Here, the service configurations are dynamically constructed, while the environment array is fixed. You can go crazy with this logic and create any number of jobs with different configurations.
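To make this concrete, a manual run with include_eu: true and deployment_scope: regional would emit the following matrix value (formatted here for readability):

[
  {"region": "us-east-1", "instance_type": "t3.medium"},
  {"region": "us-west-2", "instance_type": "t3.medium"},
  {"region": "eu-west-1", "instance_type": "t3.large", "compliance": "gdpr"},
  {"region": "eu-central-1", "instance_type": "t3.large", "compliance": "gdpr"}
]

Combined with the two fixed environment values, this produces eight deploy-regions jobs.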
5 Job Dependencies and Skipping
Jobs in a workflow can depend on each other, and by default, a job won't run if its dependencies fail. However, sometimes we may want to intentionally skip a job based on some condition, without cancelling the dependent jobs. Consider an example where Job C depends on Job B, which in turn depends on Job A (A -> B -> C). Job B may be skipped based on a condition in Job A, but we still want Job C to run in this scenario. The code below demonstrates how to achieve this:
jobs:
  job_a:
    runs-on: ubuntu-latest
    outputs:
      run_b: ${{ steps.configure.outputs.run_b }}
    steps:
      - name: Configure
        id: configure
        run: |
          # Check if branch is main
          run_b=${{ github.ref == 'refs/heads/main' }}
          echo "run_b=$run_b" >> $GITHUB_OUTPUT

  job_b:
    runs-on: ubuntu-latest
    # Dependency on job A
    needs: job_a
    # `if` condition as to whether to run this job. Here, it's based on the output of job A
    # Note that the output is a string, so we compare it to the string 'true'
    if: needs.job_a.outputs.run_b == 'true'
    steps:
      - run: echo "This is job B, which runs only if job A's condition is met."

  job_c:
    runs-on: ubuntu-latest
    # Dependency on job B
    needs: job_b
    # Only run if job B didn't fail and wasn't cancelled
    if: ${{ !failure() && !cancelled() }}
    steps:
      - run: echo "This is job C, which runs only if job B was either successful or skipped."
The reason we use a complicated expression like ${{ !failure() && !cancelled() }} is that Github doesn't provide a built-in status-check function for whether a job was skipped. The failure() function returns true if a dependency job failed, and cancelled() returns true if the workflow was cancelled. By combining the inverses of these two conditions, we ensure that the job runs only if the previous job was either successful or skipped.
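An alternative I find more explicit: each dependency exposes needs.<job_id>.result, which is one of success, failure, cancelled, or skipped, so job C's condition can name the intent directly. The !cancelled() guard is required because an if expression without a status-check function gets an implicit success() applied, which would prevent the job from running when job B is skipped:

job_c:
  runs-on: ubuntu-latest
  needs: job_b
  # Run when job B succeeded or was skipped (but not on failure or cancellation)
  if: ${{ !cancelled() && contains(fromJSON('["success", "skipped"]'), needs.job_b.result) }}
  steps:
    - run: echo "Job C runs if job B succeeded or was skipped."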
6 Exfiltration of Secrets
Recently I wanted to take a backup of the secrets of one of the repositories I own. As you may know, secrets cannot be viewed after they are created; they can only be updated or deleted. I was aware that the most straightforward way to do this is to use a workflow that prints the secrets to the console, but I was surprised to learn that this is also a security vulnerability that malicious actors can exploit, given that it doesn't require admin or owner-level permissions.
All you need is the ability to trigger any workflow, whether through a PR workflow that runs automatically, or by manually running a workflow that has the workflow_dispatch trigger. But let's leave the security discussion for another time. I just want to note that I take no responsibility for any incorrect use or misuse of this information; you should only use it for legitimate purposes such as backing up your own secrets or testing your workflows.
My preferred way to accomplish this is to modify an existing workflow that has the workflow_dispatch trigger in a non-main branch, encrypt the secrets in the workflow based on an input key, and then print them to the console. The encryption is a safety measure to ensure that the secrets are not exposed in the logs and can only be decrypted by the user who has access to the input key. We'll use the Fernet encryption from the cryptography package to encrypt the secrets. You first need to generate a key locally using the following code:
from cryptography.fernet import Fernet

key = Fernet.generate_key()
print(key.decode())
Then you can use the following workflow to backup the secrets:
name: Backup Secrets

on:
  workflow_dispatch:
    inputs:
      key:
        description: 'Encryption key for secrets backup'
        required: true

jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - run: |
          pip install cryptography
          python -c '
          import os
          import json
          # Use Fernet encryption
          from cryptography.fernet import Fernet
          fernet = Fernet(os.environ["KEY"].encode())
          # Convert the env variables to JSON and encrypt
          params = json.dumps(dict(os.environ))
          print("Backing up secrets...")
          print(fernet.encrypt(params.encode()).decode())
          '
        env:
          KEY: ${{ inputs.key }}
          # Expose the secrets as environment variables
          SECRET_1: ${{ secrets.SECRET_1 }}
          SECRET_2: ${{ secrets.SECRET_2 }}
          SECRET_3: ${{ secrets.SECRET_3 }}
This prints out the encrypted secrets to the console, which can then be decrypted using the same key by:
from cryptography.fernet import Fernet

fernet = Fernet('your-encryption-key')
encrypted_data = 'your-encrypted-data'  # from the workflow logs
decrypted_data = fernet.decrypt(encrypted_data.encode()).decode()
print(decrypted_data)
The reason this is safe (as of writing this article) is that Github doesn't expose the input parameters in the logs, so we can be confident the secrets cannot be decrypted by anyone with the ability to view the run logs. But this may change in the future, so a safer way to accomplish the same outcome is to make the encryption key a secret as well, and then pass it to the workflow as an environment variable:
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - run: |
          ...
        env:
          KEY: ${{ secrets.ENCRYPTION_KEY }} # Use a secret for the encryption key
          ...
Citation
@online{sarang2025,
author = {Sarang, Nima},
title = {Github {Workflow} {Tips} and {Tricks}},
date = {2025-08-07},
url = {https://www.nimasarang.com/blog/2025-08-07-github-workflow-tips/},
langid = {en}
}