

## Objective
The objective of this repo is to create an automated DevSecOps-styled repo for use in k8s. It contains automated code reporting and fixes, plus the ability to build Docker images in a workflow pipeline via GitHub Actions.


## Technical Requirements
- Scan for CVEs and remediate them. As incredibly simple as it is to use `docker scout` here, Trivy is probably more efficient, imo. It's practically a no-brainer choice in automated pipelines/GitHub Actions.
- I included a known-bad version of the Python `requests` package in my requirements.txt to demonstrate a critical vuln.
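As a sketch of how that Trivy scan could be wired into the GitHub Actions pipeline mentioned above (the image name and severity thresholds here are illustrative, not taken from this repo's actual workflow):

```yaml
# Illustrative workflow step: fail the build if HIGH/CRITICAL CVEs are found
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: sadminriley/python-test:latest   # assumed image tag
    severity: HIGH,CRITICAL
    exit-code: '1'
```

With `exit-code: '1'`, the vulnerable `requests` pin would fail the pipeline until it's bumped to a fixed version.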

## Docker Overview
This image uses the following on a base of ubuntu:22.04:
- R base
- Python 2.7
- Python 3.10

## User Story / Implementation Notes
### Why am I using ubuntu22.04 and not a multi-stage build?

Ubuntu 22.04 is still LTS and supports python2 + python3. You could definitely do a multi-stage build, but sometimes it's not about the build time so much as the quality of the final output artifact; you may sacrifice image size for build time, and vice versa. For the sake of having something to talk about, I wanted to discuss how this could be improved on.
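For contrast, a multi-stage variant might look like the following. This is a hypothetical sketch, not this repo's Dockerfile: the stage layout and package choices are illustrative, and it uses python3 only for simplicity.

```dockerfile
# Build stage: pip and build tooling live here and never reach the final image
FROM ubuntu:22.04 AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 python3-pip && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip3 install --prefix=/install -r requirements.txt

# Final stage: copy only the installed packages, keeping the artifact smaller
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /install /usr/local
WORKDIR /app
COPY . /app
```

The tradeoff described above shows up directly here: two `apt-get` layers cost build time, but the final image drops pip and any build dependencies.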


Right now, with no cache, the image builds locally in about 35s according to Docker BuildKit. Obviously, if I were not using shared GitHub runners and were in an enterprise GitHub Org, builds could be much faster on self-hosted runners.

I generally find myself leaning on the Actions Runner Controller [helm chart](https://artifacthub.io/packages/helm/actions-runner-controller/actions-runner-controller "helm chart") for improved build times on dedicated self-hosted runners in the Action itself.
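With that chart installed, runners are declared via a CRD. A minimal sketch, assuming the summerwind-flavored ARC that the linked chart ships, with the repository slug assumed rather than taken from this repo:

```yaml
# Hypothetical RunnerDeployment for the Actions Runner Controller
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: repo-runner
spec:
  replicas: 2
  template:
    spec:
      repository: sadminriley/python-test   # assumed repo slug
```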

At some level with this challenge, there are a few limitations from not having access to:
- A real production-grade k8s cluster
- An enterprise GitHub Org (GitHub Security SARIF report posting only works in Enterprise Orgs within private repos). It'd be nice to use Trivy to post to this.
- Some kind of ALB, ingress route setup, or other publicly exposable endpoint for the Service that fronts the Deployment (the challenge specifically asked me to touch on this).
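Since the challenge asks about exposure, here is a hedged sketch of what an Ingress in front of the Service could look like. The hostname and ingress class are placeholders, and this assumes an ingress controller (e.g. ingress-nginx) is already running in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: python-swish-r-ingress
spec:
  ingressClassName: nginx              # placeholder class
  rules:
    - host: python-swish.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: python-swish-r-deploy
                port:
                  number: 8080
```

On AWS, swapping the class for the AWS Load Balancer Controller would front this with an ALB instead.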




## Minikube setup
This can be run locally with minikube for testing purposes, and to verify the k8s compatibility and the 'run forever' pod.
- Please follow the appropriate minikube install for your OS from [the official source](https://minikube.sigs.k8s.io/docs/start/)
- Load the image with `minikube image load sadminriley/python-test`
- Verify you've loaded the image locally if needed with the following commands:
```
minikube ssh
docker@minikube:~$ crictl images|grep python-test
```
- Apply the k8s Deployment + Service with `kubectl apply -f ops/`
- Expose the service via the Deployment itself:
```
kubectl expose deployment python-swish-r-deploy --port 8080
service/python-swish-r-deploy exposed
```
- You can also forward the port via `kubectl port-forward svc/python-swish-r-deploy 8080:8080`
- Optionally, open a shell on the container and run python to verify the port works:

`kubectl exec -it deploy/python-swish-r-deploy -- bash`

**I just launched a basic http.server in the image via shell to demonstrate this.**

```
appuser@python-swish-r-deploy-6959f9c5c6-ktv58:/app$ python3 -m http.server 8080
Serving HTTP on 0.0.0.0 port 8080 (http://0.0.0.0:8080/) ...
```
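For reference, the manifests applied from `ops/` presumably resemble the following minimal sketch. This is an assumption for illustration, not copied from the repo: the labels, image tag, and the 'run forever' mechanism are all hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-swish-r-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-swish-r
  template:
    metadata:
      labels:
        app: python-swish-r
    spec:
      containers:
        - name: app
          image: sadminriley/python-test:latest
          command: ["sleep", "infinity"]   # assumed 'run forever' mechanism
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: python-swish-r-deploy
spec:
  selector:
    app: python-swish-r
  ports:
    - port: 8080
      targetPort: 8080
```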