Security Monkey deployment with CloudFormation template

In order to give back to the Open Source community some of what we take from it (actually from the awesome Netflix engineers), I wanted to make this work public: a CloudFormation template to easily deploy and configure Security Monkey in AWS. I'm pretty sure it will help many people make their AWS infrastructure more secure.

Security Monkey is a tool for monitoring and analyzing the security of our Amazon Web Services configurations.

You may be thinking of AWS CloudTrail or AWS Trusted Advisor, right? This is what the authors say:
“Security Monkey predates both of these services and meets a bit of each services’ goals while having unique value of its own:
CloudTrail provides verbose data on API calls, but has no sense of state in terms of how a particular configuration item (e.g. security group) has changed over time. Security Monkey provides exactly this capability.
Trusted Advisor has some excellent checks, but it is a paid service and provides no means for the user to add custom security checks. For example, Netflix has a custom check to identify whether a given IAM user matches a Netflix employee user account, something that is impossible to do via Trusted Advisor. Trusted Advisor is also a per-account service, whereas Security Monkey scales to support and monitor an arbitrary number of AWS accounts from a single Security Monkey installation.”

Now, with this CloudFormation template you can deploy Security Monkey pretty much production-ready in a couple of minutes.
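
If you prefer the command line to the AWS Console, a stack can be launched along the lines of the sketch below; the template file name and the KeyName parameter are illustrative assumptions, so check the repository's README for the actual names and required parameters:

# CAPABILITY_IAM is usually needed when a template creates IAM resources
aws cloudformation create-stack --stack-name security-monkey \
--template-body file://security_monkey_cloudformation.template \
--capabilities CAPABILITY_IAM \
--parameters ParameterKey=KeyName,ParameterValue=my-ssh-key
# wait until the stack reaches CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name security-monkey \
--query 'Stacks[0].StackStatus'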

For more information, documentation and tests visit my Github project: https://github.com/toniblyx/security_monkey_cloudformation

How to restrict by regions and instance types in AWS with IAM

The use case is easy, and if you work with AWS I'm pretty sure you have faced this requirement at some point: I don't want a certain group of users in a particular AWS account to be able to create anything anywhere. I had to configure the security of one of our AWS accounts so that users could work with EC2 and a few other AWS services in only two regions (N. Virginia and Ireland in this case). In addition, to keep our budget under control, we wanted to limit the instance types they can use; in this example we will only allow EC2 instances with no more than 16 GB of RAM (for a quick view of all available EC2 instance types see http://www.ec2instances.info).

Thanks to the documentation and AWS Support, I came across this solution (shown below as an example). The only issue is that, at the moment, we cannot hide features in the AWS Console, but at least AWS Support is very clear and supportive about that. They know how challenging IAM is for certain requirements.

Go to IAM -> Policies -> Create Policy -> Create Your Own Policy and use the JSON below, or the one in this gist link, as a reference to write your own policy based on your requirements. After that, attach the policy to the roles/users/groups you want.
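
The exact policy is in the gist; as a reference only, here is a minimal sketch of what such a policy can look like, created straight from the CLI. The policy name, the two regions (N. Virginia and Ireland) and the list of allowed instance types are illustrative, and the ec2:Region / ec2:InstanceType condition keys only apply to EC2 actions and resources that support them, so adapt it to your own requirements before using it:

aws iam create-policy --policy-name restrict-regions-and-instance-types \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEC2OnlyInVirginiaAndIreland",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:Region": ["us-east-1", "eu-west-1"] }
      }
    },
    {
      "Sid": "DenyInstanceTypesOver16GB",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotEquals": {
          "ec2:InstanceType": ["t2.micro", "t2.small", "t2.medium", "t2.large", "m4.large", "m4.xlarge"]
        }
      }
    }
  ]
}'
# then attach it, for example to a group (account ID and group name are placeholders)
aws iam attach-group-policy --group-name developers \
--policy-arn arn:aws:iam::123456789012:policy/restrict-regions-and-instance-types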

Hope this helps.

Forensics in AWS: an introduction

Spanish version here.

AWS is always monitoring for unauthorized usage of their/our resources up in the cloud. If you have dozens of services running on AWS, at some point you are likely to be warned about a security issue, for a variety of reasons: accidentally sharing a key on GitHub, a server misconfiguration that makes it easily exploitable, services with vulnerabilities, DoS or DDoS, 0days, etc. So be ready to perform forensics and/or incident response on your AWS infrastructure.

 

Remember, in case of a security incident keep calm and follow a predefined procedure; don't leave the process to chance just because you or your boss are nervous and unable to wait. It is always much better to follow a proven guide than just your intuition (you will use your intuition later).

 

WARNING: you may have come to this article in a desperate Google search for a solution; in that case I recommend you test all commands mentioned here in your lab environment first. You should have an incident response and forensics guide with this kind of information ready before the incident actually happens.

 

In this article I want to write up some recommended steps along with tips and tricks we have occasionally used. I assume you have the AWS command line tools installed correctly; otherwise look here: http://docs.aws.amazon.com/general/latest/gr/GetTheTools.html. All commands are based on a compromised Linux EC2 server, but most of the "aws" CLI commands can also be used for Windows servers (not tested though). And yes, you could perform all the actions mentioned below using the AWS Console UI, but I think the command line is faster and more straightforward to follow during an incident:

 

1) Disable or delete the Access Key. Do this if your AWS Access Key has been compromised (AWS will let you know in their communication, or you may have noticed it another way, e.g. by finding it in code published on GitHub):

aws iam list-access-keys
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE \
--status Inactive --user-name Bob
aws iam delete-access-key --access-key-id AKIDPMS9RO4H3FEXAMPLE \
--user-name Bob

2) In case of a compromised key, check whether new and unexpected resources have been spun up with it, in all regions. It is common to see someone use your compromised key to launch EC2 instances in other AWS regions, so check all of them looking for new and suspicious instances. Here is an example that looks for new instances launched in us-east-1 since March 9th 2016 (see the loop below to cover every region):

aws ec2 describe-instances --region us-east-1 \
--query 'Reservations[].Instances[?LaunchTime>=`2016-03-09`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'
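
Since the point is to check all regions, a quick sketch to loop over every region with the same query (the date is just the example used above; adjust it):

for r in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== $r =="
  aws ec2 describe-instances --region "$r" \
    --query 'Reservations[].Instances[?LaunchTime>=`2016-03-09`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'
done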

3) Contact AWS Support and let them know about the security incident; they are always willing to help and give advice. They may also escalate to the AWS Security Team if needed.

4) Isolate the compromised instance. Wherever YOUR.IP.ADDRESS.HERE appears below, it can be your office public IP or an intermediate hop/analysis server:

  • Create a security group to isolate your instance. Note the difference between EC2-Classic and EC2-VPC, and take note of the Group-ID:
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate EC2-Classic instances"
aws ec2 create-security-group --group-name isolation-sg \
--description "Security group to isolate an EC2-VPC instance" \
--vpc-id vpc-1a2b3c4d
# where vpc-1a2b3c4d is the ID of the VPC the instance belongs to
  • Set a rule to allow SSH access from your public IP only, but first we have to know our public IP:
dig +short myip.opendns.com @resolver1.opendns.com
aws ec2 authorize-security-group-ingress --group-name isolation-sg \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
aws ec2 authorize-security-group-ingress --group-id sg-BLOCK-ID \
--protocol tcp --port 22 --cidr YOUR.IP.ADDRESS.HERE/32
# note the difference between both commands: group-name vs.
# group-id; sg-BLOCK-ID is the ID of your isolation-sg
  • EC2-Classic Security Groups don't support outbound rules. However, for EC2-VPC Security Groups, outbound rules can be set with these commands:
aws ec2 revoke-security-group-egress --group-id sg-BLOCK-ID \
--protocol '-1' --port all --cidr '0.0.0.0/0'
# removes the rule that allows all outbound traffic
aws ec2 authorize-security-group-egress --group-id sg-BLOCK-ID \
--protocol 'tcp' --port 80 --cidr '0.0.0.0/0'
# set a port or IP here if you want to enable some other
# outbound traffic, otherwise do not execute this command
  • Apply that Security Group to the compromised instance:
aws ec2 modify-instance-attribute --instance-id i-INSTANCE-ID \
--groups sg-BLOCK-ID
# where sg-BLOCK-ID is the ID of your isolation-sg
  • You can also attach an inline policy (for example, a restrictive one) to an IAM user involved in the incident:
aws iam put-user-policy --user-name MyUser --policy-name MyPowerUserRole \
--policy-document file://C:\Temp\MyPolicyFile.json

5) Tag the instance to mark it as under investigation:

aws ec2 create-tags --resources i-INSTANCE-ID \
--tags "Key=Environment,Value=Quarantine:REFERENCE-ID"

6) Save instance/s metadata:

  • Information about the compromised instance:
aws ec2 describe-instances --instance-ids i-INSTANCE-ID > forensic-metadata.log
or
aws ec2 describe-instances --filters "Name=ip-address,Values=xx.xx.xx.xx"
  • Console output. It can be useful depending on the attack, but you should have a centralized/dedicated log server outside each instance anyway:
aws ec2 get-console-output --instance-id i-INSTANCE-ID

7) Create a snapshot of the volume(s) of the compromised instance(s) for forensic analysis:

aws ec2 create-snapshot --volume-id vol-xxxx \
--description "IR-ResponderName-Date-REFERENCE-ID"

That snapshot won't be modified or mounted; we will work with a volume created from it.

8) Now we can follow one of two paths:

  • Stop the instance:
aws ec2 stop-instances --instance-ids i-INSTANCE-ID
  • or leave it running, if we can, isolate it from inside (iptables) and dump its RAM memory to a file using LiME (a sketch follows below).
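
For that second path, a memory dump with LiME typically looks like the sketch below; the module file name and output path are examples, and the LiME module has to be compiled for the exact kernel version running on the compromised instance:

# on the compromised instance; the .ko name comes from LiME's Makefile (lime-<kernel>.ko)
sudo insmod ./lime-$(uname -r).ko "path=/tmp/i-INSTANCE-ID.lime format=lime"
# hash the dump for evidence integrity and copy it out to your analysis host
sha256sum /tmp/i-INSTANCE-ID.lime
scp /tmp/i-INSTANCE-ID.lime analyst@YOUR.IP.ADDRESS.HERE:/evidence/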

9) Create a volume from the snapshot we just took, to be used later for analysis:

  • Consider using --region us-east-1 --availability-zone us-east-1a --volume-type standard with your own setup (--availability-zone is required and should be the same one your analysis instance lives in).
aws ec2 create-volume --snapshot-id snap-abcd1234
  • Now take note of your new volume:
aws ec2 describe-volumes

10) Mount that volume on your favorite forensics distribution and run the investigation. A sketch of this follows below.
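
A minimal sketch of that last step, assuming a separate forensic workstation instance in the same availability zone and that the new volume shows up as /dev/xvdf with an ext4 partition (device names and filesystem options may differ in your setup):

aws ec2 attach-volume --volume-id vol-NEW-VOLUME-ID \
--instance-id i-FORENSIC-WORKSTATION-ID --device /dev/sdf
# on the forensic workstation, mount it read-only so the evidence is not altered
sudo mkdir -p /mnt/evidence
sudo mount -o ro,noexec,noload /dev/xvdf1 /mnt/evidence
# noload skips ext4 journal replay; adjust options to the actual filesystem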

I will add more information in the next blog post, but I think this is a good introduction.

If you want to learn much more on this topic, I will be giving an online training about AWS, GCE and Azure forensics in Spanish with Securizame; more info here.

Some cool references and good reads:

https://securosis.com/blog/my-500-cloud-security-screwup

https://securosis.com/blog/cloud-forensics-101

http://www.slideshare.net/AmazonWebServices/sec316-your-architecture-w-security-incident-response-simulations

http://sysforensics.org/2014/10/forensics-in-the-amazon-cloud-ec2/

 

Docker Security Tools: Audit and Vulnerability Assessment

Dec 1st 2015: first version of this article published
Dec 2nd 2015: UPDATED OpenSCAP section with Atomic scan information and references
Dec 7th 2015: UPDATED Twistlock section, after a session/demo with the vendor. Conclusions updated.
Dec 14th 2015: UPDATED OpenSCAP section with a link of a demo made by @ianmiell
Dec 16th 2015: UPDATED the tools list with a new one called Scalock. Updated the conclusion section as well.
Dec 17th 2015: UPDATED Scalock section after some corrections they sent me by email (thanks guys btw). I also fixed some typos.
April 7th 2017: UPDATED Scalock renamed to Aqua Security
Let's suppose you work in Security. Now your company decides to run some applications in containers and they choose Docker; after some weeks or months testing it they want to go live, and suddenly someone asks "should we do a security audit before going to production?". The rest of the story is you auditing a Docker environment.
You can use all your existing arsenal and the procedures you are familiar with to audit the applications running in the containers (file permissions, logs, etc.), but what about the containers, images, Dockerfiles, Docker servers, or even the clustering and orchestration platform? This article is about that.
Considerations for this particular audit:
  1. Check if images and packages inside images are up-to-date and are free of security vulnerabilities.
  2. Audit automation: we must be able to automate all checks. That will save us precious time and we can run them as often as we require; forget about doing it manually unless you are just testing or learning.
  3. Container links and volumes. If you use a read-only filesystem in your running containers, "docker diff" can help you find issues.
  4. The bigger an image is, the harder the audit will be, so reduce the size of your images as much as you can.
  5. The host kernel is the shared point between all containers in the same server, keep that kernel up-to-date.
That said, I want to give you an overview of the existing tools I have found to accomplish the task described above. I have probably missed other tools; if so, please point me to them in the comments.
  1. Docker Bench for Security:
    • Description: The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. Those checks are based on all recommendations taken from the CIS Docker 1.6 Benchmark document.
    • Focus: mostly Docker server and few tips for images and containers.
    • Language: Shell script
    • Methodology: Run the script on the same server where Docker is running or from a container. It will create a shell report with INFO, WARN or PASS alerts.
    • License: Apache 2.0
    • Installation/usability level: Easy
    • Demo/Presentation: https://youtu.be/8mUm0x1uy7c?t=18m15s
    • More about audit and vulnerabilities assessment from Docker Inc:
      • Project Nautilus, presented during DockerCon 2015 in Barcelona (https://www.youtube.com/watch?v=fLfFFtOHRZQ), is the new image scanning and vulnerability detection service for official repos on Docker Hub. In @diogomonica's words, "Nautilus is already working in the background on all the official images". Nautilus looks for any suspicious piece of software. It does not depend on public vulnerability databases nor on Linux distros; instead, it looks for vulnerabilities using its own database. We will have more information soon and probably a closer look by Q1 2016. (Thanks Diogo for the info.)
    • My comments: From the Docker server/daemon configuration point of view this is the best tool you can use to make sure you are on the right path. I would definitely use this tool, but in conjunction with others, so keep reading (a quick example of how to run it is included right after this tool list).
  2. OpenSCAP Container Compliance:
    • Description: Based on the same philosophy as its parent project OpenSCAP, which supports CVE scans, multiple report formats and custom policies. Specific instructions and packages for RedHat 7 are here. Note: SCAP is a U.S. standard maintained by the National Institute of Standards and Technology (NIST). The OpenSCAP project is an open source collection of tools for implementing and enforcing this standard.
    • Focus: Images and Containers
    • Language: Shell script
    • Methodology: run the oscap-docker command against an image or container and get the results in a very helpful and descriptive HTML report.
    • License: GPL v3
    • Installation/usability level: Easy
    • Demo/Presentation: https://zwischenzugs.wordpress.com/2015/12/14/888/
    • My comments: If you use RedHat/Fedora/CentOS based containers this is highly recommended for you.
    • UPDATE (Dec 2nd 2015): If you use Atomic, they have recently released a new feature that allows you to scan containers for vulnerabilities using OpenSCAP; see this blog post here and the code here.
  3. CoreOS Clair:
    • Description: Clair is a container vulnerability analysis service. It works as an API that analyzes every container layer to find known vulnerabilities using the existing package managers of Debian (dpkg), Ubuntu (dpkg) and CentOS (rpm). It can also be used from the command line, as shown here. It provides a list of vulnerabilities that threaten a container, and can notify users when new vulnerabilities that affect existing containers become known. It is being used by https://quay.io/
    • Focus: Images and Containers
    • Language: Go
    • Methodology: Used via its API or the command line, it extracts all layers of an image and notifies you whenever vulnerabilities are found, because it stores all the information in a database; it also manages its own vulnerability database updates from known vulnerability sources.
    • License: Apache v2
    • Installation/usability level: Hard
    • Demo/Presentation: https://coreos.com/blog/vulnerability-analysis-for-containers/
    • My comments: I couldn't make it work on CentOS 7.1. I will add more info here as soon as I have something new.
  4. Banyan Collector:
    • Description: the BanyanOps team started the discussion about the huge amount of vulnerable images available on Docker Hub, which was answered in detail by @jpetazzo here. As the authors say, "it is a framework for Static Analysis of Docker container images". That means it does more than security analysis.
    • Focus: Images
    • Language: Go
    • Methodology: Even though it can run in a container, Banyan Collector can run from the command line and connect to a given Docker registry to perform its analysis. See how it works in detail here.
    • License: Apache 2.0
    • Installation/usability level: Medium-Hard
    • Demo/Presentation: N/A
    • My comments: It is oriented more toward checking registries than being a pure vulnerability assessment tool.
  5. Lynis:
    • Description: Lynis is a Linux, Mac and Unix security auditing and system hardening tool that includes a module to audit Dockerfiles. It also shows some Docker server statistics and checks permissions.
    • Focus: Dockerfile
    • Language: Shell script
    • Methodology: just run Lynis with the proper options and the Dockerfile path, and it will take a look at the files installed and some other parameters inside the file.
    • License: GPL v3
    • Installation/usability level: N
    • Demo/Presentation:
    • My comments: You can kill two birds with one stone, but it is not really useful for Docker audits yet. I know the author is willing to add more Docker support.
  6. Twistlock:
    • Description: In the authors' words: Twistlock scans container images in registries, on developer workstations, or on production servers. We detect and report vulnerabilities in the Linux distribution layer, app frameworks, and even your customer app packages. In addition to open source threat feeds it uses commercial threat feeds. Their solution also offers access control to actions based on users and groups, and a very interesting runtime defense that allows monitoring and acting upon security based on roles, behaviors, compliance, malicious actions and more.
    • Focus: images, containers, packages. Made for Docker and Kubernetes or Mesos.
    • Language: Shell script, Javascript and Go.
    • Methodology: it uses the NIST vulnerability data to find CVEs and the CIS Docker Benchmark for vulnerability assessment. It does more than just that, with features like advanced access control, runtime defense, monitoring and continuous integration. A container called Defender has to run on every host, and a central console collects and manages all of them from a central location.
    • License: commercial depending on number of hosts. Free Developer Edition up to 2 hosts without support.
    • Installation/usability level: Not tested; I have seen a live presentation and demo run by the vendor.
    • Demo/Presentation: https://www.youtube.com/watch?v=SMCYHFDfSzk
    • My comments: At first I did not have much to say since I couldn't play with it or see it in action. I have since had a meeting with the vendor and have a better view of what the product is, and it is the most complete solution I have seen so far. They cover enterprise-grade security; they are just starting out and it is a brand new product with only a few customers, so the product has big room to improve and add new features, but right now it covers most of the requirements in a smart way and with enough granularity to let us improve Docker security. Finally, it is important to highlight that it is not just an auditing tool; it is a managed security tool for Docker.
  7. Bitnami Stacksmith:
    • Description: it is a tool to quickly generate custom Dockerfiles (in Bitnami's words, a declarative API to create containers). It is not intended to be a security tool, but it has a cool feature that helps you detect outdated and vulnerable components while building your Dockerfiles, or even in existing containers built with Stacksmith. It sends you an email when a component has to be updated.
    • Focus: Dockerfiles, images and containers.
    • Language: unknown
    • Methodology: it uses the external public CVE database (https://cve.mitre.org) to find CVEs for the given components for vulnerability assessment.
    • License: SaaS
    • Installation/usability level: Easy
    • Demo/Presentation: https://www.youtube.com/watch?v=4A24pD-P_N4
    • My comments: As SaaS it seems to be a very easy tool; from the security point of view it gives the user a clear view of the status of the container components, which is very helpful to figure out whether we have vulnerable or outdated containers.
  8. Dockscan
    • Description: a brand new tool in a very early stage, released two weeks ago and presented at Black Hat Europe Arsenal. As per the author: Dockscan is a vulnerability assessment and audit tool for Docker and container installations. It reports on Docker installation security issues as well as Docker container configurations. The tool helps both system administrators who need to secure Docker, and security auditors and penetration testers who need to audit Docker installations.
    • Focus: Docker server
    • Language: Ruby
    • Methodology: it uses some of the existing CIS Docker 1.6 Benchmark best practices. It can work on local and remote Docker installations.
    • License: GPL v2
    • Installation/usability level: easy
    • Demo/Presentation: N/A
    • My comments: It has a very short list of features yet, but it looks interesting; I would keep an eye on it, though it is not to be used as a mature tool for now.
  9. Drydock:  (do not confuse it with Dry-dock cluster)
    • Description: As per the author: drydock is a Docker security audit tool written in Python. It was initially inspired by Docker Bench for Security but aims to provide a more flexible way for assessing Docker installations and deployments. drydock allows easy creation and use of custom audit profiles in order to eliminate noise and false alarms. Reports are saved in JSON format for easier parsing. drydock makes heavy use of docker-py client API to communicate with Docker. It is based on CIS Docker 1.6 Benchmark.
    • Focus: Docker server and containers
    • Language: Python
    • Methodology: it uses some of the existing CIS Docker 1.6 Benchmark best practices to check server configuration options.
    • License: GPL v2
    • Installation/usability level: Easy
    • Demo/Presentation: N/A
    • My comments: It is still in a very early stage of development, though it seems to be ahead of Dockscan. Let's see what's next for this tool. Not mature enough to consider it a player yet.
  10. Batten:
    • Description: Hardening and auditing tool for docker hosts and containers. It is pretty much the same as Drydock or Docker Bench for Security.
    • Focus: Docker server and containers
    • Language: Go
    • Methodology: run it as a container and check the server and containers following the CIS Docker 1.6 Benchmark.
    • License: MIT
    • Installation/usability level: Easy
    • Demo/Presentation: N/A
    • My comments: Nothing different to what Drydock or Docker Bench for Security does.
  11. Scalock (now known as Aqua Security):
    • Description: By the author: Scalock secures every stage of the container lifecycle. Scalock provides a comprehensive security solution for virtual containers by adding visibility and control to containerized environments, enabling organizations to scale-out without security limitations even on a very large scale. We support major container platforms, including Docker, CoreOS, VMWare and Microsoft Windows. Secures virtualized containers on every level: containers, hosts and applications.
    • Focus: images, containers, packages. Made for Docker and Kubernetes, CoreOS, VMWare and Microsoft Windows.
    • Language: Go and C/C++.
    • Methodology: It works pretty much the same way as Twistlock does, using a central server and agent containers running in privileged mode on every Docker host. It uses Docker Bench for server configuration security best practices, and it also uses public vulnerability DBs to check for outdated packages (RPMs and/or Debs) and code libraries (Java, Python, PHP, NodeJS, etc.) inside containers and images using their own scanner database. It can also control AuthZ/AuthN and implements runtime defense to protect containers from other containers, users or attackers. They use their own kernel module to improve container isolation.
    • License: commercial depending on number of hosts. In BETA status right now.
    • Installation/usability level: Not tested; I have seen a live presentation and demo run by the vendor. It looks straightforward to use.
    • Demo/Presentation: N/A
    • My comments: They contacted me after I published this article and showed me more or less what the product can do and what it looks like. It is the biggest competitor of Twistlock at this moment, but it is in a very early stage. Like its competitor, it has huge room to improve and to add more security capabilities as they come to Docker, like user namespaces. It is not just an auditing tool; it does auditing correctly and it is a runtime defense tool as well.
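As a quick reference for the first tool in the list, this is roughly how Docker Bench for Security is run directly on a Docker host (check the project's README for the container-based invocation and up-to-date options):
# run directly on the Docker host
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
# each check is reported as INFO, WARN or PASS, following the CIS Docker Benchmark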
Conclusion:
  • Which of these tools would I use? Considering the early stage of most of them, I would use Docker Bench for Security, OpenSCAP and probably Bitnami Stacksmith. And I would keep an eye on the others. UPDATE: After the meeting I had with @mwithrow, Director of Architecture at Twistlock, and seeing the product details, I think it is the most complete solution so far. See the Twistlock section for details. UPDATE: no doubt that by now we have to keep an eye on Scalock as well; I'm really looking forward to seeing their next move, since Docker is announcing new security features every month or less.
  • Most of these tools are very new, released only months or weeks ago; there is big room to improve them and adapt them to more enterprise-scale security. They are a good starting point to address audit and vulnerability assessment of our container ecosystem, regardless of whether it is a production, test or development environment. I'm looking forward to seeing what the big security vendors have to say about it.
  • In favor of all of them, I have to say that it is hard to keep them updated, since Docker is growing really quickly and releasing versions with a bunch of new features (including security improvements) almost every week. So it is a tough race just to keep any of these tools up to date. I guess that is the price to pay for working with emerging technology.
  • In another post I would like to discuss in more detail security in orchestration and how to achieve a proper audit of solutions like Kubernetes.
  • What about incident response? That’s another good point to cover in a blog post.
  • There are more tools coming; I'm looking forward to seeing what the big fish have to say about it (Google, MS, AWS, etc.).

The 10 commandments to avoid disabling SELinux

Well, they are 10 ideas or commands actually 😉
Due to my new role at Alfresco as Senior DevOps Security Architect, I'm doing some new cool stuff (that I will be publishing here soon), learning a lot, and helping the DevOps team a little bit with my knowledge of security.
One of the goals I promised myself was to "never disable SELinux", even if that means learning more about it and spending time on it. I can say it has been a worthwhile investment of my time, and here are some of the results.

This article is not about what SELinux is or is not; you have the Wikipedia for that. But a brief description could be: a MAC (Mandatory Access Control) implementation in Linux that prevents a process from accessing other processes or files it is not supposed to have access to (open, read, write files, etc.).

If you are here, it is because you want to finally start using SELinux and you are really interested in making it work, in taming this wild horse. Let me just say something: if you really worry about security and have dozens of Linux servers in production, keep SELinux enabled, keep it "Enforcing", no question.
That said, here is my list. It is not an exhaustive one, and I'm looking forward to seeing your insights in the comments:
  1. Enable SELinux in Enforcing mode:
    • In configuration files (need restart)
      • /etc/sysconfig/selinux (RedHat/CentOS 6.7 and older)
      • /etc/selinux/config (RedHat/CentOS 7.0 and newer)
    • Through commands (no restart required)
      • setenforce Enforcing
    • To check the status use
      • sestatus # or command getenforce
  2. Use the right tools. To do cool things you need cool tools, we will need some of them:
    • yum install -y setools-console policycoreutils-python setroubleshoot-server
    • policycoreutils-python comes with the great semanage command, the lord of the SELinux commands
    • setools-console comes with seinfo, sesearch and sechecker, among others
    • from the setroubleshoot-server package we will use sealert to easily identify issues
  3. Get to know what is going on: Dealing with SELinux happens mostly during installation, configuration and testing of Linux services, i.e. when something in your system is not working properly, or not in the same way as with SELinux disabled. When you are configuring and installing a service or application on a server and something is not working as expected, or not starting as it should, you always think "Damn SELinux, let's disable it". Forget about that; check the proper place to see what is going on with it: the audit logs. Check /var/log/audit/audit.log and look for lines with "denied" (see the example workflow at the end of this list).
    • tail -f /var/log/audit/audit.log | perl -pe 's/(\d+)/localtime($1)/e'
    • the perl command is to convert the Epoch time (or UNIX or POSIX time) inside the audit.log file to human readable time.
  4. See the extended attributes in the file system that SELinux uses:
    • ls -ltraZ # most important here is the Z
    • ls -ltraZ /etc/nginx/nginx.conf will show:
      • -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 /etc/nginx/nginx.conf
      • where system_u: is the user (not always a user of the system), object_r: is the role, and httpd_config_t: is the object type. Other objects can be a directory, a port or a socket, and types of an object can be a config file, log file, etc.; finally, s0 is the level or category of that object.
  5. See the SELinux attributes that apply to a running process:
    • ps auxZ
      • You need to know this command in case of issues.
  6. Who am I for SELinux:
    • id -Z
      • You need to know this command in case of issues.
  7. Check, enable or disable defined modes (enforcing or permissive) per daemon:
    • getsebool -a # list all current status
    • setsebool -P docker_connect_any 1 # allow Docker to connect to all TCP ports
    • semanage boolean -l # is another alternative command
    • semanage fcontext -l # to see all contexts where SELinux applies
  8. Add a non-default directory or file to be used by a given daemon:
    • For a folder used by a service, i.e.: change Mysql data directory:
      • Change your default data directory in /etc/my.cnf
      • semanage fcontext -a -t mysqld_db_t "/var/lib/mysql-default(/.*)?"
      • restorecon -Rv /var/lib/mysql-default
      • ls -lZ /var/lib/mysql-default
    • For a new file used by a service, i.e.: a new index.html file for Apache:
      • semanage fcontext -a -t httpd_sys_content_t '/myweb/web1/html/index.html'
      • restorecon -v '/myweb/web1/html/index.html'
  9. Add a non-default port to be used by a given service:
    • i.e.: If you want nginx to listen in other additional port:
      • semanage port -a -t http_port_t -p tcp 2100
      • semanage port -l | grep  http_port # check if the change is effective
  10. Spread the word!
    • SELinux is not easy, but writing easy tips gets people to use it and makes the Internet a safer place!
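As mentioned in point 3, here is a minimal sketch of the usual workflow when you hit a denial: explain it with sealert and, only if the access is legitimate, generate and load a custom module with audit2allow. The module name my_custom_module is just an example, and you should always review the generated .te file instead of blindly allowing everything:
    • sealert -a /var/log/audit/audit.log # human-readable explanation of the denials
    • grep denied /var/log/audit/audit.log | audit2allow -M my_custom_module
    • semodule -i my_custom_module.pp # install the reviewed policy module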