Setting up Elastic Server

Before starting with any configuration, we need to make sure the hostname is set.

[root@elastic ~]# hostnamectl
    Static hostname: localhost.localdomain
 Transient hostname: elastic.linuxsysadmins.local
          Icon name: computer-vm
            Chassis: vm
         Machine ID: 36618758588646fb9bd7e5ceb0e73a70
            Boot ID: e737fa66fb1148bcaadb142ca2dc39d3
     Virtualization: kvm
   Operating System: CentOS Linux 7 (Core)
        CPE OS Name: cpe:/o:centos:centos:7
             Kernel: Linux 3.10.0-1062.el7.x86_64
       Architecture: x86-64
[root@elastic ~]#

Firewalls for Elastic Server

The required ports on the Elastic server are as follows.

# firewall-cmd --add-port=9200/tcp --permanent
# firewall-cmd --add-port=5601/tcp --permanent
# firewall-cmd --reload
# firewall-cmd --list-all
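Before moving on, it is worth confirming the rules actually took effect; a quick check, assuming firewalld is running:

```shell
# Both ports should appear in the active zone after the reload
firewall-cmd --list-ports
# should include 9200/tcp and 5601/tcp
```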

Right after that, import the GPG key for the new repository and create the repo file.

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# cat > /etc/yum.repos.d/elastic.repo << EOF
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
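To confirm the repo file is picked up by yum, a quick sanity check:

```shell
# The elasticsearch-7.x repo should show up in the enabled list
yum repolist enabled | grep elasticsearch
```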

Installing Elasticsearch

After adding the repo, install Elasticsearch. The package size will be around 475 MB.

# yum install elasticsearch-7.6.1 -y

Output for reference

Installing : elasticsearch-7.6.1-1.x86_64                                                                                                                                      1/1 
 NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
  sudo systemctl enable elasticsearch.service
 You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
 Created elasticsearch keystore in /etc/elasticsearch
   Verifying  : elasticsearch-7.6.1-1.x86_64                                                                                                                                      1/1 
 Installed:
   elasticsearch.x86_64 0:7.6.1-1                                                                                                                                                     
 Complete!
[root@elastic ~]#

By default, Elasticsearch listens only on the loopback interface (127.0.0.1), so bind it to the server's IP address.

# vim /etc/elasticsearch/elasticsearch.yml
network.host: 192.168.0.131
node.name: elastic
cluster.initial_master_nodes: ["elastic"]

Start and enable the Elasticsearch service

# systemctl daemon-reload
# systemctl enable elasticsearch.service
# systemctl start elasticsearch.service
[root@elastic ~]# systemctl status elasticsearch.service
 ● elasticsearch.service - Elasticsearch
    Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
    Active: active (running) since Sat 2020-03-14 12:13:23 +04; 2min 13s ago
      Docs: http://www.elastic.co
  Main PID: 1342 (java)
    CGroup: /system.slice/elasticsearch.service
            ├─1342 /usr/java/jdk-13.0.2/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Df…
            └─1934 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
 Mar 14 12:13:12 elastic.linuxsysadmins.local systemd[1]: Starting Elasticsearch…
 Mar 14 12:13:12 elastic.linuxsysadmins.local elasticsearch[1342]: Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and … release.
 Mar 14 12:13:23 elastic.linuxsysadmins.local systemd[1]: Started Elasticsearch.
 Hint: Some lines were ellipsized, use -l to show in full.
[root@elastic ~]#

Let’s verify the cluster health; it should show the status as “green”.

[root@elastic ~]# curl 192.168.0.131:9200/_cat/health?v
 epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
 1584126047 19:00:47  elasticsearch green           1         1      6   6    0    0        0             0                  -                100.0%
[root@elastic ~]#
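For scripting or monitoring, the same endpoint can return just the status column; the h= parameter of the _cat APIs selects which columns to print:

```shell
# Prints only the cluster status: green, yellow or red
curl -s '192.168.0.131:9200/_cat/health?h=status'
```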

A Few Tweaks for Elasticsearch

Configure the Java path and heap memory for Elasticsearch.

# vim /etc/sysconfig/elasticsearch
JAVA_HOME=/usr/java/jdk-13.0.2
ES_HEAP_SIZE=4g
MAX_OPEN_FILES=65535
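Note that in Elasticsearch 7.x the ES_HEAP_SIZE variable above is a legacy setting; the heap is read from the JVM options file instead. A sketch of the equivalent setting (4 GB is an assumption, size it to roughly half the available RAM):

```shell
# In Elasticsearch 7.x the heap is configured in jvm.options
vim /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g
```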

Adding Filebeat Template

Once the Elasticsearch server is up and running, switch back to the Wazuh server where Filebeat is installed and run the command below to load the Filebeat template.

# filebeat setup --index-management -E setup.template.json.enabled=false

[root@wazuh ~]# filebeat setup --index-management -E setup.template.json.enabled=false
 ILM policy and write alias loading not enabled.
 Index setup finished.
[root@wazuh ~]#

Once we run the above command, we should see an entry like the one below in the Elasticsearch server log /var/log/elasticsearch/elasticsearch.log.

[2020-03-13T16:41:19,832][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elastic] adding template [filebeat-7.6.1] for index patterns [filebeat-7.6.1-*]

Let’s verify the same by running curl from the Filebeat server.

[root@wazuh ~]# curl http://192.168.0.131:9200
 {
   "name" : "elastic",
   "cluster_name" : "elasticsearch",
   "cluster_uuid" : "hivhkmBPRXq5ihLx0KLRgg",
   "version" : {
     "number" : "7.6.1",
     "build_flavor" : "default",
     "build_type" : "rpm",
     "build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
     "build_date" : "2020-02-29T00:15:25.529771Z",
     "build_snapshot" : false,
     "lucene_version" : "8.4.0",
     "minimum_wire_compatibility_version" : "6.8.0",
     "minimum_index_compatibility_version" : "6.0.0-beta1"
   },
   "tagline" : "You Know, for Search"
 }
[root@wazuh ~]#

Looks good so far. Let’s move forward.

Setting up Kibana

To visualize the events and archives stored in Elasticsearch we will use Kibana. Install the latest version of Kibana by running

# yum install kibana -y

Right after, install the Wazuh plugin for Kibana by downloading the zip file.

# cd /tmp/
# wget https://packages.wazuh.com/wazuhapp/wazuhapp-3.11.4_7.6.1.zip
# cd /usr/share/kibana/
# sudo -u kibana bin/kibana-plugin install file:///tmp/wazuhapp-3.11.4_7.6.1.zip
[root@elastic kibana]# sudo -u kibana bin/kibana-plugin install file:///tmp/wazuhapp-3.11.4_7.6.1.zip
 Attempting to transfer from file:///tmp/wazuhapp-3.11.4_7.6.1.zip
 Transferring 25031565 bytes………………..
 Transfer complete
 Retrieving metadata from plugin archive
 Extracting plugin archive
 Extraction complete
 Plugin installation complete
[root@elastic kibana]#

Edit the Kibana configuration so it listens on an interface reachable from outside the box.

# vim /etc/kibana/kibana.yml

We also need to set the IP of the Elasticsearch server in the same config file.

server.port: 5601
server.host: "192.168.0.131"
server.name: "elastic"
elasticsearch.hosts: ["http://192.168.0.131:9200"]
pid.file: /var/run/kibana.pid
logging.dest: /var/log/kibana/kibana.log
logging.verbose: false

Create the PID file and log file with respective ownership and permissions.

# touch /var/run/kibana.pid
# chown kibana:kibana /var/run/kibana.pid
# mkdir /var/log/kibana/
# touch /var/log/kibana/kibana.log
# chown -R kibana:kibana /var/log/kibana/*
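One caveat: /var/run is a tmpfs on CentOS 7, so the PID file created above will vanish at reboot. A hedged sketch using systemd-tmpfiles to recreate it at boot (the file name and mode are assumptions):

```shell
# Recreate the Kibana PID file automatically on every boot
cat > /etc/tmpfiles.d/kibana.conf << EOF
f /var/run/kibana.pid 0644 kibana kibana -
EOF
# Apply the rule immediately without rebooting
systemd-tmpfiles --create /etc/tmpfiles.d/kibana.conf
```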

Increase the Kibana Heap Memory

# vim /etc/default/kibana
 
NODE_OPTIONS="--max-old-space-size=4096"

Add the port 5601 as HTTP service in SELinux.

[root@elastic ~]# semanage port -a -t http_port_t -p tcp 5601
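To double-check the SELinux change, list the ports mapped to the http_port_t type:

```shell
# 5601 should now appear in the http_port_t list
semanage port -l | grep http_port_t
```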

Finally, enable and start the Kibana service.

# systemctl daemon-reload
# systemctl enable kibana.service
# systemctl start kibana.service
[root@elastic ~]# systemctl status kibana.service
 ● kibana.service - Kibana
    Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
    Active: active (running) since Sat 2020-03-14 12:13:10 +04; 2min 24s ago
  Main PID: 970 (node)
    CGroup: /system.slice/kibana.service
            └─970 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
 Mar 14 12:13:10 localhost.localdomain systemd[1]: Started Kibana.

Configure the Wazuh plugin with the API credentials and make sure to use HTTPS.

# vim /usr/share/kibana/plugins/wazuh/wazuh.yml
hosts:
  - default:
      url: https://192.168.0.130
      port: 55000
      user: babinlonston
      password: xxxxxxxxx
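Before opening the dashboard, the API credentials can be verified straight from the shell; -k skips certificate validation, since the Wazuh API ships with a self-signed certificate by default:

```shell
# A successful call returns a small JSON document with the API version
curl -u babinlonston:xxxxxxxxx -k https://192.168.0.130:55000/version
```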

Access Kibana Graphical Dashboard

Navigate to http://elastic.linuxsysadmins.local:5601 to access the Kibana dashboard.

The status of the API can be checked from the Kibana graphical interface.

Click the Wazuh icon on the left side, then click the gear icon on the right-hand side to open the API configuration window, which shows:

  1. Wazuh Manager server.
  2. The IP address of the Wazuh server.
  3. Port of the API.
  4. User account used for API communication.
  5. Status of the API communication.

Now, let me add all my physical servers as agents.

Agent Installation for Debian-Based Servers

Install the Agent

A few of my physical servers need to be monitored, so let’s add those Debian-based servers to the Wazuh Manager. The agent setup begins with resolving the required dependencies.

# apt-get install curl apt-transport-https lsb-release gnupg2

root@pve1:~# apt-get install curl apt-transport-https lsb-release gnupg2
 Reading package lists… Done
 Building dependency tree       
 Reading state information… Done
 apt-transport-https is already the newest version (1.8.2).
 gnupg2 is already the newest version (2.2.12-1+deb10u1).
 lsb-release is already the newest version (10.2019051400).
 curl is already the newest version (7.64.0-4+deb10u1).
 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@pve1:~#

Add the GPG key for the Wazuh agent, then add the repository and update the package index.

# curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
# echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee /etc/apt/sources.list.d/wazuh.list
# apt-get update

Start the agent installation by running

# apt-get install wazuh-agent
root@pve1:~# apt-get install wazuh-agent
 Reading package lists… Done
 Building dependency tree       
 Reading state information… Done
 wazuh-agent is already the newest version (3.11.4-1).
 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@pve1:~#

Adding the Client

To add the client we need to be on the Wazuh Manager. When running the manage_agents command, use the -a option with the client IP address or hostname, and the -n option for the name to be listed in the Wazuh Manager.

# /var/ossec/bin/manage_agents -a 192.168.0.11 -n pve1
[root@wazuh ~]# /var/ossec/bin/manage_agents -a 192.168.0.11 -n pve1

 ****************************************
 * Wazuh v3.11.4 Agent manager.         *
 * The following options are available: *
 ****************************************
    (A)dd an agent (A).
    (E)xtract key for an agent (E).
    (L)ist already added agents (L).
    (R)emove an agent (R).
    (Q)uit.
 Choose your action: A,E,L,R or Q:
 Adding a new agent (use '\q' to return to the main menu).
 Please provide the following:
    * A name for the new agent:
    * The IP Address of the new agent:
 Confirm adding it?(y/n):
 Agent added with ID 003.
 manage_agents: Exiting.
[root@wazuh ~]#

List and verify all the added agents

[root@wazuh ~]# /var/ossec/bin/manage_agents -l
 Available agents: 
    ID: 003, Name: pve1, IP: 192.168.0.11
[root@wazuh ~]#

Now, we need to extract the key for an agent using its ID number.

[root@wazuh ~]# /var/ossec/bin/manage_agents -e 003
 Agent key information for '003' is: 
 MDAzIHB2ZTEgMTkyLjE2OC4wLjExIGFhYjBhMGRhYmY4OWFhZjBhMzI5MTk4ZTFkMzg4ODUxODRjOGM0YjQ0NDQ5YTU5N2EyYWYyYTAzNTNhMWExZTY=
[root@wazuh ~]#
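The exported key is simply base64 over "&lt;id&gt; &lt;name&gt; &lt;ip&gt; &lt;secret&gt;", so it can be sanity-checked before pasting it on the client; a quick sketch with the key above:

```shell
# Decode the key and confirm the ID, name and IP match the agent we added
KEY='MDAzIHB2ZTEgMTkyLjE2OC4wLjExIGFhYjBhMGRhYmY4OWFhZjBhMzI5MTk4ZTFkMzg4ODUxODRjOGM0YjQ0NDQ5YTU5N2EyYWYyYTAzNTNhMWExZTY='
echo "$KEY" | base64 -d; echo
```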

Once the key is ready, switch to the client side and run the manage_agents command with the -i option to import the authentication key.

root@pve1:~# /var/ossec/bin/manage_agents -i MDAzIHB2ZTEgMTkyLjE2OC4wLjExIGFhYjBhMGRhYmY4OWFhZjBhMzI5MTk4ZTFkMzg4ODUxODRjOGM0YjQ0NDQ5YTU5N2EyYWYyYTAzNTNhMWExZTY=
 Agent information:
    ID:003
    Name:pve1
    IP Address:192.168.0.11
 Confirm adding it?(y/n): y
 Added.
root@pve1:~#

That’s it, let’s restart the agent.

# systemctl restart wazuh-agent
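Back on the Wazuh Manager, we can confirm the agent actually connected; the agent_control tool ships with Wazuh, and its -lc option lists only the connected agents:

```shell
# The pve1 agent should show up here once it is active
/var/ossec/bin/agent_control -lc
```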

We have added a few agents and they are active.

That’s it, we have completed the Wazuh client-side setup.

Troubleshooting and fixes.

If Elasticsearch or Kibana fails to start, troubleshoot with the commands below.

# journalctl _SYSTEMD_UNIT=kibana.service
# journalctl -f _SYSTEMD_UNIT=kibana.service

Solution for JavaScript Heap Out of Memory in Kibana

If we skip increasing the Node.js heap memory for Kibana, we may encounter the errors below.

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

Reference URL: https://groups.google.com/forum/#!topic/wazuh/Ji7-l3Y8JK8

  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: <--- JS stacktrace --->
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: ==== JS stack trace =========================================
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 0: ExitFrame [pc: 0x1b22d6a5be1d]
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: Security context: 0x20f68519e6e9 
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 1: add [0x20f685191831](this=0x22d475c05fc9 ,0x11a62cea3299 )
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 2: prepend_comments [0xd64b2eba221] [/usr/share/kibana/node_modules/terser/dist/bundle.min.js:1] [bytecode=0x142c02cb7e9 
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 3: /* anonym…
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 1: 0x8fa090 node::Abort() [/usr/share/kibana/bin/../node/bin/node]
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 2: 0x8fa0dc  [/usr/share/kibana/bin/../node/bin/node]
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 3: 0xb0052e v8::Utils::ReportOOMFailure(v8::internal::Isolate, char const, bool) [/usr/share/kibana/bin/../node/bin/nod
  Mar 13 17:35:45 elastic.linuxsysadmins.local kibana[11128]: 4: 0xb00764 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate, char const, bool) [/usr/share/kibana/bin/.

Solution for a Warning Connecting to the Wazuh API

If your Kibana dashboard shows a warning connecting to the Wazuh API, or the error below.

3099 - ERROR3099 - Some Wazuh daemons are not ready in node 'node01' (wazuh-db->failed)

Switch to the Wazuh Manager server and restart the services.

# systemctl restart wazuh-api
# systemctl restart wazuh-manager

A few more troubleshooting steps we have gone through are as follows.

[root@elastic ~]# curl -X GET "192.168.0.131:9200/_cat/indices/.kib*?v&s=index"
  health status index                  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
  green  open   .kibana_1              bfgsdC23TWCcdiTCqZsOHg   1   0          0            0       230b           230b
  green  open   .kibana_task_manager_1 47p0T3PjTaKRzmhOv1IWOg   1   0          0            0       283b           283b
[root@elastic ~]#
[root@elastic ~]# curl -XDELETE http://192.168.0.131:9200/.kibana_1
{"acknowledged":true}
[root@elastic ~]#

We will keep updating these troubleshooting steps as we face similar issues in the future.

Conclusion

The open-source Wazuh security tool provides production-ready software for analyzing our logs. We will come up with more articles related to Wazuh. Subscribe to our newsletter and stay with us.
