All Topics

Hi all, I am running a dashboard that returns the total count (stats count) of events with a field containing Severity=Ok or Severity=Critical. The requirement is: if at least one event has Severity=Critical, the panel color should turn red; otherwise, when everything is Severity=Ok, it should be green. Can someone please suggest how to do this?
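A sketch of one way this might work, assuming the search drives a single-value panel in Simple XML (the index name is a placeholder): compute a flag that is non-zero whenever any Critical event exists.

```spl
index=my_index Severity=Ok OR Severity=Critical
| stats count(eval(Severity="Critical")) as critical_count count as total_count
| eval status=if(critical_count > 0, 1, 0)
| table status
```

With <option name="rangeValues">[0]</option> and <option name="rangeColors">["0x53a051","0xdc4e41"]</option> on the single-value panel, a status of 0 would then render green and anything higher red.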
Hello, I have sources that contain whitespace and I want to count them. What is the regex to find all the sources with spaces? Thanks
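A sketch of one way to do this with match() rather than an explicit rex (the index is a placeholder; \s matches any whitespace character):

```spl
index=my_index
| stats count by source
| where match(source, "\s")
```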
As mentioned in https://advisory.splunk.com/advisories/SVD-2023-0606 under "Mitigations and Workarounds": "users can protect themselves from log injections via ANSI escape characters in general, by disabling the ability to process ANSI escape codes". This statement is very generic, and we cannot find any articles on the internet about how to do it. Take, for example, PuTTY, the SSH client most users work with. Is the statement saying to disable processing of ANSI escape codes in such a terminal application? If so, where can we find documentation on disabling ANSI escape codes? This is just one example. Since this is a mitigation/workaround, we also need to carefully weigh its pros and cons. Please correct me if my understanding is wrong. Any help or pointers to make us understand the above point would be of great help.
I have a problem with the timestamp when parsing the data. I want the dates to start with 28/04/2023 and end with 03/05/2023, but the results start with 30/04, then 29/04, and end with 28/04. How can I make the data start with 28/04 instead of 30/04?
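If the dates themselves are parsed correctly and only the display order is the issue (Splunk returns events newest-first by default), an explicit ascending sort might help; a sketch (the index is a placeholder, and the 0 argument removes the default 10,000-result sort limit):

```spl
index=my_index
| sort 0 _time
```

If the timestamps are actually being parsed wrong (day and month swapped), that would instead point to the TIME_FORMAT setting in props.conf for the sourcetype.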
Receiving the error below; can someone help with a solution?

Streamed search execute failed because: Error in 'lookup' command: Script execution failed for external search command '/splunk/var/run/searchpeers/xxxx/apps/utbox/bin/ut_shannon.py'
Hello, we plan to try Kafka as a data collector, and we'd like to know whether we should keep our HF to receive HEC inputs for the Kafka data or send directly to the indexers - about 200-300 GB per day. It looks like an HF is better for filtering before indexing. Thanks.
Hi, I would like to know if it is possible to perform a search in Splunk to find out whether "rex" is used in any of my dashboard searches. Kind regards, Marta
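A sketch using the REST API to scan dashboard definitions for rex usage (the wildcard pattern is an assumption; the eai:data field holds each view's XML source, including its searches):

```spl
| rest /servicesNS/-/-/data/ui/views
| search "eai:data"="*| rex *"
| table title "eai:acl.app" "eai:acl.owner"
```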
Hello, I have this code for a clickable chart to show details about the bar graph shown below:

index="" host= sourcetype=csv source=C:\\....\\*
| dedup source iswID iswCQR iswCC
| table iswID iswTitle iswCQR iswCCsource
| where iswCQR !=""
| eval YYYY_CW_DD=split(source,"\\")
| eval YYYY_CW_DD=substr(mvindex(YYYY_CW_DD, mvcount(YYYY_CW_DD)-1),1,11)
| eval test1=if((iswCC="New Requirement") and (iswCQR !="No" and iswCQR !="Quoted" and iswCQR !="Accepted"), 1, 0)
| stats sum(test1) as "New requirement without No, Quoted, Accepted" by YYYY_CW_DD
| where YYYY_CW_DD="$date2$"
| where 'New requirement without No, Quoted, Accepted'="$yaxis2$"

The drilldown tokens are set as $date2$ = $click.value$ and $yaxis2$ = $click.name2$. The chart code is almost the same as this, apart from the addition of the drilldown tokens. The goal is to show in the details table all the IDs that are summed inside the bar graph; for instance, the bar with the value 86 (shown in the screenshot below) should return the 86 IDs listed in the table. How can I do that?
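A sketch of one way the details search might list the underlying IDs instead of the sum: keep the per-event rows, apply the same filters that feed the bar (the clicked date and the "new requirement" conditions), and table the IDs rather than running stats.

```spl
index="" host= sourcetype=csv source=C:\\....\\*
| dedup source iswID iswCQR iswCC
| eval YYYY_CW_DD=split(source,"\\")
| eval YYYY_CW_DD=substr(mvindex(YYYY_CW_DD, mvcount(YYYY_CW_DD)-1),1,11)
| where YYYY_CW_DD="$date2$"
| where iswCC="New Requirement" AND iswCQR!="No" AND iswCQR!="Quoted" AND iswCQR!="Accepted"
| table iswID iswTitle iswCQR
```

This assumes the drilldown panel only needs the clicked date token; since every row here contributes 1 to the sum, the table should contain exactly as many IDs as the bar's value.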
Installed the app and could launch and get to the search page. However, I am unable to execute any search that uses the exportalerts command. I get the error message: "Error in 'exportalerts' command: Cannot find program 'exportalerts' or script 'exportalerts'."
Hi everyone, how do I set things up so that when I click on an area on the map, it links to another dashboard? For example, as in the picture: when I click on Beijing, it should link to dashboard A; when I click on Shanghai, it should link to dashboard B. How do I set this up?
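A sketch of how a conditional drilldown might look in the panel's Simple XML (untested; the dashboard paths are placeholders, and the match-expression syntax is an assumption that may vary by Splunk version):

```xml
<drilldown>
  <condition match="$click.value$ == &quot;Beijing&quot;">
    <link target="_blank">/app/search/dashboard_a</link>
  </condition>
  <condition match="$click.value$ == &quot;Shanghai&quot;">
    <link target="_blank">/app/search/dashboard_b</link>
  </condition>
</drilldown>
```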
I am trying to make the first two columns of a table output sticky. I can do one by using:

<html>
  <style>
    #myTable th:first-child, td:first-child {
      left: 0;
      z-index: 9999;
      position: sticky;
    }
  </style>
</html>

The above code works for one column on the left, but I want two to be sticky.
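A sketch of one way to extend this to a second column: give the second column its own left offset so it pins next to the first (the 120px offset is an assumption; it must match the actual rendered width of the first column).

```html
<html>
  <style>
    #myTable th:nth-child(2), td:nth-child(2) {
      left: 120px; /* assumed width of the first column; adjust to fit */
      z-index: 9999;
      position: sticky;
    }
  </style>
</html>
```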
I'm trying to do a simple query to get a hostname from events in a different sourcetype. I have an event in sourcetype A, which doesn't have a field "host_name". This field is present in sourcetype B. The index is the same; let's call it X. Both events can be matched through the field "sensor_id". I want to retrieve the field "process_command_line" from sourcetype A and host_name from sourcetype B for the events that match the same "sensor_id". Here's a sample query that works:

index=X sourcetype=B [search index=X sourcetype=A | table sensor_id]
| table sensor_id host_name

However, I also need to retrieve process_command_line, which is only present in sourcetype A. If I add that to the subsearch, it retrieves zero results:

index=X sourcetype=B [search index=X sourcetype=A | table sensor_id process_command_line]
| table sensor_id host_name process_command_line

Any idea how I can retrieve all three fields?
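The second subsearch likely returns zero results because its output (sensor_id AND process_command_line pairs) is applied as search terms against sourcetype B, where process_command_line never exists. A sketch of an alternative that avoids the subsearch by combining both sourcetypes with stats (this assumes sensor_id is a reliable join key):

```spl
index=X (sourcetype=A OR sourcetype=B)
| stats values(process_command_line) as process_command_line values(host_name) as host_name by sensor_id
| where isnotnull(process_command_line) AND isnotnull(host_name)
```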
Register here and ask questions below. This thread is for the Community Office Hours session on Splunk Observability and OpenTelemetry on Wed, September 27, 2023 at 1pm PT / 4pm ET.    This is your opportunity to ask questions related to your specific Observability challenge or use case.   Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there’s a quick answer available, we’ll post as a direct reply.   Look forward to connecting!
Register here and ask questions below. This thread is for the Community Office Hours session on Splunk IT Service Intelligence (ITSI) on Wed, September 13, 2023 at 1pm PT / 4pm ET.    This is your opportunity to ask questions related to your specific ITSI challenge or use case, including: ITSI installation and troubleshooting, including Splunk Content Packs  Implementing ITSI use cases and procedures How to organize and correlate events Using machine learning for predictive alerting How to maintain accurate & up-to-date service maps Creating ITSI Glass Tables, leveraging performance dashboards (e.g., Episode Review), and anything else you’d like to learn!   Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants. If there’s a quick answer available, we’ll post as a direct reply.   Look forward to connecting!
I am trying to define various functions for each component level. However, I have multiple Splunk environments, and I want to split the indexer group by region. Say I have 2 indexers in region A, 5 in region B, and 1 in region C. Apart from splunk_server_group=dmc_group_indexers, can I call a custom group in my REST query to fetch only the indexers in a particular region? Or is it possible to do this via a custom macro? Please throw some light on this.
[EMEA-friendly: 8am PT / 4pm UK time] - Register here and ask questions below. This thread is for the Community Office Hours session on Getting Data In (GDI) to Splunk Platform on Wed, September 6, 2023 at 8am PT / 11am ET / 4pm UK time   This is your opportunity to ask questions related to your specific GDI challenge or use case, including: How to onboard common data sources (AWS, Azure, Windows, *nix, etc.) Using forwarders Apps to get data in Data Manager (Splunk Cloud Platform) Ingest actions, archiving your data, and anything else you’d like to learn!   Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there’s a quick answer available, we’ll post as a direct reply.   Look forward to connecting!
Hello everyone, I'm trying to use the new custom containers feature to create a container with the numpy, pandas, and feature-engine packages. The container is created successfully, but every time I try to send data to it the search returns the message "container unable to read JSON response from http://localhost:<api_port>/fit". It only happens with the custom container; the mltk-container-golden-image-cpu:5.0.0 image continues to work well. I've tried almost all the solutions I could find for this error, but none of them work. Can anyone help me, please?
I would like to sync my production search head to a development search head on a daily/weekly basis. I need the same apps/configs on the development server for testing before moving approved configs back into production. Any tips on building a development search head and pulling in the production configs/apps?
Hello Resilience Questers! The adventure has truly begun, and we are excited to unveil the first official leaderboard for "The Great Resilience Quest"! It's been an incredible journey so far, and we've seen some fantastic efforts from all our participants. For those new to the quest, it's not too late to join! "The Great Resilience Quest" is our interactive game designed to fortify your understanding of achieving digital resilience with Splunk through engaging real-world use cases. Join us now and embark on this epic journey. Learn more and sign up HERE. Check out the Leaderboard. Congratulations to our current leaders! How We Feature the Leaderboard: The leaderboard is determined by a combination of factors: the number of quests players have finished, the chapters they have completed, and the time taken to complete them. It's a multi-faceted approach that recognizes the true champions of resilience. What's next: Players who have been featured on the leaderboard are now placed in the pool for the special Champion's Tribute rewards. It is our way of honoring your efforts and encouraging you to continue on this exciting journey toward digital resilience mastery. Please stay tuned for the next leaderboard update in two weeks! Thank you all for participating, and may the best questers conquer! Best regards, Splunk Customer Success
I've been fighting all day trying to figure out what keeps causing the above error when starting Splunk. Here is some background:

OS: CentOS Stream 9
Kernel: Linux 5.14.0-295.el9.x86_64
Splunk: splunk-9.0.4.1-419ad9369127-Linux-x86_64.tgz

Earlier today I ran version 8.2.9, but as it kept failing I thought it could be the Splunk version and some systemd-related issues (which I've read quite a bit about); after upgrading, however, it's still the same. The service has been initiated as:

sudo /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk

sudo systemctl status Splunkd shows:

× Splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: loaded (/etc/systemd/system/Splunkd.service; enabled; preset: disabled)
Active: failed (Result: exit-code) since Wed 2023-08-02 19:53:42 CEST; 1h 37min ago
Duration: 983us
Process: 402262 ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd (code=exited, status=8)
Process: 402263 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/system.slice/Splunkd.service (code=exited, status=0/SUCCESS)
Main PID: 402262 (code=exited, status=8)
CPU: 7ms
aug 02 19:53:42 localhost.localdomain systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Converting job Splunkd.service/restart -> Splunkd.service/start
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Consumed 7ms CPU time.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Start request repeated too quickly.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed with result 'exit-code'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service restart not allowed.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed dead -> failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99417 Splunkd.service/start finished, result=failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Unit entered failed state.

If I run ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd directly from the command line, Splunk starts without any problems - I don't get it. I've edited /etc/systemd/system.conf and added:

LogLevel=debug

Running journalctl -xeu Splunkd.service writes:

░░
░░ The unit Splunkd.service completed and consumed the indicated resources.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Will spawn child (service_enter_start): /opt/splunk/bin/splunk
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: cgroup-compat: Applying [Startup]CPUShares=1024 as [Startup]CPUWeight=100 on /system.slice/Splunkd.service
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed to set 'io.weight' attribute on '/system.slice/Splunkd.service' to 'default 100': No such file or directory
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: cgroup-compat: Applying MemoryLimit=7922106368 as MemoryMax=
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Passing 0 fds to service
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: About to execute /opt/splunk/bin/splunk _internal_launch_under_systemd
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Forked /opt/splunk/bin/splunk as 402262
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Will spawn child (service_enter_start_post): /bin/bash
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: About to execute /bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/system.slice/Splunkd.service"
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Forked /bin/bash as 402263
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed dead -> start-post
aug 02 19:53:42 localhost.localdomain systemd[1]: Starting Systemd service file for Splunk, generated by 'splunk enable boot-start'...
░░ Subject: A start job for unit Splunkd.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit Splunkd.service has begun execution.
░░
░░ The job identifier is 99280.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: User lookup succeeded: uid=1002 gid=1002
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: User lookup succeeded: uid=1002 gid=1002
aug 02 19:53:42 localhost.localdomain systemd[402263]: Splunkd.service: Executing: /bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/system.slice/Splunkd.service"
aug 02 19:53:42 localhost.localdomain systemd[402262]: Splunkd.service: Executing: /opt/splunk/bin/splunk _internal_launch_under_systemd
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Child 402263 belongs to Splunkd.service.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Control process exited, code=exited, status=0/SUCCESS (success)
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ An ExecStartPost= process belonging to unit Splunkd.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 0.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Got final SIGCHLD for state start-post.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed start-post -> running
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99280 Splunkd.service/start finished, result=done
aug 02 19:53:42 localhost.localdomain systemd[1]: Started Systemd service file for Splunk, generated by 'splunk enable boot-start'.
░░ Subject: A start job for unit Splunkd.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit Splunkd.service has finished successfully.
░░
░░ The job identifier is 99280.
aug 02 19:53:42 localhost.localdomain splunk[402262]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 19:53:42 localhost.localdomain splunk[402262]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 19:53:42 localhost.localdomain splunk[402262]: ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Child 402262 belongs to Splunkd.service.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Main process exited, code=exited, status=8/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ An ExecStart= process belonging to unit Splunkd.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 8.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service has entered the 'failed' state with result 'exit-code'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service will restart (restart setting)
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed running -> failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Unit entered failed state.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Consumed 7ms CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service completed and consumed the indicated resources.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed failed -> auto-restart
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service RestartSec=100ms expired, scheduling restart.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Trying to enqueue job Splunkd.service/restart/replace
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Installed new job Splunkd.service/restart as 99417
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Enqueued job Splunkd.service/restart as 99417
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 5.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ Automatic restarting of the unit Splunkd.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed auto-restart -> dead
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99417 Splunkd.service/restart finished, result=done
aug 02 19:53:42 localhost.localdomain systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
░░ Subject: A stop job for unit Splunkd.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit Splunkd.service has finished.
░░
░░ The job identifier is 99417 and the job result is done.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Converting job Splunkd.service/restart -> Splunkd.service/start
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Consumed 7ms CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service completed and consumed the indicated resources.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Start request repeated too quickly.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service has entered the 'failed' state with result 'exit-code'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service restart not allowed.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed dead -> failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99417 Splunkd.service/start finished, result=failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
░░ Subject: A start job for unit Splunkd.service has failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit Splunkd.service has finished with a failure.
░░
░░ The job identifier is 99417 and the job result is failed.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Unit entered failed state.
In relation to the above error:

aug 02 22:14:58 localhost.localdomain systemd[404267]: Splunkd.service: Executing: /opt/splunk/bin/splunk _internal_launch_under_systemd
aug 02 22:14:58 localhost.localdomain splunk[404267]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 22:14:58 localhost.localdomain splunk[404267]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 22:14:58 localhost.localdomain splunk[404267]: ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment
aug 02 22:14:58 localhost.localdomain systemd[1]: Splunkd.service: Child 404267 belongs to Splunkd.service.
aug 02 22:14:58 localhost.localdomain systemd[1]: Splunkd.service: Main process exited, code=exited, status=8/n/a

The message

"/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied

does not make any sense, since the file is set as:

-rwxrwxrwx. 1 splunk splunk 765 2 aug 19:31 splunk-launch.conf

Any help would be highly appreciated.
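For what it's worth: a file with -rwxrwxrwx can still fail to open with "Permission denied" if any parent directory along the path lacks the execute (traverse) bit for the splunk user, or if SELinux (enabled by default on CentOS Stream 9) denies the access. A small sketch to inspect the whole path (the demo path /etc is a stand-in; substitute the real /opt/splunk/etc/splunk-launch.conf):

```shell
#!/bin/sh
# Print owner and permissions for a path and every directory above it.
# Every component must be traversable (x bit) by the user systemd runs
# the service as -- here, the 'splunk' user.
check_path() {
    p=$1
    while :; do
        ls -ld "$p" 2>/dev/null || echo "cannot stat: $p"
        [ "$p" = "/" ] && break
        p=$(dirname "$p")
    done
}

# Demo on a path that exists everywhere; in practice run:
#   check_path /opt/splunk/etc/splunk-launch.conf
check_path /etc
```

If the path permissions look fine, checking SELinux would be the next step, e.g. ausearch -m avc -ts recent for denials and restorecon -Rv /opt/splunk to restore default file contexts.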