All Posts


I am collecting logs from some files in /var/log and sysmon events from journald. Event counts for the last 90 minutes:

/opt/splunkforwarder/var/log/splunk/audit.log 41
/opt/splunkforwarder/var/log/splunk/health.log 39
/opt/splunkforwarder/var/log/splunk/metrics.log 8911
/opt/splunkforwarder/var/log/splunk/splunkd.log 598
/var/log/audit/audit.log 7
/var/log/messages 936
/var/log/secure 10
journald://sysmon 919

inputs.conf:

[monitor:///var/log/syslog]
disabled = 0
sourcetype = syslog
index = linux

[monitor:///var/log/messages]
disabled = 0
sourcetype = syslog
index = linux

[monitor:///var/log/secure]
disabled = 0
sourcetype = linux_secure
index = linux

[monitor:///var/log/auth.log]
disabled = 0
sourcetype = linux_secure
index = linux

[monitor:///var/log/audit/audit.log]
disabled = 0
sourcetype = linux_audit
index = linux

[journald://sysmon]
interval = 5
journalctl-quiet = true
journalctl-include-fields = PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE
journalctl-exclude-fields = __MONOTONIC_TIMESTAMP,__SOURCE_REALTIME_TIMESTAMP
journalctl-filter = _SYSTEMD_UNIT=sysmon.service
sourcetype = sysmon:linux
index = linux

I did not change the number of pipelines; I think the default count is 1. I will find out the OS version later, as I do not have direct access to the OS. I think it is CentOS/RHEL 8 or 9, but I may be wrong.
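For reference, the ingestion pipeline count is controlled in server.conf on the forwarder. A minimal sketch, assuming the default of 1 is raised to 2 (the stanza and setting name are standard; the value is only an example):

[general]
parallelIngestionPipelines = 2

Raising this lets the UF read and forward more than one input concurrently, at the cost of extra CPU and memory on the host.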
Hi @Taruchit, let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @Nivedita.Kumari, I'm unsure if you are using classic dashboards or Dashboard Studio, but here is a link to all our Dashboard documentation.
I was wondering if there is a Splunk app or a feature available to add a search bar when filtering by Splunk app. Every time, you have to scroll for a while just looking for the correct Splunk app, even if it's just the Search app. Is there a way to add a search bar for the apps? We have one for other pages and options. I may be overlooking something.
thanks a lot @Esky73 
My presentation about Data Onboarding for the Helsinki UG: https://data-findings.com/wp-content/uploads/2024/04/Data-OnBoarding-2024-04-03.pdf It contains some hints and a workflow for how you could test data onboarding on your own workstation.
Which kind of logs are you collecting? Is it possible that there is some log or input which stalled after it was read, and the UF is just waiting for free resources to read the next one? Do you have only one pipeline or several in your UF? Do you have any performance data from the OS level, and which OS and version do you have?
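One way to check whether the UF is waiting on a blocked queue is to look at its own metrics.log in the _internal index. A minimal sketch, assuming the forwarder's internal logs reach the indexer and its hostname is my-uf-host (a placeholder):

index=_internal source=*metrics.log group=queue host=my-uf-host
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY name

Queues that sit near 100% (typically the parsing queue or the tcpout queue) point to where the pipeline is stalling.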
Hi @gcusello, Thank you for sharing your inputs. I have a report which fetches the last-seen timestamp of hosts across multiple indexes. I store the results in a lookup file, and then use the lookup file as a bounded, static source from which the results can be read in other reports and dashboards as required. It helps me with two scenarios:
1. If the report that generates the results fails for some reason, the downstream dashboards and reports that consume the data will also be impacted, and I will need to wait for the Operations team to help with the issue, or wait until the report runs again and hope that the next execution succeeds.
2. Since I am referring to a lookup file, the fetching and searching of records in the SPL written for reports and dashboards gets faster.
Please share if you have any views to consider and improve. Thank you
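A minimal sketch of this caching pattern, with hypothetical index and lookup names (host_last_seen.csv is only an example): a scheduled report writes the lookup, and dashboards read it back.

Scheduled report (writes the cache):
index=linux OR index=wineventlog
| stats latest(_time) AS last_seen BY host
| outputlookup host_last_seen.csv

Dashboard or report panel (reads the cache):
| inputlookup host_last_seen.csv
| eval last_seen = strftime(last_seen, "%Y-%m-%d %H:%M:%S")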
If you need parallelism, then you must use KV store based lookups, not CSV based ones.
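A minimal sketch of a KV store backed lookup, with hypothetical names (the host_last_seen collection and host_last_seen_lookup definition are only examples); the collection is defined in collections.conf and exposed as a lookup in transforms.conf:

collections.conf:
[host_last_seen]
field.host = string
field.last_seen = time

transforms.conf:
[host_last_seen_lookup]
external_type = kvstore
collection = host_last_seen
fields_list = _key, host, last_seen

The report then writes with | outputlookup host_last_seen_lookup and dashboards read with | inputlookup host_last_seen_lookup, exactly as with a CSV, but the KV store handles concurrent access and per-record updates.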
Hi, the easiest way is to use separate virtual machines, but you could also do it e.g. on a single Linux box in a lab environment. In production this is not the proposed way to do it. On Linux you can just install the indexer first and start it. Then you could install the UF and start it with different ports for management and probably some others. You just must check the correct parameters in the docs (I cannot find those now). But if I recall right, the UF will tell you that the normal ports are reserved and ask you to use some other ports. r. Ismo
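As a sketch of the port change (assuming default install paths; the port value 8090 is only an example), the management port of the second instance can be moved in its local web.conf, e.g. /opt/splunkforwarder/etc/system/local/web.conf on the UF:

[settings]
mgmtHostPort = 127.0.0.1:8090

If a port is still taken by the already-running indexer, Splunk should detect the conflict at first start and prompt for an alternative.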
Hi @PickleRick, thanks for the response. I tried something similar to this: I tried to fetch %userprofile%, save it to a variable, and then call the variable as part of another command, but it didn't help. Can you give an example?
It seems that Splunk does not support using the return data of a normal (custom) search command as a value for eval. I suppose that you must update your custom command to work as a function to use it with eval. What is the actual issue you are trying to solve with this eval approach? Maybe there is some other way to do it, or otherwise you must create an additional custom function or something similar.
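A common workaround, sketched here with a hypothetical streaming command mycustomcommand that adds a field named result: run the command first so its output lands in a field, then use eval on that field instead of trying to call the command inside eval.

| makeresults
| mycustomcommand
| eval doubled = result * 2

This way eval only ever sees ordinary field values, which it can handle without any custom function support.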
Hello! I am trying to upgrade to the latest version of Splunk Enterprise 9.3 on a RHEL 8 server, but I am getting this error message after accepting the license. Has anyone seen this error? I have checked the permissions, and they are all fine. Thanks!

Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 846, in exec_module
  File "<frozen importlib._bootstrap_external>", line 982, in get_code
  File "<frozen importlib._bootstrap_external>", line 1039, in get_data
PermissionError: [Errno 1] Operation not permitted: '/opt/splunk/lib/python3.9/encodings/__init__.py'
Hi @sherwin_r, if you cannot follow the solution from @isoutamo, the only way is my first solution: modify apps.conf on the Deployer and push the app to the Search Head Cluster. Ciao. Giuseppe
Hi @AL3Z , in this case, you have to configure two virtual machines that are connected: one with Splunk Enterprise and one with Splunk Universal Forwarder. Ciao. Giuseppe
Hi @chimuru84, in this case you have to add some fields to the stats command, but the approach is always the same:

index=...... earliest=-2y latest=-h [ search index=...... earliest=-h latest=now | dedup id | fields id ]
| eval Period=if(_time>now()-31536000, "last Year", "Previous Year")
| stats dc(Period) AS Period_count values(Period) AS Period earliest(_time) AS first_date latest(_time) AS last_date BY id
| where Period_count=1 AND Period!="Previous Year"
| eval nr_of_days=round((last_date-first_date)/86400,0), first_date=strftime(first_date,"%Y-%m-%d %H:%M:%S"), last_date=strftime(last_date,"%Y-%m-%d %H:%M:%S")
| table id nr_of_days first_date last_date

Ciao. Giuseppe
Unfortunately I don't think that you have any other option. That was the reason why @gjanders did this app...
Hi @Taruchit, in addition, you could use a lookup with KV store, so you'll have a key that guarantees the uniqueness of the data. Only one question: you're trying to use Splunk as a database, and Splunk isn't a database. Are you sure that you're using the best solution for your requirements? Ciao. Giuseppe
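As a sketch of the uniqueness point (the lookup name host_last_seen_lookup and the key field host are hypothetical), a KV store lookup can be updated by key so each host keeps exactly one record:

index=linux
| stats latest(_time) AS last_seen BY host
| outputlookup host_last_seen_lookup append=true key_field=host

With append=true and key_field, records whose key already exists are updated in place rather than duplicated, which a plain CSV lookup cannot do.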
I will give it a try, thanks for the help
The lookup file is written by the search head in one go.  There is no parallelism.