All Posts

Hello. I have a data source that is "mostly" JSON formatted, except it uses single quotes instead of double quotes, so Splunk does not parse it when I set the sourcetype to json. If I run a query against it like this:

sourcetype="test" | rex field=_raw mode=sed "s/'/\"/g" | spath

it works fine, and all fields are extracted. How can I configure props and transforms to perform this change at index time, so that my users don't need the additional search commands and all the fields are extracted by default, short of manually extracting each field? Example event, no nested fields:

{'date': '2024-02-10', 'time': '18:59:27', 'field1': 'foo', 'field2': 'bar'}
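A common way to handle this (not confirmed in this thread, just a sketch of the usual technique) is a SEDCMD in props.conf, which rewrites _raw during parsing, combined with KV_MODE = json so search-time extraction works on the now-valid JSON. This assumes the sourcetype is "test" and that props.conf is deployed to the parsing tier (indexers or heavy forwarders):

# props.conf on the parsing tier
[test]
# rewrite single quotes to double quotes before index-time processing
SEDCMD-fix_quotes = s/'/"/g
# treat the repaired event as JSON at search time
KV_MODE = json

Note this only affects events indexed after the change; data already indexed keeps the single quotes.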
Andreas, thanks for the quick response. Unfortunately, I am using Splunk Cloud, and I see in your "curl.py" file that VERIFYSSL is "Forced to be True for Splunk Cloud Compatibility". So, while "curl -k" works from the Linux command line on my Splunk server, in Splunk SPL the "| curl verifyssl=false" option is overridden in the add-on's Python code. Is there any way to override this? If not, I will have to find another way to do this, as I am constrained by my environment.
Just happened to us now... Do we know if this fixed it, and/or what the initial cause was? This was just after a splunkd restart.
You have probably also configured this: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureauthextensionsforSAMLtokens ? Maybe it's time for a support ticket?
Hi @Josua.Panjaitan, Did the reply above help? If so, take a quick second to click the "Accept as Solution" button on the reply that helped. If not, reply to this thread and keep the conversation going. 
Hi @Sarath Kumar.Sarepaka, If you have not yet seen this AppDynamics Documentation, please check it out and see if it helps.
Those are not very many or particularly heavy inputs. Maybe you should still add another pipeline and check whether it helps? Based on the number of entries from audit.log, the volume is quite low. Can you check whether there really are so few entries at the source? If those are entries from one Linux node over a 90-minute period, that node is really lightly used.
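For reference, a second ingestion pipeline on a UF is enabled in server.conf (a generic sketch, not something this thread has confirmed is needed; the default is 1, and every extra pipeline consumes additional CPU and memory):

# $SPLUNK_HOME/etc/system/local/server.conf on the forwarder
[general]
parallelIngestionPipelines = 2

A forwarder restart is required for the change to take effect.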
Thank you Isoutamo. I have Classic experience.
I am collecting logs from some files under /var/log, plus sysmon from journald. Event counts for the last 90 minutes:

/opt/splunkforwarder/var/log/splunk/audit.log 41
/opt/splunkforwarder/var/log/splunk/health.log 39
/opt/splunkforwarder/var/log/splunk/metrics.log 8911
/opt/splunkforwarder/var/log/splunk/splunkd.log 598
/var/log/audit/audit.log 7
/var/log/messages 936
/var/log/secure 10
journald://sysmon 919

inputs.conf:

[monitor:///var/log/syslog]
disabled = 0
sourcetype = syslog
index = linux

[monitor:///var/log/messages]
disabled = 0
sourcetype = syslog
index = linux

[monitor:///var/log/secure]
disabled = 0
sourcetype = linux_secure
index = linux

[monitor:///var/log/auth.log]
disabled = 0
sourcetype = linux_secure
index = linux

[monitor:///var/log/audit/audit.log]
disabled = 0
sourcetype = linux_audit
index = linux

[journald://sysmon]
interval = 5
journalctl-quiet = true
journalctl-include-fields = PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE
journalctl-exclude-fields = __MONOTONIC_TIMESTAMP,__SOURCE_REALTIME_TIMESTAMP
journalctl-filter = _SYSTEMD_UNIT=sysmon.service
sourcetype = sysmon:linux
index = linux

I did not change the number of pipelines; I think the default count is 1. I will find out the OS version later. I do not have direct access to the OS. I think it is CentOS/RedHat 8 or 9, but I may be wrong.
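If the question is whether the forwarder's own queues are the bottleneck, one common check (a generic sketch; the host value is an assumption, substitute your forwarder's name) is to chart queue fill from the UF's metrics.log:

index=_internal host=<your_uf_hostname> source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb/max_size_kb*100,1)
| timechart span=5m max(pct_full) by name

Queues that sit near 100% (for example parsingqueue or the tcpout queue) would indicate where ingestion is stalling.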
Hi @Taruchit, let us know if we can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @Nivedita.Kumari, I'm unsure if you are using classic dashboards or Dashboard Studio, but here is a link to all our Dashboard Documentation.
I was wondering if there is a Splunk app or a feature available to add a search bar when filtering by Splunk app. Every time, you have to scroll for a bit just looking for the correct Splunk app, even if it's just the Search app. Is there a way to add a search bar for the apps? We have one for other pages and options. I may be overlooking something.
thanks a lot @Esky73 
My presentation about Data Onboarding for the Helsinki UG: https://data-findings.com/wp-content/uploads/2024/04/Data-OnBoarding-2024-04-03.pdf  It contains some hints and a workflow for how you can test data onboarding on your own workstation.
Which kind of logs are you collecting? Is it possible that there is some log or input which stalled after it was read, and the UF is just waiting for free resources to read the next one? Do you have only one pipeline or several in your UF? Any performance data from the OS level, and which OS and version do you have?
Hi @gcusello, Thank you for sharing your inputs. I have a report which fetches the last-seen timestamp of hosts across multiple indexes. I store the results in a lookup file, and then use the lookup file as a bounded static source from which other reports and dashboards can read the results as required. It helps me in two scenarios:

1. If the report that generates the results fails for some reason, the downstream dashboards and reports that consume the data will also be impacted, and I will need to wait for the Operations team to help with the issue, or wait until the report runs again and hope that the next execution succeeds.
2. Since I am referring to a lookup file, fetching and searching records in the SPL written for reports and dashboards is faster.

Please share if you have any views to consider and improve. Thank you.
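For context, the usual shape of this pattern (a generic sketch; the lookup name host_last_seen.csv and the field names are illustrative, not taken from this thread) is a scheduled report that writes the lookup, and consumers that only read it:

Scheduled report:
index=idx1 OR index=idx2 OR index=idx3
| stats max(_time) as last_seen by host
| outputlookup host_last_seen.csv

Dashboards and other reports:
| inputlookup host_last_seen.csv
| eval last_seen_readable=strftime(last_seen, "%Y-%m-%d %H:%M:%S")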
If you need parallelism, then you must use KV store-based lookups, not CSV-based ones.
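A KV store-backed lookup is defined roughly like this (a sketch with assumed names host_last_seen_collection and host_last_seen_kv; adjust the fields to your data):

# collections.conf
[host_last_seen_collection]
field.host = string
field.last_seen = number

# transforms.conf
[host_last_seen_kv]
external_type = kvstore
collection = host_last_seen_collection
fields_list = _key, host, last_seen

It can then be written with | outputlookup host_last_seen_kv and read with | inputlookup host_last_seen_kv just like a CSV lookup, but the KV store handles concurrent access better than a flat CSV file.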
Hi, the easiest way is to use separate virtual machines, but you could also do it e.g. on a single Linux box in a lab environment. In production this is not the recommended way to do it. On Linux you can just install the indexer first and start it. Then you can install the UF and start it with a different management port, and probably some other ports as well. You just have to check the correct parameters in the docs (I cannot find those now). But if I recall correctly, the UF will tell you that the normal ports are reserved and ask you to use some other ports. r. Ismo
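For example (a sketch, assuming default install paths and that port 8090 is free; the port number itself is an arbitrary choice), the second instance's management port can be moved either with the CLI:

$SPLUNK_HOME/bin/splunk set splunkd-port 8090

or in $SPLUNK_HOME/etc/system/local/web.conf:

[settings]
mgmtHostPort = 127.0.0.1:8090

The indexer keeps the default 8089 while the UF on the same box uses 8090.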
Hi @PickleRick, thanks for the response. I tried something similar to this: I tried to fetch %userprofile% and save it to a variable, and then call the variable as part of another command, but it didn't help. Can you give an example?
It seems that Splunk doesn't support using a normal (custom) command's return data as a value for eval. I suppose you must rewrite your custom command to work as a function in order to use it with eval. What is the actual issue you are trying to solve with eval in this way? Maybe there is some other way to do it; otherwise you must create an additional custom function or something similar.