All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I work in the Education field and have been using Splunk Enterprise since May 18, 2025. Yesterday, 16 Jun 2025, I ran into a login problem. I re-installed version 9.4.3 and tried to log in again, but got the same result. According to the message in the main black box, an admin should be the person who can resolve this issue, so I need service support from an admin or responsible POC. For the technical team's information: I am using a VPN, in case that could affect logging in. My university email, which I used when signing up, is oguz.unal@ogr.yesevi.edu.tr. My alternative email is  Kind regards, Ogz
A4server Beta is the first value, so no matter which sourcetype I choose, it only gives the values of A4server Beta in the Sourcetype, NewIndex and Domain fields.
Hello team, please help me modify this query so that it loops through all the values of the CSV file. Although it returns the clients and sensitivity of the selected sourcetype, the Sourcetype, Domain and NewIndex fields in the results only show the values of the first sourcetype, A4Server. For example, here the selected sourcetype is A4server, but the Sourcetype field shows A4ServerBeta, because the query is not looping through the entire CSV but only using the first value:

| tstats count WHERE index=* sourcetype=A4Server by index
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| table index, clients, sensitivity
| join type=left client [
    | inputlookup appserverdomainmapping.csv
    | table NewIndex, Domain, Sourcetype ]
| eval NewIndex = NewIndex + sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex
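One possible direction, purely as a sketch (it assumes the CSV's Sourcetype column contains the same values as the events' sourcetype field, and that appserverdomainmapping.csv is an uploaded lookup file): split by sourcetype in tstats, and replace the join, whose by-field "client" does not exist in the results and so never matches properly, with a lookup, which is applied to every row:

```
| tstats count WHERE index=* by index, sourcetype
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| lookup appserverdomainmapping.csv Sourcetype AS sourcetype OUTPUT NewIndex, Domain
| eval NewIndex = NewIndex + sensitivity
| table clients, sensitivity, Domain, sourcetype, NewIndex
```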
Got the solution:

index="webmethods_prd" source="/apps/WebMethods*/IntegrationServer/instances/default/logs/MISC.log" MISC_dynamicPrice
| where mainframePrice!=discountPrice
| stats count by mainframePrice, discountPrice, accountNumber, itemId
This is my log. I need a report like the one below, where I can see the price difference in a single report. I don't want to include records that have the same mainframePrice and discountPrice; I only want records where mainframePrice and discountPrice are different. Here I manually entered the individual values to get the report.
The splunkfwd user is created by default in version 9.1, and I am seeing the warning "User splunkfwd does not exist - using root" while upgrading. The upgrade guide does not say that creating the splunkfwd user is mandatory for Universal Forwarder installations or upgrades. From "Upgrade the universal forwarder" in the Splunk Docs: "When you upgrade, the RPM/DEB package installer retrieves the file owner of SPLUNK_HOME/etc/myinstall/splunkd.xml. If a previous user exists, the RPM/DEB package installer will not create a splunkfwd user and instead will reuse the existing user. If you wish to create a least privileged user, that is, the splunkfwd user, you must remove the existing user first." The warning about the missing splunkfwd user appears during the upgrade, but there are no permission issues and the forwarder is functioning properly with the "splunk" user. I would appreciate your guidance on whether it is mandatory to create the splunkfwd user for Universal Forwarder 9.4.0 or higher. Note: in this topic, Splunk Enterprise and the Splunk UF are not installed on the same machine.
One more question: since I am new to this platform, I am wondering how I can search for a certain error, warning, or info message, such as how to search for "404 - file not found for file".
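For a message like that, a quoted phrase in the search bar matches the words in sequence. As a sketch (the index name here is a placeholder you would replace with wherever your logs live):

```
index=your_index "404 - file not found for file"
```

You can then pipe the results into commands, e.g. append | stats count by source to see which sources produce the message.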
Any update on this topic? I am facing the same issue.
@Meett Thanks for responding. I have created an Add-on Builder add-on called TA-splunk-webhook-alerts and attached it to an alert, so whenever that alert is triggered, it triggers the add-on. The add-on contains a Python script which calls an API to push the alert data. The picture above shows part of the Python script. As you can see, there are some log statements in it, like helper.log_info("username={}".format(username)). My question is: whenever this script is executed, where can I find these logs? I have not done any specific configuration for logging; helper.log_info is the default one. FYI: I developed this add-on using Splunk Enterprise version 9 and installed it in Splunk Cloud. In Splunk Enterprise I am able to find the location of the logs ($SPLUNK_HOME/var/log/splunk), but not in Splunk Cloud. Please assist me in finding the logs in Splunk Cloud.
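In Splunk Cloud there is no filesystem access, so logs from add-ons generally have to be searched rather than read from disk. As a hedged sketch (the exact source value depends on the log file name Add-on Builder generates, which is an assumption here):

```
index=_internal source=*ta_splunk_webhook_alerts* "username="
```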
I tried the above but it is not showing anything.
Thanks @simon21. This worked a treat. To go one step further: if you are using a classic dashboard, you can simply put the CSS in the source code. Create a new hidden row using depends="$alwaysHideCSSStyle$", then put the CSS code within <panel> -> <html> -> <style> tags:

<row depends="$alwaysHideCSSStyle$">
  <panel>
    <html>
      <style>
        div.leaflet-popup-content tr:first-child {
          display: none;
        }
        div.leaflet-popup-content tr:nth-child(2) {
          display: none;
        }
      </style>
    </html>
  </panel>
</row>
@VatsalJagani / @livehybrid  Does this EXOS app (https://apps.splunk.com/app/1780/) still help with parsing, or is it outdated? Is EXOS an old Extreme operating system?
@Anders333  Is it possible for you to configure the app to use standard log rotation (e.g., rename and create a new file when full, or truncate/append)? If you continue with the current log rotation, Splunk may miss or duplicate events, and reliable ingestion cannot be guaranteed. Regards, Prewin. Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Thanks a million.
Hi @Anders333

I think the main issue here is that it starts overwriting events from the top of the file. I believe this is a pretty unusual approach, as you will end up with events in a strange order within the file, e.g.

17/Jun/2025 09:08 - Event 5
17/Jun/2025 09:10 - Event 6
17/Jun/2025 09:01 - Event 1
17/Jun/2025 09:03 - Event 2
17/Jun/2025 09:05 - Event 3
17/Jun/2025 09:06 - Event 4

The issue here is that even if you can convince Splunk to start reading the events again from the top of the file, it may end up re-ingesting events 1-4. Is there any way you can reconfigure the output of your app to log differently, e.g. rotate into a new log file?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Emre , Splunk displays the logs that it receives, are you sure that you are sending these data to Splunk? Do you see these data in the raw logs in Splunk? Maybe the issue is the not correct pa... See more...
Hi @Emre, Splunk displays the logs that it receives. Are you sure that you are sending this data to Splunk? Do you see this data in the raw logs in Splunk? Maybe the issue is incorrect parsing; see https://docs.mendix.com/developerportal/operate/splunk-metrics/ for guidance. Ciao. Giuseppe
Hi @Emre

Yes, you can send JSON via HEC into Splunk Enterprise / Splunk Cloud. Check out https://docs.splunk.com/Documentation/Splunk/9.4.2/Data/HECExamples which has some good examples of how you can do this, but at a basic level you have two options: you can send raw JSON to https://mysplunkserver.example.com:8088/services/collector/raw, or you can send structured events to https://mysplunkserver.example.com:8088/services/collector/event

A structured event for the /event endpoint would look something like this:

{
  "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!" // or {"yourKey":"yourVal"} for example
}

Check out https://docs.splunk.com/Documentation/Splunk/9.4.2/Data/FormateventsforHTTPEventCollector for more info on the fields you can send in events to HEC.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Anders333, No, I said that if you write to the same log file, Splunk doesn't read it again. But, to better understand your issue: what is the behavior of your ingestion? Ciao. Giuseppe
Perhaps the reason you are struggling is that you have painted yourself into a corner; try taking a step back. How did you get to the position of having 2 multi-value fields in the first place? Perhaps there is another way to create the table so that you don't lose the correlation between instance name and execution time. Please share some anonymised sample events and the search that you are using to create the table in the first place.
I would suggest a slightly more optimal version that does not use the subsearch:

index=abc
| stats count by source
| inputlookup append=t my_source_lookup.csv
| fillnull count
| stats sum(count) AS total BY source