All Posts

On some of our Windows UF hosts, we were getting System events but no Security events. Our Windows admin noticed that the Splunk service account was set to an NT SERVICE virtual account. After changing the service account to LocalSystem, the Windows UF hosts started sending their Security events.
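In case it helps anyone else, a minimal sketch of the change, assuming the default service name SplunkForwarder (check yours with sc query); run from an elevated prompt:

REM switch the service to run as LocalSystem (the space after obj= is required by sc)
sc config SplunkForwarder obj= LocalSystem
REM restart so the change takes effect
net stop SplunkForwarder
net start SplunkForwarder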
Hi Community, I'm exploring ways to ingest data into Splunk Cloud from an Amazon S3 bucket which has multiple directories and multiple files to be ingested into Splunk. I have assessed the Generic S3, SQS-based S3, and Data Manager inputs for AWS available on Splunk, but am not getting the required outcome. My use case is given below.

There's an S3 bucket named exampledatastore; inside it is a directory named statichexcodedefinition, and under that there are multiple message IDs and dates. The example S3 structure is:

s3://exampledatastore/statichexcodedefinition/{messageId}/functionname/{date}/* - functionnameattribute

The {messageId} and {date} values are dynamic. I have a start date to begin with, but the messageId varies. Can you please assist me with how to get this data into Splunk? Many thanks!
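Not a full answer, but one thing worth noting: the generic S3 input takes a static key prefix and scans recursively beneath it, so the dynamic {messageId} and {date} levels don't need to be templated. A minimal sketch of what such a stanza might look like in the Splunk Add-on for AWS inputs.conf (account, index, and start date are placeholders; verify the exact parameter names against the add-on's inputs.conf.spec for your version):

[aws_s3://exampledatastore_statichexcode]
aws_account = my_aws_account                 # placeholder: AWS account configured in the add-on
bucket_name = exampledatastore
key_name = statichexcodedefinition/          # static prefix; objects under it are scanned recursively
initial_scan_datetime = 2025-01-01T00:00:00Z # placeholder start date
sourcetype = aws:s3
index = main                                 # placeholder index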
Have you tried other indexes? Or other users?
Very strange indeed - it works fine for me (same version). Are you trying this in a dashboard or just in the search app?
Hello Splunkers,

The hardcoded time parameters inside a simple search don't work with v9.4.3. It only takes the input from the time presets. Do you also experience a similar issue?

index=index earliest="-7d@d" latest="-1m@m"

My preset is last 15 mins, and then I get this output:

earliestTime               latestTime
07/25/2025 10:40:01.636    07/25/2025 10:52:59.564

Very strange. Nothing mentioned on this in the release notes.
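If it helps with reproducing this, a quick way to see which time range a search actually ran with is addinfo, which attaches the effective search boundaries as info_min_time and info_max_time:

index=_internal earliest=-7d@d latest=-1m@m
| addinfo
| stats min(info_min_time) as info_min_time max(info_max_time) as info_max_time
| eval earliestTime=strftime(info_min_time, "%m/%d/%Y %H:%M:%S.%3N"), latestTime=strftime(info_max_time, "%m/%d/%Y %H:%M:%S.%3N")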
Can anyone please confirm whether the AppDynamics machine agent supports TLS 1.3? We are using Java agent 25.4.0.37061 on the Linux x64 platform. Can anyone suggest an answer or point me towards relevant documentation? Thanks.
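Not an official answer on what AppDynamics supports, but since the machine agent runs on a JVM, one quick sanity check is which TLS versions that JVM can negotiate; a minimal sketch:

import javax.net.ssl.SSLContext;

public class TlsCheck {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        // All protocol versions the JVM supports
        System.out.println("Supported: " + String.join(", ", ctx.getSupportedSSLParameters().getProtocols()));
        // Protocol versions enabled by default
        System.out.println("Enabled: " + String.join(", ", ctx.getDefaultSSLParameters().getProtocols()));
    }
}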
But the service field is not an indexed field. I am writing a rex to extract that field from the original index and then using the collect command to feed it into the summary index. Will srchFilter still fail in this case? Does it need to be an indexed field in the original index as well? Please confirm.
Yes, it should allow full access to waf_123456_prod and restrict access in opco_yes_summary to only where service=JUNIPER-HBEU-ACCESS. And yes, you can add more services.

Make sure service is an indexed field in opco_yes_summary. If it's extracted at search time, srchFilter may not work.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
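One quick way to verify: tstats (without a data model) only works against indexed fields, so if this returns nothing while the events exist, service is search-time only:

| tstats count where index=opco_yes_summary by service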
I extracted the service field from the raw data and ingested it into the summary index so that it will pick up the service field values. Then I will use this field in srchFilter to restrict users. Below is the authorize.conf I have configured:

[role_opco_yes_123456_prod]
importRoles = user
srchIndexesAllowed = waf_123456_prod, opco_yes_summary
srchIndexesDefault = waf_123456_prod
srchFilter = (index::waf_123456_prod) OR (index::opco_yes_summary service::JUNIPER-HBEU-ACCESS)

Will this work, or are there any issues or changes needed? Sometimes I need to add 1 or 2 more services. Is that possible?

Note - I tried using = in srchFilter while testing in the UI, but it threw an error saying we can't use =, only ::. Can I still use = in the backend? Ultimately I need to write and push this from the backend, not from the UI.
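One suggestion for the backend route: after pushing authorize.conf, you can confirm what the role actually resolves to with btool (path assumes a default install location):

/opt/splunk/bin/splunk btool authorize list role_opco_yes_123456_prod --debug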
[role_appA]
srchFilter = index=opco_yes_summary app_name="AppA"

Just confirming: the user already has access to this app's index (index=A) assigned. If I give this in srchFilter, can he still access the normal index=A data as usual?
Hi Frank,

I just found a quick fix with inline code enhancements in the file, but I appreciate the config-file approach you mentioned.

/opt/splunk/etc/apps/TA_oui-lookup/bin/get-oui-table.py

proxy_host = 'localhost:1234'    # host and port of your proxy
OUI_URL = "https://standards-oui.ieee.org"

req = urllib.request.Request(OUI_URL)
req.set_proxy(proxy_host, 'http')
req.add_header("User-agent", USER_AGENT)  # USER_AGENT is already defined elsewhere in the script

Enjoy your vacation!
Lothar from Germany
@cdevoe57 Try the below:

index="server" source="Unix:Service" UNIT IN ("iptables.service", "auditd.service", "chronyd.service")
| eval status=if(ACTIVE=="failed" OR ACTIVE=="inactive", "failed", "OK")
| eval service=case(
    UNIT=="iptables.service", "IPTABLES",
    UNIT=="auditd.service", "AUDITD",
    UNIT=="chronyd.service", "CHRONYD")
| stats values(status) as status by host service
| xyseries host service status
| where IPTABLES="failed" OR AUDITD="failed" OR CHRONYD="failed"
| table host IPTABLES AUDITD CHRONYD

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
@Karthikeya

Changing the index name: since it's a summary index, I would suggest using the collect command to copy your data.

1 - Create a new index called opco_yes_summary
2 - Search and use collect to copy:
index=waf_opco_yes_summary | collect index=opco_yes_summary
3 - Verify the data:
index=opco_yes_summary
4 - Once verified, delete the old index.

Restricting users based on the apps from the summary index: I would say creating separate indexes per app might be a nightmare. As a workaround, can we consider creating a tagging field for summary events if there is no specific field? For example, a field app_name. Then create a role-based filter, e.g.:

[role_appA]
srchFilter = index=opco_yes_summary app_name="AppA"

This ideally ensures users only see data for their app, even if they run index=*. But you need to avoid "All non-internal indexes" in Roles.

Anyway, this needs to be tested and verified, but it might be a good starting point.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
@PickleRick so what would be the best approach to do this? Creating a new summary index for every application? We have nearly 100 applications on-boarded, and it will be a bit painful to write the same query for everything and deploy it. Is there any automation we can do? I have zero knowledge of coding, though.
Agree with others that your purpose is better served by knowing your data. Given what you have revealed, you can simply describe the four events and lay them out with xyseries.

index=application_na sourcetype=my_logs:hec appl="*" message="***"
| eval event = case(
    match(message, "Received request"), "DoPayment start",
    match(message, "Sending result"), "DoPayment end",
    match(message, "Sending request"), "OtherApp start",
    match(message, "Received result"), "OtherApp end")
| eval _time = strftime(_time, "%F %T.%3N")
| xyseries interactionid event _time

Obviously the regexes used in the match functions are just to illustrate what you can do. But xyseries can achieve what you want without complex transformations. Using your mock data, the output is:

interactionid   DoPayment end             DoPayment start           OtherApp end              OtherApp start
12345           2025-06-26 07:55:58.017   2025-06-26 07:55:56.317   2025-06-26 07:55:57.512   2025-06-26 07:55:56.717

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="interactionid,_time,message
12345,2025-06-26 07:55:56.317,TimeMarker: WebService: Received request. (DoPayment - ID:1721 Amount:16 Acc:1234)
12345,2025-06-26 07:55:56.717,OtherApp: -> Sending request with timeout value: 15
12345,2025-06-26 07:55:57.512,TimeMarker: OtherApp: Received result from OtherApp (SALE - ID:1721 Amount:16.00 Acc:1234)
12345,2025-06-26 07:55:58.017,TimeMarker: WebService: Sending result @20234ms. (DoPayment - ID:1721 Amount:16 Acc:1234)"
| eval _time = strptime(_time, "%F %T.%N")
| sort - _time
``` above emulates index=application_na sourcetype=my_logs:hec appl="*" message="***" ```
Hello folks,

We are upgrading splunkforwarder to 9.4.x (from 8.x). Recently we built the Splunk sidecar image for our k8s application, and I noticed that the same procedures which worked previously in forwarder version 8.x no longer work in 9.4.x. During the docker image startup, you can clearly see the process hanging there, waiting for interaction.

bash-4.4$ ps -ef
UID        PID  PPID  C STIME TTY      TIME     CMD
splunkf+     1     0  0 02:11  ?       00:00:00 /bin/bash /entrypoint.sh
splunkf+    59     1 99 02:11  ?       00:01:25 /opt/splunkforwarder/bin/splunk edit user admin -password XXXXXXXX -role admin -auth admin:xxxxxx --answer-yes --accept-license --no-prompt
splunkf+    61     0  0 02:12  pts/0   00:00:00 /bin/bash
splunkf+    68    61  0 02:12  pts/0   00:00:00 ps -ef
bash-4.4$ rpm -qa | grep splunkforwarder
splunkforwarder-9.4.3-237ebbd22314.x86_64

There is a workaround: add "tty: true" to the k8s deployment template, but this would take a lot of effort in our environment.

Any idea if any newer version has a fix? Or is there any splunk command parameter that can be used to bypass the tty requirement? Thanks.
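For anyone else hitting this, the workaround mentioned above looks roughly like this in the Deployment's container spec (names are placeholders):

containers:
  - name: splunk-uf-sidecar      # placeholder container name
    image: my-splunk-uf-image    # placeholder image
    tty: true                    # allocate a TTY so the splunk CLI doesn't block waiting for input
    stdin: true                  # may also be needed; verify in your environment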
This syntax is wrong and will never work:

| eval IPTABLES = if(UNIT=iptables.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")

UNIT is a string, so it must be quoted as you have done for the ACTIVE field:

| eval IPTABLES = if(UNIT="iptables.service" AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")

You probably want to use

| fields _time host IPTABLES AUDITD CHRONYD
| stats latest(*) as * by host

to get you the most recent state.
From those 4 events, which ones do you want to calculate the time between? It's not clear to me.

If you have multiple messages and only two of them are relevant to your calculation, then can you not just include search constraints to only find the 2 you are interested in?

If you have only 2 events, then you can use min/max as you are doing. Otherwise, you can use this type of logic:

| sort 0 _time
| streamstats window=2 global=f range(_time) as duration by interactionid

which will sort the events into time-ascending order and put a new field into each event with the duration (time gap) between that event and the previous event for the same interactionid.

You could also use eval and stats (which would be faster than streamstats) to set a field with the start time of the event you want to find, do the same for the end, and then use stats to collect those new fields and calculate the duration; see the sketch below.

Also, note that you should never sort unless you know you need to. In this case, you don't. Also, sort has a 10,000 result limit and will chop your data to only 10,000 results (maybe not an issue in your case), but get used to using | sort 0 xxx to make sure your entire data set is sorted.
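A minimal sketch of that eval/stats approach (the match patterns are placeholders for whatever identifies your start and end events):

| eval start_time = if(match(message, "Received request"), _time, null())
| eval end_time = if(match(message, "Sending result"), _time, null())
| stats min(start_time) as start_time max(end_time) as end_time by interactionid
| eval duration = end_time - start_time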
Following on from @PickleRick's suggestion, to avoid the @mt issue, you could do something like this:

| spath msg.@mt output=mt
| rex field=mt max_match=0 "{(?<templates>[^}]+)}"
| foreach mode=multivalue templates
    [ | eval mt=replace(mt, "{".<<ITEM>>."}", json_extract(_raw, "msg.".<<ITEM>>)) ]
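In short, the rex pulls every {placeholder} name from the message template into a multivalue field, and the foreach loop then substitutes each placeholder in mt with the corresponding value extracted from the event's JSON. Note that foreach mode=multivalue requires Splunk 9.0 or later, if I remember correctly.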