All Posts

I think you have your answer in the other posts, but this is a good example of asking the right question - including the "by day" requirement is an important detail.
Users with an Admin or Power role are able to view the Seclytics dashboard provided by the "Seclytics for Splunk App". However, when users with the "User" role attempt to access the same dashboard, the content does not display. Additionally, we discovered that the lookup file "event_by_days.csv" is missing from the expected directory: /opt/splunk/etc/apps/seclytics-splunk-app/lookups/.

We would like to understand the following:
- Why is the dashboard visible to Admin/Power roles but not to the User role?
- Are there specific role-based permissions required to access this dashboard?
- Or is there a configuration change needed on our end to ensure all roles can access the content correctly?
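Dashboard and lookup visibility in Splunk is governed by the app's metadata permissions, so one thing worth checking is whether the app's knowledge objects are shared with the "user" role. A minimal sketch of what a local.meta grant could look like - the stanza below is a hypothetical illustration, not the app's shipped configuration:

# $SPLUNK_HOME/etc/apps/seclytics-splunk-app/metadata/local.meta
# Hypothetical example: share all of the app's knowledge objects with the user role
[]
access = read : [ admin, power, user ], write : [ admin ]
export = system

The same change can be scoped to individual objects (for example a [views/...] or [lookups/...] stanza) or made through the object's Permissions dialog in the UI.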
Hello @aamer, could you please share your experience? Our logs are being parsed incorrectly at the moment - could you lend me a hand?
You didn't answer @bowesmana 's question about whether your sample is from an index or a lookup table. I will assume that they come from events. In this case, it is unnecessary to extract _time inline. You can use latest as @bowesmana and @ITWhisperer suggested, or you can simply use dedup to get the latest events before further processing:

| eval day = strftime(_time, "%F")
| dedup day Name

Given this dataset

Name | Status | _raw                       | _time
ABC  | F      | ABC,F, 04/25/2025 15:50:00 | 2025-04-25 15:50:00
ABC  | R      | ABC,R, 04/25/2025 15:25:00 | 2025-04-25 15:25:00
ABC  | F      | ABC,F, 04/24/2025 15:30:03 | 2025-04-24 15:30:03
ABC  | R      | ABC,R, 04/24/2025 15:15:01 | 2025-04-24 15:15:01

the above will give you

Name | Status | _raw                       | _time               | day
ABC  | F      | ABC,F, 04/25/2025 15:50:00 | 2025-04-25 15:50:00 | 2025-04-25
ABC  | F      | ABC,F, 04/24/2025 15:30:03 | 2025-04-24 15:30:03 | 2025-04-24

Here is a full emulation of your mock data:

| makeresults
| eval _raw="Name,Status,Datestamp
ABC,F, 04/24/2025 15:30:03
ABC,R, 04/24/2025 15:15:01
ABC,F, 04/25/2025 15:50:00
ABC,R, 04/25/2025 15:25:00"
| multikv forceheader=1
| eval _time = strptime(Datestamp, "%m/%d/%Y %T")
| fields - Datestamp linecount
| sort - _time
``` data emulation above ```
| makeresults format=csv data="Name,Status,Timestamp
ABC,F, 04/24/2025 15:30:03
ABC, R, 04/24/2025 15:15:01
ABC, F, 04/25/2025 15:50:00
ABC, R, 04/25/2025 15:25:00"
| eval _time = strptime(Timestamp, "%m/%d/%Y %T")
| bin _time as _day span=1d
| stats latest(*) as * by _day Name
Hi @bsreeram

If you want it splitting by Name and day so you get the latest per Name AND day, then you can use a timechart:

| timechart span=1d latest(*) as *

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
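For reference, here is a hedged sketch of how that could look against the mock data shared earlier in the thread. Note that timechart cannot combine a wildcard aggregation with a by clause, so this version aggregates the single Status field and splits by Name (field names come from the sample data, not a verified schema):

| makeresults format=csv data="Name,Status,Timestamp
ABC,F, 04/24/2025 15:30:03
ABC, R, 04/24/2025 15:15:01
ABC, F, 04/25/2025 15:50:00
ABC, R, 04/25/2025 15:25:00"
| eval _time = strptime(Timestamp, "%m/%d/%Y %T")
``` latest Status per day, one column per Name ```
| timechart span=1d latest(Status) by Name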
It worked for certain cases, but please see the following. For the following data records:

ABC,F, 04/24/2025 15:30:03
ABC, R, 04/24/2025 15:15:01
ABC, F, 04/25/2025 15:50:00
ABC, R, 04/25/2025 15:25:00

The solution should be as follows - i.e. the latest status by day has to be captured:

ABC,F, 04/24/2025 15:30:03
ABC, F, 04/25/2025 15:50:00
@msatish - Yes, you can always define your own sourcetype and your own custom index for any data you want to collect. But, as @livehybrid is asking, you need to figure out how you are collecting the data and which format the logs are in, so you can work out in which config file, and where, to apply the new sourcetype and index. You also need to put a props.conf configuration (parsing, timestamp extraction, field extraction, etc.) in place for your custom sourcetype - see the sketch below. And make sure the index is created on your indexers before you start pushing data into your custom index. I hope this helps!!!
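To make that concrete, here is a minimal sketch assuming a file monitor input. Every path, sourcetype, and index name below is hypothetical and needs to be replaced with your own, and the props settings depend entirely on your actual log format:

# inputs.conf (on the forwarder) - hypothetical monitor stanza
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
index = myapp_index

# props.conf (on the indexers or heavy forwarder) - hypothetical sourcetype
[myapp:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

And, as noted above, myapp_index must exist on the indexers (indexes.conf) before any data is sent to it.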
Hi @livehybrid,

Thank you for pointing that out to me - your suggested solution was very helpful. I checked the mongod.log using the following command:

tail -n 200 $SPLUNK_HOME/var/log/splunk/mongod.log

The output clearly showed the issue:

2025-03-27T10:16:32.087Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2025-03-27T10:16:32.087Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.
2025-03-27T10:16:32.087Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager_openssl.cpp 1182
2025-03-27T10:16:32.087Z F - [main] ***aborting after fassert() failure

It turned out that the server SSL certificate had expired. Here are the steps I took to resolve the issue:

1- Backed up the existing certificate:
cp $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.bak

2- Generated a new self-signed certificate (this creates a new server.pem valid for 2 years):
splunk createssl server-cert -d $SPLUNK_HOME/etc/auth -n server

3- Restarted Splunk:
./splunk restart

4- Verified the KV Store status:
splunk show kvstore-status

#### Note for Search Head Cluster ####
Since we're running a SH cluster, I made sure to:
- Copy the new server.pem to all search head members.
- Restart Splunk on each node.

These steps fully resolved the issue, and the KV Store is now functioning as expected.
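For anyone hitting the same assertion, a quick way to confirm certificate expiry before going through the renewal steps is to inspect the validity window directly (standard openssl, nothing Splunk-specific):

openssl x509 -in $SPLUNK_HOME/etc/auth/server.pem -noout -dates

This prints the certificate's notBefore/notAfter dates, so you can see at a glance whether it has expired.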
Hi All, As the old eStreamer add-on is replaced by the new Cisco Security Cloud app (https://splunkbase.splunk.com/app/7404), we have installed the new app and are testing it in a distributed environment. We are facing one issue with intrusion event packet logs streaming from FMC into Splunk. Whenever the "packet data" field in intrusion event packets is greater than 4 KB, it is missing from the Splunk logs. Only the packet data field is missing; the rest of the log is visible in Splunk. And there are no errors related to parsing or truncation issues in the Splunk _internal index. Has anyone faced the same issue, or is there any fix for this?
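Not a confirmed fix, but one thing worth ruling out is a TRUNCATE limit on the event pipeline, since 4 KB-plus fields are exactly the kind of payload that hits size limits. A hypothetical sketch - the stanza name below is a placeholder, so substitute the actual sourcetype of your intrusion events:

# props.conf on the indexers/heavy forwarder - hypothetical stanza name
# TRUNCATE = 0 disables line-length truncation for this sourcetype
[your:intrusion:sourcetype]
TRUNCATE = 0

That said, if truncation were happening you would normally expect "Truncating" warnings in _internal (e.g. index=_internal sourcetype=splunkd "Truncating"), so their absence does point elsewhere - possibly the app or the eStreamer side dropping the field before it reaches Splunk.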
Optimising this will depend on your data. Using subsearches with lookups can be expensive, and using NOT with subsearches even more so. Depending on the volume of entries in those lookups, you will be better off using a lookup, e.g.

index::rasp_ NOT (forwarded_for="140.108.26.152" OR forwarded_for="" OR forwarded_for="10.*" OR forwarded_for=null)
    app!="" app!="\"*\"" app!="VASTID*"
    host!="10.215*" host!="ip-10-*" host!="carogngsa*" host!="carogngta*" host!="carofuzedd*" host!="*ebiz*" host!="echo*" host!="not logged" host!="onm*" host!="tfnm*" host!="voip*" host!="wfm*"
    category!="Config*" category!="Depend*" category!="Stat*" category!="Large*" category!="Uncaught*" category!="Unvalidated Redirect" category!="License" category!="*Parse*"
    action=*
| lookup Scanners_Ext.csv forwarded_for OUTPUT forwarded_for as found
| where isnull(found)
| lookup Scanners_Int.csv ip_addr as forwarded_for OUTPUT ip_addr as found
| where isnull(found)
| lookup vz_nets.csv netblock as forwarded_for OUTPUT netblock as found
| where isnull(found)
| stats count

so the static NOT statement and other != comparisons are part of the search, and then you do each lookup in turn; if a match is found, the event is discarded. The three lookups should be ordered by likely match count: the lookup expected to reduce the event count the most should run first, and so on.

Using NOT or all your != wildcard searches at the beginning will be somewhat expensive. You can use TERM() to reduce the data scan count, but that requires knowing your data well.
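As an illustration of the TERM() point - this is a hedged sketch, with the index reference and IP copied from the example above - an exact IP address is a good TERM() candidate because it contains only minor breakers (dots), so the whole value is stored as a single indexed term:

``` hedged sketch - exclude one exact IP using an indexed term ```
index::rasp_ NOT TERM(140.108.26.152)

Wildcarded exclusions such as forwarded_for="10.*" cannot be expressed with TERM(), since TERM() requires the complete indexed token.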
I am guessing this is data in a lookup file rather than event data - if you have event data you would already have a timestamp in the event, which may or may not be the same as Timestamp. However, in your specific example, assuming no _time field, just parse the Timestamp field and use stats latest to get the latest, i.e.

| makeresults format=csv data="Name,Status,Timestamp
ABC,F, 04/24/2025 15:30:03
ABC, R, 04/24/2025 15:15:01"
| eval _time = strptime(Timestamp, "%m/%d/%Y %T")
| stats latest(*) as * by Name
Hi @livehybrid  First of all, thanks for your response. When I search using index="wazuh-alerts", I get lots of events. For the search index="wazuh-alerts" "Medium", I get 7 events.
That doesn't sound right - are you referring to a multi-value field?

| makeresults
| fields - _time
| eval value=split("ABC","")
| search value=A AND value=C

This search above will find a result for A and C, but if you change it to A and D it does not find results. Can you give an example of your results in the OR case?
Note that this means that when you select "All" it removes the other options if they are selected, and vice versa: if you have "All" selected and choose one of the other options, it removes "All" from the list of selections.
To handle an 'All' static option in the multiselect, add this change element:

<change>
  <condition match="$form.webuser=&quot;*&quot;">
    <set token="webuser"></set>
  </condition>
  <condition>
    <eval token="form.webuser">case(mvcount($form.webuser$)="2" AND mvindex($form.webuser$,0)="*", mvindex($form.webuser$,1), mvfind($form.webuser$,"^\\*$$")=mvcount($form.webuser$)-1, "*", true(), $form.webuser$)</eval>
  </condition>
</change>
Hey all - I have a need to search for events in Splunk that contain two specific values in one field. I want the results to return only those events that have both values in them. I'm trying to use this: (my_field_name="value1" AND my_field_name="value2") This still returns results that have either value1, or value2, not events that contain both. How would I query for results that contain only both values, not individual values?
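For context on why this happens: a single-valued field can never equal two different values in the same event, so my_field_name="value1" AND my_field_name="value2" can only match when the field is multivalued (or when field extraction yields several values per event). A runnable sketch with made-up data that demonstrates both cases:

``` multivalue field - both conditions can match the same event ```
| makeresults
| eval my_field_name=split("value1,value2", ",")
| search my_field_name="value1" AND my_field_name="value2"

``` single-valued field - the same search returns nothing ```
| makeresults
| eval my_field_name="value1"
| search my_field_name="value1" AND my_field_name="value2"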
As a matter of fact, one actually doesn't need to specify the field name that contains all the key-value pairs. I used the following simple extract parameters:

| extract pairdelim="," kvdelim=":"

One doesn't need to escape "," as done in the first answer!
Hi, I have a dataset in the following format:

Name,Status,Timestamp
ABC,F, 04/24/2025 15:30:03
ABC, R, 04/24/2025 15:15:01

I need to be able to only display/render the latest status for a given name. My output should look like the following, since the status as of 04/24/2025 15:30:03 is the most recent:

ABC,F, 04/24/2025 15:30:03

Appreciate your help.