Try this query that does not use appends or transpose.

(index=fortinet dlpextra IN (WatermarkBlock1,Log_WatermarkBlock2,Log_WatermarkBlock3,Log_WatermarkBlock4)) OR (index=035 "Common.DeviceName"="p151.d.com" OR Common.DeviceName="p1p71.c.com" "SensitiveInfoTypeData{}.SensitiveInfoTypeName"=*) OR (index=iron AutomaticClassification) OR (index=testing sourcetype="net:alert" dlp_rule="AZ C*") | eval type = case(index=fortinet, "Proxy", index=iron, "Email", index=035, "SFTP", index=testing, "Netskope", 1==1, "Unknown") | stats count by type
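For readers less familiar with SPL, the eval case(...) plus stats count by type pattern above can be sketched in plain Python. The events and the "other" index below are made-up illustrations, not data from the thread:

```python
from collections import Counter

# Hypothetical events tagged only by their index (illustrative values)
events = [{"index": "fortinet"}, {"index": "iron"},
          {"index": "fortinet"}, {"index": "other"}]

# Mirrors eval case(...): map each index to a type, defaulting to "Unknown"
type_by_index = {"fortinet": "Proxy", "iron": "Email",
                 "035": "SFTP", "testing": "Netskope"}

# Mirrors | stats count by type
counts = Counter(type_by_index.get(e["index"], "Unknown") for e in events)
print(counts)  # Counter({'Proxy': 2, 'Email': 1, 'Unknown': 1})
```

The catch-all 1==1 branch in the SPL case() plays the same role as the dictionary's "Unknown" default here.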
Thanks for the reply! I ran the query as is and received some odd results. I slightly modified it as shown:

index=main sourcetype="cisco:ios" ("SESSION_STATE_DOWN" OR "SESSION_STATE_UP") | eval status=case(searchmatch("%BFD-5-SESSION_STATE_DOWN"),"Down",searchmatch("%BFD-5-SESSION_STATE_UP"),"Up",true(),"Unknown") | rex "on interface (?<iface>[a-zA-Z0-9]+)" | stats range(_time) as downTime latest(status) as latestStatus by iface | where downTime<60

This produced 2 results for the past 7 days. Can this be run in real time, with an alert generated for a "latestStatus" of Down?
Each command works only with the results from the previous command. If the most recent event from each host is SESSION_STATE_UP then your job is done since there are no down hosts.  The where command will find no SESSION_STATE_DOWN events so there are none to display or alert on.  This is the desired state (isn't it?). OTOH, if a host currently is down then dedup host will return SESSION_STATE_DOWN for that host and the where command will decide if the host has been down long enough to worry about.
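The keep-the-latest-event-per-host filtering described above can be sketched in Python. The host names, states, and timestamps below are made up for illustration:

```python
from datetime import datetime, timedelta

now = datetime.now()
# Hypothetical events: sw1 went down then came back up; sw2 is still down
events = [
    {"host": "sw1", "state": "SESSION_STATE_DOWN", "time": now - timedelta(seconds=120)},
    {"host": "sw1", "state": "SESSION_STATE_UP",   "time": now - timedelta(seconds=30)},
    {"host": "sw2", "state": "SESSION_STATE_DOWN", "time": now - timedelta(seconds=300)},
]

# "dedup host": keep only the most recent event per host
latest = {}
for e in sorted(events, key=lambda e: e["time"], reverse=True):
    latest.setdefault(e["host"], e)

# "where": alert only on hosts whose latest event is DOWN and older than 60s
alerts = [h for h, e in latest.items()
          if e["state"] == "SESSION_STATE_DOWN"
          and e["time"] < now - timedelta(seconds=60)]
print(alerts)  # → ['sw2']
```

sw1's down event is discarded by the dedup step because a newer up event exists, which is exactly why a currently-up host produces no alert.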
Hi @dflynn235  Were you able to try the search I provided? I'm happy to help work through this if there's an issue with this approach.

@livehybrid wrote: Hi @dflynn235  Does the following do what you are looking for?

| eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown") | rex "on interface (?<iface>[a-zA-Z0-9]+)" | stats range(_time) as downTime latest(status) as latestStatus by iface | where downTime>60

Here is a working example with sample data, just add the | where to limit as required.

| makeresults count=1 | eval _raw="2025-05-07T07:20:40.482713-04:00 \"switch_name\" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 has gone down. Reason: Administratively Down." | eval host="switch_name" | append [| makeresults count=1 | eval _raw="2025-05-07T07:20:41.482771-04:00 \"switch_name\" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 is up." | eval host="switch_name"] | rex "^(?<timeStr>[^\s]+)" | eval _time=strptime(timeStr,"%Y-%m-%dT%H:%M:%S.%6N%Z") | eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown") | rex "on interface (?<iface>[a-zA-Z0-9]+)" | stats range(_time) as downTime latest(status) as latestStatus by iface

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Actually, assuming that you're managing your configs with a deployment server, it might prove easier and less error-prone to uninstall the old UF version, deploy a fresh install of the new UF version, and attach it to the DS. But this might cause duplication of ingested data if the state files (mostly the fishbucket, but also the state of eventlog or WMI inputs) are not handled properly.
There is too little information to even blindly guess. Firstly, how are those events getting into your Splunk infrastructure? Do you have a UF installed on the remote hosts with monitor file inputs defined on them? Or maybe those are eventlog inputs? Or are you receiving syslog data over the network, either directly or via a third-party syslog daemon? Secondly, how did you verify that the data for those "outages" isn't ingested at all? Maybe the sources (or receivers) are getting clogged, so your ingestion process stops for a while, then resumes and catches up, but your data onboarding is incomplete so you don't have reliable timestamps? There are many things that can go wrong.
This is more of a question for the ClamAV authors/database maintainers. I'd hazard a guess that the file contained within SE has some characteristic pieces of the Gitpaste method as part of its searches, and ClamAV detects their presence and flags the file. But I'd double-check it with the ClamAV folks.
Hi @mint_choco  Before the fillnull you could try using "makecontinuous". Something like this (adjust accordingly):

| makecontinuous _time span=1d
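The gap-filling that makecontinuous plus fillnull perform can be sketched in plain Python: walk every day in the range and insert a zero where no count exists. The dates and counts below are illustrative, matching the shape of the table in the question:

```python
from datetime import date, timedelta

# Hypothetical daily counts with April 2nd missing (illustrative values)
counts = {date(2025, 4, 1): 12, date(2025, 4, 3): 16, date(2025, 4, 4): 15}

# Make the time axis continuous, filling gaps with 0 —
# roughly what | makecontinuous _time span=1d followed by fillnull does
start, end = min(counts), max(counts)
filled, d = {}, start
while d <= end:
    filled[d] = counts.get(d, 0)
    d += timedelta(days=1)

print(filled[date(2025, 4, 2)])  # → 0
```

The key point is that the missing row is created first (makecontinuous), so there is then a null for fillnull to replace; fillnull alone cannot invent a row that doesn't exist.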
As it was already stated, the OS differences shouldn't be that problematic from the technical point of view. It can add some maintenance overhead because you have to maintain different package types and maybe different service launch methods, but generally the software should work. It will be unsupported, though. The main issue with such a setup is that you'll have a relatively spread-out cluster, where latency across your environment will probably be the limiting factor. https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware#:~:text=A%20Splunk%20environment%20with%20search,should%20not%20exceed%20100%20milliseconds.
I have a query that is executing a stats count by sourcetype, as we want to see how many sensitive files leave our firm and the quantity. I am doing an appendcols and then transposing. In the dashboard, the pie chart looks like:

index=fortinet dlpextra IN (WatermarkBlock1,Log_WatermarkBlock2,Log_WatermarkBlock3,Log_WatermarkBlock4) | lookup DataF.csv dlpextra OUTPUT C_Label as C_Label | stats count as Proxy | appendcols [search index=iron AutomaticClassification | lookup IPort_Class.csv DLP_Class OUTPUT C_Label as C_Label | stats count as Email] | appendcols [search index=035 "Common.DeviceName"="p151.d.com" OR Common.DeviceName="p1p71.c.com" "SensitiveInfoTypeData{}.SensitiveInfoTypeName"=* | table SensitiveInfoTypeData{}.SensitiveInfoTypeName | stats count as SFTP ] | appendcols [search index=testing sourcetype="net:alert" dlp_rule="AZ C*" | eval dlp_rule=replace(dlp_rule, "AB", "") | stats count as Netskope] | transpose | rename "row 1" as Count

My question is, how would you edit the Splunk query to rename the column name to the value I provided instead of Other? Do I even need a transpose? That has been the best way I have found for creating a pie chart out of different data sources. Preferably, I'd like to understand how to do that with the JSON formatting I get with Dashboard Studio, as well as figure out how to do it inline, within the query.

Thanks
Thanks for the reply! If I break down this query to just:

index=main ("SESSION_STATE_DOWN" OR "SESSION_STATE_UP") | dedup host

The only results are up sessions; the "dedup host" removes all of the down sessions, as they're from the same host. Does the remaining | where match(_raw, "SESSION_STATE_DOWN") AND _time<relative_time(now(), "-60s") apply only to the results from | dedup host, or to the total query? If the former, it will never find a "SESSION_STATE_DOWN"; they are filtered out.
I'm attempting to set up an Independent Stream Forwarder on a RHEL machine to collect netflow data and have it forwarded to HEC on another machine. I have most of the configuration worked out, but when I start the streamfwd service I am receiving the following log messages:

INFO [140109244728192] (SnifferReactor/SnifferReactor.cpp:161) stream.SnifferReactor - Starting network capture: sniffer
ERROR [140109244728192] (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor pcap filter 'not (host REDACTED and port 443) and not (host $decideOnStartup and port 8088)' compilation error: aid supported only on ARCnet
FATAL [140109244728192] (CaptureServer.cpp:2338) stream.CaptureServer - SnifferReactor was unable to start packet capturesniffer

I don't know where it's getting that filter. I attempted to set the below line in streamfwd.conf with a valid BPF, but it doesn't seem to honor it and continues with the same error.

streamfwdcapture.<N>.filter = <BPF>

I'm not necessarily concerned at this point with getting a working filter, but I assume the filter in the log message is the issue, since it's the only error in the log. Appreciate any help, thanks in advance.
Thank you, the solution you provided worked. I appreciate your patience and willingness to help me.
Hi, @livehybrid  Thanks for your reply. Unfortunately, it didn't work. I can fill the column index4 by using fillnull instead of "| eval index4=if(isnull(index4), 0, index4)", but I got the table below. The line for 4/2 is still missing.

_time  index1  index2  index3  index4
4/1    12      3       45      0
4/3    16      7       34      0
4/4    15      6       70      0

There are no logs on April 2nd, so there is no line for 4/2, I guess. Is there any way to make a line for April 2nd? Maybe should I try without the chart command?
how would I do this process in Windows Server 2019 running Splunk?
Hello all, ClamAV detected Unix.Trojan.Gitpaste-9787170-0 in file Splunk_Research_detections.json. This file appears to be a large repository of security research information and we'd like to verify if this detection is a true concern or if it is a false positive.

Threat detection file location: /opt/splunk/etc/apps/Splunk_Security_Essentials/appserver/static/vendor/splunk/Splunk_Research_detections.json
Splunk version: 9.4.0
Splunk Security Essentials version: 3.8.1
ClamAV detection: Unix.Trojan.Gitpaste-9787170-0
ClamAV version: 1.4.1/27629
ClamAV definition dates: April 24, 2025 through May 05, 2025

Security Essentials was installed on April 25, 2025 and ClamAV detections began immediately during the first scan following the install.
It's been close to a year now and I still have the same error when trying to use this app on the iPhone. What gives? @splunk Help!
The Akamai Guardicore Add-on for Splunk is not cloud compatible because its SDK version is 1.6.8, while Splunk Cloud requires a minimum SDK version of 2.0.1. Is it possible for the developer to upgrade to SDK 2.1.0, or any other SDK version 2.0.1 or higher? SDK github
We are encountering the same issues with Azure App Service and our Java apps. Infrastructure metrics are totally fine.

Failed to export spans. The request could not be executed. Full error message: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Hi, I am using Splunk 9.4.1 and eventgen 8.1.2. In my sample file to generate events I have multiple events in the same sample file.

Sample file:
key1={val_1};key2={val2} - Event 1
key1={val_1};key2={val2} - Event 2 (on the next line)

Now I need to generate a replacement value, any random GID, and replace val_1 in both events with the same GID. That is, I need the value to be shared. But currently Splunk eventgen is not sharing the value; a new value is generated for each event within the file.
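For reference, the desired behaviour (one random GID generated per file and shared by every event in it) looks like this in plain Python. The sample text mirrors the post; treating {val_1} as a literal placeholder token is an assumption for illustration, not the eventgen token syntax:

```python
import uuid

# Hypothetical sample-file contents; {val_1} is treated as a literal
# placeholder token to be replaced in every event
sample = ("key1={val_1};key2={val2} - Event 1\n"
          "key1={val_1};key2={val2} - Event 2")

# Generate ONE random GID for the whole file and substitute it into
# every event, so both events end up sharing the same value
gid = str(uuid.uuid4())
rendered = sample.replace("{val_1}", gid)

lines = rendered.splitlines()
print(lines[0].split(";")[0] == lines[1].split(";")[0])  # → True
```

The behaviour being reported is the opposite: a fresh value drawn per event, as if uuid.uuid4() were called inside the per-event loop rather than once per file.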