All Posts

I am working on implementing this query, but I need to rename and standardize the output to the C_Label values so I can run stats count on those. I need a count per source. I don't think I can cram my eval statements with the OR statements. I'll try to incorporate this in my query. Thanks
Hello, I am looking to add a particular value to an existing search of Okta data. The problem is I don't know how to extract the value, which is on the same level as other values. The value I am looking for is "Workflows Administrator". The existing search is:

index=okta "debugContext.debugData.privilegeGranted"="*"
| rename actor.displayName as "Actor", targetUserDisplayName as "Target Name", targetUserAlternateId as "Target ID", description as "Action", debugContext.debugData.privilegeGranted as "Role(s)"
| eval Time = strftime(_time, "%Y-%d-%m %H:%M:%S")
| fields - _time
| table Time, Actor, Action, "Target Name", "Target ID", "Role(s)"

and sample data is:

{
  actor: { ... }
  authenticationContext: { ... }
  client: { ... }
  debugContext: {
    debugData: {
      privilegeGranted: Application administrator (all), User administrator (all), Help Desk administrator (all)
    }
  }
  device: null
  displayMessage: Grant user privilege
  eventType: user.account.privilege.grant
  legacyEventType: core.user.admin_privilege.granted
  outcome: { reason: null, result: SUCCESS }
  published: 2025-05-08T19:30:54.612Z
  request: { ipChain: [ ... ] }
  securityContext: { asNumber: null, asOrg: null, domain: null, isProxy: null, isp: null }
  severity: INFO
  target: [
    { alternateId: jdoe@company.com, detailEntry: null, displayName: John Doe, id: 00umfyv9jwzVvafI71t7, type: User }
    { alternateId: unknown, detailEntry: null, displayName: Custom role binding added, id: CUSTOM_ROLE_BINDING_ADDED, type: CUSTOM_ROLE_BINDING_ADDED }
    { alternateId: /api/v1/iam/roles/WORKFLOWS_ADMIN, detailEntry: null, displayName: Workflows Administrator, id: WORKFLOWS_ADMIN, type: CUSTOM_ROLE }
    { alternateId: /api/v1/iam/resource-sets/WORKFLOWS_IAM_POLICY, detailEntry: null, displayName: Workflows Resource Set, id: WORKFLOWS_IAM_POLICY, type: RESOURCE_SET }
  ]
  transaction: { ... }
  uuid: 2c42-11f0-a9fe
  version: 0
}

Any help is appreciated. Thank you!
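If Splunk auto-extracts the event JSON, the target array usually surfaces as the multivalue fields target{}.displayName and target{}.type. A minimal sketch, assuming those auto-extracted field names, that pulls the display name of the CUSTOM_ROLE entry (e.g. "Workflows Administrator") into its own field:

index=okta "debugContext.debugData.privilegeGranted"="*"
| eval custom_role=mvindex('target{}.displayName', mvfind('target{}.type', "^CUSTOM_ROLE$"))

mvfind returns the position of the first target whose type matches the anchored regex, and mvindex reads the display name at that same position; the anchoring avoids also matching CUSTOM_ROLE_BINDING_ADDED.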
Understood.  To test this, I'll actually need to down an interface for at least 60 seconds to see the Down result.  I'll need to get a network engineer involved to test.  I will get back ASAP.
Try this query, which does not use appends or transpose.

(index=fortinet dlpextra IN (WatermarkBlock1,Log_WatermarkBlock2,Log_WatermarkBlock3,Log_WatermarkBlock4)) OR (index=035 ("Common.DeviceName"="p151.d.com" OR "Common.DeviceName"="p1p71.c.com") "SensitiveInfoTypeData{}.SensitiveInfoTypeName"=*) OR (index=iron AutomaticClassification) OR (index=testing sourcetype="net:alert" dlp_rule="AZ C*")
| eval type = case(index="fortinet", "Proxy", index="iron", "Email", index="035", "SFTP", index="testing", "Netskope", 1==1, "Unknown")
| stats count by type
Thanks for the reply! I ran the query as is and received some odd results. I slightly modified it as shown:

index=main sourcetype="cisco:ios" ("SESSION_STATE_DOWN" OR "SESSION_STATE_UP")
| eval status=case(searchmatch("%BFD-5-SESSION_STATE_DOWN"),"Down",searchmatch("%BFD-5-SESSION_STATE_UP"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats range(_time) as downTime latest(status) as latestStatus by iface
| where downTime<60

This produced 2 results for the past 7 days. Can this be run in real time, with an alert generated for a latestStatus of "Down"?
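For the alerting part, a scheduled search is generally cheaper and more predictable than a real-time one. A sketch, assuming the modified query above is saved as an alert that runs every few minutes and triggers when any result is returned:

index=main sourcetype="cisco:ios" ("SESSION_STATE_DOWN" OR "SESSION_STATE_UP")
| eval status=case(searchmatch("%BFD-5-SESSION_STATE_DOWN"),"Down",searchmatch("%BFD-5-SESSION_STATE_UP"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats latest(status) as latestStatus by iface
| where latestStatus="Down"

With the alert condition set to "number of results > 0", each run reports only interfaces whose most recent BFD event is a down event.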
Each command works only with the results from the previous command. If the most recent event from each host is SESSION_STATE_UP then your job is done, since there are no down hosts. The where command will find no SESSION_STATE_DOWN events, so there are none to display or alert on. This is the desired state (isn't it?). OTOH, if a host is currently down then dedup host will return SESSION_STATE_DOWN for that host, and the where command will decide if the host has been down long enough to worry about.
Hi @dflynn235

Were you able to try the search I provided? I'm happy to help work through this if there's an issue with this approach.

@livehybrid wrote:
Hi @dflynn235
Does the following do what you are looking for?

| eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats range(_time) as downTime latest(status) as latestStatus by iface
| where downTime>60

Here is a working example with sample data; just add the | where to limit as required.

| makeresults count=1
| eval _raw="2025-05-07T07:20:40.482713-04:00 \"switch_name\" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 has gone down. Reason: Administratively Down."
| eval host="switch_name"
| append [| makeresults count=1
  | eval _raw="2025-05-07T07:20:41.482771-04:00 \"switch_name\" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 is up."
  | eval host="switch_name"]
| rex "^(?<timeStr>[^\s]+)"
| eval _time=strptime(timeStr,"%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats range(_time) as downTime latest(status) as latestStatus by iface

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Actually, assuming that you're managing your configs with a deployment server, it might prove easier and less error-prone to uninstall the old UF version, deploy a fresh install of the new UF version, and attach it to the DS. But this might cause duplication of ingested data if the state files (mostly the fishbucket, but also the state of eventlog or WMI inputs) are not handled properly.
There is too little information to even blindly guess.

Firstly, how are those events getting into your Splunk infrastructure? Do you have a UF installed on remote hosts with monitor file inputs defined on them? Or maybe those are eventlog inputs? Or are you receiving syslog data over the network, directly or using a third-party syslog daemon?

Secondly, how did you verify that the data for those "outages" isn't ingested at all? Maybe the sources (or receivers) are getting clogged, so your ingestion process stops for a while but then resumes and catches up, while your data onboarding is incomplete, so you don't have reliable timestamps? There are many things that can go wrong.
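One way to tell delayed ingestion apart from truly missing data is to compare event time against index time. A sketch, with index=your_index as a placeholder for the affected index:

index=your_index earliest=-24h
| eval lag_seconds=_indextime-_time
| timechart span=5m count max(lag_seconds) as max_lag

If count drops to zero during the "outage" but max_lag spikes right afterwards, the events arrived late and caught up rather than never arriving.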
This is more of a question for the ClamAV authors/database maintainers. I'd hazard a guess that the file contained within SE has some characteristic pieces of the Gitpaste method as part of its searches, and ClamAV detects their presence and flags the file. But I'd double-check it with the ClamAV folks.
Hi @mint_choco

Before the fillnull you could try using "makecontinuous". Something like this (adjust accordingly):

| makecontinuous _time span=1d

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
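In context, a minimal sketch, assuming the table in question comes from a daily chart over _time (the base search is a placeholder):

<your base search>
| timechart span=1d count by index
| makecontinuous _time span=1d
| fillnull value=0

makecontinuous inserts the missing _time buckets (such as 4/2) as empty rows, and fillnull then turns the resulting null counts into zeros.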
As was already stated, the OS differences shouldn't be that problematic from the technical point of view. It can add some maintenance overhead because you have to maintain different package types and maybe different service launch methods, but generally the software should work. It will be unsupported, though. The main issue with such a setup is that you'll have a relatively spread-out cluster, so your main problem will probably be latency across your environment. https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware
I have a query that is executing a stats count by source type, as we want to see how many sensitive files leave our firm and the quantity. I am doing an appendcols and then transposing. In the dashboard, the query behind the pie chart looks like:

index=fortinet dlpextra IN (WatermarkBlock1,Log_WatermarkBlock2,Log_WatermarkBlock3,Log_WatermarkBlock4)
| lookup DataF.csv dlpextra OUTPUT C_Label as C_Label
| stats count as Proxy
| appendcols [search index=iron AutomaticClassification
  | lookup IPort_Class.csv DLP_Class OUTPUT C_Label as C_Label
  | stats count as Email]
| appendcols [search index=035 "Common.DeviceName"="p151.d.com" OR Common.DeviceName="p1p71.c.com" "SensitiveInfoTypeData{}.SensitiveInfoTypeName"=*
  | table SensitiveInfoTypeData{}.SensitiveInfoTypeName
  | stats count as SFTP]
| appendcols [search index=testing sourcetype="net:alert" dlp_rule="AZ C*"
  | eval dlp_rule=replace(dlp_rule, "AB", "")
  | stats count as Netskope]
| transpose
| rename "row 1" as Count

My question is, how would you edit the Splunk query to rename the column name to the value I provided instead of "Other"? Do I even need a transpose? That has been the best way I have found for creating a pie chart out of different data sources. Preferably, I'd like to understand how to do that with the JSON formatting I get with Dashboard Studio, as well as figure out how to do it inline, within the query.

Thanks
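One inline option, if the stray label comes from the default header of the transposed column: transpose accepts a column_name argument, so the tail of the query can name both columns directly. A sketch of just the last two lines:

| transpose column_name=Source
| rename "row 1" as Count

The pie chart can then be pointed at Source and Count instead of the default "column" header.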
Thanks for the reply! If I break down this query to just:

index=main ("SESSION_STATE_DOWN" OR "SESSION_STATE_UP")
| dedup host

the only results are up sessions; the "dedup host" removes all of the down sessions, as they're from the same host. Does the remaining

| where match(_raw, "SESSION_STATE_DOWN") AND _time<relative_time(now(), "-60s")

operate only on the results from the | dedup host, or on the total query? If the former, it will never find a "SESSION_STATE_DOWN"; they are filtered out.
I'm attempting to set up an Independent Stream Forwarder on a RHEL machine to collect netflow data and have it forwarded to HEC on another machine. I have most of the configuration worked out, but when I start the streamfwd service I am receiving the following log messages:

INFO [140109244728192] (SnifferReactor/SnifferReactor.cpp:161) stream.SnifferReactor - Starting network capture: sniffer
ERROR [140109244728192] (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor pcap filter 'not (host REDACTED and port 443) and not (host $decideOnStartup and port 8088)' compilation error: aid supported only on ARCnet
FATAL [140109244728192] (CaptureServer.cpp:2338) stream.CaptureServer - SnifferReactor was unable to start packet capturesniffer

I don't know where it's getting that filter. I attempted to set the line below in streamfwd.conf with a valid BPF, but it doesn't seem to honor it and continues with the same error:

streamfwdcapture.<N>.filter = <BPF>

I'm not necessarily concerned at this point with getting a working filter, but I assume the filter in the log message is the issue, since it's the only error in the log. Appreciate any help; thanks in advance.
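For reference, a hypothetical stanza using the same key quoted above, with placeholder values (this assumes capture index 1 and that NetFlow arrives on UDP 2055; adjust both to your environment):

# hypothetical: restrict capture 1 to NetFlow traffic on UDP 2055
streamfwdcapture.1.filter = udp port 2055

If the error persists even with a known-good BPF like this, the quoted filter is likely being generated elsewhere (note that it excludes ports 443 and 8088, suggesting streamfwd builds it to avoid capturing its own HEC traffic) rather than read from this setting.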
Thank you, the solution you provided worked. I appreciate your patience and willingness to help me.
Hi, @livehybrid

Thanks for your reply. Unfortunately, it didn't work. I can fill the index4 column by using fillnull instead of "| eval index4=if(isnull(index4), 0, index4)", but I got the table below. The line for 4/2 is still missing.

_time  index1  index2  index3  index4
4/1    12      3       45      0
4/3    16      7       34      0
4/4    15      6       70      0

There are no logs on April 2nd, so there is no line for 4/2, I guess. Is there no way to make a line for April 2nd? Maybe should I try without the chart command?
How would I do this process in Windows Server 2019 running Splunk?
Hello all,

ClamAV detected Unix.Trojan.Gitpaste-9787170-0 in file Splunk_Research_detections.json. This file appears to be a large repository of security research information and we'd like to verify if this detection is a true concern or if it is a false positive.

Threat detection file location: /opt/splunk/etc/apps/Splunk_Security_Essentials/appserver/static/vendor/splunk/Splunk_Research_detections.json
Splunk version: 9.4.0
Splunk Security Essentials version: 3.8.1
ClamAV detection: Unix.Trojan.Gitpaste-9787170-0
ClamAV version: 1.4.1/27629
ClamAV definition dates: April 24, 2025 through May 05, 2025

Security Essentials was installed on April 25, 2025 and ClamAV detections began immediately during the first scan following the install.
It's been close to a year now and I still have the same error when trying to use this app on the iPhone. What gives? @splunk Help!