All Posts

It is difficult to advise without seeing your events. Please share some anonymised events which demonstrate the issue, pasted as raw events in a code block (using the </> button above) to preserve formatting.
Try this | rex max_match=0 field=tags "(?<namevalue>[^:,]+:[^, ]+)" | mvexpand namevalue | rex field=namevalue "(?<name>[^:]+):(?<value>.*)" | eval {name}=value
We deal with hundreds of IOCs (mostly flagged IPs) that come in monthly, and we need to check them for hits in our network. We do not want to continue running a summary search one at a time. Is it possible to use a lookup table (or any other way) to search hundreds at a time, or does this have to be done one at a time? I am very new to Splunk and still learning. I need to see whether we have had any traffic from or to these IPs.
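If it helps, here is a rough sketch of the lookup-table approach, assuming the flagged IPs are saved as a lookup file called ioc_ips.csv with a single column named ioc_ip, and that your network traffic lives in an index with src_ip and dest_ip fields (all of these names are illustrative, not from your environment):

index=firewall earliest=-30d
    [ | inputlookup ioc_ips.csv | rename ioc_ip AS src_ip | fields src_ip ]
    OR
    [ | inputlookup ioc_ips.csv | rename ioc_ip AS dest_ip | fields dest_ip ]
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen by src_ip, dest_ip

The two subsearches expand into one big OR of src_ip=... / dest_ip=... terms, so the whole IOC list is matched in a single search instead of one summary search per IP.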
Hi, I made changes to my indexer storage, but when I look at the Monitoring Console disk usage panel, the value is negative. Has anyone faced this? I already refreshed the assets with the Monitoring Console refresh and restarted the instance, but nothing changed.
I don't know where else to ask, which is why I'm asking here, and I still don't know how to solve this issue. I'm just a Path Finder on the Splunk community and don't have access to open a ticket with Splunk support; maybe it can be solved if you do have that access.
I think I got it: | eval success=if(status=200,1,0) | eval failure=if(status=400,1,0) | stats sum(failure) as fail_sum, sum(success) as success_sum by app | eval success_rate=round((success_sum / (success_sum + fail_sum))*100,1) | table app, success_rate
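For what it's worth, the same result can also be computed without the intermediate success/failure fields by counting conditionally inside stats (same field names as your search above); apps with no 400s simply end up with fail_sum=0 and a 100% success rate:

index=web uri_path="/somepath" (status="200" OR status="400")
| rex field=useragent "^(?<app_name>[^/]+)/(?<app_version>[^;]+)?\((?<app_platform>[^;]+); *"
| eval app=app_platform+" "+app_name+" "+app_version
| stats count(eval(status="200")) AS success_sum, count(eval(status="400")) AS fail_sum by app
| eval success_rate=round(success_sum/(success_sum+fail_sum)*100,1)
| table app, success_rate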
Thanks - this is very close to what I'm looking for (I do want to perform this extraction at search time), but it may need a couple of tweaks. 1) All of the depts have a space in them (some more than one) and the rex is only picking up the first word of the dept. Examples: "support services", "xyz operations r&d". 2) Also, when I look into each event to see that the Tags fields are extracted, only one actually gets extracted, but it's not the same one each time. The "name" and "namevalue" fields match the one field that does get extracted. Hope that makes sense?
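For issue 1, one possible tweak (assuming the tags really are comma-separated name:value pairs, as in the suggestion above) is to let the value run up to the next comma instead of stopping at the first space, and to trim any stray spaces before creating the fields:

| rex max_match=0 field=tags "(?<namevalue>[^:,]+:[^,]+)"
| mvexpand namevalue
| rex field=namevalue "(?<name>[^:]+):(?<value>.*)"
| eval name=trim(name), value=trim(value)
| eval {name}=value

On issue 2, note that mvexpand splits each event into one result row per tag, so when you inspect a single row you will only ever see the one name/value pair that belongs to that row; a stats or table afterwards should show them all.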
I've got data like this: "[clientip]  [host] - [time] [method] [uri_path] [status] [useragent]" and do the following search:

index=web uri_path="/somepath" status="200" OR status="400" | rex field=useragent "^(?<app_name>[^/]+)/(?<app_version>[^;]+)?\((?<app_platform>[^;]+); *" | eval app=app_platform+" "+app_name+" "+app_version

I've split up the useragent just fine and verified the output. I now want to compare status by "app", so I've added the following:

| stats count by app, status

Which gives me:
app status count
android app 1.0 200 5000
ios app 2.0 400 3
android app 1.1 200 500
android app 1.0 400 12
ios app 2.0 200 3000

How can I compare, for a given "app" (combo of platform, name, version), the rate of success, where success is a 200 response and failure is a 400? I understand that I need to take the success count and divide it by the success + failure count, but how do I combine this data? Also note that I need to consider that some apps may not have any 400 errors.
It worked for me! Thanks a lot!
Did you manage to find a resolution to this issue? I am also facing the same issue.
What do you mean by needing to switch the config back to the TCP method? How did you do that? After this change, do you see it listening on port 8089? netstat -pant | egrep 8089 - do you see LISTEN?
Hi @WUShon Have you tried mapping.fieldColors? Refer: https://docs.splunk.com/Documentation/Splunk/9.0.1/Viz/PanelreferenceforSimplifiedXML. Please check Dashboard Studio for more options. If this helps, please upvote.
Just filter with | where _time>=now()-86400 (or whatever time limit you need) before you remove the _time field with the table command.
OK, use | bin _time span=15m to split your data into 15-minute buckets, then count your data by _time and all those other fields.
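Putting both suggestions together, a minimal sketch might look like this (your_index and the fieldA/fieldB names are placeholders for your own search and fields):

index=your_index
| where _time>=now()-86400
| bin _time span=15m
| stats count by _time, fieldA, fieldB
| table fieldA, fieldB, count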
In other words, you simply believe whatever your scanner says. Just because someone decided that something is "critical" doesn't automatically mean it is. If your VM process doesn't allow for flagging a false positive or adjusting the criticality, it's simply a bad process. Every reasonable VM process includes a vulnerability assessment phase after the scan phase. If you're jumping straight into remediation, you're taking shortcuts and doing checkbox security. Don't take it personally, I'm not saying you are responsible for the process design. It's just that you may end up having to "fix" a vulnerability which in reality isn't there.
@tbessie Hello, how are you receiving this data, via a UF or HF? Do you have any TIME format settings in your props.conf? https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Configuretimestamprecognition#Syntax_overview I would validate the Splunk time parsing configurations first. Did you also check whether the indexer and the source system might have misaligned clocks? I have seen inaccurate search results caused by such misalignment. If this helps, please upvote.
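One quick way to check for clock misalignment is to compare the parsed event time with the time the event was actually indexed; a rough sketch (point it at the affected index/sourcetype):

index=your_index sourcetype=your_sourcetype
| eval lag_seconds=_indextime-_time
| stats min(lag_seconds), avg(lag_seconds), max(lag_seconds) by host, sourcetype

Large or wildly varying values here usually point at timestamp parsing or clock skew rather than a search problem.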
Hello @rake This could be the issue because you are on a trial version (Firehose uses another endpoint). Deploying this add-on to free trial Splunk Cloud deployments is not supported at this time. Refer: https://docs.splunk.com/Documentation/AddOns/released/Firehose/Installationoverview Note: Disabling SSL validation is not recommended for production environments. If this helps, please upvote.
1. I need to fetch data based on deviceMac such that each row gets the corresponding data from each column.
2. It should fill NA or NULL if there is no corresponding data.
3. If you look at the id column, you are seeing extra data.

For example: if deviceMac 90:dd:5d:bf:10:54 is connected to SA91804F4A, then id has 2 values: SA91804F4A and f452465ee7ab; but if deviceMac d4:54:8b:bd:a1:c8 is connected to f452465ee7ab, then id has 1 value: f452465ee7ab. I want my output to look like this:
90:dd:5d:bf:10:54 SA91804F4A (do not include f452465ee7ab)
d4:54:8b:bd:a1:c8 f452465ee7ab

Splunk query used to get the output:

| search | rex field=_raw "(?msi)(?<json>{.+}$$)" | spath input=json | spath input=json output=deviceMac audit.result.devices{}.mac | spath input=json output=deviceName audit.result.devices{}.name | spath input=json output=status audit.result.devices{}.health{}.status | spath input=json output=connectionState audit.result.devices{}.connectionState | spath input=json output=id audit.result.devices{}.leafToRoot{}.id | eval time=strftime(_time,"%m/%d/%Y %H:%M:%S.%N") | dedup deviceMac, id | table time, deviceMac, connectionState, id, deviceName, status
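One way to keep each device's values lined up is to expand the devices array first and only then pull the fields out of each device object; a rough sketch building on the same json field extracted above, and assuming the node a device is connected to is the first entry of leafToRoot:

| spath input=json output=devices path=audit.result.devices{}
| mvexpand devices
| spath input=devices output=deviceMac path=mac
| spath input=devices output=deviceName path=name
| spath input=devices output=status path=health{}.status
| spath input=devices output=connectionState path=connectionState
| spath input=devices output=id path=leafToRoot{}.id
| eval id=mvindex(id,0)
| fillnull value="NA" deviceMac deviceName status connectionState id
| eval time=strftime(_time,"%m/%d/%Y %H:%M:%S.%N")
| table time, deviceMac, connectionState, id, deviceName, status

Because the per-device grouping is preserved, the dedup across flattened multivalue fields is no longer needed, and fillnull covers the NA requirement.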
Hi, I'm currently setting up a pipeline to send logs from AWS Kinesis Firehose to Splunk. I'm using Splunk's Cloud Trial version as the destination endpoint, and my goal is to send data without requiring an SSL handshake. Here's a summary of my setup:
Service: AWS Kinesis Firehose
Destination: Splunk Cloud Trial (using the Splunk HEC URL)
Goal: Send data directly from Firehose to Splunk without SSL validation, if possible.
The delivery fails with:
"errorCode": "Splunk.SSLHandshake", "errorMessage": "Could not connect to the HEC endpoint. Make sure that the certificate and the host are valid."
To troubleshoot, I also tested sending a record with the following command:
aws firehose put-record --delivery-stream-name FirehoseSplunkDeliveryStream \
--record='{"Data":"eyJldmVudCI6eyJrZXkxIjoidmFsdWUxIiwia2V5MiI6InZhbHVlMiJ9fQ=="}'
The SSL handshake error persists when connecting to the Splunk HEC endpoint. Has anyone configured a similar setup, or is there a workaround to disable SSL validation for the Splunk endpoint? I'm new to Splunk and just trying it out; any insights or suggestions would be greatly appreciated! Thanks!
In my company's Splunk server, when I do a search, I usually see a difference in time between the "Time" column and the "Event" column for each log entry. An example:
Time: 10/21/24 11:06:37.000 AM
Event: 2024-10-21 11:31:59,232 priority=WARN ...
Why would the Time column show 11:06:37 while the Event field (the actual logged data) shows 11:31:59,232?
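If you want to measure that gap across many events, one option is to pull the timestamp text back out of the raw event and compare it with _time; a rough sketch assuming the "2024-10-21 11:31:59,232" format shown above (your_index is a placeholder):

index=your_index
| rex field=_raw "^(?<event_ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| eval event_epoch=strptime(event_ts,"%Y-%m-%d %H:%M:%S")
| eval diff_seconds=event_epoch-_time
| stats min(diff_seconds), avg(diff_seconds), max(diff_seconds) by host, sourcetype

If the difference is consistent, the timestamp is probably being parsed from somewhere else in the event (or there is a clock/timezone offset), which the props.conf timestamp settings mentioned above would control.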