All Posts

Hi @WUShon, have you tried mapping.fieldColors? Refer to: https://docs.splunk.com/Documentation/Splunk/9.0.1/Viz/PanelreferenceforSimplifiedXML. Please also check Dashboard Studio for more options. If this helps, please upvote.
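In Simple XML this option takes a JSON map of field values to colors. A minimal sketch, assuming a choropleth map panel; the field values and hex colors are placeholders, so check the panel reference above for the exact syntax:

<map>
  <search>
    <query>... | geom geo_us_states featureIdField=state</query>
  </search>
  <option name="mapping.type">choropleth</option>
  <option name="mapping.fieldColors">{"Error": "#DC4E41", "Success": "#53A051"}</option>
</map>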
Just filter with | where _time>=now()-86400 (or whatever time limit you need) before you remove the _time field with the table command.
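For example, a minimal end-to-end sketch (index, sourcetype, and field names are hypothetical):

index=main sourcetype=app_logs
| where _time >= now() - 86400
| table host, source, message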
Ok, use | bin _time span=15m to split your data into 15-minute buckets, then count your data by _time and all those other fields.
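A sketch of the whole pipeline, with hypothetical field names:

index=main sourcetype=app_logs
| bin _time span=15m
| stats count by _time, host, error_code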
In other words, you do believe what your scanner says. Just because someone decided that something is "critical" doesn't automatically mean it is. If your VM process doesn't allow for flagging a false positive or adjusting the criticality, it's simply a bad process. Every reasonable VM process has a vulnerability assessment step after the scan phase. If you're jumping straight into remediation, you're simply taking shortcuts and doing checkbox security. Don't take it personally; I'm not saying you are responsible for the process design. It's just that you might end up "fixing" a vulnerability that in reality isn't there.
@tbessie Hello, how are you receiving this data, via UF or HF? Do you have any TIME format settings in your props.conf? https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Configuretimestamprecognition#Syntax_overview I would validate the Splunk time-parsing configuration first. Also, have you checked whether the indexer and the source system have misaligned clocks? I have seen inaccurate search results caused by clock misalignment. If this helps, please upvote.
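For reference, a minimal props.conf sketch for a timestamp like "2024-10-21 11:31:59,232"; the sourcetype name and TZ are placeholders:

[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 24
TZ = UTC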
Hello @rake, this could be the issue because you are on a trial version (Firehose has another endpoint). Deploying this add-on to free trial Splunk Cloud deployments is not supported at this time. Refer to: https://docs.splunk.com/Documentation/AddOns/released/Firehose/Installationoverview Note: disabling SSL validation is not recommended for production environments. If this helps, please upvote.
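For reference, the installation docs linked above describe a dedicated Firehose HEC endpoint on paid Splunk Cloud stacks of the form https://http-inputs-firehose-<your-stack>.splunkcloud.com:443, which free trial stacks do not expose.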
1. I need to fetch data based on deviceMac such that each row gets the corresponding data from each column.
2. It should fill NA or NULL if there is no corresponding data.
3. If you look at the id column, you are seeing extra data.

For example: if deviceMac 90:dd:5d:bf:10:54 is connected to SA91804F4A, then id has 2 values: SA91804F4A and f452465ee7ab; but if deviceMac d4:54:8b:bd:a1:c8 is connected to f452465ee7ab, then id has 1 value: f452465ee7ab. I want my output like this:

90:dd:5d:bf:10:54  SA91804F4A (do not include f452465ee7ab)
d4:54:8b:bd:a1:c8  f452465ee7ab

Splunk query used to get the output:

| search
| rex field=_raw "(?msi)(?<json>{.+}$$)"
| spath input=json
| spath input=json output=deviceMac audit.result.devices{}.mac
| spath input=json output=deviceName audit.result.devices{}.name
| spath input=json output=status audit.result.devices{}.health{}.status
| spath input=json output=connectionState audit.result.devices{}.connectionState
| spath input=json output=id audit.result.devices{}.leafToRoot{}.id
| eval time=strftime(_time,"%m/%d/%Y %H:%M:%S.%N")
| dedup deviceMac, id
| table time, deviceMac, connectionState, id, deviceName, status
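If leafToRoot always lists the directly connected node first, one way to drop the extra ids is to keep only the first element of the multivalue field before building the table. A sketch, assuming that ordering holds in your data:

... | eval id=mvindex(id, 0)
| table time, deviceMac, connectionState, id, deviceName, status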
Hi, I'm currently setting up a pipeline to send logs from AWS Kinesis Firehose to Splunk. I'm using Splunk's Cloud Trial version as the destination endpoint, and my goal is to send data without requiring an SSL handshake.

Here's a summary of my setup:
Service: AWS Kinesis Firehose
Destination: Splunk Cloud Trial (using the Splunk HEC URL)
Goal: Send data directly from Firehose to Splunk without SSL validation, if possible.

The delivery fails with:
"errorCode": "Splunk.SSLHandshake",
"errorMessage": "Could not connect to the HEC endpoint. Make sure that the certificate and the host are valid."

To troubleshoot, I also tested sending a record with the following command:

aws firehose put-record --delivery-stream-name FirehoseSplunkDeliveryStream \
--record='{"Data":"eyJldmVudCI6eyJrZXkxIjoidmFsdWUxIiwia2V5MiI6InZhbHVlMiJ9fQ=="}'

The SSL handshake error persists when connecting to the Splunk HEC endpoint. Has anyone configured a similar setup, or is there a workaround to disable SSL validation for the Splunk endpoint? I'm new to Splunk and just trying it out; any insights or suggestions would be greatly appreciated! Thanks!
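To isolate the problem from Firehose, you can test the HEC endpoint directly with curl; -k skips certificate validation, which is only acceptable for a quick trial test. The host and token are placeholders:

curl -k "https://<your-hec-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": {"key1": "value1", "key2": "value2"}}'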
In my company's Splunk server, when I do a search, I usually see a difference in time between the "Time" column and the "Event" column for each log entry. An example:

Time: 10/21/24 11:06:37.000 AM
Event: 2024-10-21 11:31:59,232 priority=WARN ...

Why would the Time column show 11:06:37 while the Event field (the actual logged data) shows 11:31:59,232?
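One way to quantify the gap is to parse the timestamp out of the raw event and compare it to _time. A sketch, assuming each event starts with a "YYYY-MM-DD HH:MM:SS" timestamp:

... | eval event_time=strptime(substr(_raw, 1, 19), "%Y-%m-%d %H:%M:%S")
| eval lag_seconds=_time - event_time
| table _time, event_time, lag_seconds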
thank you, that is a useful query
Found it - the scatter graph visualization in Dashboard Studio works. It appears that I just had my fields in the wrong order; when I changed the table to X, Y, Label, things began to plot as expected. For the classic dashboard there is a Scatter Line Graph visualization.
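For reference, a minimal sketch that builds a table in that working field order, with made-up values:

| makeresults count=3
| streamstats count as n
| eval X=n*10, Y=n*20, Label="Value".n
| table X, Y, Label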
I would like to graph a table that has 3 fields:

Label      X     Y
Value1     27    42
Value2     92    87
Value3     61    74

I think it would be a scatter graph. I am currently using Dashboard Studio (Splunk 9.3.x) - maybe this is not available in Dashboard Studio yet; if so, is there an option in the classic dashboards? Using the standard scatter graph panel, I currently get the X value plotted and the value as the legend. Thanks for any assistance, Jason
Still having this issue. Any help would be appreciated. Thank you.
Hi @victorcorrea, to my knowledge you cannot add a delay to ingestion. You could create a script that copies the files to another folder, removing them after copying, so you're sure that they have the correct permissions and no locks - but (I know it) it's a porkaround! Ciao. Giuseppe
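A sketch of that idea in PowerShell (since the UF runs as Local System on Windows here); the paths and the 5-minute quiet period are assumptions:

# Copy logs the writer has finished with into a staging folder the UF monitors,
# then remove the originals. Only touch files idle for at least 5 minutes.
$src = "C:\tibco\logs"
$dst = "C:\splunk_staging"
Get-ChildItem -Path $src -Filter *.log |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddMinutes(-5) } |
    ForEach-Object {
        Copy-Item -Path $_.FullName -Destination $dst
        Remove-Item -Path $_.FullName
    }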
I'm getting the same error. Has anyone figured out the solution? Splunk App for SOAR Export, latest version 4.3.13.

There was an error adding the server configuration. On SOAR: Verify server's 'Allowed IPs' and authorization configuration.

Error talking to Splunk: POST /servicesNS/nobody/phantom/storage/passwords: status code 500: b'{"messages":[{"type":"ERROR","text":"\\n In handler \'passwords\': Data could not be written: /nobody/phantom/passwords/credential::78a22ab111a4d706cbb4d830f19ea1b3d752f277:/password: $7$qAjGApYELkDTpOBFCFv+hnwTe6tSbTIAIk2b/s4q6GdFBw0mT6AQYQh85WYOruod9tt4ArrN0rjOHYBbesSJqjOjeOUqIjeYl7efAQ=="}]}'
Ciao @gcusello, thanks for chiming in. The Universal Forwarder runs as the Local System account on this server, so it has full access to the folder and files. I believe the issue might be with the TIBCO process that writes the logs to disk - and locks them while doing so. Since the files are large, Splunk tries to ingest them while they are still being written to disk and, therefore, locked by the TIBCO process. I wanted to try adding a delay to the log ingestion in the UF settings, but I am not really sure how to effectively achieve that. Regards, Victor
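One inputs.conf setting sometimes used in this area is time_before_close, which controls how long the monitor waits after reaching EOF before closing a file. Whether it helps with the lock contention described above is not guaranteed; a sketch with a placeholder path:

[monitor://D:\TIBCO\logs]
time_before_close = 30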
Unfortunately it is not that simple. It has nothing to do with "believing" everything Nessus says. If Nessus reports a vulnerability, we have 7 days to address a Critical, or 30 days to address a Medium. If it is not addressed within 30 days, then we need to open a POA&M with specific details as to why we are not compliant and what we are doing to fix and/or mitigate the issue. And this still counts against us when trying to keep an active ATO. So the OP's question is still valid: when will we see an update that addresses this vulnerability? At a bare minimum, that would let us be compliant in our documentation.
Hi, say the numbers are for every 15-minute timeframe - I want to check the same numbers on the next 15-minute run and see if they are consecutive, meaning the error repeated again. Sorry if I did not explain properly; please let me know and I can prepare a sample dataset.
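A sketch of one way to flag an error that repeats in the next 15-minute bucket, with a hypothetical error_code field:

... | bin _time span=15m
| stats count by _time, error_code
| streamstats current=f window=1 last(_time) as prev_time by error_code
| eval consecutive=if(_time - prev_time == 900, "yes", "no")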
The ping and traceroute checks confirm a lack of connectivity between your system and your Splunk Cloud stack.  Check your firewall and/or contact your Network Team.
helm install -n splunk --create-namespace splunk-otel-collector \
  --set="cloudProvider=aws,distribution=eks,splunkObservability.accessToken=xxxxxx,clusterName=eks-uk-test,splunkObservability.realm=eu2,gateway.enabled=false,splunkPlatform.endpoint=xxxxxxx,splunkPlatform.token=xxxxx,splunkObservability.profilingEnabled=true,environment=test,operator.enabled=true,agent.discovery.enabled=true" \
  splunk-otel-collector-chart/splunk-otel-collector

Still gives:

Error: INSTALLATION FAILED: template: splunk-otel-collector/templates/operator/instrumentation.yaml:2:4: executing "splunk-otel-collector/templates/operator/instrumentation.yaml" at <include "splunk-otel-collector.operator.validation-rules" .>: error calling include: template: splunk-otel-collector/templates/operator/_helpers.tpl:17:13: executing "splunk-otel-collector.operator.validation-rules" at <.Values.instrumentation.exporter.endpoint>: nil pointer evaluating interface {}.endpoint
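The nil pointer is evaluated at .Values.instrumentation.exporter.endpoint, so the chart's validation rules appear to expect instrumentation.exporter.endpoint to be set when operator.enabled=true. One thing worth trying; the service name below is an assumption, so adjust it to whatever OTLP endpoint your collector exposes:

# hypothetical values.yaml fragment
instrumentation:
  exporter:
    endpoint: http://splunk-otel-collector-agent.splunk.svc.cluster.local:4317

or the equivalent --set "instrumentation.exporter.endpoint=..." flag on the helm install command.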