All Posts

thank you, that is a useful query
Found it - the scatter chart visualization in Dashboard Studio works; it appears I just had my fields in the wrong order. When I changed the table to X, Y, Label, things began to plot as expected. For classic dashboards there is a Scatter Line Graph visualization.
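For reference, a query shaped like the following produces the column order described above (the index and field names here are hypothetical placeholders, not from the original post):

```spl
index=my_index sourcetype=my_data
| stats avg(x_value) as X, avg(y_value) as Y by label
| rename label as Label
| table X Y Label
```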
I would like to graph a table that has 3 fields:

Label     X    Y
Value1    27   42
Value2    92   87
Value3    61   74

I think it would be a scatter graph. I am currently using Dashboard Studio (Splunk 9.3.x) - maybe this is not available in Dashboard Studio yet; if so, is there an option in the classic dashboards? Using the standard scatter graph panel, I currently get the X value plotted and the value as the legend. Thanks for any assistance, Jason
still having this issue. Any help will be appreciated. Thank you
Hi @victorcorrea, to my knowledge you cannot add a delay to ingestion. You could create a script that copies the files into another folder, removing them after copying, so you are sure that they have the correct permissions and no locks - but (I know it) it's a workaround! Ciao. Giuseppe
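A minimal sketch of such a copy-then-delete staging step (the paths and the `.log` pattern are assumptions; since this UF runs on Windows, the same idea could be done in PowerShell or a scheduled batch job):

```shell
# Copy completed log files into a staging folder that Splunk monitors,
# then delete the originals only after a successful copy.
stage_logs() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    for f in "$src"/*.log; do
        [ -e "$f" ] || continue
        # cp returns non-zero on failure; only remove the source on success
        if cp "$f" "$dest/"; then
            rm "$f"
        fi
    done
}

# Example: stage_logs /var/log/tibco /var/log/tibco_staged
```

Pointing the Splunk monitor stanza at the staging folder means the forwarder only ever sees files the writer has finished with.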
I'm getting the same error. Anyone figure out the solution?

Splunk App for SOAR Export, Latest Version 4.3.13

There was an error adding the server configuration. On SOAR: Verify server's 'Allowed IPs' and authorization configuration.

Error talking to Splunk: POST /servicesNS/nobody/phantom/storage/passwords: status code 500: b'{"messages":[{"type":"ERROR","text":"\\n In handler \'passwords\': Data could not be written: /nobody/phantom/passwords/credential::78a22ab111a4d706cbb4d830f19ea1b3d752f277:/password: $7$qAjGApYELkDTpOBFCFv+hnwTe6tSbTIAIk2b/s4q6GdFBw0mT6AQYQh85WYOruod9tt4ArrN0rjOHYBbesSJqjOjeOUqIjeYl7efAQ=="}]}'
Ciao @gcusello, Thanks for chiming in. The Universal Forwarder runs as the Local System account on this server, so it has full access to the folder and files. I believe the issue might be with the TIBCO process that writes the logs to disk - and locks them while doing so. Since the files are large, Splunk tries to ingest them while they are still being written to disk and, therefore, locked by the TIBCO process. I wanted to try adding a delay to the log ingestion in the UF settings, but I am not really sure how to effectively achieve that. Regards, Victor
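One inputs.conf setting that may be worth testing here (a sketch, not a guaranteed fix for exclusive file locks - the monitor path and sourcetype below are illustrative): `time_before_close` tells the tailing processor how long to wait after reaching EOF before closing a file, which can give a slow writer time to finish.

```ini
# inputs.conf on the Universal Forwarder
[monitor://D:\tibco\logs\*.log]
sourcetype = tibco:log
# Seconds to keep the file open after EOF before closing it (default 3)
time_before_close = 60
```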
Unfortunately it is not that simple.  It has nothing to do with "believing" everything Nessus says.  If Nessus reports a vulnerability we have 7 days to address a critical, or 30 days to address a Medium.  If not addressed within 30 days then we need to open a POA&M with specific details as to why we are not compliant/what are we doing to fix and/or mitigate the issue.  And this still counts against us when trying to keep an active ATO.  So the OP's question is still valid.  When will we see an update that addresses this vulnerability?  So at a bare minimum we can be compliant with our documentation. 
Hi, say the numbers are for every 15-minute timeframe. I want to check the same on the next 15-minute run and see if they are consecutive, meaning the error repeated again. Sorry if I did not explain properly; please let me know and I can prepare a sample dataset.
The ping and traceroute checks confirm a lack of connectivity between your system and your Splunk Cloud stack.  Check your firewall and/or contact your Network Team.
helm install -n splunk --create-namespace splunk-otel-collector --set="cloudProvider=aws,distribution=eks,splunkObservability.accessToken=xxxxxx,clusterName=eks-uk-test,splunkObservability.realm=eu2,gateway.enabled=false,splunkPlatform.endpoint=xxxxxxx,splunkPlatform.token=xxxxx,splunkObservability.profilingEnabled=true,environment=test,operator.enabled=true,agent.discovery.enabled=true" splunk-otel-collector-chart/splunk-otel-collector

Still gives:

Error: INSTALLATION FAILED: template: splunk-otel-collector/templates/operator/instrumentation.yaml:2:4: executing "splunk-otel-collector/templates/operator/instrumentation.yaml" at <include "splunk-otel-collector.operator.validation-rules" .>: error calling include: template: splunk-otel-collector/templates/operator/_helpers.tpl:17:13: executing "splunk-otel-collector.operator.validation-rules" at <.Values.instrumentation.exporter.endpoint>: nil pointer evaluating interface {}.endpoint
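The validation template is dereferencing `.Values.instrumentation.exporter.endpoint`, so one thing worth trying (an assumption based on the error text, not a confirmed fix) is to define the instrumentation exporter endpoint explicitly when the operator is enabled, e.g. via a values file; the endpoint shown here is a placeholder for your agent service:

```yaml
operator:
  enabled: true
instrumentation:
  exporter:
    # Placeholder - point this at your collector/agent OTLP endpoint
    endpoint: http://splunk-otel-collector-agent:4317
```

Pinning the chart to a known-good version with `--version` may also help rule out a regression in the chart's operator templates.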
Hi PickleRick, thanks for looking into this. Say I have this dataset with errors for a particular client and API. I need to look for the error that is consecutive, meaning it is repeating - say we are looking at the last 15 minutes.

Client/customer   api_name        _time                error count
Abc               Validation_V2   2024 Oct 29 10:30    10
Xyz               Testing_V2      2024 Oct 29 10:30    15
TestCust          Testing_V3      2024 Oct 29 10:30    20
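One way to flag errors that repeat in the following 15-minute window could be a streamstats comparison of adjacent buckets (a sketch only - the index name is an assumption, and the field names follow the sample above):

```spl
index=my_errors earliest=-24h
| bin _time span=15m
| stats sum(count) as error_count by _time, client, api_name
| sort 0 client api_name _time
| streamstats current=f last(_time) as prev_time by client api_name
| eval consecutive=if(_time - prev_time == 900, "yes", "no")
```

The `900` is 15 minutes in seconds, so `consecutive="yes"` marks a client/API pair that also erred in the immediately preceding window.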
Welcome to Splunk O11y Cloud, Nickhills! From your first message, I see that you are deploying the collector as Gateway (on the helm install command, the parameter gateway.enabled is set to true). Can you confirm that you need to set up OpenTelemetry as a gateway? If not, please try to re-install as agent and let us know how it goes. Regards, Houssem
The earliest=-30d is there so I can get an average count for each triggered alert over the past 30 days. My question is: how can I limit those results so I only see records from yesterday, not the other 29 days?
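One common pattern (a sketch - the index, action, and field names here are assumptions, since the original search is not shown) is to compute the average over the full 30 days with eventstats, then filter the displayed rows down to yesterday with where:

```spl
index=_audit action=alert_fired earliest=-30d@d latest=@d
| bin _time span=1d
| stats count by _time, ss_name
| eventstats avg(count) as avg_daily_count by ss_name
| where _time >= relative_time(now(), "-1d@d") AND _time < relative_time(now(), "@d")
```

eventstats runs over all 30 days of rows before where drops everything outside yesterday, so the average is unaffected by the filter.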
Hello, I'm experiencing a connectivity issue when trying to send events to my Splunk HTTP Event Collector (HEC) endpoint. I have confirmed that HEC is enabled, and I am using a valid authorization token. Here's the command I am using:

curl -k "https://[your-splunk-instance].splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk [your-token]" \
  -H "Content-Type: application/json" \
  -d '{"event": "Hello, Splunk!"}'

Unfortunately, I receive the following error:

curl: (28) Failed to connect to [your-splunk-instance].splunkcloud.com port 8088 after [time] ms: Couldn't connect to server

Troubleshooting steps taken:
- Successful connection from another user: notably, another user from a different system was able to successfully use the same curl command to reach the same endpoint.
- Network connectivity: I verified network connectivity by using ping and received a timeout for all requests. I performed a traceroute and found that packets are lost after the second hop.

Despite these efforts, the issue persists. If anyone has encountered a similar issue or has suggestions for further troubleshooting, I would greatly appreciate your help. Thank you!
SPL does not support conditional execution of commands. It can be simulated in a dashboard by setting a token to the desired search string and referencing the token in the query.

<fieldset>
  <input token="myInput"...>
    <change>
      <condition match="<<option 1>>">
        <set token="query">SPL for option 1</set>
      </condition>
      <condition match="<<option 2>>">
        <set token="query">SPL for option 2</set>
      </condition>
    </change>
  </input>
</fieldset>
...
<row>
  <panel>
    <table>
      <search>
        <query>$query$</query>
      </search>
    </table>
  </panel>
</row>
Hi gcusello. Many thanks for your reply. I understand that Splunk does not have this functionality, thus I will need to look at how I can create 2 accounts for an individual whilst trying to use SSO. We have Analysts that develop use cases on Enterprise Security, and for some of the functionality they need, they need Admin rights; but when they are doing their day-to-day role they should remain Analyst. The way Splunk does it, as you say, I can apply both profiles, but they get to use whatever they need within these 2 profiles. As for ISO 27001 (2022), I am also an Auditor and was looking at ISO 27002 8.2 Privileged Access Rights, Guidance para [i]: "only using identities with privileged access rights for undertaking administrative tasks and not for day-to-day general tasks [i.e. checking email, accessing the web (users should have a separate normal network identity for these activities)]." This I read should also include normal user activity within Splunk. Kind regards, Neil
Dashboard panels have recently stopped being a consistent size across the row introducing grey space on our boards. This is happening throughout the app on Splunk Cloud Version:9.2.2403.111. Does anyone know of any changes or settings which may have affected this and how it can be resolved?  Thanks  
Sorry for the confusion. So search one gets a result like this:

identity   email                     extensionattribute10    extensionattribute11          first   last
nsurname   name.surname@domain.com   nsurnameT1@domain.com   name.surname@consultant.com   name    surname

Search two will get all my tickets that were created for people leaving my company and will return results like this:

_time                 affect_dest   active   description                                    dv_state   number
2024-10-31 09:46:55   STL Leaver    true     Leaver Request for Name Surname - 31/10/2024   active     INC01

So the only way of searching would be to search the second query's description field for where first and last appear.
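A sketch of that approach (the source names are placeholders, and the rex pattern assumes the description always follows the "Leaver Request for First Last" shape shown above):

```spl
index=tickets "Leaver Request"
| rex field=description "Leaver Request for (?<first>\S+) (?<last>\S+)"
| eval key=lower(first." ".last)
| join type=inner key
    [ search index=identities
      | eval key=lower(first." ".last)
      | fields key email extensionattribute10 extensionattribute11 ]
| table _time number description email extensionattribute10 extensionattribute11
```

Building a normalized `key` on both sides avoids case mismatches between the ticket description and the identity fields; note join has subsearch result limits, so a lookup may scale better for large identity sets.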
Just to follow up that if we disable the operator, the deployment is successful, but we have no APM. This issue specifically seems to relate to the operator and the APM instrumentation