Hi, we've installed this app and tried configuring it to send Splunk alerts to Jira. After entering the API URL and key and hitting 'Complete Setup', it keeps redirecting back to the configuration page. Is anyone else experiencing this issue on Splunk Cloud?
If I understand correctly, you want to persist the stage set in the dropdown for a particular event in the table. You would have to manage that yourself: store the stage you set in the dropdown into a lookup associated with the row you are editing, and then in your search look up the event against that lookup to find any previously set stage for the appropriate event, assuming the time/date are within your bounds.

As the JS implies, the 'BasicCellRenderer' is just that - it simply renders the table visually, and will not store anything for you. You could possibly set a token in the JS when the dropdown value is changed and then have some additional logic that saves the state of the event to the lookup. You would need a way to identify the event, e.g. based on a hash of the data or some unique id.
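To make that concrete, here is a rough sketch of the lookup round-trip. The lookup name stage_tracker.csv and the event_id/stage field names are hypothetical - substitute whatever uniquely identifies your rows. Saving a selection (e.g. run from the JS with the tokens filled in):

```
| makeresults
| eval event_id="$row_id$", stage="$selected_stage$"
| inputlookup append=t stage_tracker.csv
| dedup event_id
| outputlookup stage_tracker.csv
```

The dedup keeps the newly written row when the same event_id already exists in the lookup. Then, re-applying the saved stage in the table's search:

```
index=your_index ...
| lookup stage_tracker.csv event_id OUTPUT stage AS saved_stage
```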
I've been pondering over this example for a couple of days now and I'm still lost as to how to change my current setup to allow the third field to determine which query to run based on what software version a customer has. I'm struggling to understand what "$tok_searchfieldvalue$" represents and how displaying it in a panel will inform the dashboard which of the two queries to run and display results from. Using <choice value=......> in the fieldset section is something I haven't worked with before, so I'll go try to find more documentation or online use cases for this and see if I can apply those to my situation. Can I have more than two of these <choice value=....> lines? And then could I use one of them to tell the dashboard to, say, hide one panel but unhide the other one and display its results? I appreciate the attempt to help me, but I fear I may be too new to these dashboard customizations to grasp how your example applies to mine.
Also, if there is a way to locate these events with the help of the "rex" command, let me know so that I can use that as well.
Both the request and response are from the same API. It's just that I could not use spath to specify the path of bannerid and location code to get those values. Please help.
I don't have any plain-text data. All the data is fed as Request and Response into Splunk, from which I need to retrieve bannerID and location codes. Could you please help me with how to retrieve that in Splunk?
You don't need a subsearch; join/append are rarely necessary and should be avoided where possible, as subsearches have limitations. You just need to search both datasets at the start with an (A) OR (B) search, then collect them together with stats. I am not sure why you are using eventstats - you don't need it, and it will not perform well anyway. Try this:

(index=a component=serviceA "incoming data") OR (index=a component=serviceB "data from")
| stats values(name) as name, values(age) as age, values(parentName) as parentName, values(parentAge) as parentAge by id1, id2
| eval mismatch=case(isnull(name) AND isnull(age), "data doesn't exist in serviceA", isnull(parentName) AND isnull(parentAge), "data doesn't exist in serviceB", true(), "No mismatch")
| table name, age, parentAge, parentName, mismatch, id1, id2
Hi all, I'm trying to figure out a way to edit the alert that is sent to PagerDuty. Currently I have a bunch of alerts that are being sent to the notable index, and then a single alert that searches that index and is sent to PagerDuty. The problem is, the alert is sending the name of the original alert in the "alert" section (not the notification). Is there a way I can edit the catch-all alert so that it doesn't send the name of the original alert?
You need to use the proper field name, prefixed by the dataset name. I don't know your datamodel, but as an example with one of the CIM datamodels, it's not

| tstats count from datamodel=Network_Traffic by src_ip

but

| tstats count from datamodel=Network_Traffic by All_Traffic.src_ip

Of course you need to adjust this to your datamodel.
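If you're not sure what the prefixed field names look like in your own datamodel, one way to peek at a few events and their field names (a sketch - substitute your own model and dataset names for Network_Traffic and All_Traffic) is:

```
| datamodel Network_Traffic All_Traffic search
| head 5
```

The field names in the results (e.g. All_Traffic.src_ip) are the ones to use in your tstats by clause.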
Unless you explicitly do something to the data (for example, add an indexed field containing the name of the forwarder) Splunk doesn't keep this kind of metadata.
you can try REPORT instead of TRANSFORMS in props.conf
you can check field call "splunk_server"
I write it way too often on this forum - make your life easier, fix your data!

At this point, even assuming that your copy-pasted sample got truncated and your real data is properly closed, you have an XML structure, as a string field in JSON, prepended by a more or less structured plain-text header. Do you have any other plain-text data there? I suppose not. So you could just parse the timestamp and then cut the header; this can be done with a simple SEDCMD. The JSON part will be more difficult because it requires de-escaping some characters, and if you have more data in that JSON, "extracting" the XML part is not really a feasible option. But it might be worth giving it a try.
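For illustration only, a props.conf sketch of that kind of header strip - the sourcetype name is made up, and the regex assumes the JSON body begins at the first opening brace, so adjust it to your actual header:

```
# props.conf (on the instance that parses this data)
[my:xml_in_json]
# drop everything before the first '{' in each event, keeping the JSON body
SEDCMD-strip_header = s/^[^{]+//
```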
Yeah, you have the same issue as me. Our deployment server started lagging for any function that needs to call the API for UF phone-home information. I called Support and they confirmed it as a "bug" that will be fixed in 9.4. I updated to 9.3.1 recently; no more "wrong apps", but it's still very laggy. I need to run "splunk reload deploy-server" each time I want to deploy a TA to our agents.
@ITWhisperer I tried the below query

|sort 0 'Business_Date' 'StartTime'

It's sorting only on StartTime, not on Business_Date. Could you please suggest?
Both the bannerID and location are inside the <n1:request> tag, which is inside the body of the REQUEST.
How do you locate these within your events?
Try this | sort 0 'Business_Date' 'StartTime'
As you may know, the Splunk OTel Collector can collect logs from Kubernetes and send them to Splunk Cloud/Enterprise using the Splunk OTel Collector chart distribution. However, you can also use the Splunk OTel Collector to collect logs from Windows or Linux hosts and send those logs directly to Splunk Enterprise/Cloud. This information isn't easy to find in the documentation, as it appears the standalone (non Helm chart) distribution of the OTel Collector can only be used for Splunk Observability. In the instructions below, I will show you how to install the Collector even if you don't have a Splunk Observability (O11y) subscription.

In terms of compatibility, the Splunk OTel Collector is supported on the following operating systems:

Amazon Linux: 2, 2023 (log collection with Fluentd is not currently supported for Amazon Linux 2023)
CentOS, Red Hat, or Oracle: 7, 8, 9
Debian: 9, 10, 11
SUSE: 12, 15 for version 0.34.0 or higher (log collection with Fluentd is not currently supported)
Ubuntu: 16.04, 18.04, 20.04, 22.04, and 24.04
Rocky Linux: 8, 9
Windows 10 Pro and Home; Windows Server 2016, 2019, 2022

Once you have confirmed that your operating system is compatible, use these instructions to install the Splunk OTel Collector. First, export the following variable in the shell you will run the installer from. It is checked by the installer and confirms that you aren't installing the Collector for Observability, where an Access Token would need to be specified:

export VERIFY_ACCESS_TOKEN=false

Next, run the installer. In this example we use curl, but there are other installation methods that can be found here:

curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --hec-token <token> --hec-url <hec_url> --insecure true

You may notice we modify the installation command from the original instructions: we specify the HEC token and HEC URL of the Splunk instance you want to send your logs to. Please note that both the HEC token and HEC URL are required for the installation to work correctly. The installer should then set up the Collector and start sending logs to your Splunk instance automatically (assuming your network allows the traffic out); if you want to know which log ingestion methods are configured out of the box, please see the default pipeline for the OTel Collector as specified here.

What if you want your Splunk OTel Collector to send logs to Enterprise/Cloud and also send metrics or traces to Splunk Observability? In that case, you can modify the installation command above to include your O11y realm and Access Token in addition to your HEC URL and HEC token, like this:

curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --realm <o11y_realm> --hec-token <token> --hec-url <hec_url> --insecure true -- <ACCESS_TOKEN>

Please note the Access Token always follows the bare -- separator and should always be placed at the end of the installer command.
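Once the Collector is running, a quick sanity check from the Splunk side is to look at what is arriving - a sketch, since the target index depends on how your HEC token is configured:

```
| tstats count where index=* by index, sourcetype
```

If nothing shows up after a few minutes, check the Collector's service logs on the host and confirm the HEC URL is reachable from it.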
I have 2 queries where each query retrieves fields from a different source using regex; I combine them using append and group the data using stats by a common id, then evaluate the result. What is happening is that, with a large data set, it evaluates before the data from query 2 has loaded and gives a wrong result. My sample query looks like this:

index=a component=serviceA "incoming data"
| eventstats values(name) as name, values(age) as age by id1, id2
| append [search index=a component=serviceB "data from" | eventstats values(parentName) as parentName, values(parentAge) as parentAge by id1, id2]
| stats values(name) as name, values(age) as age, values(parentName) as parentName, values(parentAge) as parentAge by id1, id2
| eval mismatch=case(isnull(name) AND isnull(age), "data doesn't exist in serviceA", isnull(parentName) AND isnull(parentAge), "data doesn't exist in serviceB", true, "No mismatch")
| table name, age, parentAge, parentName, mismatch, id1, id2

So in my case, with large data, before the data gets loaded from query 2 it reports "data doesn't exist in serviceB" even though there is no mismatch. Please suggest how to tackle this situation; I tried using join, but it's the same.