All Posts

I write it way too often on this forum - make your life easier, fix your data! At this point, even assuming that your copy-pasted sample got truncated and your real data is properly closed, you have an XML structure, as a string field inside JSON, prepended by a more or less structured plain-text header. Do you have any other plain-text data there? I suppose not. So you could just parse the timestamp and then cut the header. This can be done with a simple SEDCMD. The JSON part will be more difficult because it requires de-escaping some characters. And if you have more data in that JSON, "extracting" the XML part is not really a feasible option. But it might be worth giving it a try.
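Just as an illustration - a minimal props.conf sketch, assuming a hypothetical sourcetype my:json:xml and assuming the plain-text header ends right before the first opening brace of the JSON (adjust the timestamp settings and the regex to your real data):

    [my:json:xml]
    # hypothetical sourcetype name; set TIME_FORMAT/TIME_PREFIX to match the real header timestamp
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    # cut everything before the first opening brace of the JSON payload
    SEDCMD-strip_header = s/^[^{]+//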
Yeah, you have the same issue as me. Our deployment server started lagging for any function that needs to call the API for UF phone-home information. I called Support and they confirmed it as a "bug" that will be fixed in 9.4. I updated to 9.3.1 recently - no more "wrong apps", but it is still very laggy. I need to run the command reload deploy-server each time I want to deploy a TA to our agents.
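For anyone hitting the same thing, this is the workaround command I mean - run it on the deployment server itself (the path assumes a default installation):

    # re-read serverclass.conf and push updated apps to the deployment clients
    $SPLUNK_HOME/bin/splunk reload deploy-server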
@ITWhisperer I tried the below query: | sort 0 'Business_Date' 'StartTime' It is sorting only on StartTime, not on Business_Date. Could you please suggest?
Both the bannerID and location are inside the <n1:request> tag, which is inside the body of the REQUEST.
How do you locate these within your events?
Try this | sort 0 'Business_Date' 'StartTime'
As you may know, the Splunk OTel Collector can collect logs from Kubernetes and send them to Splunk Cloud/Enterprise using the Splunk OTel Collector chart distribution. However, you can also use the Splunk OTel Collector to collect logs from Windows or Linux hosts and send those logs directly to Splunk Enterprise/Cloud. This information isn't easy to find in the documentation, because it appears that the standalone (non Helm chart) distribution of the OTel Collector can only be used for Splunk Observability. In the instructions below, I will show you how to install the Collector even if you don't have a Splunk Observability (O11y) subscription.

In terms of compatibility, the Splunk OTel Collector is supported on the following operating systems:

- Amazon Linux: 2, 2023 (log collection with Fluentd is not currently supported for Amazon Linux 2023)
- CentOS, Red Hat, or Oracle: 7, 8, 9
- Debian: 9, 10, 11
- SUSE: 12, 15 for version 0.34.0 or higher (log collection with Fluentd is not currently supported)
- Ubuntu: 16.04, 18.04, 20.04, 22.04, and 24.04
- Rocky Linux: 8, 9
- Windows 10 Pro and Home, Windows Server 2016, 2019, 2022

Once you have confirmed that your operating system is compatible, use the following steps to install the Splunk OTel Collector.

First, use sudo to export the following variable. The installer references this variable to verify that you aren't installing the Collector for Observability, where an Access Token needs to be specified:

    sudo export VERIFY_ACCESS_TOKEN=false

Next, run the installation script (in this example we use curl, but other installation methods can be found here):

    curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --hec-token <token> --hec-url <hec_url> --insecure true

You may notice we modify the installation script from the original instructions: we specify the HEC token and HEC URL of the Splunk instance you want to send your logs to. Please note that both the HEC token and HEC URL are required for the installation to work correctly.

The installer should then complete and automatically start sending logs to your Splunk instance (assuming your network allows the traffic out). If you want to know which log ingestion methods are configured out of the box, see the default pipeline for the OTel Collector as specified here.

What if you want your Splunk OTel Collector to send logs to Enterprise/Cloud and also send metrics or traces to Splunk Observability? In that case, you can modify the installation command above to include your O11y realm and Access Token in addition to your HEC URL and HEC token, like this:

    curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --realm <o11y_realm> --hec-token <token> --hec-url <hec_url> --insecure true -- <ACCESS_TOKEN>

Please note that the Access Token always follows the bare -- separator and, as a best practice, should always be placed at the end of the installer command.
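For reference, here is a rough sketch of what the log pipeline in the Collector's agent configuration can look like when sending host logs straight to HEC. The file path, index, and token placeholder below are assumptions for illustration; the filelog receiver and splunk_hec exporter are standard Collector components, but compare against the agent_config.yaml the installer lays down on your host:

    receivers:
      filelog:
        include:
          - /var/log/myapp/*.log    # hypothetical path - point at your own log files

    exporters:
      splunk_hec:
        token: "${SPLUNK_HEC_TOKEN}"                                    # your HEC token
        endpoint: "https://splunk.example.com:8088/services/collector"  # your HEC URL
        index: "main"                                                   # assumed target index
        tls:
          insecure_skip_verify: true    # equivalent of --insecure true in the installer

    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [splunk_hec]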
I have 2 queries where each query retrieves fields from a different source using regex. I combine them using append, group the data using stats by a common id, and then evaluate the result. What is happening is that, with a large data set, it evaluates before it has loaded the data from query 2 and gives the wrong result. The sample query looks like this:

    index=a component=serviceA "incoming data"
    | eventstats values(name) as name, values(age) as age by id1, id2
    | append [search index=a component=serviceB "data from"
        | eventstats values(parentName) as parentName, values(parentAge) as parentAge by id1, id2]
    | stats values(name) as name, values(age) as age, values(parentName) as parentName, values(parentAge) as parentAge by id1, id2
    | eval mismatch=case(isnull(name) AND isnull(age), "data doesn't exist in serviceA", isnull(parentName) AND isnull(parentAge), "data doesn't exist in serviceB", true(), "No mismatch")
    | table name, age, parentAge, parentName, mismatch, id1, id2

So in my case, with large data, before the data gets loaded from query 2 it reports "data doesn't exist in serviceB" even though there is no mismatch. Please suggest how we can tackle this situation. I tried using join, but it's the same.
Hi @myappdy.mympco, I did some searching for similar issues and found this. The same errors were seen in the DB Agent and controller logs due to a couple of entries related to the DB agent being missing in the entity_relationship table.
I am attempting to use a lookup to feed some UNC file paths into a dashboard search, but I am getting tripped up by all the escaping of the backslashes and double quotes in my string. I want to call a field from a lookup with something like this as the actual value:

    file_path="\\\\*\\branch\\system\\type1\\*" OR file_path="\\\\*\\branch\\system\\type2\\*"

I want to populate a field in my lookup table with actual key/value pairs and output the entire string based on a menu selection. Unfortunately, if I try this, Splunk escapes all the double quotes and all the backslashes, and it ends up looking like this in the litsearch, which is basically useless:

    file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type1\\\\*\" OR file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type2\\\\*\"

How can I either properly escape the value within the lookup table so this doesn't happen, or is there any way to get Splunk to output the lookup value as a literal string and not try to interpret it?
Is it a single search head or search head cluster? - No cluster, all single search heads

Then really it should just be a user migration to the new SH running side by side, from a backup as described. With them being singletons, there's already a reduced amount of redundancy or expectation of no disruption, so it should just be an organized migration and shutdown.

Have the indexers already been migrated to Azure? - No indexers have been migrated yet

So will the migrated Search Heads be searching back to on-prem when cutover is done? If so, as mentioned, you need to have the networking right - another reason to build the SH in Azure alongside the existing one and confirm that search and comms work.

Will the SH(C) be searching Azure or On-Prem indexers as well? - SH will be searching Azure indexers

So when you build the Azure SH, they will be searching net new indexers in Azure? Do they need to search on-prem too? Just gotta nail the config and networking.

What "components" do you rely on most on this SH(C)? Premium apps like ES or ITSI? Or just Splunk Enterprise apps? - SH will rely on both premium apps and Splunk Enterprise apps

Well, ES and ITSI are their own beasts; see the documentation for those. Enterprise apps will depend on what needs to persist and be migrated.

It is definitely ideal to involve experienced services folks or experienced Splunk Admins. Either way, it is basically: build a new SH, copy configs over, validate, migrate users over. All the traps you may encounter along the way should mostly be resolved while standing up the new SH and getting configs running alongside your existing environment. This is nuanced, and is why you won't really find a 1:1 migration guide - with Splunk, "it depends". The amount of disruption will be mitigated by having a good understanding of the major workloads running in the environments (the monitoring console can help with by-app breakdowns), what needs to be carried over to the new environment, and what can or needs to be cleaned up to reduce the work needed in the migration.
Not sure what I am doing wrong. I have a datamodel with a dataset that I can pivot on a field when using the datamodel explorer. When I try to use | tstats it does not work.

I get results as expected with:

    | tstats count as order_count from datamodel=spc_orders

However, if I try to pivot on a field:

    | tstats count as order_count from datamodel=spc_orders where state="CA"

0 results. What's going on here?
With the above request and response, can you tell me how we can retrieve the bannerID and location using a Splunk query?
@PickleRick @ITWhisperer When I use this: sort "Business_Date" "StartTime" It only sorts on Business_Date and not on StartTime. Could you please suggest?
This event doesn't appear to have a REQUEST. Splunk SPL works on a pipeline of events, effectively processing each event one at a time. Usually, with request and response log events, you need to find a way to correlate the response with the request.
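As a purely illustrative sketch (the index, sourcetype, and correlation field below are assumptions, not taken from your data), once you identify a field shared by the request and response events you could pull the pair together with stats:

    index=my_index sourcetype=my:api:logs (REQUEST=* OR RESPONSE=*)
    | stats values(REQUEST) as REQUEST values(RESPONSE) as RESPONSE by correlation_id

Once the request and response share a row, the XML fields can then be extracted from the combined result with rex or spath.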
I do have a RESPONSE field as well in the API RESPONSE="{"body":"<?xml version=\"1.0\" encoding=\"UTF-8\"?><fes:Response xmlns:fes=\"http://www.abc/product/inventoryreservation_create/v1\"><fes:inventoryReservationCreateResponse><fes:reservationId>fd19244445edb18</fes:reservationId><fes:requestStatus>Success</fes:requestStatus><fes:requestState>Order Reserved</fes:requestState></fes:inventoryReservationCreateResponse></fes:Response>","headers":{"content-type":"text/xml;charset=utf-8","accept":"application/xml,application/fastinfoset","server":"Jetty(9.4.27.v20200227)","uritemplate":"/service/v1/inventory/reservation","operationname":"CREATE_RESERVATION","method":"POST","url":"http://192.123/service/v1/inventory/reservation","x_shaw_request_tracing":"location_id","singularityheader":"appId=60*ctrlguid=1730261321*acctguid=602406e5-b988-4764-be9d-e041209f6ed8*ts=1731413516129*btid=40467*snapenable=true*donotresolve=true*guid=a61228ec-2eed-4ec7-b2eb-1e0ebb10ad65*exitguid=1|3|17*unresolvedexitid=13486*cidfrom=649,{[UNRESOLVED][17715]},648,{[UNRESOLVED][18213]},689*etypeorder=HTTP,HTTP,HTTP,HTTP,HTTP*esubtype=HTTP,HTTP,HTTP,HTTP,HTTP*cidto={[UNRESOLVED][17715]},648,{[UNRESOLVED][18213]},689,{[UNRESOLVED][13486]}","asyncreplyfordestinaton":"Svc-REST.DIRECTFULFILLMENT.CreateInventoryReservation:PROCESS","x_shaw_service_orchestration_id":"Id-ebcc8a602f57c17646182490","environment":"prod","final_match_group":"/","x_shaw_onbehalfof_id":"CREATE","directfulfillment.reservationid":"fd19244445edb18","lg_header":"Interaction=IwDMcZ3MDAZ5okkgkwEJDMgK;Locus=uWm7UBiog5Kb3BmVyz1/dA==;Flow=4geEzEzItMPK3CMgkwEODMgK;Chain=IQDMcZ3MDAZ5okkgkwEJDMgK;UpstreamOpID=eMsPL0LlEOcPDTl5JMfY6Q==;CallerAddress=tossbprd1app03.fcc.bss.globalivewireless.local;","content-length":"380"}}",
1. Just saying "not working" doesn't say anything. We have no idea what the results should look like, what they actually look like, what data you have, and so on.

2. Apart from your main question, I see another issue with your search - you sort first, then add some data with appendcols. Are you absolutely sure that you get the right data in the right places?

3. And finally, if you post SPL code, please do so as either a code block (the </> symbol at the top of the text-edit widget) or as preformatted style, so that it doesn't get butchered into an unreadable blob of text.
Your sample event does not include "RESPONSE" so the rex will not be able to extract the REQUEST field
| rex "Total records processed - (?<processed>\d+)"
| bin span=3h _time | stats values(uptime) AS Uptime BY _time, component_hostname | where Uptime=0