All Posts



And your actual problem with this is...? So far you've only told us what you're trying to do. OK, that's a sound approach: since you can't use an app apparently containing some sort of scripted/modular input, you're spawning an external script that prepares the data, which you later ingest from an intermediate file with a monitor input. Great. Now how are we supposed to know what is in your data? And what is the desired result of your search? Maybe for some very, very common types of data (like standard Windows event logs) one could expect fairly common knowledge about them, but even then it's better to explicitly state your problem. So - what _is_ your problem?
And why are you trying that site? https://www.splunk.com/en_us/products/splunk-security-orchestration-and-automation.html Log in with your splunk.com account (the same one you use for logging into Answers), pick "Free Trial", and download the installation package. For more info, see https://docs.splunk.com/Documentation/SOARonprem/latest/Install/GetSplunkPhantom
You need to provide us with more info. So far we know that apparently some values did not change even though you thought they would. But which values, what exactly did you do, and what result did you expect - that's unknown to us. And please copy-paste your event data and expected/actual results instead of posting a photo - it's much easier for us to work with here.
But then it is also fetching all the responses from the results instead of only the faultstring - it is not filtering the responses based on that tag.
OK. Again - do you see events from your UF in _internal index? (try a longer timespan extending some time into the future).
Hello, for more than two weeks I have been trying to access the site to create my account and download Splunk Phantom, but I can't reach the site (my.Phantom.us). What should I do, please?
But the values do not change.
Hello @Gustavo.Marconi,

Thank you for contacting us. It looks like the access key is not set up properly. Can you please confirm that you created the access key correctly? You can run the command below to check whether the access key is set up:

kubectl get secret cluster-agent-secret -n appdynamics -o jsonpath='{.data.controller-key}' | base64 --decode

Also, can you please confirm the agent version and the operator version that you're deploying? If you are using a version older than 23.11, I would recommend downloading the latest version.

Could you please follow the steps below:

1. Download the latest version if the current version is 23.11 or older.
2. Delete the old Controller Access Key Secret.
3. Create the Controller Access Key Secret:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>

Give it a try and let me know how it goes. Happy Friday and weekend.

Best Regards,
Rajesh Ganapavarapu
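As an aside, the `base64 --decode` step in the verification command above is just reversing the base64 encoding Kubernetes applies to Secret data. A minimal Python sketch of what that pipeline does (the key value here is made up):

```python
import base64

def decode_secret_value(encoded: str) -> str:
    """Kubernetes stores Secret data base64-encoded; kubectl's
    `-o jsonpath=... | base64 --decode` pipeline reverses that."""
    return base64.b64decode(encoded).decode("utf-8")

# Hypothetical stored value for the controller-key field:
stored = base64.b64encode(b"my-access-key").decode("ascii")
print(decode_secret_value(stored))  # -> my-access-key
```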
Yes the regex is correct and validated.
Thank you @isoutamo, that worked after updating the key. However, I'm still puzzled after adding another stanza. My current props.conf:

[source::/var/log/containers/*_integration-business*.log]
TRANSFORMS-int-null = setnull

[source::/var/log/containers/*_0091*.log]
TRANSFORMS-set = allow_all

[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

and transforms.conf:

[allow_all]
REGEX = .
DEST_KEY = QUEUE
FORMAT = indexQueue

[setnull]
REGEX = NormalizedApiId failed to resolve
DEST_KEY = queue
FORMAT = nullQueue

[debug_to_null]
REGEX = DEBUG|Debug|debug
DEST_KEY = queue
FORMAT = nullQueue

I would like to allow any logs, including those with the DEBUG keyword, from the source pattern /var/log/containers/*_0091*.log; however, I see that logs with the DEBUG keyword from that particular source pattern are being dropped. Is this even possible to do?
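One way to reason about the dropped DEBUG events: transforms that set the queue key overwrite each other, so whichever matching transform runs last decides the event's destination. The Python sketch below is only a mental model of that overwrite behavior, not Splunk's actual pipeline, and the transform ordering shown is an assumption:

```python
import re

# Hypothetical ordering: if the null-queue transform runs after the
# keep-all transform, it overwrites the queue assignment and wins.
transforms = [
    ("allow_all",     r".",                 "indexQueue"),
    ("debug_to_null", r"DEBUG|Debug|debug", "nullQueue"),
]

def route(event: str) -> str:
    queue = "indexQueue"  # default destination
    for name, regex, dest in transforms:
        if re.search(regex, event):
            queue = dest  # last matching transform overwrites the key
    return queue

print(route("2024-06-05 DEBUG starting up"))  # -> nullQueue
print(route("2024-06-05 INFO all good"))      # -> indexQueue
```

Under this model, making the "keep" transform run after the null-queue transform is what lets the _0091_ events survive.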
Weird typo is fixed.
This app https://splunkbase.splunk.com/app/5433
To post an answerable question in this forum, it is important to illustrate your input, e.g., raw events (anonymize as needed), illustrate/mock the desired output, then explain the logic between the illustrated input and desired output, including any relevant available fields, data characteristics, etc. From your description, all the volunteers here can tell is that you have some file ingested via Universal Forwarder and your data contains some sort of session key.
What do you use streamstats for? You can always bucket time before streamstats, something like

| bin span=60m _time
| streamstats current=f count

This will not work if your use case requires you to maintain time resolution, although that should be rather rare.
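For reference, `bin span=60m _time` just floors each timestamp to the start of its hour-long bucket. A Python sketch of the same arithmetic (the function name is mine):

```python
from datetime import datetime, timezone

def bin_60m(epoch: float) -> float:
    """Floor an epoch timestamp to its 60-minute bucket,
    like `| bin span=60m _time` does."""
    return epoch - (epoch % 3600)

t = datetime(2024, 6, 5, 12, 47, tzinfo=timezone.utc).timestamp()
print(datetime.fromtimestamp(bin_60m(t), tz=timezone.utc))  # -> 2024-06-05 12:00:00+00:00
```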
It would be much easier to describe the problem with illustrations/mockups of your input and desired output, and a description of the logic between illustrated input and desired output, without SPL.

"I've got two servers providing me temperature data. Host A has Sensor1 and Sensor2. Host B has Sensor1 and Sensor2."

Let me try to decipher the requirements. You have a dataset like

_time                 host   Sensor1  Sensor2
2024-06-05 12:00:00   HostA  45.5     48.1
2024-06-05 12:00:00   HostB  50.0     45.1
2024-06-05 12:20:00   HostA  46.8     47.4
2024-06-05 12:20:00   HostB  48.8     46.3
2024-06-05 12:40:00   HostA  45.8     48.6
2024-06-05 12:40:00   HostB  45.3     45.4

Your pseudo code suggests that Host A has Sensor1 in Room 12 and Sensor2 in Room 13, and Host B has Sensor1 in Room 14 and Sensor2 in Room 15. You want something like the following:

_time                 avg(Room 12)  avg(Room 13)  avg(Room 14)  avg(Room 15)
2024-06-05 12:00:00   45.5          48.1          50.0          45.1
2024-06-05 12:20:00   46.8          47.4          48.8          46.3
2024-06-05 12:40:00   45.8          48.6          45.3          45.4

But your data doesn't directly give you the mapping. Is this correct?
I lack the imagination of how this physical layout works, but in SPL, you can do something like

index=tempmon sourcetype=tempdata
| eval "Room 12" = if(host == "HostA", Sensor1, null())
| eval "Room 13" = if(host == "HostA", Sensor2, null())
| eval "Room 14" = if(host == "HostB", Sensor1, null())
| eval "Room 15" = if(host == "HostB", Sensor2, null())
| timechart span=20min avg("Room 12"), avg("Room 13"), avg("Room 14"), avg("Room 15")

You can play with the following emulation and compare with real data:

| makeresults format=csv data="_time,host,Sensor1,Sensor2
2024-06-05 12:00:00, HostA, 45.5,48.1
2024-06-05 12:00:00, HostB, 50.0,45.1
2024-06-05 12:20:00, HostA,46.8,47.4
2024-06-05 12:20:00, HostB,48.8,46.3
2024-06-05 12:40:00, HostA,45.8,48.6
2024-06-05 12:40:00, HostB,45.3,45.4"
| eval _time = strptime(_time, "%F %T")
``` the above emulates index=tempmon sourcetype=tempdata ```

On the other hand, maybe you do not want to hard code the mapping. In that case, you can set up a lookup table like

host   Sensor   Room
hostA  Sensor1  Room 12
hostA  Sensor2  Room 13
hostB  Sensor1  Room 14
hostB  Sensor2  Room 15

Before I move on with SPL, it is important to understand what you really have in raw data. My mockup assumes that you have two separate fields, Sensor1 and Sensor2. While this is possible with some data sources, a more normalized data feed might give the following instead:

_time                 host   Sensor   Temperature
2024-06-05 12:00:00   HostA  Sensor1  45.5
2024-06-05 12:00:00   HostA  Sensor2  48.1
2024-06-05 12:00:00   HostB  Sensor1  50.0
2024-06-05 12:00:00   HostB  Sensor2  45.1
2024-06-05 12:20:00   HostA  Sensor1  46.8
2024-06-05 12:20:00   HostA  Sensor2  47.4
2024-06-05 12:20:00   HostB  Sensor1  48.8
2024-06-05 12:20:00   HostB  Sensor2  46.3
2024-06-05 12:40:00   HostA  Sensor1  45.8
2024-06-05 12:40:00   HostA  Sensor2  48.6
2024-06-05 12:40:00   HostB  Sensor1  45.3
2024-06-05 12:40:00   HostB  Sensor2  45.4

Can you clarify which raw input format is yours?
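The lookup-table route sketched above can also be checked outside Splunk. This Python snippet (the `room_of` mapping mirrors the hypothetical lookup table, and the sample readings are from the mockup) shows the host/sensor-to-room pivot on the normalized feed:

```python
# Hypothetical lookup table from the post, as a dict keyed by (host, sensor):
room_of = {
    ("HostA", "Sensor1"): "Room 12",
    ("HostA", "Sensor2"): "Room 13",
    ("HostB", "Sensor1"): "Room 14",
    ("HostB", "Sensor2"): "Room 15",
}

# Normalized feed: one reading per row (subset of the mockup data).
readings = [
    ("2024-06-05 12:00:00", "HostA", "Sensor1", 45.5),
    ("2024-06-05 12:00:00", "HostA", "Sensor2", 48.1),
    ("2024-06-05 12:00:00", "HostB", "Sensor1", 50.0),
    ("2024-06-05 12:00:00", "HostB", "Sensor2", 45.1),
]

# Pivot to one row per timestamp with a column per room,
# roughly what the lookup + timechart combination produces.
pivoted = {}
for ts, host, sensor, temp in readings:
    pivoted.setdefault(ts, {})[room_of[(host, sensor)]] = temp

print(pivoted["2024-06-05 12:00:00"]["Room 14"])  # -> 50.0
```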
Emulation of the second format is

| makeresults format=csv data="_time,host,Sensor,Temperature
2024-06-05 12:00:00, HostA,Sensor1, 45.5
2024-06-05 12:00:00, HostA,Sensor2,48.1
2024-06-05 12:00:00, HostB, Sensor1, 50.0
2024-06-05 12:00:00, HostB,Sensor2,45.1
2024-06-05 12:20:00, HostA,Sensor1,46.8
2024-06-05 12:20:00, HostA,Sensor2,47.4
2024-06-05 12:20:00, HostB,Sensor1,48.8
2024-06-05 12:20:00, HostB,Sensor2,46.3
2024-06-05 12:40:00, HostA,Sensor1,45.8
2024-06-05 12:40:00, HostA,Sensor2,48.6
2024-06-05 12:40:00, HostB,Sensor1,45.3
2024-06-05 12:40:00, HostB,Sensor2,45.4"
| eval _time = strptime(_time, "%F %T")
| table _time,host,Sensor,Temperature
``` the above emulates index=tempmon sourcetype=tempdata ```
When I run the above query, I am getting "invalid arguement near innisbrook".
I want to change the color of the bars to green for approved and red for declined. I have tried using seriesColorsByField, but it does not change the color    
There are no results because the rex command is not extracting any fields. That's because it is looking for text ("REQUEST" and "RETRYNO") that doesn't exist (at least not in the sample event). This run-anywhere query works for me.

| makeresults
| eval _raw="</soap:Envelope>\", RESPONSE=\"<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soapenv:Header/> <soapenv:Body> <soapenv:Fault xmlns:trefault=\"http://tresoap.intecbilling.com/fault/2.0\"> <faultcode>trefault:ApplicationException</faultcode> <faultstring><CM-41398> ERROR: Value &quot;Apple Watch 4G 5GB&quot; supplied for Fact &quot;OrderedComp.RatePlan_R&quot; is not allowed by the fact&apos;s filter search or expression</faultstring> <detail> <trefault:Detail> <trefault:Message><CM-41398> ERROR: Value &quot;Apple Watch 4G 5GB&quot; supplied for Fact &quot;OrderedComp.RatePlan_R&quot; is not allowed by the fact&apos;s filter search or expression</trefault:Message> <trefault:ErrorId>41398</trefault:ErrorId> </trefault:Detail> </detail> </soapenv:Fault> </soapenv:Body>"
| rex "RESPONSE=\\\"(?<RESPONSE>.+)"
| spath input=RESPONSE
| spath input=RESPONSE output=trefault:Message path=soapenv:Envelope.soapenv:Body.soapenv:Fault.faultstring.detail.trefault:Detail.trefault:Message
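The same faultstring extraction can be checked outside Splunk. This Python sketch parses a simplified, well-formed stand-in for the SOAP fault (the real event in the post is a truncated fragment, so this sample is an assumption):

```python
import xml.etree.ElementTree as ET

# Simplified, well-formed stand-in for the SOAP fault in the post.
xml = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <soapenv:Fault>
      <faultcode>trefault:ApplicationException</faultcode>
      <faultstring>ERROR: Value not allowed by the fact's filter</faultstring>
    </soapenv:Fault>
  </soapenv:Body>
</soapenv:Envelope>"""

# Map the soapenv prefix; <faultstring> itself carries no namespace.
ns = {"soapenv": "http://schemas.xmlsoap.org/soap/envelope/"}
fault = ET.fromstring(xml).find("soapenv:Body/soapenv:Fault/faultstring", ns)
print(fault.text)  # -> ERROR: Value not allowed by the fact's filter
```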
Hi @PadmaPriya.Pyda, could you please send me a private message with your email ID and controller URL so I can look into it further?
The stats family of commands will not fill in the null values created by the absence of data. The timechart command, however, will do so. Try this:

| tstats prestats=t avg(TotalResponseTime) as ResponseTime avg(FordResponseTime) as Fordresponsetime where index=app-index NOT TERM(timeout) by _time
| timechart avg(TotalResponseTime) as ResponseTime avg(FordResponseTime) as Fordresponsetime

The tstats command does the heavy lifting and prepares the results for later processing by another stats command. The timechart command essentially formats what tstats created and fills in zeroes for missing values.