All Posts

One more thing, because it's often overlooked when talking about DMs: DMs as such don't accelerate anything. DMs are just an intermediate layer of logic that lets Splunk search different types of data in the same way, so when you search from a DM using DM field constraints, Splunk "underneath" transforms your search into a raw data search and lets you search possibly multiple separate indexes and sourcetypes without even knowing the real structure of the underlying data.

DM _acceleration_, however, is a completely different beast. It's the machinery running under Splunk's hood that prepares a database of indexed datamodel contents so that you can search using those pre-built summaries instead of digging through the raw data itself. So while DMA requires properly ingested and configured data normalized for DMs, it's this "one step beyond" that gives you the performance benefits. If you just have DMs which are not accelerated you might be able to search your data more easily (and create pivots) but it will not give you any performance gains. It's the DM acceleration that makes Splunk go zzzzooooooom.
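To make the difference concrete, here is a minimal sketch (assuming the CIM Network_Traffic model is installed and accelerated; the model and field names are only for illustration). Searching through the DM logic still scans raw events:

| datamodel Network_Traffic All_Traffic search

whereas a tstats search against the acceleration summaries is where the speedup actually comes from:

| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest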
What is your environment architecture? All-in-one? Separate simple indexer tier? Clustered indexer tier? If your indexers are a separate tier, have you deployed the "for-indexer" TA?
If I understand your problem correctly - you have two fields (sensor1 and sensor2) which contain your data points, but you also have a "classifying" field host, effectively giving you four separate data series, right? And you want to get four separate fields from that to be able to do four distinct aggregations for your timechart.

Well, there might be several different possible approaches to this. One is to just use a set of conditional evals to create synthetic fields from your data as @yuanliu showed. The downside to this method is that it can be tedious to write all those evals and keep track of them, especially if your data is more complicated than just two sensors and two hosts.

Another one is to use the {} notation to dynamically create field names. A run-anywhere example (not really timecharting much due to just a few input values, but showing the idea):

| makeresults format=csv data="_time,sensor1,sensor2,host
1,1,2,host1
1,2,3,host2
2,4,5,host1
2,5,6,host2"
| eval {host}sensor1=sensor1
| eval {host}sensor2=sensor2
| fields - sensor1 sensor2
| timechart avg(host*sensor*) as **

This is easier to maintain because it happens automagically, but the downside is that you have much less control over the resulting field names (of course you can rename them manually, but then we again step into the field of manual fiddling with your data).
I suppose it makes sense - with a moving time window Splunk has to keep track of the window and the events fitting that window. If you don't include the current event, Splunk doesn't know how many previous events it has to keep and include in your calculations. If you have a fixed window expressed in number of events - that's easy - Splunk always has to remember the last n events to calculate your aggregation. But in the case of a time window it would make Splunk have to remember many more events than are used to calculate the stats, in case they don't "fall out" of the window on the next event. So it's probably simply easiest to forbid use_current=f.

I suppose you could do some ugly hacks like streamstatsing lists of values and manually calculating your aggregations, but that would probably be horribly inefficient.

I must say that I don't see a use case. What would a "5-minute window without current event" even mean - a window of 5 minutes looking back from the previous remembered event? Or a window of 5 minutes looking back from the current event but without using the current event's value? In the latter case you could simply do a "half automatic" calculation - for example with an average, you could just streamstats the sum and count, then subtract the current event's value from the sum and use count (or count-1) to calculate the average. For more sophisticated aggregations of course you'd need to be a bit more creative. But the former case - it doesn't differ from use_current=t if you're just aggregating from the previous event backwards. Maybe there's something more to this case you're not telling us and it can be done in yet another way.
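A minimal sketch of that "half automatic" average (the field name value is made up for illustration, and the 5-minute window is just an example):

| streamstats time_window=5m sum(value) as win_sum count as win_count
| eval avg_excl_current = if(win_count > 1, (win_sum - value) / (win_count - 1), null())

This keeps the default current=t so Splunk can anchor the window on the current event, then removes the current event's contribution from the aggregate by hand.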
I want the error message from "faultstring" to be displayed in my results:

</soap:Envelope>", RESPONSE="<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header></soap:Header>
<soap:Body>
<soap:Fault>
<faultcode>soap:Server</faultcode>
<faultstring>APPL0014: IO Exception: Read timed out java.net.SocketTimeoutException: Read timed out</faultstring>
</soap:Fault>
</soap:Body>

My splunk query is below:

index="abc" source="xyz" OPERATION = "getOrderService"
| rex "RESPONSE=\\\"(?<RESPONSE>.+)"
| spath input=RESPONSE
| spath input=RESPONSE output=faultstring path=soapenv:Envelope.soap:Header.soapenv:Body.soapenv:Fault.faultcode.faultstring

Instead of fetching only one response with faultstring, it's fetching all the results from the responses.
And your actual problem with this is...? So far you've told us what you're trying to do. OK, that's a sound approach - if you can't use an app apparently containing some sort of scripted/modular input, you're spawning an external script preparing the data that you later ingest with a monitor input from an intermediate file. Great. Now how are we supposed to know what is in your data? And what is the desired result of your search? Maybe for some very, very common types of data (like standard windows event logs) one could expect fairly common knowledge about them, but even then it's better to explicitly state your problem. So - what _is_ your problem?
And why are you trying that site? https://www.splunk.com/en_us/products/splunk-security-orchestration-and-automation.html Log in with your splunk.com account (the same one you use for logging into Answers), pick "Free Trial" and download the installation package. For more info see https://docs.splunk.com/Documentation/SOARonprem/latest/Install/GetSplunkPhantom
You need to provide us with more info. So far we know that apparently some values did not change even though you thought they would. But what values, what exactly did you do, and what result did you expect - that's unknown to us. And please copy-paste your event data and expected/actual result instead of posting a photo - it's much more convenient for us to handle here.
But then it's also fetching all the responses from the results instead of only the faultstring; it's not filtering out the responses based on that tag.
OK. Again - do you see events from your UF in _internal index? (try a longer timespan extending some time into the future).
Hello, for more than two weeks I have been trying to access the site in order to create my account and download Splunk Phantom, but I can't access the site (my.Phantom.us). What should I do, please?
But the values did not change.
Hello @Gustavo.Marconi,

Thank you for contacting us. It looks like the access key is not set up properly. Can you please confirm that you have created the access key correctly? You can run the command below to check whether the access key is set up:

kubectl get secret cluster-agent-secret -n appdynamics -o jsonpath='{.data.controller-key}' | base64 --decode

Also, can you please confirm the agent version and the operator version that you're deploying? If you are using a version less than 23.11, I would recommend that you download the latest version.

Could you please follow the steps below:
1. Download the latest version if the current version is 23.11 or less.
2. Delete the old Controller Access Key Secret (a possible command is sketched below).
3. Create the Controller Access Key Secret:
kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>

Give it a try and let me know how it goes. Happy Friday and weekend.

Best Regards,
Rajesh Ganapavarapu
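For step 2, a minimal sketch of the delete command, assuming the same secret name and namespace as in the create command above:

kubectl -n appdynamics delete secret cluster-agent-secret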
Yes the regex is correct and validated.
Thank you @isoutamo, that worked after updating the key. However, I'm still puzzled after adding another stanza. My current props.conf:

[source::/var/log/containers/*_integration-business*.log]
TRANSFORMS-int-null = setnull

[source::/var/log/containers/*_0091*.log]
TRANSFORMS-set = allow_all

[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

and transforms.conf:

[allow_all]
REGEX = .
DEST_KEY = QUEUE
FORMAT = indexQueue

[setnull]
REGEX = NormalizedApiId failed to resolve
DEST_KEY = queue
FORMAT = nullQueue

[debug_to_null]
REGEX = DEBUG|Debug|debug
DEST_KEY = queue
FORMAT = nullQueue

So I would like to allow any logs including the DEBUG keyword from the source pattern /var/log/containers/*_0091*.log, however I see the logs with the DEBUG keyword from that particular source pattern are being dropped. Is this even possible to do?
Weird typo is fixed.
This app https://splunkbase.splunk.com/app/5433
To post an answerable question in this forum, it is important to illustrate your input, e.g., raw events (anonymize as needed), illustrate/mock the desired output, then explain the logic between illustrated input and desired output, including any relevant available fields, data characteristics, etc. From your description, all the volunteers here can gather is that you have some file ingested via a Universal Forwarder and that your data contains some sort of session key.
What do you use streamstats for? You can always bucket time before streamstats, something like

| bin span=60m _time
| streamstats current=f count

This will not work if your use case requires you to maintain time resolution, although that should be rather rare.
It would be much easier to describe the problem with illustrations/mockups of your input and desired output, and a description of the logic between illustrated input and desired output without SPL.

"I've got two servers providing me temperature data. Host A has Sensor1 and Sensor2. Host B has Sensor1 and Sensor2."

Let me try to decipher the requirements. You have a dataset like

_time                 host   Sensor1  Sensor2
2024-06-05 12:00:00   HostA  45.5     48.1
2024-06-05 12:00:00   HostB  50.0     45.1
2024-06-05 12:20:00   HostA  46.8     47.4
2024-06-05 12:20:00   HostB  48.8     46.3
2024-06-05 12:40:00   HostA  45.8     48.6
2024-06-05 12:40:00   HostB  45.3     45.4

Your pseudo code suggests that Host A has Sensor1 in Room 12 and Sensor2 in Room 13, and Host B has Sensor1 in Room 14 and Sensor2 in Room 15. You want something like the following:

_time                 avg(Room 12)  avg(Room 13)  avg(Room 14)  avg(Room 15)
2024-06-05 12:00:00   45.5          48.1          50.0          45.1
2024-06-05 12:20:00   46.8          47.4          48.8          46.3
2024-06-05 12:40:00   45.8          48.6          45.3          45.4

But your data doesn't directly give you the mapping. Is this correct? I lack the imagination of how this physical layout works, but in SPL, you can do something like

index=tempmon sourcetype=tempdata
| eval "Room 12" = if(host == "HostA", Sensor1, null())
| eval "Room 13" = if(host == "HostA", Sensor2, null())
| eval "Room 14" = if(host == "HostB", Sensor1, null())
| eval "Room 15" = if(host == "HostB", Sensor2, null())
| timechart span=20min avg("Room 12"), avg("Room 13"), avg("Room 14"), avg("Room 15")

You can play with the following emulation and compare with real data:

| makeresults format=csv data="_time,host,Sensor1,Sensor2
2024-06-05 12:00:00, HostA, 45.5,48.1
2024-06-05 12:00:00, HostB, 50.0,45.1
2024-06-05 12:20:00, HostA,46.8,47.4
2024-06-05 12:20:00, HostB,48.8,46.3
2024-06-05 12:40:00, HostA,45.8,48.6
2024-06-05 12:40:00, HostB,45.3,45.4"
| eval _time = strptime(_time, "%F %T")
``` the above emulates index=tempmon sourcetype=tempdata ```

On the other hand, maybe you do not want to hard code the mapping. In that case, you can set up a lookup table like

host   Sensor   Room
hostA  Sensor1  Room 12
hostA  Sensor2  Room 13
hostB  Sensor1  Room 14
hostB  Sensor2  Room 15

Before I move on with SPL, it is important to understand what you really have in raw data. My mockup assumes that you have two separate fields, Sensor1 and Sensor2. While this is possible with some data sources, a more normalized data feed might give the following instead.

_time                 host   Sensor   Temperature
2024-06-05 12:00:00   HostA  Sensor1  45.5
2024-06-05 12:00:00   HostA  Sensor2  48.1
2024-06-05 12:00:00   HostB  Sensor1  50.0
2024-06-05 12:00:00   HostB  Sensor2  45.1
2024-06-05 12:20:00   HostA  Sensor1  46.8
2024-06-05 12:20:00   HostA  Sensor2  47.4
2024-06-05 12:20:00   HostB  Sensor1  48.8
2024-06-05 12:20:00   HostB  Sensor2  46.3
2024-06-05 12:40:00   HostA  Sensor1  45.8
2024-06-05 12:40:00   HostA  Sensor2  48.6
2024-06-05 12:40:00   HostB  Sensor1  45.3
2024-06-05 12:40:00   HostB  Sensor2  45.4

Can you clarify which raw input format is yours?
Emulation of the second format is

| makeresults format=csv data="_time,host,Sensor,Temperature
2024-06-05 12:00:00, HostA,Sensor1,45.5
2024-06-05 12:00:00, HostA,Sensor2,48.1
2024-06-05 12:00:00, HostB,Sensor1,50.0
2024-06-05 12:00:00, HostB,Sensor2,45.1
2024-06-05 12:20:00, HostA,Sensor1,46.8
2024-06-05 12:20:00, HostA,Sensor2,47.4
2024-06-05 12:20:00, HostB,Sensor1,48.8
2024-06-05 12:20:00, HostB,Sensor2,46.3
2024-06-05 12:40:00, HostA,Sensor1,45.8
2024-06-05 12:40:00, HostA,Sensor2,48.6
2024-06-05 12:40:00, HostB,Sensor1,45.3
2024-06-05 12:40:00, HostB,Sensor2,45.4"
| eval _time = strptime(_time, "%F %T")
| table _time,host,Sensor,Temperature
``` the above emulates index=tempmon sourcetype=tempdata ```
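If the data arrives in that normalized format and you go with the lookup idea above, a minimal sketch might look like this (assuming the mapping has been uploaded as a lookup file named sensor_rooms.csv with host, Sensor, and Room columns; the file name is made up for illustration):

| lookup sensor_rooms.csv host Sensor OUTPUT Room
| timechart span=20min avg(Temperature) by Room

This avoids hard-coding the host/sensor-to-room mapping in the search itself.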