All Posts


OK. Again - do you see events from your UF in the _internal index? (Try a longer timespan, extending some time into the future.)
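As a sketch, a check along these lines should show whether the forwarder's internal logs are arriving (the host value is a placeholder for your UF's host name):

```
index=_internal host=<your_uf_hostname> sourcetype=splunkd
| stats count by source
```

If this returns nothing even over a wide time range, the forwarder is not reaching the indexer at all.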
Hello, for more than two weeks I have been trying to access the site (my.Phantom.us) to create my account and download Splunk Phantom, but I cannot reach the site. What should I do, please?
But the values do not change.
Hello @Gustavo.Marconi, Thank you for contacting us. It looks like the access key is not set up properly. Can you please confirm that you created the access key correctly? You can run the command below to check whether the access key is set up:

kubectl get secret cluster-agent-secret -n appdynamics -o jsonpath='{.data.controller-key}' | base64 --decode

Also, can you please confirm the agent version and the operator version that you are deploying? Could you please follow the steps below:

1. Download the latest version if your current version is 23.11 or older.
2. Delete the old Controller access key secret.
3. Create the Controller access key secret:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>

Give it a try and let me know how it goes. Happy Friday and weekend. Best Regards, Rajesh Ganapavarapu
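The delete-and-recreate steps mentioned above could be sketched roughly as follows (the namespace is assumed to be appdynamics, and <access-key> is a placeholder for the key copied from the Controller UI):

```
# Remove the old secret so it can be recreated cleanly
kubectl -n appdynamics delete secret cluster-agent-secret

# Recreate it with the controller access key
kubectl -n appdynamics create secret generic cluster-agent-secret \
  --from-literal=controller-key=<access-key>
```

After recreating the secret, the cluster agent pod typically needs to be restarted to pick up the new value.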
Yes the regex is correct and validated.
Thank you @isoutamo, that worked after updating the key. However, I am still puzzled after adding another stanza. My current props.conf:

[source::/var/log/containers/*_integration-business*.log]
TRANSFORMS-int-null = setnull

[source::/var/log/containers/*_0091*.log]
TRANSFORMS-set = allow_all

[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

and transforms.conf:

[allow_all]
REGEX = .
DEST_KEY = QUEUE
FORMAT = indexQueue

[setnull]
REGEX = NormalizedApiId failed to resolve
DEST_KEY = queue
FORMAT = nullQueue

[debug_to_null]
REGEX = DEBUG|Debug|debug
DEST_KEY = queue
FORMAT = nullQueue

So I would like to allow all logs, including those with the DEBUG keyword, from the source pattern /var/log/containers/*_0091*.log. However, I see that logs with the DEBUG keyword from that particular source pattern are being dropped. Is this even possible to do?
Weird typo is fixed.
This app https://splunkbase.splunk.com/app/5433
To post an answerable question in this forum, it is important to illustrate your input, e.g., raw events (anonymize as needed), illustrate/mock desired output, then explain the logic between illustra... See more...
To post an answerable question in this forum, it is important to illustrate your input, e.g., raw events (anonymize as needed), illustrate/mock desired output, then explain the logic between illustrated input and desired output including any relevant available fields, data characteristics, etc. From your description, all volunteers here get is that you have some file ingested via Universal Forwarder and your data contains some sort of session key. 
What do you use streamstats for? You can always bucket time before streamstats, something like

| bin span=60m _time
| streamstats current=f count

This will not work if your use case requires you to maintain time resolution, although that should be rather rare.
It would be much easier to describe the problem with illustrations/mockups of your input and desired output, and a description of the logic between illustrated input and desired output without SPL.

"I've got two servers providing me temperature data. Host A has Sensor1 and Sensor2. Host B has Sensor1 and Sensor2."

Let me try to decipher the requirements. You have a dataset like

_time                host   Sensor1  Sensor2
2024-06-05 12:00:00  HostA  45.5     48.1
2024-06-05 12:00:00  HostB  50.0     45.1
2024-06-05 12:20:00  HostA  46.8     47.4
2024-06-05 12:20:00  HostB  48.8     46.3
2024-06-05 12:40:00  HostA  45.8     48.6
2024-06-05 12:40:00  HostB  45.3     45.4

Your pseudo code suggests that Host A has Sensor1 in Room 12 and Sensor2 in Room 13, and Host B has Sensor1 in Room 14 and Sensor2 in Room 15. You want something like the following:

_time                avg(Room 12)  avg(Room 13)  avg(Room 14)  avg(Room 15)
2024-06-05 12:00:00  45.5          48.1          50.0          45.1
2024-06-05 12:20:00  46.8          47.4          48.8          46.3
2024-06-05 12:40:00  45.8          48.6          45.3          45.4

But your data doesn't directly give you the mapping. Is this correct?
I lack the imagination of how this physical layout works, but in SPL you can do something like

index=tempmon sourcetype=tempdata
| eval "Room 12" = if(host == "HostA", Sensor1, null())
| eval "Room 13" = if(host == "HostA", Sensor2, null())
| eval "Room 14" = if(host == "HostB", Sensor1, null())
| eval "Room 15" = if(host == "HostB", Sensor2, null())
| timechart span=20min avg("Room 12"), avg("Room 13"), avg("Room 14"), avg("Room 15")

You can play with the following emulation and compare with real data:

| makeresults format=csv data="_time,host,Sensor1,Sensor2
2024-06-05 12:00:00, HostA, 45.5,48.1
2024-06-05 12:00:00, HostB, 50.0,45.1
2024-06-05 12:20:00, HostA,46.8,47.4
2024-06-05 12:20:00, HostB,48.8,46.3
2024-06-05 12:40:00, HostA,45.8,48.6
2024-06-05 12:40:00, HostB,45.3,45.4"
| eval _time = strptime(_time, "%F %T")
``` the above emulates index=tempmon sourcetype=tempdata ```

On the other hand, maybe you do not want to hard code the mapping. In that case, you can set up a lookup table like

host   Sensor   Room
hostA  Sensor1  Room 12
hostA  Sensor2  Room 13
hostB  Sensor1  Room 14
hostB  Sensor2  Room 15

Before I move on with SPL, it is important to understand what you really have in raw data. My mockup assumes that you have two separate fields, Sensor1 and Sensor2. While this is possible with some data sources, a more normalized data feed might give the following instead.

_time                host   Sensor   Temperature
2024-06-05 12:00:00  HostA  Sensor1  45.5
2024-06-05 12:00:00  HostA  Sensor2  48.1
2024-06-05 12:00:00  HostB  Sensor1  50.0
2024-06-05 12:00:00  HostB  Sensor2  45.1
2024-06-05 12:20:00  HostA  Sensor1  46.8
2024-06-05 12:20:00  HostA  Sensor2  47.4
2024-06-05 12:20:00  HostB  Sensor1  48.8
2024-06-05 12:20:00  HostB  Sensor2  46.3
2024-06-05 12:40:00  HostA  Sensor1  45.8
2024-06-05 12:40:00  HostA  Sensor2  48.6
2024-06-05 12:40:00  HostB  Sensor1  45.3
2024-06-05 12:40:00  HostB  Sensor2  45.4

Can you clarify which raw input format is yours?
Emulation of the second format is

| makeresults format=csv data="_time,host,Sensor,Temperature
2024-06-05 12:00:00, HostA,Sensor1, 45.5
2024-06-05 12:00:00, HostA,Sensor2,48.1
2024-06-05 12:00:00, HostB, Sensor1, 50.0
2024-06-05 12:00:00, HostB,Sensor2,45.1
2024-06-05 12:20:00, HostA,Sensor1,46.8
2024-06-05 12:20:00, HostA,Sensor2,47.4
2024-06-05 12:20:00, HostB,Sensor1,48.8
2024-06-05 12:20:00, HostB,Sensor2,46.3
2024-06-05 12:40:00, HostA,Sensor1,45.8
2024-06-05 12:40:00, HostA,Sensor2,48.6
2024-06-05 12:40:00, HostB,Sensor1,45.3
2024-06-05 12:40:00, HostB,Sensor2,45.4"
| eval _time = strptime(_time, "%F %T")
| table _time,host,Sensor,Temperature
``` the above emulates index=tempmon sourcetype=tempdata ```
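If you have the normalized format and go with the lookup-table approach, the search could be wired in along these lines. This is only a sketch: the lookup name sensor_rooms is an assumption, and it would need host, Sensor, and Room columns as shown above.

```
index=tempmon sourcetype=tempdata
| lookup sensor_rooms host, Sensor OUTPUT Room
| timechart span=20min avg(Temperature) by Room
```

The advantage over hard-coded eval statements is that adding a new host or sensor only requires a new row in the lookup, not a search change.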
When I run the above query, I am getting "invalid argument near innisbrook".
I want to change the color of the bars to green for approved and red for declined. I have tried using seriesColorsByField, but it does not change the color    
There are no results because the rex command is not extracting any fields. That's because it is looking for text ("REQUEST" and "RETRYNO") that doesn't exist (at least not in the sample event). This run-anywhere query works for me.

| makeresults
| eval _raw="</soap:Envelope>\", RESPONSE=\"<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soapenv:Header/> <soapenv:Body> <soapenv:Fault xmlns:trefault=\"http://tresoap.intecbilling.com/fault/2.0\"> <faultcode>trefault:ApplicationException</faultcode> <faultstring><CM-41398> ERROR: Value &quot;Apple Watch 4G 5GB&quot; supplied for Fact &quot;OrderedComp.RatePlan_R&quot; is not allowed by the fact&apos;s filter search or expression</faultstring> <detail> <trefault:Detail> <trefault:Message><CM-41398> ERROR: Value &quot;Apple Watch 4G 5GB&quot; supplied for Fact &quot;OrderedComp.RatePlan_R&quot; is not allowed by the fact&apos;s filter search or expression</trefault:Message> <trefault:ErrorId>41398</trefault:ErrorId> </trefault:Detail> </detail> </soapenv:Fault> </soapenv:Body>"
| rex "RESPONSE=\\\"(?<RESPONSE>.+)"
| spath input=RESPONSE
| spath input=RESPONSE output=trefault:Message path=soapenv:Envelope.soapenv:Body.soapenv:Fault.faultstring.detail.trefault:Detail.trefault:Message
Hi @PadmaPriya.Pyda , Could you please send me private message with your Email id and controller url to look into it further? 
The stats family of commands will not fill in the null values created by the absence of data. The timechart command, however, will do so. Try this:

| tstats prestats=t avg(TotalResponseTime) avg(FordResponseTime) where index=app-index NOT TERM(timeout) by _time
| timechart avg(TotalResponseTime) as ResponseTime avg(FordResponseTime) as Fordresponsetime

(Note that with prestats=t, "as" renames in tstats are not applied, so the renaming belongs in the timechart command.) The tstats command does the heavy lifting and prepares the results for later processing by another stats command. The timechart command essentially formats what tstats created and fills in zeroes for missing values.
Hi All, I am using the below query but not getting complete output. If there is no data present for ResponseTime on particular days, but values are present for Fordresponsetime on the same days, those days do not show up in the table.

| tstats avg(TotalResponseTime) as ResponseTime avg(FordResponseTime) as Fordresponsetime where index=app-index NOT TERM(timeout) by _time

Query output:

_time       ResponseTime  Fordresponsetime
2024-01-01  12.67         34.00
2024-01-02  34.94         56.89
2024-01-03  24.78         52.70
2024-01-04  34.70         42.87

Expected output:

_time       ResponseTime  Fordresponsetime
2024-01-01  12.67         34.00
2024-01-02  34.94         56.89
2024-01-03  24.78         52.70
2024-01-04  34.70         42.87
2024-01-05  0             33.56
2024-01-06  0             23.77
2024-01-07  0             34.78
If I give the below query to extract the error message alone from the SOAP response:

index="abcl" sourcetype="oracle:transactionlog" OPERATION = "getOrderService"
| rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
| spath input=RESPONSE
| spath input=RESPONE output=trefault:Message path=trefault:Message

I am getting no results.
Hello, I did the Splunk ES installation following all the steps noted here: https://docs.splunk.com/Documentation/ES/7.3.2/Install/InstallEnterpriseSecurity

I did all the steps, and now I cannot find those indexes. Even in /opt/splunk/etc/apps/SplunkEnterpriseSecuritSuite/local or default there is no indexes.conf. I am trying to find index=notable, notable_summary, and risk in them, to see notable events from correlation searches.

How am I supposed to get these indexes in the apps inside ES, as shown here: https://docs.splunk.com/Documentation/ES/7.3.2/Install/Indexes

Any help would be appreciated.
Dashboards are for people, and they only run when someone is using them. Alerts run searches on a schedule and trigger an action based on the results of the searches.

Consider copying the searches from the 7 panels into scheduled searches. Then have the dashboard use the loadjob command to display the most recent results of each search. Create 7 alerts that also use the loadjob command to load the search results and take action based on them.
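As a rough sketch, a dashboard panel or alert reusing a scheduled search's results might look like this. The owner, app, and search name are placeholders, and the where clause assumes the saved search produces a count field:

```
| loadjob savedsearch="admin:search:my_panel_1_search"
| where count > 0
```

This way the expensive search runs once on its schedule, and both the dashboard and the alert read its cached results instead of re-running it.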