All Topics

Hi Team,

I am trying to run the AppDynamics machine agent as a container to monitor the existing app containers. I have gone through the issue discussion and added this line to my environment file:

    APPDYNAMICS_SIM_ENABLED=true

But I still receive this error log:

    c8ebf9f96874==> [system-thread-0] 24 Jun 2022 13:04:05,719 DEBUG RegistrationTask - Encountered error during registration.
    com.appdynamics.voltron.rest.client.NonRestException: Method: SimMachinesAgentService#registerMachine(SimMachineMinimalDto) - Result: 401 Unauthorized - content:
        at com.appdynamics.voltron.rest.client.VoltronErrorDecoder.decode(VoltronErrorDecoder.java:62) ~[rest-client-1.1.0.187.jar:?]
        at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:156) ~[feign-core-10.7.4.jar:?]
        at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:80) ~[feign-core-10.7.4.jar:?]
        at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100) ~[feign-core-10.7.4.jar:?]
        at com.sun.proxy.$Proxy113.registerMachine(Unknown Source) ~[?:?]
        at com.appdynamics.agent.sim.registration.RegistrationTask.run(RegistrationTask.java:147) [machineagent.jar:Machine Agent v22.5.0-3361 GA compatible with 4.4.1.0 Build Date 2022-05-26 01:20:55]
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:?]
        at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]

I need a solution for this.

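A 401 Unauthorized during SIM registration usually means the controller rejected the agent's credentials rather than anything SIM-specific, so a first check could be the account name and access key in the same environment file. A minimal sketch with placeholder values (the hostname, account name, and key below are assumptions, not taken from the post); it may also be worth confirming the account has a Server Visibility licence, since SIM registration can be rejected without one:

    # environment file for the machine agent container (placeholder values)
    APPDYNAMICS_CONTROLLER_HOST_NAME=controller.example.com
    APPDYNAMICS_CONTROLLER_PORT=443
    APPDYNAMICS_CONTROLLER_SSL_ENABLED=true
    APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1
    APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<access key from the controller licence page>
    APPDYNAMICS_SIM_ENABLED=true
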
I suspect this saved search may not be properly engineered and may be very taxing in terms of how its time range is specified. The saved search is responsible for populating a lookup; it ends with | outputlookup <lookup name>.

The range of the scheduled saved search is defined as:

    earliest = -7d@h
    latest = now

In the saved search there is logic added before the last line that filters events to the last 90 days. The search ends like this:

    ...
    | stats min(firstTime) as firstTime, max(lastTime) as lastTime by dest, process, process_path, SHA256_Hash, sourcetype
    | where lastTime > relative_time(now(), "-90d")
    | outputlookup LookUpName

My question is: how would the search behave? Would its scan range cover the last 90 days, or will it limit itself to 7 days? Which time range takes precedence?

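For what it's worth, the dispatch window (earliest/latest) bounds what gets scanned, so only 7 days of events are read; the where clause only filters the already-aggregated rows and cannot widen the scan. If the intent is a lookup that retains 90 days of history while scanning only 7 days per run, one common pattern is to fold the existing lookup back in and age out old rows. A sketch, assuming the lookup already carries the same field names:

    <base search over -7d@h>
    | stats min(firstTime) as firstTime, max(lastTime) as lastTime by dest, process, process_path, SHA256_Hash, sourcetype
    | inputlookup append=t LookUpName
    | stats min(firstTime) as firstTime, max(lastTime) as lastTime by dest, process, process_path, SHA256_Hash, sourcetype
    | where lastTime > relative_time(now(), "-90d")
    | outputlookup LookUpName
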
Hi,

Notable events in ES can now be assigned Dispositions. I am able to create new Dispositions from the Incident Review page and enable/disable them. From the reviewsettings.conf file I can also set a default one, set it to Hidden, etc.

However, I am looking to see if there is a way to require a Disposition to be set whenever anyone edits a notable event from the Incident Review tab. I want to have "Unassigned" as the default, but then require one of the others to be assigned when a notable is edited, similar to the way Comments can be set to Required. Basically, I need them to be mandatory. Does anyone know of a way to do this?

In the dashboard table below, I need to set colour conditions on two columns, expected_difference and sla_difference. If expected_difference is negative it should show in red; if it is positive it should show in green. Likewise for sla_difference: if it is negative it should show in orange, and if it is positive it should show in green.

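One way to do this in Simple XML is an expression-based colour palette on each column; a sketch (the hex codes are placeholder red/green/orange values):

    <format type="color" field="expected_difference">
      <colorPalette type="expression">if(value &lt; 0, "#DC4E41", "#53A051")</colorPalette>
    </format>
    <format type="color" field="sla_difference">
      <colorPalette type="expression">if(value &lt; 0, "#F8BE34", "#53A051")</colorPalette>
    </format>

These <format> elements go inside the dashboard's <table> element, after its <search>.
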
Hello,

I couldn't find a sufficient solution in the documentation or the community. I have to set up a timechart, where span=1w, to start on a particular day: Monday 00:00. The query looks like this (I am sorry, I had to anonymize sensitive information):

    index=XXX sourcetype=YYY
    | eval Alrt_lvl = B_Lvl + Prio_diff
    | timechart span=1w count(Alrt_lvl) by Alrt_lvl

Kindly please advise.

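One approach that uses only core eval is to snap each event's timestamp to the Monday of its week with the @w1 snap-to modifier, then chart over the snapped time. A sketch against the anonymized query above:

    index=XXX sourcetype=YYY
    | eval Alrt_lvl = B_Lvl + Prio_diff
    | eval _time = relative_time(_time, "@w1")
    | chart count(Alrt_lvl) over _time by Alrt_lvl

Since every event in the same Monday-to-Sunday week snaps to the same Monday 00:00, this behaves like timechart span=1w with weeks starting on Monday.
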
My code:

    <search>
      <query>| makeresults | eval API="party_interaction_rest", METHOD="GET", OPERATION="Alle,LIST_PARTY_INTERACTIONS"
    | append [| makeresults | eval API="ticket_mgmt_rest", METHOD="GET", OPERATION="Alle,LIST_TROUBLE_TICKETS"]
    | eval OPERATION=split(OPERATION,",")
    | mvexpand OPERATION
    | table API METHOD OPERATION
    | search API="$token_service$" METHOD=$token_method$</query>
    </search>

In the code above, $token_method$ is a dropdown field for which a prefix is defined as below:

    <input type="dropdown" token="token_method" searchWhenChanged="true">
      <label>Select Method:</label>
      <fieldForLabel>METHOD</fieldForLabel>
      <fieldForValue>METHOD</fieldForValue>
      <search>
        <query>| makeresults | eval API="party_interaction_rest", METHOD="Alle,GET,POST"
    | append [| makeresults | eval API="ticket_mgmt_rest", METHOD="Alle,GET,POST,PATCH"]
    | append [| makeresults | eval API="customer_management_rest", METHOD="Alle,GET,PATCH"]
    | append [| makeresults | eval API="agreement_management_rest", METHOD="Alle,GET"]
    | append [| makeresults | eval API="product_order_rest", METHOD="Alle,GET,POST,PATCH,DELETE"]
    | append [| makeresults | eval API="cust_comm_rest", METHOD="Alle,GET"]
    | append [| makeresults | eval API="product_inv_rest", METHOD="Alle,GET,POST,PATCH"]
    | eval METHOD=split(METHOD,",")
    | mvexpand METHOD
    | table API METHOD
    | search API="$token_service$"</query>
      </search>
      <change>
        <condition value="Alle">
          <set token="token_method">*</set>
        </condition>
      </change>
      <default>Alle</default>
      <prefix>"properties.httpMethod"=</prefix>
      <initialValue>Alle</initialValue>
    </input>

I want to ignore the prefix in some cases and use only the value from the dropdown, but in other cases I need the prefix. Please guide.

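One common approach is to drop the <prefix> element entirely and derive a second token in the <change> handler, so $token_method$ always carries the raw value while a separate token (here called method_filter, a name introduced for this sketch) carries the prefixed form:

    <change>
      <condition value="Alle">
        <set token="method_filter">*</set>
      </condition>
      <condition>
        <set token="method_filter">"properties.httpMethod"=$value$</set>
      </condition>
    </change>

Searches that need the bare value keep using $token_method$; searches that need the prefixed filter use $method_filter$ in place of "properties.httpMethod"=$token_method$.
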
I have created a custom search command using the streaming search templates provided with the Splunk SDK. It is a simple "take in the results from x field, manipulate, and add a couple of new fields" command. This runs fine on my single-instance development server. When I push it out to my search head cluster, I have two problems:

1) (Coincidence?) My indexers all spiked in CPU around the same time I ran my search. Can a search head custom search command impact the indexers? Looking at the docs, it seems like it would only impact the search head.

2) My search runs fine in a standalone instance, but in a distributed instance (SHC, index cluster), I get this error. I don't see any more info on this; how can I debug something so vague in Splunk?

    [idx01-g,idx01-k,idx02-g,idx02-k,idx03-g,idx03-k,idx04-g,idx04-k,idx05-g,idx05-k] Streamed search execute failed because: Error in 'punycode' command: External search command exited unexpectedly with non-zero error code 1.

Further investigation throws this (not on standalone, SHC only), but I changed it and am still getting this error.

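On the first point: a distributable streaming command is shipped to the indexers in the knowledge bundle and executed there, which would explain both the indexer CPU spike and an error that only appears in a distributed environment (for example, a Python dependency present on the search head but missing on the indexers). One way to test that theory is to pin the command to the search head in commands.conf; a sketch, assuming the SDK v2/chunked protocol and that the stanza name matches the command in the error:

    [punycode]
    filename = punycode.py
    chunked = true
    # Run on the search head only, so the indexers never execute the script.
    # This trades away streaming parallelism but isolates the failure.
    local = true

If the search works with local = true, the script or its dependencies on the indexers are the likely culprit.
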
Hi,

I'm trying to set up Splunk for the Zscaler Nanolog Streaming Service (NSS). In the input settings, I can't choose zscalernss. The only options are ta_zscaler_api_zscaler_zia_configurations-too_small, ta_zscaler_api_zscaler_zpa_configurations-too_small, zscaler:zia:api, and zscaler:zpa:api. How should I go about this? Thanks!

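As far as I know, NSS feeds push logs to Splunk over a raw TCP/syslog connection rather than through the API-based modular inputs, so the zscalernss sourcetype is normally assigned on a TCP data input rather than selected from the add-on's input list. A sketch of a possible inputs.conf stanza (the port, index, and exact sourcetype string are assumptions to verify against the Zscaler deployment guide for your feed type):

    [tcp://5514]
    sourcetype = zscalernss-web
    index = zscaler
    disabled = 0
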
Hi All,

I got a request to monitor some log files in Splunk. Below is the log file name pattern:

    abc_uat_cpe_220614.log
    abc_dev_cpe_220615.log
    abc_train_cpe_220616.log

and so on. I have configured inputs as shown below:

    [monitor:///usr/local/bsl_export/abc_*_cpe_*.log]
    index = abdxj
    sourcetype = bsl_export:cpe
    disabled = 0

But I am not getting any logs in Splunk. I have checked all of the following:

- The Splunk service is running
- The splunk user has read access
- Firewall connections are all good
- The latest log files exist, with enough size to read
- Restarted the Splunk service; still the same issue

Checking _internal logs under log_level=WARN, I see this message:

    AutoLoadBalancedConnectionStrategy - Cooked connection to ip timed out

But the connection is fine, as I have already checked it. When I run `splunk list inputstatus`, it gives the output "hangup".

props is as below:

    ########### JIRA link ###########
    [bsl_export:cpe]
    ############

Can anyone please help me with this?

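Since the WARN message points at the forwarder-to-indexer connection rather than at file reading, it may help to look at both stages in the _internal index. A sketch of the kind of searches I would start with (adjust the host filters to the forwarder in question):

    index=_internal sourcetype=splunkd (component=TailingProcessor OR component=TailReader OR component=WatchedFile) bsl_export

    index=_internal sourcetype=splunkd component=TcpOutputProc (log_level=WARN OR log_level=ERROR)

The first shows whether the file monitor ever picked the files up; the second shows whether events are failing to leave the forwarder for the indexers.
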
Hi All,

I have a set of folders which are created by a job that runs in the backend, and the names of the folders keep changing automatically. They look something like this:

    hs21dsb:hs54:bsds:hasn542sbsb
    hshs21:nansh2225:haan53333

and so on. I want to monitor just the latest folder, the one most recently created, and ignore all the other folders. How can I achieve this in a monitor stanza?

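As far as I know, a monitor stanza cannot select "the newest directory" directly, but if "latest" effectively means "recently written", the ignoreOlderThan setting can approximate it by skipping files whose modification time is beyond a threshold. A sketch (the path and threshold are placeholders):

    [monitor:///data/jobs/*]
    index = myindex
    # Skip any file whose modtime is older than one day,
    # which effectively ignores the stale job folders.
    ignoreOlderThan = 1d

One caveat worth checking in the docs: files skipped by ignoreOlderThan may not be picked up later even if they are updated, so this fits write-once job output better than long-lived files.
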
I need counts of cloudfront-viewer-country and sec-ch-ua-platform for each Origin. Please help.

Expected result: if site1 has only 2 countries and site2 has one extra platform, then the expected result should look like this:

    Origin                 Platform  Platform Count  Country  Country Count
    https://www.site1.com  Android   10              US       22
                           macOS     12              UK       3
                           Windows   6
    https://www.site2.com  Android   4               US       8
                           macOS     4               UK       1
                           Windows   2               AU       1
                                                     IND      5

Data:

    { "additional": { "method": "POST", "url": "/api/resource/getContentEditorData", "headers": { "cloudfront-viewer-country": "US", "origin": "https://www.site1.com", "sec-ch-ua-platform": "\"Android\"" } }, "level": "notice", "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData" }

    { "additional": { "method": "POST", "url": "/api/resource/getContentEditorData", "headers": { "cloudfront-viewer-country": "UK", "origin": "https://www.site1.com", "sec-ch-ua-platform": "\"Windows\"" } }, "level": "notice", "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData" }

    { "additional": { "method": "POST", "url": "/api/resource/getContentEditorData", "headers": { "cloudfront-viewer-country": "AU", "origin": "https://www.site2.com", "sec-ch-ua-platform": "\"Windows\"" } }, "level": "notice", "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData" }

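A sketch of one way to get the per-Origin counts (the index and sourcetype are placeholders; note that sec-ch-ua-platform arrives with embedded quotes, so they are stripped first):

    index=app_logs sourcetype=app:json
    | spath path=additional.headers.origin output=Origin
    | spath path=additional.headers.sec-ch-ua-platform output=Platform
    | spath path=additional.headers.cloudfront-viewer-country output=Country
    | eval Platform=replace(Platform, "\"", "")
    | stats count as PlatformCount by Origin Platform

Swapping the final line to | stats count as CountryCount by Origin Country gives the country side; merging both breakdowns into the single table shown above would take an extra append or appendcols step.
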
Hi All,

I have this data in index 1:

    input  active  Idle
    a      d       g
    b      e       h
    c      f       i

I have this data in index 2:

    input  TEST  pwr
    a      d     1
    b      e     2
    c      f     3
    a      g     4
    b      h     5
    c      i     6

Now I want to change d, e, f to "active" and g, h, i to "idle", so the data in index 2 looks like this:

    input  TEST    pwr
    a      active  1
    b      active  2
    c      active  3
    a      idle    4
    b      idle    5
    c      idle    6

and then I want to run my final search. I tried subsearches and so on, but have been unable to do this. This is a small example; there are hundreds of active and idle entries.

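One approach (the names idx1, idx2, and test_state_map.csv are placeholders) is to materialize the mapping from index 1 as a lookup, then apply it to index 2. First, a search to build the mapping, one row per TEST value:

    index=idx1
    | fields input active Idle
    | eval pair=mvappend("active=".active, "idle=".Idle)
    | mvexpand pair
    | rex field=pair "^(?<state>\w+)=(?<TEST>.+)$"
    | table TEST state
    | outputlookup test_state_map.csv

Then the final search over index 2 can translate TEST through the lookup:

    index=idx2
    | lookup test_state_map.csv TEST OUTPUT state
    | eval TEST=coalesce(state, TEST)
    | fields - state
    | table input TEST pwr
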
Hi,

I am using Splunk Enterprise version 9, where the new index _configtracker is able to show changes made to configuration files. However, it is hard to identify the changes made to a correlation search in savedsearches.conf at a glance, or to use the data.changes{}.properties{}.new_value field, as it contains multiple values. Furthermore, the change is spread over two events, where one shows data.changes{}.properties{}.new_value (post-change field.jpg) and the other shows data.changes{}.properties{}.old_value (empty values).

How can I compare all the multiple values under the field and return the property that is being changed? I am guessing I can link the two events using the "new_checksum" and "old_checksum". I removed most of the fields to make it easier to read and changed the content of the SPL to <Search content> to mask some information.

Pre-change raw details:

    {"datetime":"06-21-2022 16:29:41.119 +0800","log_level":"INFO ","component":"ConfigChange","data":{"path":"/splunk/etc/apps/SplunkEnterpriseSecuritySuite/local/savedsearches.conf","action":"update","modtime":"Tue Jun 21 16:29:41 2022","epoch_time":"1655800181","new_checksum":"0x621552b3fcbdfc9e","old_checksum":"0x95c4bf5f0b449f9","changes":[{"stanza":"Endpoint - Linux/MS - Server Reboot/Shutdown - Rule","properties":[{"name":"action.correlationsearch.annotations","new_value":"","old_value":"{}"},{"name":"realtime_schedule","new_value":"","old_value":"0"},{"name":"search","new_value":"","old_value":"(<Search content>"}]}]}}

Post-change raw details:

    {"datetime":"06-21-2022 16:29:41.642 +0800","log_level":"INFO ","component":"ConfigChange","data":{"path":"/splunk/etc/apps/SplunkEnterpriseSecuritySuite/local/savedsearches.conf","action":"update","modtime":"Tue Jun 21 16:29:41 2022","epoch_time":"1655800181","new_checksum":"0xf5867665b8a15f4","old_checksum":"0x621552b3fcbdfc9e","changes":[{"stanza":"Endpoint - Linux/MS - Server Reboot/Shutdown - Rule","properties":[{"name":"action.correlationsearch.annotations","new_value":"{}","old_value":""},{"name":"realtime_schedule","new_value":"0","old_value":""},{"name":"search","new_value":"(<Search content>","old_value":""}]}]}}

Regards,
Zijian

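One way to flatten the property list so each name/old/new triple lands on its own row (the sourcetype below is what _configtracker events normally carry, but treat it as an assumption):

    index=_configtracker sourcetype=splunk_configuration_change data.path="*savedsearches.conf"
    | spath path=data.changes{}.properties{} output=prop
    | mvexpand prop
    | spath input=prop
    | where new_value != old_value
    | table _time data.path name old_value new_value data.new_checksum data.old_checksum

From there, the pre-change and post-change events can indeed be paired by matching one event's data.new_checksum against the other's data.old_checksum, for example by deriving a shared join key with eval and grouping with stats.
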
Hello,

Basically I want to use Splunk as a BI tool, reading and getting data from our backend Oracle database without indexing it (because our Splunk capacity can't store past 30 days, and there is a requirement to see data from 100 days ago). I want to search across that data in Splunk using SPL without indexing it. Is there any way to work around this?

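Splunk DB Connect may cover this: its dbxquery command runs SQL against the database at search time and streams the rows back as search results without indexing anything. A sketch (the connection name, table, and columns are placeholders):

    | dbxquery connection="oracle_backend" query="SELECT order_id, status, order_date FROM orders WHERE order_date >= SYSDATE - 100"
    | stats count by status

The rows behave like normal results, so any downstream SPL (stats, eval, lookups) works on them; the trade-off is that every dashboard load re-queries Oracle rather than hitting an index.
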
Hi, I am new to Splunk. I want to remove the display of the columns T9_LotID_LaneA, T9_LotID_LaneB, T9_LotID_LaneC, and T9_LotID_LaneD when they contain only empty or null values.

My base search:

    OWA03 AND ID="T9 Hot DI Air Temp.(Upper Chamber) HTC5.1 PV" OR ID="T9 Hot DI Humidity PV" OR ID="T9 Hot DI N2 Diffuser Temp.HTC4.1 PV" OR ID="T9 Hot DI Water Process Temp.HTC2.1 PV" OR ID="T9_LotID_LaneA" OR ID="T9_LotID_LaneB" OR ID="T9_LotID_LaneC" OR ID="T9_LotID_LaneD"
    | timechart span=3s cont=false latest(Value) by ID

Please advise, thanks.

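One pattern that drops all-empty columns from chart output is to unpivot the table, discard the null cells, and pivot back; columns whose cells were all null simply never come back. A sketch appended to the existing search:

    <base search>
    | timechart span=3s cont=false latest(Value) by ID
    | untable _time ID Value
    | where isnotnull(Value) AND Value != ""
    | xyseries _time ID Value
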
I can only get the AWS namespaces; the custom ones are not showing.

Hello,

I have all auditing enabled via GPO and I am getting WinEventLog:Security logs in Splunk. I am attempting to generate an alert when a user or group is added to local admin on a host. In the Windows event log on the host, the added SID shows as the group or account name, but the corresponding event in Splunk shows the same entry as the alphanumeric SID. How can I view the account or user name in Splunk for these events?

Thanks,
Garry

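On the collection side, the Windows event log input can be told to resolve SIDs and GUIDs to names before indexing. A sketch of the inputs.conf stanza, assuming a universal forwarder is reading the Security log directly:

    [WinEventLog://Security]
    disabled = 0
    # Resolve security identifiers (SIDs) and GUIDs to account/group names
    # via Active Directory before the event is indexed.
    evt_resolve_ad_obj = 1

Note this only affects events collected after the change; already-indexed events keep the raw SID.
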
I'm hoping someone can help me out here. I'm looking to create a simple table that displays a column for "count" and another for "Percentage of total". For some reason Splunk is not recognizing the Total field within the denominator of my eval command. Any suggestions on how to fix this?

    index=ABC sourcetype="ABC" "EVNT=SWIendcall"
    | stats count by OUTCOME
    | addtotals row=f col=t labelfield=OUTCOME
    | eval Percentage=ROUND((count/Total)*100,1)

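For what it's worth, addtotals col=t writes the column totals into an extra result row (labelled via labelfield), not into a Total field on each row, so count/Total divides by a field that doesn't exist. One way to get a per-row total is eventstats:

    index=ABC sourcetype="ABC" "EVNT=SWIendcall"
    | stats count by OUTCOME
    | eventstats sum(count) as Total
    | eval Percentage=round((count/Total)*100,1)
    | fields - Total
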
Hi all,

I keep getting a message that the current bundle directory contains a large lookup file, and the specified file is a delta under /opt/splunk/var/run. I read that max_memtable_bytes determines the maximum size of lookups, but what about the delta? What delta size is too large? Or should I rather be looking at the largest lookups in the bundle to resolve the problem? Do you have any tips on how to resolve this? Thanks.

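If the lookup behind the message doesn't actually need to be available on the indexers, one option is to exclude it from knowledge bundle replication in distsearch.conf on the search head; a sketch (the entry name and path are placeholders):

    # distsearch.conf on the search head
    [replicationBlacklist]
    big_lookup = apps/search/lookups/huge_lookup.csv

The trade-off is that any search referencing an excluded lookup must run it on the search head, e.g. | lookup local=true huge_lookup.csv ... In any case, inspecting the largest files in the bundle directory is usually the quickest way to find the lookup the warning is complaining about.
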
I have some MS IIS 10 instances and I'm ingesting the IIS logs in W3C format from them. I have installed the Splunk Add-on for Microsoft IIS on my search head and both indexers. I set the sourcetype for my inputs to ms:iis:auto. The logs are being ingested, but the fields are not being extracted at index time. So, I tried setting the sourcetype to ms:iis:default:85; some fields are extracted, but with incorrect field names. This seems to be because the default fields are those for IIS 8.5, and in IIS 10 some are not there by default (s-sitename, for example). It seems I would have to tell it what fields I'm using, as per Configure field transformations in the Splunk Add-on for Microsoft IIS - Splunk Documentation, for the search-time extractions to work, but not all my IIS servers have the same logging setup. Furthermore, I'd like index-time field extraction, but that does not seem to work. This TA is quite old (last updated in 2020), so I'm wondering: is there a better/easier way to do this for IIS 10?

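One alternative worth considering is Splunk's built-in structured-data parsing instead of the add-on's fixed field lists: with INDEXED_EXTRACTIONS = w3c, the field names are read from each log file's own #Fields header, so servers with different logging setups still extract correctly, and the fields are indexed. The catch is that indexed extractions run where the file is read, so the props.conf must be deployed to the forwarders monitoring the logs. A sketch (the sourcetype name is a placeholder):

    # props.conf on the forwarders reading the IIS logs
    [iis:w3c]
    INDEXED_EXTRACTIONS = w3c
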