All Topics


I need a count of cloudfront-viewer-country and sec-ch-ua-platform for each origin. Please help.

Expected result: if site1 has only 2 countries and site2 has one extra platform, then the output should look like this:

    Origin                  Platform   Platform Count   Country   Country Count
    https://www.site1.com   Android    10               US        22
                            macOS      12               UK        3
                            Windows    6
    https://www.site2.com   Android    4                US        8
                            macOS      4                UK        1
                            Windows    2                AU        1
                                                        IND       5

Data:

    { "additional": { "method": "POST", "url": "/api/resource/getContentEditorData", "headers": { "cloudfront-viewer-country": "US", "origin": "https://www.site1.com", "sec-ch-ua-platform": "\"Android\"" } }, "level": "notice", "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData" }

    { "additional": { "method": "POST", "url": "/api/resource/getContentEditorData", "headers": { "cloudfront-viewer-country": "UK", "origin": "https://www.site1.com", "sec-ch-ua-platform": "\"Windows\"" } }, "level": "notice", "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData" }

    { "additional": { "method": "POST", "url": "/api/resource/getContentEditorData", "headers": { "cloudfront-viewer-country": "AU", "origin": "https://www.site2.com", "sec-ch-ua-platform": "\"Windows\"" } }, "level": "notice", "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData" }
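One possible approach is a sketch along these lines (the index, sourcetype, and extracted field names are assumptions based on the sample events): extract the headers with spath, strip the quotes around the platform value, and compute the two independent counts with eventstats:

```
index=web sourcetype=app_json level=notice "INCOMING REQUEST"
| spath
| rename "additional.headers.origin" AS origin,
         "additional.headers.cloudfront-viewer-country" AS country,
         "additional.headers.sec-ch-ua-platform" AS platform
| eval platform=trim(platform, "\"")
| stats count BY origin platform country
| eventstats sum(count) AS platform_count BY origin platform
| eventstats sum(count) AS country_count BY origin country
| table origin platform platform_count country country_count
```

The two eventstats passes let the platform count and the country count be summed independently of each other, which matches an expected result where the two columns have different row counts per origin.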
Hi all, I have this data in index 1:

    input   active   Idle
    a       d        g
    b       e        h
    c       f        i

And this data in index 2:

    input   TEST   pwr
    a       d      1
    b       e      2
    c       f      3
    a       g      4
    b       h      5
    c       i      6

Now I want to change d, e, f to "active" and g, h, i to "idle", so the data in the index looks like this:

    input   TEST     pwr
    a       active   1
    b       active   2
    c       active   3
    a       idle     4
    b       idle     5
    c       idle     6

...and then I want to run my final search. I tried subsearches and so on, but I was unable to do this. This is a small example; there are hundreds of active and idle entries.
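One way to approach this is to turn the index 1 data into a lookup mapping each TEST value to its state, then apply the lookup in the main search. This is a sketch; the index names, field names, and lookup filename are assumptions. First, build the lookup:

```
index=idx1
| eval pair=mvappend(active . ":active", Idle . ":idle")
| mvexpand pair
| rex field=pair "^(?<TEST>[^:]+):(?<state>.+)$"
| table TEST state
| outputlookup test_state_map.csv
```

Then apply it to the second index:

```
index=idx2
| lookup test_state_map.csv TEST OUTPUT state
| eval TEST=coalesce(state, TEST)
| table input TEST pwr
```

This scales to hundreds of entries because the mapping is materialized once rather than joined per search.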
Hi, I am using Splunk Enterprise version 9, where the new _configtracker index shows changes made to configuration files. However, it is hard to identify at a glance the changes made to a correlation search in savedsearches.conf, because the data.changes{}.properties{}.new_value field contains multiple values. Furthermore, each change is spread over two events: one shows the values in data.changes{}.properties{}.old_value (with new_value empty), and the other shows them in data.changes{}.properties{}.new_value (with old_value empty). How can I compare all the multiple values under the field and return the property that was changed? I am guessing I can link the two events using new_checksum and old_checksum. I removed most of the fields to make this easier to read and replaced the SPL with <Search content> to mask some information.

Pre-change raw details:

    {"datetime":"06-21-2022 16:29:41.119 +0800","log_level":"INFO ","component":"ConfigChange","data":{"path":"/splunk/etc/apps/SplunkEnterpriseSecuritySuite/local/savedsearches.conf","action":"update","modtime":"Tue Jun 21 16:29:41 2022","epoch_time":"1655800181","new_checksum":"0x621552b3fcbdfc9e","old_checksum":"0x95c4bf5f0b449f9","changes":[{"stanza":"Endpoint - Linux/MS - Server Reboot/Shutdown - Rule","properties":[{"name":"action.correlationsearch.annotations","new_value":"","old_value":"{}"},{"name":"realtime_schedule","new_value":"","old_value":"0"}, {"name":"search","new_value":"","old_value":"(<Search content>"}]}]}}

Post-change raw details:

    {"datetime":"06-21-2022 16:29:41.642 +0800","log_level":"INFO ","component":"ConfigChange","data":{"path":"/splunk/etc/apps/SplunkEnterpriseSecuritySuite/local/savedsearches.conf","action":"update","modtime":"Tue Jun 21 16:29:41 2022","epoch_time":"1655800181","new_checksum":"0xf5867665b8a15f4","old_checksum":"0x621552b3fcbdfc9e","changes":[{"stanza":"Endpoint - Linux/MS - Server Reboot/Shutdown - Rule","properties":[{"name":"action.correlationsearch.annotations","new_value":"{}","old_value":""},{"name":"realtime_schedule","new_value":"0","old_value":""}, {"name":"search","new_value":"(<Search content>","old_value":""}]}]}}

Regards, Zijian
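One way to line up the parallel multivalue fields is to mvzip the name/old/new arrays together, expand them, and keep only the properties whose value actually changed. A sketch, using the field names from the events above (adjust the base search as needed):

```
index=_configtracker "data.path"="*savedsearches.conf"
| eval zipped=mvzip(mvzip('data.changes{}.properties{}.name', 'data.changes{}.properties{}.old_value', "|"), 'data.changes{}.properties{}.new_value', "|")
| mvexpand zipped
| rex field=zipped "^(?<name>[^|]*)\|(?<old_value>[^|]*)\|(?<new_value>.*)$"
| where old_value != new_value
| table _time data.new_checksum data.old_checksum name old_value new_value
```

mvzip keeps the three arrays positionally aligned, so after mvexpand each row carries one property with its old and new values side by side. As guessed above, the two halves of a change can then be paired by matching one event's new_checksum against the other event's old_checksum.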
Hello, basically I want to use Splunk as a BI tool, reading data from our backend Oracle database without indexing it (our Splunk capacity can't store past 30 days, and there is a requirement to see data from 100 days ago). I want to search across that data in Splunk using SPL without indexing it. Is there any way to work around this?
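One commonly used option for querying an external database live from SPL, without indexing anything, is the dbxquery command from the Splunk DB Connect app. A sketch, assuming DB Connect is installed and an Oracle connection has been defined (the connection name, table, and SQL below are placeholders):

```
| dbxquery connection="oracle_backend" query="SELECT order_id, status, created_at FROM orders WHERE created_at >= SYSDATE - 100"
```

The results stream back at search time and never land in a Splunk index, so retention is governed by the database rather than by Splunk's storage capacity, and the output can be piped into further SPL (stats, charts, etc.) like any other result set.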
Hi, I am new to Splunk. I want to remove the display of the columns for T9_LotID_LaneA, T9_LotID_LaneB, T9_LotID_LaneC, and T9_LotID_LaneD when their values are empty or null.

My base search:

    OWA03 AND ID = "T9 Hot DI Air Temp.(Upper Chamber) HTC5.1 PV" OR ID = "T9 Hot DI Humidity PV" OR ID = "T9 Hot DI N2 Diffuser Temp.HTC4.1 PV" OR ID = "T9 Hot DI Water Process Temp.HTC2.1 PV" OR ID = "T9_LotID_LaneA" OR ID = "T9_LotID_LaneB" OR ID = "T9_LotID_LaneC" OR ID = "T9_LotID_LaneD" | timechart span=3s cont=false latest(Value) by ID

Please advise, thanks.
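If those lane columns come out empty because the underlying events have no Value, one simple option is to filter such events out before the timechart, so an ID whose values are all empty never produces a column. A sketch, where `<base search>` stands for the existing search terms above:

```
<base search> Value=* Value!=""
| timechart span=3s cont=false latest(Value) by ID
```

`Value=*` requires the field to exist and `Value!=""` drops empty strings; together they keep only events that would actually contribute a data point.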
I can only get the AWS namespace; the custom ones are not showing.
Hello, I have all auditing enabled via GPO, and I am getting WinEventLog:Security logs in Splunk. I am attempting to generate an alert when a user or group is added to the local Administrators group on a host. In the Windows event log on the host itself, the added SID shows as the group or account name, but the corresponding event in Splunk shows the same entry as the alphanumeric SID. How can I view the account or user name in Splunk for these events?

Thanks, Garry
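If collection is done with the Splunk Add-on for Microsoft Windows, one option is to let the forwarder resolve SIDs to account names at collection time via the evt_resolve_ad_obj setting in inputs.conf. A sketch (note this only affects events collected after the change, and the forwarder needs connectivity to a domain controller to do the resolution):

```
# inputs.conf on the forwarder collecting the Security event log
[WinEventLog://Security]
evt_resolve_ad_obj = 1
```

Previously indexed events keep their raw SIDs; those would need a search-time lookup instead.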
I'm hoping someone can help me out here. I'm looking to create a simple table that displays a column for "count" and another for "percentage of total". For some reason Splunk is not recognizing the Total field within the denominator of my eval command. Any suggestions on how to fix this?

    index=ABC sourcetype="ABC" "EVNT=SWIendcall"
    | stats count by OUTCOME
    | addtotals row=f col=t labelfield=OUTCOME
    | eval Percentage=ROUND((count/Total)*100,1)
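The likely issue is that `addtotals col=t` adds a summary row at the bottom of the table rather than a per-row Total field, so Total is null on the rows where the eval runs. One way around it is eventstats, which writes the grand total onto every row so the division works; a sketch against the search above:

```
index=ABC sourcetype="ABC" "EVNT=SWIendcall"
| stats count by OUTCOME
| eventstats sum(count) AS Total
| eval Percentage=round((count/Total)*100,1)
| fields - Total
```

The final `fields - Total` just hides the helper column from the table.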
Hi all, I keep getting a message that the current bundle directory contains a large lookup file, and that the specified file is a delta under /opt/splunk/var/run. I read that max_memtable_bytes determines the maximum size of lookups, but what about the delta? What delta size is too large? Or should I rather be looking at the largest lookups in the bundle to resolve the problem? Do you have any tips on how to resolve this? Thanks.
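Looking at the largest lookups in the bundle is usually the productive angle. If one lookup turns out to dominate and the indexers don't actually need it, one option is to exclude it from bundle replication in distsearch.conf on the search head. A sketch (the stanza key and file path are illustrative; a denylisted lookup is no longer available to the indexers, so searches that use it must run it locally on the search head):

```
# distsearch.conf on the search head
[replicationBlacklist]
big_lookup = apps/search/lookups/my_big_lookup.csv
```

Trimming or splitting the oversized lookup itself is the other common fix, since the delta warning tracks how much of the bundle changes between replications.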
I have some MS IIS 10 instances and I'm ingesting their IIS logs in W3C format. I have installed the Splunk Add-on for Microsoft IIS on my search head and both indexers, and I set the sourcetype for my inputs to ms:iis:auto. The logs are being ingested, but the fields are not being extracted at index time. So I tried setting the sourcetype to ms:iis:default:85; some fields are extracted, but with incorrect field names. This seems to be because the default fields are those for IIS 8.5, and in IIS 10 some are not there by default (s-sitename, for example). It seems I would have to tell the add-on which fields I'm using, as per "Configure field transformations in the Splunk Add-on for Microsoft IIS" in the Splunk documentation, for the search-time extractions to work, but not all my IIS servers have the same logging setup. Furthermore, I'd like index-time field extraction, but that does not seem to work. This TA is quite old (last updated in 2020), so I'm wondering: is there a better/easier way to do this for IIS 10?
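For index-time extraction of W3C-format logs, one commonly suggested alternative to the TA's static field lists is structured-data extraction, which reads each log file's own `#Fields` header line, so servers with different logging setups still get the correct field names per file. A sketch (INDEXED_EXTRACTIONS must be configured where the file is read, i.e. on the forwarder, and the sourcetype name here is illustrative):

```
# props.conf on the forwarder reading the IIS logs
[iis:w3c]
INDEXED_EXTRACTIONS = w3c
```

The monitor stanza in inputs.conf would then set `sourcetype = iis:w3c` for those files.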
Please help: my CentOS 8 server is showing some inactive forwards to my indexer.

    [splunk@centos8 ~]$ /opt/splunkforwarder/bin/splunk list forward-server
    Active forwards:
        10.0.0.42:9997
    Configured but inactive forwards:
        10.0.0.36:9997
        10.0.0.45:9997
    [splunk@centos8 ~]$

outputs.conf:

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    disabled = false
    server = mysplunkindexer1:9997,mysplunkindexer2:9997
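Notably, three forward-servers are reported but the outputs.conf shown lists only two hostnames, which suggests another outputs.conf somewhere (an app, or an old local config) still references the stale addresses. One way to see every server entry and which file it comes from is btool; a sketch, run on the forwarder:

```
/opt/splunkforwarder/bin/splunk list forward-server
/opt/splunkforwarder/bin/splunk btool outputs list tcpout --debug
```

The `--debug` flag prefixes each line of merged config with the path of the .conf file it was loaded from, making the stray entries easy to track down and remove.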
Hello, I am upgrading our Splunk instance from 8.0 to 8.2. After the upgrade to 8.2 I followed the KV store upgrade procedure on our search heads, and they all have WiredTiger on them now. But when I log into our cluster master, I get an error saying I need to upgrade the KV store; this comes up 30+ minutes after the upgrade. When I run the same KV store status command on the cluster master, it says it still has the old KV store engine. Am I supposed to go onto each box and upgrade the KV store on it? The instructions said the search heads, and that's what I followed.
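The storage-engine migration is per-instance: any node that runs its own KV store (which includes the cluster master unless KV store was disabled there) keeps the old engine until the migration is run on that node too. For reference, the command would be run locally on the node showing the warning (assuming a version where this migration target exists, 8.1+):

```
$SPLUNK_HOME/bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger
```

Afterward, `splunk show kvstore-status` on that node should report the new engine.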
I don't want the graph to show 105.
I have a DB Connect input that I want to programmatically activate and deactivate. Following some docs, I came up with this:

    curl -k -H "authorization: Splunk XXXX" https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/configs/conf-db_inputs/SIP_CAMIO_AUDIT_IN -d "disabled=1"

However, DB Connect seems to ignore the change and keeps indexing data, unless I access the /en-US/debug/refresh URL and manually refresh the whole server. As a test, the following cURL does work:

    curl -k -H "authorization: Splunk XXXX" https://localhost:8089/servicesNS/nobody/Admin_Tools/configs/conf-macros/test_rest -d "disabled=1"

How can I disable/enable DB Connect inputs through REST? Why does DB Connect ignore conf updates made through REST?

Edit: I've also tried accessing the following endpoint, without luck:

    curl -k -H "authorization: Splunk XXXX" -X POST https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/configs/conf-db_inputs/_reload
Dears, is it possible to track, from the Controller, who restarted my application server and how? Thanks, Rathna SR.
I am trying to add a dropdown to a dashboard in Splunk Cloud using the host field from a metrics dataset. I want the dropdown to update dynamically from the host values available in the dataset at any one time. For the dropdown's data source I am using the search:

    | mcatalog values(host) where index=em_metrics | mvexpand values(host)

All other settings on the dropdown are the defaults. I have tried changing the search several times and have been unable to get it to work. Any suggestions would be greatly appreciated.
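One likely culprit: without a rename, the result field is literally named `values(host)`, which mvexpand does not resolve as written. A sketch of the usual fix, renaming the aggregation to a plain field name:

```
| mcatalog values(host) AS host WHERE index=em_metrics
| mvexpand host
```

The dropdown's field-for-label and field-for-value settings would then both point at `host`.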
Hi everyone! I have an issue with the dot separator after converting a JSON file. This is the raw JSON:

    "customfield_26202" : {
      "self" : "link",
      "value" : "Software: Softwareentwicklung und Product Security",
      "id" : "30705",
      "disabled" : false,
      "child" : {
        "self" : "link",
        "value" : "Software-Projektleiter",
        "id" : "30771",
        "disabled" : false

Splunk extracts the field customfield_26202.child.value="Software-Projektleiter" (done with the _json sourcetype). Now I want to merge two such fields like this:

    | eval output = mvappend(customfield_26202.child.value, customfield_26204.child.value)
    | mvexpand output
    | table output

When I do exactly the same thing without .child.value, everything works fine. I tried several kinds of quotation marks (", ', etc.); nothing helps. Any idea what I am doing wrong? Thank you! Timo
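In eval expressions, a field name containing dots has to be wrapped in single quotes; otherwise eval parses the dot as the string-concatenation operator (and double quotes would turn the name into a string literal, which is why those didn't help either). So the merge would look like:

```
| eval output = mvappend('customfield_26202.child.value', 'customfield_26204.child.value')
| mvexpand output
| table output
```

The same single-quoting applies anywhere these dotted JSON field names appear on the right-hand side of an eval.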
My search query:

    source="http:product_inv_rest" | spath message | search message="Request: GET */product-inventory/product 123456"

In the above query, I want to find records that have any number (only a number) in place of 123456.
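Search-language wildcards can't express "digits only", but the regex command can. A sketch (the trailing `$` assumes the number is the last thing in the message; drop it if text can follow):

```
source="http:product_inv_rest"
| spath message
| regex message="Request: GET .*/product-inventory/product \d+$"
```

`\d+` matches one or more digits, so a record is kept only when a number occupies the position where 123456 appears.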
Has anyone run into an issue where a Splunk HF is not monitoring files being written to it? This HF is also a syslog server, so files are being written to it, and the monitored inputs are configured on the server. File ingestion only happens after a restart. Any pointers?
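When this happens, one useful first step is to ask splunkd what it thinks of each monitored file; a sketch, run on the HF as the splunk user:

```
$SPLUNK_HOME/bin/splunk list inputstatus
```

This reports each monitored file's read position and whether the tailing processor has it open, which often points at common causes such as exhausted file descriptors (the max_fd limit in limits.conf) or files being skipped by an ignoreOlderThan setting on the monitor stanza.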
I am somewhat puzzled by the performance of this search. When I leave the wildcards off, the search is WAY faster than with the wildcards. In essence, shouldn't I get the same results from both searches?

    index="myindex" sourcetype="mysourcetype" "my term"

vs.

    index="myindex" sourcetype="mysourcetype" "*my term*"

In another answer I saw a Splunk employee state that "my term" was essentially the same as _raw="*my term*". The performance difference on my system is undeniable, so my question is: is there a reason I would want or need to put the wildcards in? Would I potentially get different results? Thanks.