
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello All,

I have updated indexes.conf, raising homePath.maxDataSizeMB from 13 GB to 30 GB and maxTotalDataSizeMB from 13 GB to 30 GB. Since then, search results no longer show the older data. Can anyone explain how to check what happened and how to get the old data back into Splunk?

Current indexes.conf values:

#aws_riskinfo
[aws_riskinfo]
homePath = volume:hotwarm/aws_riskinfo/db
coldPath = volume:hotwarm/aws_riskinfo/colddb
thawedPath = /prod/appli_is/splunk_indexes/archives/ARCH1Y/aws_riskinfo/thaweddb
homePath.maxDataSizeMB = 30000
coldPath.maxDataSizeMB = 0
maxTotalDataSizeMB = 30000
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 7776000
#maxDataSize = auto_high_volume
coldToFrozenDir = /prod/appli_is/splunk_indexes/archives/ARCH1Y/aws_riskinfo/frozen
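A minimal sketch of how to check where the remaining buckets start and whether older buckets were rolled off (assuming the index name matches the stanza above; dbinspect reports per-bucket metadata):

| dbinspect index=aws_riskinfo
| stats count min(startEpoch) AS oldest_event max(endEpoch) AS newest_event BY state
| fieldformat oldest_event=strftime(oldest_event, "%Y-%m-%d %H:%M")
| fieldformat newest_event=strftime(newest_event, "%Y-%m-%d %H:%M")

If the oldest startEpoch is newer than expected, the buckets likely rolled to the coldToFrozenDir path shown above; note that frozenTimePeriodInSecs = 7776000 is 90 days, so events older than that are frozen regardless of the size limits, and frozen buckets only become searchable again by thawing them into thawedPath.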
Hi all, would love help with this one. I currently have a query that produces four different processing times by sessionId. I want the ability to remove any sessionId from the results that has a blank/null value: if any one of the four processing times is blank or null, drop that sessionId from the stats. After that, I would like to add those four processing times into one processing time by _time and take the perc95. Any assistance is appreciated; let me know if more clarification is needed. Thank you!!

index= [...]
| bucket _time span=1h
| eval apiIdentifier=coalesce('msg.apiIdentifier', apiIdentifier)
| eval apiName=coalesce('msg.apiName', apiName)
| eval apiVersion=coalesce('msg.apiVersion', apiVersion)
| eval clientRequestId=coalesce('msg.clientRequestId', clientRequestId)
| eval companyId=coalesce('msg.companyId', companyId)
| eval contentType=coalesce('msg.contentType', contentType)
| eval datacenter=coalesce('msg.datacenter', datacenter)
| eval entityId=coalesce('msg.entityId', entityId)
| eval logType=coalesce('msg.logType', logType)
| eval processingTime=coalesce('msg.processingTime', processingTime)
| eval responseCode=coalesce('msg.responseCode', responseCode)
| eval serverId=coalesce('msg.serverId', serverId)
| eval sessionId=coalesce('msg.sessionId', sessionId)
| eval timestamp=coalesce('msg.timestamp', timestamp)
| eval totalResponseTime=coalesce('msg.totalResponseTime', totalResponseTime)
| eval 'session-id'=coalesce(a_session_id, sessionId)
| eval AM2JSRT = if(a_log_type=="Response" AND isnum(a_req_process_time), a_req_process_time, 0),
       JS2ISRT = if(logType=="JS2IS", processingTime, 0),
       JS2AMRT = if(logType=="JS2AM", processingTime, 0),
       AM2DPRT = if(a_log_type=="Response" AND isnum(a_res_process_time), a_res_process_time, 0)
| stats sum(AM2JSRT) AS AM2JSRespTime, sum(JS2ISRT) AS JS2ISRespTime, sum(JS2AMRT) AS JS2AMRespTime, sum(AM2DPRT) AS AM2DPRespTime BY sessionId
| eval gw_processingTime=(AM2JSRespTime+JS2ISRespTime+JS2AMRespTime+AM2DPRespTime)
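A hedged sketch of how the tail of the pipeline could change to cover both requirements (assumption: since the if() calls above code missing values as 0, a zero sum is treated as blank/null; _time is kept in the stats BY clause so perc95 can be taken per hour):

| stats sum(AM2JSRT) AS AM2JSRespTime, sum(JS2ISRT) AS JS2ISRespTime, sum(JS2AMRT) AS JS2AMRespTime, sum(AM2DPRT) AS AM2DPRespTime BY _time sessionId
| where AM2JSRespTime>0 AND JS2ISRespTime>0 AND JS2AMRespTime>0 AND AM2DPRespTime>0
| eval gw_processingTime=AM2JSRespTime+JS2ISRespTime+JS2AMRespTime+AM2DPRespTime
| stats perc95(gw_processingTime) AS p95_gw_processingTime BY _time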
Hi, what's the correct way to upgrade the Lookup Editor app without losing lookups? Should I replace the current lookup_editor folder with the new one and then run $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target http://<SHcaptain>:<port> -preserve-lookups true? Or can I just replace some of its folders or files? Is there anything I need to keep or do in order not to lose the old lookups? Thank you in advance for any help.
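For reference, a hedged sketch of the deployer-side form of that command (assumptions: a search head cluster managed from a deployer, -target pointing at any cluster member's management URI, and placeholder -auth credentials):

$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<sh_member>:8089 -preserve-lookups true -auth admin:<password>

With -preserve-lookups true, lookup files that were modified on the cluster members are preserved rather than overwritten by the pushed bundle.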
Hello Community, I'm currently trying to integrate Azure China logs into Splunk but am facing some difficulties. I noticed that the Splunk Azure Add-On only seems to support the Azure Government and Global regions. Has anyone managed to successfully add logs from Azure China into Splunk using this or another method? I'd appreciate any guidance or resources you could provide on this topic. Thank you.
Hi, my initial Splunk query was:

index="ABC" sourcetype="DEF"
| stats dc(fruit) AS "Fruits" by Diet
| sort -"Fruits"

However, I need to add a new field, "Fruits 7 days ago", which finds the distinct count of "fruit" by "Diet" for the previous week. My current query is as follows:

index="ABC" sourcetype="DEF"
| stats dc(fruit) AS "Fruits" by Diet
| append [search earliest=-1w@w latest=@w index="ABC" sourcetype="DEF"
    | stats dc(fruit) AS "Fruits 7 days ago" by Diet ]
| sort -"Fruits", "Fruits 7 days ago"

I should be getting 3 output fields: "Diet", "Fruits", and "Fruits 7 days ago", but I am still only getting "Diet" and "Fruits". Can you please help? Many thanks!
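A hedged sketch of one way to get all three columns onto a single row per Diet (assumption: the goal is one row per Diet; append stacks results vertically, so the second search's column only appears on its own appended rows):

index="ABC" sourcetype="DEF"
| stats dc(fruit) AS "Fruits" by Diet
| join type=left Diet
    [ search earliest=-1w@w latest=@w index="ABC" sourcetype="DEF"
      | stats dc(fruit) AS "Fruits 7 days ago" by Diet ]
| sort -"Fruits", "Fruits 7 days ago"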
Hello clever people,

Would anyone be able to help me build a regex that would work at the SPL level, e.g. something like:

| rex mode=sed field=_raw "s/regex_example//g"

I wanted to test the result first before I add it to props on the indexers. Below is the raw log; I would like to keep just the parts in bold, and all the rest should be dropped/cleared.

-----------------------------------------------------
[meta sequenceId="-2077347367"]10000 - [action:"Accept"; conn_direction:"Internal"; flags:"dd06212"; ifdir:"inbound"; ifname:"bond3.32"; logid:"0"; loguid:"{ 000.000.000.000}"; origin:"000.000.000.000"; originsicname:"CN=XXXXXXXX,O= XXXXXXXX. XXXXXXXX.q7vvv"; sequencenum:"1457"; time:"1686217674"; version:"5"; __policy_id_tag:"product=cccccccc-1[db_tag={ XXXXXXXX-8ED31 XXXXXXXX };mgmt= XXXXXXXX xxx1;date=168XXXXXXXX;policy_name=XXXXXXXX-1\]"; dst:"000.000.000.000"; log_delay:"168XXXXXXXX "; layer_name:" XXXXXXXX "; layer_name:" XXXXXXXX "; layer_uuid:" XXXXXXXX -49d7-a207-a90ea5dd66fb"; layer_uuid:"cdc569c2-d869- XXXXXXXX "; match_id:"14x"; match_id:"50331649"; parent_rule:"0"; parent_rule:"0"; rule_action:"Accept"; rule_action:"Accept"; rule_name:" XXXXXXXX Heartbeat -> Platfxxxx"; rule_name:" XXXXXXXX "; rule_uid:"211567a0-d33a- XXXXXXXX "; rule_uid:" XXXXXXXX -4bde-a9c0-3cbaefd188b6"; product:" XXXXXXXX "; proto:"6"; s_port:" XXXXXXXX "; service:"3002"; service_id:"xxxx-Control"; src:"000.000.000.000"]
-----------------------------------------------

Thank you all in advance!
We have a microservices application that runs on Kubernetes, and we are planning to monitor our application using Splunk Enterprise. All the pods in our application expose OpenMetrics via the following annotations:

annotations:
  prometheus.io/path: /q/metrics
  prometheus.io/port: 8443
  prometheus.io/scheme: https
  prometheus.io/scrape: true

Is there any way to scrape OpenMetrics from all the pods and send them to Splunk Enterprise? Please let us know.

Thanks,
Elavarasan S.
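One route that is often suggested is the Splunk OpenTelemetry Collector for Kubernetes, which can auto-discover annotated pods and forward their metrics. A minimal, hedged sketch of the discovery piece (assumptions: the k8s_observer extension and receiver_creator receiver are enabled in your collector build, the backtick endpoint expansion follows the receiver_creator convention, and the HTTPS/TLS settings your pods need are omitted here):

receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      prometheus_simple:
        # discover any pod that opted in via the scrape annotation
        rule: type == "pod" && annotations["prometheus.io/scrape"] == "true"
        config:
          metrics_path: /q/metrics
          # `endpoint` expands to the discovered pod IP
          endpoint: '`endpoint`:8443'

The scraped metrics would then flow through the collector's pipeline to a Splunk exporter (e.g. HEC-based) pointed at your Splunk Enterprise instance.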
Hello, I'm trying to migrate a dashboard to Dashboard Studio. My dashboard uses <panel> with <html> to add multiple rich-text-formatted panels. It seems like Dashboard Studio only supports one font style per viz.text. Are there any other options available that would allow me to have some or all of these features?

- Different font sizes, weights, italics, and colors in the same text area
- Bulleted lists
- Links whose text is different from the actual URL

Like... maybe if there were a way to use Markdown or something, that would be perfect!

Update: I see that Splunk 9.0.0 adds support for a Markdown panel. My company's Splunk instance is 8.2.7; are there any options available until it gets updated to v9?
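For reference once on 9.x: the Markdown panel lives in the dashboard's JSON source. A hedged sketch of what such a visualization entry can look like (assumption: the splunk.markdown type and its markdown option as introduced around 9.0; the content is illustrative):

{
  "type": "splunk.markdown",
  "options": {
    "markdown": "## Notes\n- **bold** and *italic* in one block\n- [link text](https://example.com)"
  }
}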
Hi, I'm attempting to create a method to exclude service accounts from the user values without excluding one particular service account. Is there a generic approach we can use to identify and exclude both existing and future service accounts? How could we write the search for this use case? Thanks.
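A hedged sketch of one generic pattern, assuming service accounts follow a naming convention (the "svc_" prefix and the "svc_special" exception below are hypothetical placeholders):

index=your_index sourcetype=your_sourcetype
| where NOT match(user, "^svc_") OR user=="svc_special"

If naming isn't consistent, the usual fallback is a maintained lookup of known service-account names that the search excludes via a NOT [ | inputlookup ... ] subsearch, with the exception account OR'd back in the same way.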
Hello,

I have a problem parsing this CSV-format log: syslog adds time and host fields at the beginning of my log.

Jun 8 10:47:33 sv43562  "Thu Jun 8 10:47:05 2023","email@gmail.com","HTTPS","url","Allowed","General Browsing","General Browsing","Travel","Travel","None","None","0","None","None","GET","200","Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0),"None"

How can I fix this?

Thanks for your help.
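A hedged sketch of stripping the syslog prefix at index time with a SEDCMD in props.conf (assumptions: the prefix is always "<Mon> <day> <time> <host> " before the first double quote, and your_sourcetype is a placeholder):

[your_sourcetype]
SEDCMD-strip_syslog_prefix = s/^[A-Z][a-z]{2}\s+\d+\s+[\d:]+\s+\S+\s+//

The same expression can be trialled at search time first:

| rex mode=sed field=_raw "s/^[A-Z][a-z]{2}\s+\d+\s+[\d:]+\s+\S+\s+//"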
Hi, I have two source types, CardMember_cycle_data (card member cycle date info) and CardMember_Demographic_data (card member demographic info). Both files have 3-4 million records each. (All dates are in MM/DD/YYYY format.)

CardMember_cycle_data
CM_id   Cycle_Date
CM1     05/01/2023
CM1     06/01/2023
CM2     04/03/2023
CM2     05/03/2023
CM2     06/03/2023
--------------------------
CardMember_Demographic_data
CM_id   Transaction_Dt   Prod_Code
CM1     01/02/2020       CR
CM1     05/28/2023       XX
CM1     06/07/2023       AB
CM2     04/14/2023       YY
CM2     06/01/2023       CD

My need is: for each card member present in CardMember_cycle_data, I need to get the latest Prod_Code as of the LATEST Cycle_Date. Hence the output will be:

CardMember   Latest_Cycle_Date   Prod_Code
CM1          06/01/2023          XX
CM2          06/03/2023          CD
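A hedged sketch of one approach (assumptions: both sourcetypes share the CM_id field as shown, the dates parse with %m/%d/%Y, and the sort/dedup step may be expensive at 3-4M records):

sourcetype=CardMember_cycle_data OR sourcetype=CardMember_Demographic_data
| eval cycle_epoch=if(sourcetype=="CardMember_cycle_data", strptime(Cycle_Date, "%m/%d/%Y"), null())
| eval tx_epoch=if(sourcetype=="CardMember_Demographic_data", strptime(Transaction_Dt, "%m/%d/%Y"), null())
| eventstats max(cycle_epoch) AS latest_cycle BY CM_id
| where tx_epoch<=latest_cycle
| sort 0 - tx_epoch
| dedup CM_id
| eval Latest_Cycle_Date=strftime(latest_cycle, "%m/%d/%Y")
| table CM_id Latest_Cycle_Date Prod_Code

Against the sample above this yields CM1 / 06/01/2023 / XX and CM2 / 06/03/2023 / CD, since transactions after the latest cycle date are filtered out before the dedup keeps the most recent remaining row.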
I have configured a Splunk HF with the following inputs.conf stanzas (details changed) for two new device logs. Note the explicit host setting for each:

[monitor:///path/splunklogs/10.10.1.1/*.log]
disabled = false
host = myhost10
sourcetype = syslog
index = my_index

[monitor:///path/splunklogs/10.20.1.1/*.log]
disabled = false
host = myhost20
sourcetype = syslog
index = my_index

I created the index at the same time, so to validate it's working I simply ran the search "index=my_index" (knowing there would be nothing else there). But surprisingly, the search returns events from three hosts instead of two! The first device looks okay, but for the second one (i.e. the second inputs stanza), some of the events show the wrong host value. It seems to be picking up a host value embedded in the event, but I don't see how, and I thought the inputs 'host' setting would override that anyway. So, in the example below, the host SHOULD be set to 'myhost20' from the inputs stanza, but instead shows as host 'xyz000000001234'. Can anyone explain how that could be happening, and how to prevent it?

Sample event, with the standard fields below it:

2023-06-08T14:38:51+10:00 Sev=notice Facility=user Hostname=<loadbalancer> Header="Client " Message="Client IP: 10.20.1.1 | <109>Jun 8 14:40:11 xyz000000001234 some_field -: AUDIT [dvc="10.20.1.1" dvchost="10.20.1.1" version="7.7" user="<user>" role="" source="10.1.2.3" type="user_action" outcome="success" message="2023-06-08T14:40:11+10:00 abc120000001111 sshd\[2876138\]: Accepted keyboard-interactive/pam for device from 10.1.2.3 port 12345 ssh2"]"

host = xyz000000001234
index = my_index
source = /path/splunklogs/10.20.1.1/10.20.1.1-08-06-2023:14.log
sourcetype = syslog

Thanks for any response. R.
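A hedged note and sketch: Splunk's built-in props for sourcetype=syslog include a host-override transform (syslog-host) that rewrites host from a syslog header found in the event, which could match the "<109>Jun 8 14:40:11 xyz000000001234 ..." fragment embedded in the message. Assuming that is the cause, one option is to blank out the inherited transform for this sourcetype in props.conf on the HF:

[syslog]
# blank out the default syslog-host transform so the inputs.conf host is kept
TRANSFORMS =

Alternatively, use a custom sourcetype name that does not inherit the syslog defaults.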
I have a dashboard using a JS script. The issue is that the JS script is loaded at the beginning. Because the method using $(document).ready is not working for an unknown reason, I had the idea of refreshing the dashboard, or just the search that uses the JS script. But the only solutions I have found refresh every X amount of time. Is there a way to refresh a dashboard, or just one search, a single time after loading has ended? Thanks in advance!
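A hedged sketch of one alternative to $(document).ready in Simple XML dashboards: the splunkjs "ready!" loader runs your code once after the dashboard's components are initialized, and an individual search can be re-dispatched a single time from there (my_search_id is a hypothetical id attribute on the dashboard's <search>):

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // runs once, after all dashboard components are instantiated
    var search = mvc.Components.get('my_search_id');
    if (search) {
        search.startSearch(); // re-dispatch this one search a single time
    }
});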
Hi,

So I have this search:

| tstats prestats=true count WHERE index=*_ot (source="*sgre*" OR o_wp="*sgre*") AND (source="*how02*" OR o_wp="*how02*") BY _indextime
| eval _time=_indextime
| timechart count span=1h

which gives me the error:

When used for 'tstats' searches, the 'WHERE' clause can contain only indexed fields. Ensure all fields in the 'WHERE' clause are indexed. Properly indexed fields should appear in fields.conf.

Does anyone know the solution to this?
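A hedged sketch of a workaround, assuming o_wp is a search-time field (tstats can only filter on indexed fields such as source): run the same filter as an ordinary event search instead, at the cost of tstats speed:

index=*_ot (source="*sgre*" OR o_wp="*sgre*") AND (source="*how02*" OR o_wp="*how02*")
| eval _time=_indextime
| timechart count span=1h

If o_wp genuinely needs to stay inside tstats, it would have to become an indexed field (declared in fields.conf, as the error message suggests).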
There are two options, "enable to risk index" and "enable to test index", in the Content Management view, but both are greyed out by default, so the function does not seem to work properly. My understanding is that we should be able to select a correlation search and set its risk index, but we cannot do it there. Is it possible to select multiple correlation searches and enable the risk index for all of them at once?
Hi all, I have a JSON file like this:

{
  "NUM": "#7",
  "TIME": "May 23, 2022, 09:24:40 PM",
  "STATUS": "SUCCESS",
  "DURATION": "2 hours, 13 minutes",
  "URL": "abc.com",
  "COMPONENTS": [{
      "NAME": "abc",
      "Tasks": [{
        "ITEM": [{
            "ITEM_ID": "2782508",
            "FILE": "file1"
          }, {
            "WORKITEM_ID": "2782508",
            "FILE": "file2"
          }, {
            "ITEM_ID": "2782508",
            "FILE": "file1"
          }, {
            "ITEM_ID": "2782508",
            "FILE": "file3"
          }
        ]
      }]
    }, {
      "NAME": "xyz",
      "tasks": [{
        "ITEM": [{
          "ITEM_ID": "2811478",
          "FILE": "file2"
        }]
      }]
    }
  ]
}

How can I create a table with columns num, time, status, duration, component_name, itemid, file? How can I make all the values appear on different rows instead of together?
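A hedged sketch using spath and mvexpand (assumptions: the whole JSON arrives as one event, the top-level fields extract automatically via KV_MODE=json, and the Tasks/tasks casing is as pasted — the path below only matches the capitalized "Tasks", so the "xyz" component would need a second pass or a consistent key name):

| spath path=COMPONENTS{} output=component
| mvexpand component
| spath input=component path=NAME output=component_name
| spath input=component path=Tasks{}.ITEM{} output=item
| mvexpand item
| spath input=item path=ITEM_ID output=itemid
| spath input=item path=FILE output=file
| table NUM TIME STATUS DURATION component_name itemid file

The two mvexpand calls are what split one event into one row per component and then one row per item.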
Hi, I have a sample like this:

Source
Sample time from tx-templated
efghi examl from templated
Sample time from [templated]

I want to extract the value of the last word, as shown below:

Source                             Extract Value
Sample time from tx-templated      tx-templated
efghi examl from templated         templated
Sample time from [templated]       templated

How can I do this via regex? Thanks in advance for your help.
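A minimal sketch that matches all three samples (assumption: the "word" is the final run of letters, digits, underscores, or hyphens, optionally followed by a closing bracket and trailing whitespace):

| rex field=Source "(?<Extract_Value>[\w-]+)\]?\s*$"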
Hey all,

So I'm relatively new to Splunk. I have a CSV file that lists multiple computers, and I've created a dashboard that produces reports based on the parameters the user chooses. The search by itself is fine and looks like this:

index=whatever sourcetype=whateverXxX
    [ | inputlookup FileName.csv | search Type="Prod" | return host=IIS_Server ]
    OR ([ | inputlookup FileName.csv | search Type="Prod" | return host=IIS_for_XServers cs_uri_stem=Pattern_for_Servers ])
| timechart span=5m count by host

But when I started placing that search in a dashboard with user inputs, it looked like this:

index=whatever sourcetype=whateverXxX
    [ | inputlookup FileName.csv | $Type_of_deployment$ | return host=$IIS_Server ]
    OR ([ | inputlookup FileName.csv | $Type_of_deployment$ | return host=$IIS_for_XServers cs_uri_stem=$Pattern_for_Servers ])
| timechart span=$Span_Timechart$ count by host

Once implemented, I get "Search is waiting for input..." even after selecting an input and clicking the submit button. But I found that the working version for the dashboard is:

index=whatever sourcetype=whateverXxX
    [ | inputlookup FileName.csv | $Type_of_deployment$ | return host=IIS_Server ]
    OR ([ | inputlookup FileName.csv | $Type_of_deployment$ | return host=IIS_for_XServers cs_uri_stem=Pattern_for_Servers ])
| timechart span=$Span_Timechart$ count by host

If you compare the two, the difference is <$field> versus <field> with the return command. I don't understand the difference between <$field> and <field>. I've searched everywhere, and the documentation still confuses me, as do posts from this community forum. Why does it matter in a dashboard? When I use either format (<$field> or <field>) in normal searching, there's no problem, and both actually return exactly the same results, which, according to the documentation and my research, isn't even supposed to happen. But it throws a fit when I place it in the dashboard. Can someone ELI5?

Some sources I've used that don't make much sense to me:
https://community.splunk.com/t5/Splunk-Search/How-to-use-INPUTLOOKUP-command-in-splunk/m-p/92212
https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/SearchReference/Return
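To illustrate the mechanism (a hedged sketch with hypothetical input names): in Simple XML, anything wrapped in $...$ is a dashboard token that is textually substituted before the search runs, and a search stays at "waiting for input" until every token it references has a value. So return host=$IIS_Server makes the dashboard wait for a token named IIS_Server, whereas in return host=IIS_Server the name is just part of the SPL (the return command's field name).

<input type="dropdown" token="Span_Timechart">
  <label>Timechart span</label>
  <choice value="5m">5 minutes</choice>
  <default>5m</default>
</input>
<!-- in the panel search, $Span_Timechart$ becomes the literal text 5m -->
<query>... | timechart span=$Span_Timechart$ count by host</query>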
Hi Splunkers,

Is there any way to add comments to a dashboard panel to give more information for easier understanding? For example: if there's a jump in the trend from the previous month to this month, I need to put a comment box on the panel explaining why the jump happened. TIA
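A hedged sketch for a classic Simple XML dashboard (the wording and layout are illustrative): an <html> panel placed next to or below the chart can hold free-form commentary:

<row>
  <panel>
    <html>
      <p><b>Analyst note:</b> The jump from May to June reflects the onboarding of new data sources.</p>
    </html>
  </panel>
</row>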