All Topics

Thanks in advance. I have four inputs (Time, Environment, Application Name, and Interface Name) and two panels, Finance and Bank. Both panels have different application names and interface names, so I tried to use depends and rejects on the inputs. When I switch from one panel to the other, the inputs (dropdown and text box) stay the same, but their values need to change per panel.

<row>
  <panel id="panel_layout">
    <input id="input_link_split_by" type="link" token="tokSplit" searchWhenChanged="true">
      <label></label>
      <choice value="Finance">OVERVIEW</choice>
      <choice value="BankIntegrations">BANKS</choice>
      <default>OVERVIEW</default>
      <initialValue>OVERVIEW</initialValue>
      <change>
        <condition label="Finance">
          <set token="Finance">true</set>
          <unset token="BankIntegrations"></unset>
        </condition>
        <condition label="BankIntegrations">
          <set token="BankIntegrations">true</set>
          <unset token="Finance"></unset>
        </condition>
      </change>
    </input>
  </panel>
</row>
<row>
  <panel>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Time Interval</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="env" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="*">ALL</choice>
      <choice value="DEV">DEV</choice>
      <choice value="TEST">TEST</choice>
      <choice value="PRD">PRD</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>ApplicationName</label>
      <choice value="*">ALL</choice>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor,p-oracle-fin-processor-2,p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>ApplicationName</label>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>p-oracle-fin-processor,p-oracle-fin-processor-2,p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor,p-oracle-fin-processor-2,p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
  </panel>
</row>
Hi, I have the raw data/event below. Splunk gets the raw data once every 2 hours and only 4 times a day: the job runs at 11:36, 13:36, 15:36 and 17:36, and per day I am getting ~2.5K events. The field DATETIME tells what time the job ran:

2024-04-15 21:36:58.960, DATETIME="2024-04-15 17:36:02", REGION="India", APPLICATION="webApp", CLIENT_CODE="ind", MARKET_CODE="SEBI", TRADE_COUNT="1"

What I am looking for: when I run the dashboard, I want to monitor the trade count by MARKET_CODE over the latest DATETIME. For instance, if I run the dashboard at 14:00, the field DATETIME might have 11:36 (~600 events) and 13:36 (~600 events). I want to see only the 13:36 ~600 events, and the metric would be TRADE_COUNT by MARKET_CODE. Thanks, Selvam.
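One way to sketch this, with the index and sourcetype below as placeholders: since DATETIME is formatted YYYY-MM-DD HH:MM:SS, the lexicographically largest value is also the chronologically latest, so eventstats can isolate the newest batch:

```
index=trades sourcetype=trade_batch DATETIME=*
| eventstats max(DATETIME) as latest_run
| where DATETIME = latest_run
| stats sum(TRADE_COUNT) as trade_count by MARKET_CODE
```

Run at 14:00 over a time range that covers the day, this keeps only the 13:36 batch and sums TRADE_COUNT per MARKET_CODE.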
Hello Team, I am trying for a solution using a multiselect input filter where the index token is passed to panels. From the code below I currently see the filter values "Stack1", "Stack2" and "Stack3", but the issue is that the value passed is the label. I need index_tkn to hold the values aws_stack02_p, aws_stack01_p, aws_stack01_n.

<input type="multiselect" token="index_tkn" searchWhenChanged="false">
  <label>Select Stack</label>
  <valuePrefix>index="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>index</fieldForLabel>
  <fieldForValue>label</fieldForValue>
  <search>
    <query>index IN (aws_stack02_p, aws_stack01_p, aws_stack01_n) | eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3") | stats count by label</query>
    <earliest>$time_tkn.earliest$</earliest>
    <latest>$time_tkn.latest$</latest>
  </search>
</input>
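For what it is worth, in a multiselect `fieldForLabel` is what the user sees and `fieldForValue` is what the token carries; in the snippet above the two look swapped. A minimal sketch of the corrected mapping, keeping the same populating search:

```xml
<fieldForLabel>label</fieldForLabel>
<fieldForValue>index</fieldForValue>
```

Note the populating search must still output the index field for fieldForValue to work, e.g. end it with `| stats count by label, index`. With that, $index_tkn$ expands to `index="aws_stack02_p" OR index="aws_stack01_p" ...` while the dropdown still shows Stack1/Stack2/Stack3.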
I am using Splunk Cloud and the design is UF > HF > Splunk Cloud. On the HFs we have an outputs.conf like below. Here sslPassword is the same for all HFs when I am using multiple heavy forwarders:

root@hostname:/opt/splunk/etc/apps/100_stackname_splunkcloud/local # cat outputs.conf

[tcpout]
sslPassword = 27adhjwgde2y67dvff3tegd36scyctefd73******************
channelReapLowater = 10
channelTTL = 300000
dnsResolutionInterval = 300
negotiateNewProtocol = true
socksResolveDNS = false
useClientSSLCompression = true
negotiateProtocolLevel = 0
channelReapInterval = 60000
tcpSendBufSz = 5120000
useACK = false

[tcpout:splunkcloud]
useClientSSLCompression = true
maxQueueSize = 250MB
autoLBFrequency = 300
My raw data looks like below:

_raw = {"id":"0","severity":"Information","message":"CPW Total= 844961,SEQ Total =244881, EAS Total=1248892, VRS Total=238, CPW Remaining=74572, SEQ Remaining=22, EAS Remaining =62751, VRS Remaining =0, InvetoryDate =4/15/2024 6:16:07 AM"}

I want to extract fields from message so the result looks like below. I tried through regex but I am unable to extract them. Please help me create extractions for:

CPW Total = 844961
SEQ Total = 244881
EAS Total = 1248892
VRS Total = 238
CPW Remaining = 74572
SEQ Remaining = 22
EAS Remaining = 62751
VRS Remaining = 0
InvetoryDate = 4/15/2024 6:16:07 AM
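A sketch of one brute-force extraction, one rex per field, run against the already-extracted message field (field names cannot contain spaces, so they are underscored; the source's own spelling "InvetoryDate" is kept in the pattern):

```
| rex field=message "CPW Total\s*=\s*(?<CPW_Total>\d+)"
| rex field=message "SEQ Total\s*=\s*(?<SEQ_Total>\d+)"
| rex field=message "EAS Total\s*=\s*(?<EAS_Total>\d+)"
| rex field=message "VRS Total\s*=\s*(?<VRS_Total>\d+)"
| rex field=message "CPW Remaining\s*=\s*(?<CPW_Remaining>\d+)"
| rex field=message "SEQ Remaining\s*=\s*(?<SEQ_Remaining>\d+)"
| rex field=message "EAS Remaining\s*=\s*(?<EAS_Remaining>\d+)"
| rex field=message "VRS Remaining\s*=\s*(?<VRS_Remaining>\d+)"
| rex field=message "InvetoryDate\s*=\s*(?<InventoryDate>\d+/\d+/\d+ [\d:]+ [AP]M)"
| table CPW_Total SEQ_Total EAS_Total VRS_Total CPW_Remaining SEQ_Remaining EAS_Remaining VRS_Remaining InventoryDate
```

If the JSON is not auto-extracted, run `| spath` first so the message field exists.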
I am trying to build some modular documentation as a Splunk app on a site with an indexer and search head cluster. Some of the reasoning behind this is that I spend quite some time researching existing configuration when I'm about to make new changes. Thus I would like to be able to create views showing me details from props, transforms and indexes on the search heads. My question is: do you see any potential pitfalls in having the configuration on the search heads as well as the indexers? Or is there another solution for viewing configuration on the indexer peers from the search heads? Cheers!
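One hedged alternative to duplicating the configuration onto the search heads: since the indexers are already search peers, the REST endpoints can read their merged config remotely. A sketch (endpoint and attribute names from memory; verify against your version's REST API reference):

```
| rest /services/configs/conf-props splunk_server=*
| table splunk_server title LINE_BREAKER TIME_PREFIX TIME_FORMAT TRANSFORMS*
```

`| rest /services/configs/conf-transforms` and `| rest /services/data/indexes` work the same way, so the documentation views can live on the search heads with nothing extra deployed to the indexers.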
@all When I'm trying to install and configure the OTel Collector to send data from agent mode to a gateway collector in Splunk Observability Cloud, I'm facing many challenges and am not able to get the agent to send data to the gateway. Can anyone guide me on how to solve this issue?
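Without logs it is hard to say, but the usual first check is the agent's exporter endpoint. A minimal sketch of the agent-side config fragment pointing at a gateway (the hostname is a placeholder and 4317 is the default OTLP gRPC port; adjust to your deployment):

```yaml
# agent config fragment: forward everything to the gateway over OTLP
exporters:
  otlp:
    endpoint: "my-gateway-host:4317"   # gateway's OTLP gRPC listener (assumed hostname)
    tls:
      insecure: true                   # only if the gateway is not serving TLS

service:
  pipelines:
    metrics:
      exporters: [otlp]
    traces:
      exporters: [otlp]
```

If the agent logs show connection refused or TLS handshake errors against that endpoint, the problem is network/TLS between agent and gateway rather than the Observability Cloud token.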
I'm sure someone here has worked on a PowerShell script to install Splunk on different Windows hosts remotely. Can I get help with that? My PowerShell skills are really weak.
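Not a finished script, but a minimal sketch of the usual pattern (the share path, MSI filename, and deployment server below are placeholders; it assumes PowerShell remoting/WinRM is enabled on the targets and the account has admin rights):

```powershell
# Install the Splunk Universal Forwarder on a list of remote Windows hosts.
# Assumes: WinRM enabled, MSI on a share readable by every host.
$hosts = Get-Content .\hosts.txt   # one hostname per line

foreach ($h in $hosts) {
    Invoke-Command -ComputerName $h -ScriptBlock {
        Start-Process msiexec.exe -Wait -ArgumentList @(
            '/i', '\\fileshare\installers\splunkforwarder-9.x-x64-release.msi',
            'AGREETOLICENSE=Yes',
            'DEPLOYMENT_SERVER="ds.example.com:8089"',
            '/quiet'
        )
    }
}
```

AGREETOLICENSE and DEPLOYMENT_SERVER are documented Splunk MSI properties; check the UF install docs for the full list (credentials, receiving indexer, etc.) before rolling this out.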
Hello,

1. Is there an option (built in or manually built) for a container to view the history of older containers with the same artifacts and details? It would make an analyst's work easier to see the notes and how the older case was solved.

2. By enabling "logging" for a playbook, where are the logs stored for later access (besides via debugging in the UI)?

Thank you in advance!
Hi, I finished upgrading Splunk ES to 7.3.0 on 1 of 2 non-clustered Search Heads and I receive this error in the Search Head Post Install Configuration wizard menu: "Error in 'essinstall' command: Automatic SSL enablement is not permitted on the deployer". Splunk support recommended changing a web.conf setting to "splunkdConnectionTimeout = 3000", which I added to the system file before restarting splunkd. Unfortunately this timeout setting does not fix this "known issue". I selected the Enable SSL option in the post-config process, as I know that SSL is enabled in both the Deployer and SH web configs. If anyone has a workaround, or can suggest how I can enable SSL after the post configuration of Splunk ES on both the SH and Deployer, it would be appreciated. Thanks
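For reference, the timeout change support suggested looks like this in web.conf (it did not resolve the error in my case):

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
splunkdConnectionTimeout = 3000
```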
index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| where count < 100
| bin span=1m _time

We have an alert with the above query. The alert triggers when the count for a host is less than 100, but not when the count for a host is zero, since a host with no events produces no row at all. How do I make the alert trigger even when count=0?
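One common fix is to inject a zero row for every expected host, so absent hosts still appear in the results (the host list is copied from the search above; an inputlookup of expected hosts would work the same way):

```
index=app-logs sourcetype=app-data source=*app.logs*
| stats count by host
| append
    [| makeresults
     | eval host=split("appdatajs01,appdatajs02,appdatajs03,appdatajs04", ",")
     | mvexpand host
     | eval count=0
     | fields host count]
| stats max(count) as count by host
| where count < 100
```

A host that logged nothing keeps only its injected 0 row, so `max(count)` is 0 and the `where` clause now matches it.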
I have two logs below. Log A appears throughout the environment and shows for all users; log B is limited to specific users. I only need times for users in log B.

log a: There is a file has been received with the name test2.txt
log b: The file has been found at the second destination C://user/test2.txt

I am trying to write a query that captures the time between log A and log B without doing a subsearch. So far I have:

index=a env=a account=a ("There is a file" OR "The file has been found")
| (extract filename from log A, filename2 from log B)
| eval Endtime = _time

Here is where I am lost. I was hoping to use if/match/like/eval to capture the start time where the log B filename can be found in log A. I have this so far:

| eval Starttime = if(match(filename,"There is%".filename2."%"),_time,0)

I am not getting any 1s, just 0s. I am pretty sure the problem is "There is%".filename2."%". How do I correct it?
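The likely culprit: `%` is a wildcard for like(), while match() expects a regular expression, so "There is%" never matches anything. A sketch of the same test with like() and string concatenation (field names as in the question):

```
| eval Starttime = if(like(filename, "%" . filename2 . "%"), _time, 0)
```

Note this only fires on events where both fields are present; to compare across the two log lines you would typically carry filename2 onto the log A events first (e.g. with eventstats or stats by a shared key) before doing the comparison.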
The event.url field stores all the URLs found in the logs. I want to create a new field called url_domain that captures only the domain of the URLs stored in event.url. Temporarily, I run the following from the search bar:

| rex field=event.url "^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+)"

What should I add in props.conf so that this extraction is applied permanently for the sourcetype "sec-web"?
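A sketch of the equivalent search-time extraction in props.conf on the search head; the `in <field>` clause tells EXTRACT to run against that field instead of _raw (verify the dotted field name works in your version, then restart or refresh):

```
# props.conf
[sec-web]
EXTRACT-url_domain = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+) in event.url
```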
So, I created a saved search and it was working fine. But I had to change its SPL, and when I tried it again it was still showing the old results, not the results of the new SPL. Why? Do I have to wait for the changes to take effect?
Hello Fellow Splunkers, I'm fairly new to ITSI and was wondering if this could be achieved. I'm looking to create a report that lists all the services I have in ITSI along with their associated entities, as well as the associated alerts or severity. Is there a query that could achieve this? Any pointers are very much appreciated! Also, any pointers on where I could find the data and bring it together in a search would be very helpful too. Thanks!
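I am not certain this matches every ITSI version, but as a starting point the KPI/severity results land in the itsi_summary index, so a sketch like this (field names from memory; verify with fieldsummary before relying on it) gives service and KPI severity:

```
index=itsi_summary
| stats latest(alert_severity) as current_severity by serviceid, kpi
```

Service-to-entity membership lives in the ITSI KV store rather than in this index, so it would need to be joined in separately; the ITSI docs on the service/entity REST endpoints are the place to confirm how.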
How do I take a dashboard global time (i.e. - $global_time.earliest$, $global_time.latest$) and convert it into a date to be used when searching a lookup file that only has a date column (i.e. - 04/15/2024)?
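One way, sketched with an assumed lookup file `mylookup.csv` whose `date` column is MM/DD/YYYY: run the panel search with the dashboard's time tokens and let addinfo hand you the range as epoch values:

```
| inputlookup mylookup.csv
| addinfo
| eval date_epoch = strptime(date, "%m/%d/%Y")
| where date_epoch >= info_min_time AND date_epoch < info_max_time
```

addinfo's info_min_time/info_max_time reflect the search's own time range, so the panel must still carry `<earliest>$global_time.earliest$</earliest>` and `<latest>$global_time.latest$</latest>` for this to pick up the picker's values.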
We need to easily identify the SQL submitted by DB Connect. We'd like to use Oracle's SET_MODULE procedure. How do we accomplish this in DB Connect?

call DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'Splunk_HF',
    action_name => 'DMP_Dashboard'
);
<put our DB Input SQL here>
I have an inputlookup with a list of pod names that we expect to be deployed to an environment. The list looks something like:

pod_name_lookup,importance
poda,non-critical
podb,critical
podc,critical

We also have data in Splunk that gives us pod_name, status, and importance. Results from the search below would look like this:

index=abc sourcetype=kubectl
| table pod_name, status, importance

poda-284489-cs834   Running   non-critical
podb-834hgv8-cn28s  Running   critical

Note podc was not found.

I need to compare the results from this search to the list from the inputlookup and show that podc was not found in the results and that it is a critical pod. I need to count how many critical and non-critical pods are missing, as well as table the list of missing pods.

I have tried several iterations of searches but haven't come across one that lets me compare a search result to an inputlookup using a partial match. eval result=if(like(pod_name_lookup... etc. is close, but like() requires a literal pattern rather than the wildcarded value of a field. Thoughts?
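A sketch of one approach (lookup filename assumed to be `expected_pods.csv`): instead of wildcard-matching the lookup value, strip the deployment suffix from pod_name so it equals the lookup's base name, then append the lookup and keep the rows the search never produced:

```
index=abc sourcetype=kubectl
| rex field=pod_name "^(?<pod_base>[^-]+)"
| stats count as found by pod_base
| append
    [| inputlookup expected_pods.csv
     | rename pod_name_lookup as pod_base]
| stats sum(found) as found, values(importance) as importance by pod_base
| where isnull(found)
```

This tables the missing pods with their importance (podc, critical, in the example); tack on `| stats count by importance` to get the critical vs non-critical counts. The rex assumes the base name never contains a hyphen; adjust the pattern if yours do.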
Hi All, we have Windows event logs and other application logs ingested into Splunk. There is no problem with the Windows event logs, but for our application logs the events stop suddenly and then start reporting again, even though the log file on Windows is being continuously updated with recent entries. The modified time does not get updated because of a Windows feature, but that is not the issue: the logs do start rolling in even while the modified time stays the same, as long as the file has new lines. We are currently using Splunk forwarder version 9.0.4. Can someone please help triage this? It is a problem with only one specific source on this Windows host; the other sources (Windows event logs) are flowing in properly.
I need to identify hosts with errors, but only in block mode.

MY SPL:

index=firewall event_type="error"
    [search index=firewall sourcetype="metadata" enforcement_mode=block]
| dedup host
| table event_type, host, ip

Each search works separately, but combined it sits at "parsing job" with no result for a long time. Thank you
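A sketch of one likely fix: have the subsearch return only the field you correlate on, so it expands into a compact `(host="..." OR host="...")` clause instead of dragging every field of every event back into the outer search (this assumes host is the common field between the two sourcetypes):

```
index=firewall event_type="error"
    [ search index=firewall sourcetype="metadata" enforcement_mode=block
      | fields host
      | dedup host ]
| dedup host
| table event_type, host, ip
```

If the metadata search still returns a very large or slow host list, the subsearch limits (default ~10,000 results / 60 seconds) may silently truncate it; a lookup-based join scales better in that case.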