All Posts
It also happens from both a manual search and a dashboard.
It shows up when I do | loadjob savedsearch="username:search:data_need". It was scheduled... I changed it, thinking that would fix it... but it did not.
I didn't disable workload management because I couldn't enable it: "This feature is not supported by Windows installation". These messages are generated by members of the IDXC, the manager node, and only one SHC member (the cluster captain).
Thanks in advance. I have four inputs (Time, Environment, Application Name and Interface Name) and two panels, one for finance and one for bank. Both panels have different application names and interface names, so I tried to use depends and rejects on the inputs. If I switch from one panel to the other, the inputs (dropdown and text box) stay the same, but their values need to change per panel.
<row>
  <panel id="panel_layout">
    <input id="input_link_split_by" type="link" token="tokSplit" searchWhenChanged="true">
      <label></label>
      <choice value="Finance">OVERVIEW</choice>
      <choice value="BankIntegrations">BANKS</choice>
      <default>OVERVIEW</default>
      <initialValue>OVERVIEW</initialValue>
      <change>
        <condition label="Finance">
          <set token="Finance">true</set>
          <unset token="BankIntegrations"></unset>
        </condition>
        <condition label="BankIntegrations">
          <set token="BankIntegrations">true</set>
          <unset token="Finance"></unset>
        </condition>
      </change>
    </input>
  </panel>
</row>
<row>
  <panel>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Time Interval</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="env" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="*">ALL</choice>
      <choice value="DEV">DEV</choice>
      <choice value="TEST">TEST</choice>
      <choice value="PRD">PRD</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>ApplicationName</label>
      <choice value="*">ALL</choice>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>p-wd-finance-api</default>
      <initialValue>p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>ApplicationName</label>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>p-wd-finance-api</default>
      <initialValue>p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
  </panel>
</row>
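One hedged observation on the snippet above: in Simple XML, <condition label="..."> matches the displayed choice label, which here is OVERVIEW or BANKS, not the underlying value, so conditions written as label="Finance" would never fire and the Finance/BankIntegrations tokens would never flip. A minimal sketch of the change handler keyed to the labels actually shown:

<change>
  <!-- condition label must match the choice label text, not the choice value -->
  <condition label="OVERVIEW">
    <set token="Finance">true</set>
    <unset token="BankIntegrations"></unset>
  </condition>
  <condition label="BANKS">
    <set token="BankIntegrations">true</set>
    <unset token="Finance"></unset>
  </condition>
</change>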
Hi, I have the raw data/event below. Splunk gets the raw data once every 2 hours, and only 4 times a day: the job runs at 11:36, 13:36, 15:36 and 17:36, and per day I am getting ~2.5K events. The field DATETIME tells what time the job ran:
2024-04-15 21:36:58.960, DATETIME="2024-04-15 17:36:02", REGION="India", APPLICATION="webApp", CLIENT_CODE="ind", MARKET_CODE="SEBI", TRADE_COUNT="1"
What I am looking for is, when I run the dashboard, to monitor the trade count by MARKET_CODE for the latest DATETIME only. For instance, if I run the dashboard at 14:00, the field DATETIME might have 11:36 (~600 events) and 13:36 (~600 events). I want to see only the ~600 events from 13:36, and the metric would be TRADE_COUNT by MARKET_CODE. Thanks, Selvam.
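A hedged sketch of one way to do this in SPL, assuming placeholder index/sourcetype names and relying on the fact that the "YYYY-MM-DD HH:MM:SS" format shown sorts correctly as a string:

index=your_index sourcetype=your_sourcetype
| eventstats max(DATETIME) as latest_datetime
| where DATETIME = latest_datetime
| stats sum(TRADE_COUNT) as trade_count by MARKET_CODE

eventstats tags every event with the newest DATETIME in the result set, the where keeps only that latest batch, and the stats then totals TRADE_COUNT per MARKET_CODE.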
Does this solution work remotely?
I used spath for extraction
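For anyone landing here, a minimal sketch of that approach with illustrative index/sourcetype names; spath with no arguments parses JSON out of _raw into fields:

index=main sourcetype=app_json
| spath
| table severity, message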
Hello Team, I am trying for a solution using a multiselect input filter where the index token is passed to the panels. With the code below, I currently see the filter values "Stack1", "Stack2" and "Stack3", but the value passed comes from label. I need index_tkn to hold the values aws_stack02_p, aws_stack01_p, aws_stack01_n.
<input type="multiselect" token="index_tkn" searchWhenChanged="false">
  <label>Select Stack</label>
  <valuePrefix>index="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>index</fieldForLabel>
  <fieldForValue>label</fieldForValue>
  <search>
    <query>index IN ({aws_stack02_p,aws_stack01_p,aws_stack01_n}) | eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3") | stats count by label</query>
    <earliest>$time_tkn.earliest$</earliest>
    <latest>$time_tkn.latest$</latest>
  </search>
</input>
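A hedged sketch of one way to make the token carry the index names: point fieldForValue at index and fieldForLabel at label (they are swapped above), and keep both fields through the stats, since stats count by label alone drops the index field:

<input type="multiselect" token="index_tkn" searchWhenChanged="false">
  <label>Select Stack</label>
  <valuePrefix>index="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>index</fieldForValue>
  <search>
    <query>index IN (aws_stack02_p, aws_stack01_p, aws_stack01_n)
| eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3")
| stats count by index, label</query>
    <earliest>$time_tkn.earliest$</earliest>
    <latest>$time_tkn.latest$</latest>
  </search>
</input>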
I am using Splunk Cloud and the design is UF > HF > Splunk Cloud. On the HFs we have an outputs.conf file like below; the sslPassword is the same for all HFs if I am using multiple heavy forwarders.
root@hostname:/opt/splunk/etc/apps/100_stackname_splunkcloud/local # cat outputs.conf
[tcpout]
sslPassword = 27adhjwgde2y67dvff3tegd36scyctefd73******************
channelReapLowater = 10
channelTTL = 300000
dnsResolutionInterval = 300
negotiateNewProtocol = true
socksResolveDNS = false
useClientSSLCompression = true
negotiateProtocolLevel = 0
channelReapInterval = 60000
tcpSendBufSz = 5120000
useACK = false

[tcpout:splunkcloud]
useClientSSLCompression = true
maxQueueSize = 250MB
autoLBFrequency = 300
As both searches invoke the same index, there is probably not much point (unless you have a very specific use case) in using a subsearch here. Just search for
index=firewall event_type=error sourcetype=metadata enforcement_mode=block
because that's effectively what your search would do. Having said that, this is probably _not_ what you need. I'd hazard a guess that you're looking for something like
index=firewall
| stats values(event_type) as event_types values(sourcetype) as sourcetypes values(enforcement_mode) as enforcement_modes
| where enforcement_modes="block"
(note the where clause must use the renamed field enforcement_modes; the original enforcement_mode no longer exists after the stats).
My raw data looks like below:
_row= {"id":"0","severity":"Information","message":"CPW Total= 844961,SEQ Total =244881, EAS Total=1248892, VRS Total=238, CPW Remaining=74572, SEQ Remaining=22, EAS Remaining =62751, VRS Remaining =0, InvetoryDate =4/15/2024 6:16:07 AM"}
I want to extract fields from message so that it looks like below. I tried regex but I am unable to extract. Please help create an extraction for:
CPW Total = 844961
SEQ Total = 244881
EAS Total = 1248892
VRS Total = 238
CPW Remaining = 74572
SEQ Remaining = 22
EAS Remaining = 62751
VRS Remaining = 0
InvetoryDate = 4/15/2024 6:16:07 AM
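A hedged sketch of one rex-based approach, assuming the message field is already extracted from the JSON; the capture names are illustrative, the remaining fields follow the same pattern, and the last regex keeps the "InvetoryDate" spelling because that is what appears in the data:

| rex field=message "CPW Total\s*=\s*(?<CPW_Total>\d+)"
| rex field=message "SEQ Total\s*=\s*(?<SEQ_Total>\d+)"
| rex field=message "EAS Total\s*=\s*(?<EAS_Total>\d+)"
| rex field=message "InvetoryDate\s*=\s*(?<InventoryDate>[\d/]+ [\d:]+ [AP]M)"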
I am not going to experience this problem, because I apply a throttle per event ID and in some cases a dedup of the ID in the query itself, and I have set the alert to look 30 minutes back and run every ten. But I still lose some events that do appear if I run the search manually.
For example, one report runs at 10 minutes past the hour, looking back 10 minutes. The next time the report runs is 15 minutes past the hour, again looking back 10 minutes. Between these two runs, there is a five-minute overlap between 5 past and 10 past the hour. If you don't take account of this, you could be double-counting your events.
How can I do this? Note that the forwarder is an Edge Processor, so you can't touch the conf files; everything is modified in the GUI.
Could you explain to me what you mean by overlapping times?
I am trying to build some modular documentation as a Splunk app on a site with an indexer and search head cluster. Some of the reasoning behind this is that I spend quite some time researching existing configuration when I'm about to make new changes. Thus I would like to be able to create views showing me details from props, transforms and indexes on the search heads. My question is: do you see any potential pitfalls in having the configuration on the search heads as well as the indexers? Or is there any other solution for viewing configuration on the indexer peers from the search heads? Cheers!
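One alternative, offered as a hedged sketch rather than a recommendation: from a search head, the rest command can read configuration endpoints on the indexer peers directly, so the .conf files would not need to be duplicated onto the search heads (the conf-transforms and conf-indexes endpoints work the same way):

| rest /services/configs/conf-props splunk_server=*
| table splunk_server, title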
@meshorer there isn't anything inbuilt, but there is a Custom Function in the community repo called "find_related_containers" which should get you somewhere close to what you want. TBH I would recommend building your own, but it can be complicated depending on how you want to define "relevant" containers. As for the playbook logs, I am not sure where they are on disk. I can't see anything in $PHANTOM_HOME/var/log/phantom but suspect they are somewhere on the system.
We're running into the same (or a similar) issue. We're not using appLogo but appIcon to set the app's icon. The icon AND the label are displayed on the dashboard selection page accordingly, but as soon as you click to open one particular dashboard, the label disappears and only the icon stays. I can't say for sure it was not present before, but several users have noticed it since our upgrade from 9.0.x to 9.1.x.
You are deduping 'x', so you need to understand the consequences of that. Your search is not doing any aggregations, so without knowing what combinations of Application, Action and Target_URL you have, it's impossible to know what's going on here. These 3 lines are most likely the source of your problem:
| mvexpand x
| mvexpand y
| dedup x
Hello Champs, this message is info only and can be safely ignored. Alternatively, you can turn it off by setting the TcpInputProc log level to WARN. If you can't restart splunkd yet, simply run:
$SPLUNK_HOME/bin/splunk set log-level TcpInputProc -level WARN
To make the change persistent:
* Create or edit $SPLUNK_HOME/etc/log-local.cfg
* Add: category.TcpInputProc=WARN
* Restart splunkd.