All Topics



Good morning Splunkers, I trust everyone is staying safe. Ultimately, I'm attempting to obtain the average connection duration of external IPs for each destination zone based on firewall logs. The reporting period would be 24h. The output I'm looking for is something like this:

dest_zone    AvgDuration
ABC App      00:00:07:123
123 Zone     00:00:13:123
Cisco VPN    00:07:12:004

Please see my non-working query below:

index="pan_logs" sourcetype="pan_traffic" action="allowed"
| eventstats earliest(_time) as earliest_time by src_ip
| eventstats latest(_time) as latest_time by src_ip
| eval Duration=latest_time-earliest_time
| stats avg(Duration) as AvgDuration by src_ip
| eval AvgDuration = strftime(AvgDuration/1000 , "%H:%M:%S:%3Q")
| stats values(AvgDuration) by dest_zone

As always, any help is greatly appreciated.
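One thing worth checking in the query above: _time is normally epoch seconds, so dividing AvgDuration by 1000 before strftime would shrink the result. The aggregation being asked for (per-source duration, then per-zone average) can be sketched in plain Python; this is a hypothetical illustration, and the dict-based event shape and field names are assumptions, not the actual firewall data:

```python
from collections import defaultdict

def avg_duration_by_zone(events):
    """events: dicts with 'src_ip', 'dest_zone', '_time' (epoch seconds).
    Duration per source = latest _time - earliest _time; then average per zone."""
    times = defaultdict(list)          # (zone, src_ip) -> list of timestamps
    for e in events:
        times[(e["dest_zone"], e["src_ip"])].append(e["_time"])
    per_zone = defaultdict(list)       # zone -> list of per-source durations
    for (zone, _src), ts in times.items():
        per_zone[zone].append(max(ts) - min(ts))
    return {zone: sum(d) / len(d) for zone, d in per_zone.items()}

events = [
    {"src_ip": "1.2.3.4", "dest_zone": "ABC App", "_time": 100},
    {"src_ip": "1.2.3.4", "dest_zone": "ABC App", "_time": 107},
]
print(avg_duration_by_zone(events))  # {'ABC App': 7.0}
```

The key point the sketch makes explicit: the grouping key for the duration must include the zone, otherwise the final per-zone split has nothing to group on.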
Hello, a few days ago I had a problem with an index: the index_size_max was equal to the index_size, with the default setting in the indexes.conf file. Here is the search I used:

| rest /services/data/indexes
| where disabled = 0
| search NOT title = "_*"
| eval currentDBSizeGB = round( currentDBSizeMB / 1024)
| where currentDBSizeGB > 0
| table splunk_server title summaryHomePath_expanded minTime maxTime currentDBSizeGB totalEventCount frozenTimePeriodInSecs coldToFrozenDir maxTotalDataSizeMB
| rename minTime AS earliest maxTime AS latest summaryHomePath_expanded AS index_path currentDBSizeGB AS index_size totalEventCount AS event_cnt frozenTimePeriodInSecs AS index_retention coldToFrozenDir AS index_path_frozen maxTotalDataSizeMB AS index_size_max title AS index

On May 14th AM:
- index_size_max set to 512 GB
- index_size = 500 GB
- latest data age was up to date
- earliest data age was March 19th, 05:11:30

On May 14th PM:
- index_size_max set to 1536 GB (updated)
- index_size = 509 GB
- latest data age was up to date
- earliest data age was March 19th, 05:11:30 (still the same date)

On May 18th AM:
- index_size_max set to 1536 GB
- index_size = 524 GB
- latest data age was up to date
- earliest data age was March 19th, 05:11:30 (still the same date)

On May 23rd AM:
- index_size_max set to 1536 GB
- index_size = 563 GB
- latest data age was up to date
- earliest data age was March 23rd, 12:22:28 (no longer the same date)

On May 26th AM (today):
- index_size_max set to 1536 GB
- index_size = 564 GB
- latest data age was up to date
- earliest data age was March 28th, 06:46:27 (no longer the same date)

Since I increased maxTotalDataSizeMB in indexes.conf, the index has grown day after day, yet I'm still losing the oldest data. I also notice that the earliest data ages are not exactly the same between the 2 indexers in my cluster. By policy I must keep 1 year of data, and the retention is set accordingly: frozenTimePeriodInSecs = 31557600. Can anyone help me please?
Thanks a lot. P.S. Can someone explain to me why this search gives me information for only 2 of the 3 indexes I've got? The index names are csmsi_supervision_ followed by active, passive or servicenow; "passive" is missing. Thanks.
Hey guys, we have Enterprise Security and the Endpoint data model never finishes building. I even knocked the backfill range down to 4 hours and it still doesn't complete. I know there is a TON of data, and even when I run the base macro used in all of the datasets it takes forever just over a 15-minute window. We just moved over to a brand new indexer cluster with 10 indexers and don't see performance issues anywhere else, other than this one last data model that won't finish building.
I have the following working query for a single product, AHSDFKSD1:

ns=a* DECISION IN (ELIGIBLE, INELIGIBLE) PRODUCT IN (AHSDFKSD1)
| timechart span=24h limit=0 count by DECISION
| eval total= ELIGIBLE+INELIGIBLE
| eval ELIGIBLE=round(ELIGIBLE/total,4)*100
| eval INELIGIBLE=round(INELIGIBLE/total,4)*100
| fields - total

Output:

_time              ELIGIBLE  INELIGIBLE
2020-05-25 17:00   87.93     12.07

How can I modify this query to output data per product? (A totally different query is fine too, if the output is as follows.) I could have over 20 products: AHSDFKSD1, GFAGDAYD2, GSDAUFCBE3, IGAGSDASHD4, GASDAHJDSGDA5, ........ I am looking for the following output:

PRODUCT      _time              ELIGIBLE  INELIGIBLE
AHSDFKSD1    2020-05-25 17:00   87.93     12.07
GFAGDAYD2    2020-05-25 17:00   80.03     19.97
GSDAUFCBE3   2020-05-25 17:00   87.90     12.10
IGAGSDASHD4  2020-05-25 17:00   92.93     7.07

How can I achieve this? Please assist. Thanks.
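For reference, the percentage arithmetic those eval lines perform (round the ratio to 4 places, then scale to 100) can be reproduced outside Splunk. A minimal Python sketch, where the counts 51 and 7 are invented values chosen to match the sample output above:

```python
def eligibility_pct(eligible, ineligible):
    """Mirror the SPL: round(count/total, 4) * 100 for each decision."""
    total = eligible + ineligible
    return (round(eligible / total, 4) * 100,
            round(ineligible / total, 4) * 100)

elig, inelig = eligibility_pct(51, 7)
# Re-round for display, since the *100 step can reintroduce float noise.
print(round(elig, 2), round(inelig, 2))  # 87.93 12.07
```

Whatever SPL ends up splitting the counts per product, the same arithmetic applies row by row.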
Hi, I am new to Splunk and trying to create a timechart with several individually calculated trend lines, but I simply cannot figure out how. Hopefully someone here is able to help me achieve this. I have tried the search below, which calculates both columns as one total, but I want a total for each EventCode and two separate trend lines:

source="*WinEventLog:Security" sourcetype="*wineventlog:security" EventCode=4624 OR 4625
| timechart count(EventCode) by EventCode
| addtotals row=t
| trendline sma2(Total) as Trend
| fields - Total
| rename count(EventCode) as Count
| rename date as date
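For what it's worth, trendline sma2 is just a 2-point simple moving average, so each separate trend line is the same computation applied to each EventCode's own count column. A quick Python sketch of what it computes (the sample numbers are invented):

```python
def sma(values, window=2):
    """Simple moving average over a sliding window, like SPL's trendline smaN."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

print(sma([4, 8, 6, 10], window=2))  # [6.0, 7.0, 8.0]
```

Running this over the 4624 counts and the 4625 counts independently gives the two separate trend lines the post is after.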
I have a client requirement to use an F5 BIG-IP load balancer for load-balancing Splunk data collection. Can anyone help me with the best/recommended method to perform health checks for Splunk load balancing at the indexer level? Should the health check use HTTP/HTTPS, or is a TCP-based check better? Also, what type of policy is better to use: round robin with string-based matching, or something different? Please help me with your suggestions based on experience. Sank
I am breaking every line in a flat file into its own event and trying to fetch a field using rex. This is how my events look:

98000020200512     -992.00       0.00   001  01
98000020200523   830566.00      0.00   001  02
98000020200515    -7356.00      0.00   001  03
98000020200516   -18760.00      0.00   001  04
98000020200518   764074.00      0.00   001  05
98000020200530   165432.00      0.00   001  06
98000020200531    98715.00      0.00   001  07
98000020200511   119993.00      0.00   001  08
98000020200502   908831.00      0.00   001  09
12000020200507    -5481.00      0.00   001  10

The values in the second column (e.g. -992.00, 830566.00) need to be extracted as an Amount field, where the value can be a negative or positive amount.
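A capture pattern along these lines should work: anchor to the leading run of digits, then capture an optional minus sign and the decimal value. Here is the idea tested in Python (the sample spacing is approximate); the equivalent SPL would be something like rex "^\d+\s+(?<Amount>-?\d+\.\d+)", which should be verified against the actual raw events:

```python
import re

line = "98000020200512     -992.00       0.00   001  01"

# Skip the leading ID/date digits, then capture a signed decimal amount.
m = re.search(r"^\d+\s+(-?\d+\.\d+)", line)
amount = float(m.group(1))
print(amount)  # -992.0
```

Because the pattern is anchored with ^ and skips the first digit run explicitly, it will not accidentally capture the later 0.00 column.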
Hi, we have installed the machine agent on our servers, but metrics are not reporting to the controller. I have checked the controller.xml file and the logs as well. I am getting the errors below in the logs:

[extension-scheduler-pool-10] 05 May 2020 16:09:03,827 INFO ReportMetricsConfigSupplier - Basic metrics will be collected and reported through the SIM extension because SIM is enabled.
[AD Thread-Metric Reporter1] 05 May 2020 16:09:08,937 ERROR ManagedMonitorDelegate - HTTP Request failed: HTTP/1.1 504 GATEWAY_TIMEOUT
[AD Thread-Metric Reporter1] 05 May 2020 16:09:08,937 WARN ManagedMonitorDelegate - Error sending metric data to controller:null
[AD Thread-Metric Reporter1] 05 May 2020 16:09:08,937 ERROR ManagedMonitorDelegate - Error sending metrics - will requeue for later transmission

Thanks, Priyanka
Hello, while trying to model a simple service I have found a little issue. The entities that compose the service are fed from two sources (the Windows add-on and SAI). The first one creates events, the second one metrics, so I cannot use the same search for CPU, memory, and so on. I suppose a single service containing these entities would need duplicate KPIs (one for events, one for metrics), or I could create a service corresponding to a single server with its own KPIs. Is the second option a valid solution, or is there another way? Thanks & regards
Hi all, I have created the SPL below, which alerts when RECEIVED = 0, but I want it to alert only when RECEIVED has been 0 continuously for the last 2 hours; if there is data in either 1-hour span within the last 2 hours, I don't want an alert.

index=myIndex source=mySource sourcetype=mySourceType
| timechart span=1h count AS Received
| stats latest(Received) as RECEIVED by _time
| where RECEIVED=0

Please let me know how this can be achieved.
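The condition being described, alert only when both of the last two hourly buckets are zero, reduces to a simple check over the bucket counts. A Python sketch of just that logic, with invented sample values:

```python
def should_alert(hourly_counts):
    """Alert only if the two most recent hourly buckets are both zero."""
    return (len(hourly_counts) >= 2
            and hourly_counts[-1] == 0
            and hourly_counts[-2] == 0)

print(should_alert([5, 0, 0]))  # True  (last two hours empty)
print(should_alert([0, 3, 0]))  # False (data arrived an hour ago)
```

In SPL terms this is the same as searching the last 2 hours with span=1h buckets and firing only when the maximum (or sum) of the bucket counts is zero, rather than when any single bucket is zero.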
My license creation and expiration times are shown below. The creation time is May 13th and the expiration May 16th, but my license usage report shows going to the Free license on May 18th. Why does the license usage report show May 18th?

1589353200 Splunk Enterprise Reset Warnings 1589612399
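Those two epoch values can be decoded to sanity-check the window; a quick Python check (shown in UTC, so a local timezone may shift the calendar date slightly):

```python
from datetime import datetime, timezone

creation = datetime.fromtimestamp(1589353200, tz=timezone.utc)
expiry = datetime.fromtimestamp(1589612399, tz=timezone.utc)

print(creation.isoformat())  # 2020-05-13T07:00:00+00:00
print(expiry.isoformat())    # 2020-05-16T06:59:59+00:00
```

So the timestamps do match the stated May 13 creation and May 16 expiration; the question about the May 18 date in the usage report remains as posted.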
Hi, I want to implement a retention policy on Splunk's own log files. The doc https://docs.splunk.com/Documentation/Splunk/8.0.3/Troubleshooting/Enabledebuglogging doesn't mention such a configuration; it only covers configuring the maximum size of a log file (in log-local.cfg). I want this retention applied to the log files that live in SPLUNK_HOME/var/log. Is there any workaround to do so?
Hi everyone, I have some questions about glass tables. From my research, I understand that glass tables exist only in the Enterprise Security app and the IT Service Intelligence app. Is that correct? Are glass tables available only in these two apps? If not, how can I add a glass table dashboard to my own application on Splunk Enterprise 8? Thanks.
Hi gurus! (Edited.) I have an indexer cluster and one search head. I do not use the Monitoring Console. One of the peer nodes was shut down, along with its server; it seems the indexer went down due to an OS issue. How can I find the exact shutdown time using SPL? I assume it would start with index=_internal.... Could you please help me out?
Hi - we're having issues with a Windows UF that we have to restart roughly weekly to clear the errors below, which happen at random times (the parsingQueue error being the first in the chain); the TcpOutputProc errors continue until the UF is restarted. The amount of data being sent (hourly, on the hour) is very small. Is this an issue with the forwarder or with the remote Splunk indexer? The forwarder seems to work OK at all other times. NB: 'Phone Home' messages removed. I can't see this exact scenario in other related Splunk questions. MANY THANKS!!

05-14-2020 14:45:37.371 +0100 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 300 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data
05-14-2020 14:43:57.048 +0100 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 200 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data
05-14-2020 14:43:22.638 +0100 WARN TailReader - Could not send data to output queue (parsingQueue), retrying... (host PRDU0000001, source C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log)
Hi all, I currently have the query below, which works fine as a pie chart for 3 different data paths:

"*test-path*"
| bucket span=1d _time
| rename test-path as path
| eval result=case((path == "/test/orders"), "Order Data", (path == "/test-data/orders"), "test order", (path == "/test2-data2/orders/"), "Test data")
| chart count by result
| eval result = count + " " + result
| fields result, count

But I want to extend it by adding one more search term, "test data for order - path", which comes in the message key. I have tried the query below (the additions are the rename of message and the extra case branch), but it is not working:

"*test-path*"
| bucket span=1d _time
| rename test-path as path
| rename message as msg
| eval result=case((path == "/test/orders"), "Order Data", (path == "/test-data/orders"), "test order", (path == "/test2-data2/orders/"), "Test data", (msg == "*test data for order - path*"), "test data order")
| chart count by result
| eval result = count + " " + result
| fields result, count

Can anyone please help?
Is it possible to have a "folder structure" within a single dropdown in a dashboard? My use case is to display certain objects within a dashboard, where the information displayed depends on the selection the user makes in the dropdown. The dropdown would be similar to the following:

<Heading 1>
  <Topic 1>
  <Topic 2>
  <Topic 3>
  <Topic ....>
<Heading 2>
  <Topic A>
  <Topic B>
  <Topic C>
  <Topic .....>

The dropdown options are static, not dynamic (I intend to add a search function as a phase-2 item); initially I'm just trying to get the dropdowns working as needed. I am using a "panel depends...." configuration in the XML to show content depending on the selection made. The only way I know to make the dropdown work the way I want is to add multiple inputs, but then I can't use the same token value in each input so that the panel changes when a selection changes. The alternative is to prefix each dropdown entry, but visually that doesn't look nice, e.g.:

<heading1 - topic1>
<heading1 - topic2>
<heading1 - topic3>
<heading1 - topic....>
<heading2 - topica>
<heading2 - topicb>
<heading2 - topicc>
<heading2 - topic....>
When I configure a service from a service template, ITSI's service health score disappears, but the KPI results are correct. Even after a long time has passed, the service health score on the Service Analyzer page is still missing. Using the base search below, I find that no new service health score is generated when a new service is created from the service template:

index=itsi_summary
| table _time serviceid kpiid health_score itsi_kpi_id kpi
| sort - _time

When I remove the service from the service template and restart the Splunk server, the service health score recovers. This case appeared with ITSI 4.4.3; the Splunk server is 8.0.3. With ITSI 4.1.2 it was normal to configure a new service from a service template, and the service health score could be generated by the scheduled report named service_health_score.
Text data is written to a network share every 5 minutes and forwarded via a forwarder, but not all of the logs are forwarded. Since the files are produced at 5-minute intervals, 288 logs should be ingested per day, but currently only about 30-40 per day are forwarded. There is no pattern to which files are forwarded and which are not; their times, sizes, and so on all vary. The data contents differ, but the character encoding and data format are identical across all files. What causes could be considered? The ingestion flow is as follows:

- On a Linux server, command output is written to the local disk every 5 minutes (not overwritten; the time is appended to the file name, so every run produces a separate file).
- Every day at 1:00 AM, the local output is differentially copied to the network share.
- The network share is monitored by a forwarder (a Windows server).
Hi all, I made an alert which sends mail to the respective teams whenever a high-priority task has not been updated for more than an hour. The query is as follows:

index="abc" INC* main_metric, state="New" OR state="In Progress" OR state="Awaiting Third Party" OR state="Pending" priority = "1 - Critical" OR priority = "2 - High"
| rex field=_raw "main_metric=\"(?<main_metric>\S+\s\d+\:\d+\:\d+)\""
| dedup main_metric
| dedup number
| eval main_metric = upper(main_metric)
| lookup lookup_inactivity_alert_distribution_list.csv assignment_group OUTPUT "Email_To" "Email_Cc" "Email_Bcc" "Enabled"
| fillnull value=0
| search number != 0 AND Enabled = "Y" AND main_metric != 0
| eval end=strptime(main_metric, "%Y-%m-%d %H:%M:%S.%N")
| eval start=now()
| eval diff = start - end
| lookup lookup_frequency_impact.csv impact output "Frequency1" "Frequency2" "Frequency3" "Frequency4"
| eval freqdiff1 = Frequency1 + 600
| eval freqdiff2 = Frequency2 + 600
| eval freqdiff3 = Frequency3 + 600
| eval freqdiff4 = Frequency4 + 600
| eval result = case('caller_id' = "SCOM System" AND 'diff' >= 'Frequency3' AND 'diff' <= 'freqdiff3', "outcome1", 'caller_id' != "SCOM System" AND 'diff' >= 'Frequency1' AND 'diff' <= 'freqdiff1', "outcome2", 'caller_id' != "SCOM System" AND 'diff' >= 'Frequency2' AND 'diff' <= 'freqdiff2', "outcome3", 'caller_id' != "SCOM System" AND 'diff' >= 'Frequency3' AND 'diff' <= 'freqdiff3', "outcome4", 'caller_id' != "SCOM System" AND 'diff' >= 'Frequency4' AND 'diff' <= 'freqdiff4', "outcome5", 1==1, "no outcome")
| search result="outcome1" OR result="outcome2" OR result="outcome3" OR result="outcome4" OR result="outcome5" AND state!="Closed"
| table main_metric priority caller_id result assignment_group u_updated_on "Email_To" Email_Cc Email_Bcc number start end diff Frequency1 Frequency2 Frequency3 Frequency4
| map alert_main_metric_mail assignment_group="$assignment_group$" to="$Email_To$" cc="$Email_cc$" bcc="$Email_Bcc$"

The second lookup handles the frequency with which the alert emails are sent, based on the priority of the ticket. The problem I am having is that if the ticket or task is closed within half an hour of being created, the alert is still generated. Even if the ticket is de-escalated, the alert is still received. I have tried many modifications to the code but nothing seems to work. Could you help me with this bug? P.S.: The map command just runs the saved search which sends out the emails with the appropriate subject and description.