All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How do I fix this issue? I found this search query that pulled up the indexes that were the problem.

Root Cause(s): The percentage of small buckets (75%) created over the last hour is high and exceeded the red thresholds (50%) for index=_internal, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=4, small buckets=3

Query:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| eval bucketSizeMB = round(size / 1024 / 1024, 2)
| table _time splunk_server idx bid bucketSizeMB
| rename idx as index
| join type=left index
    [ | rest /services/data/indexes count=0
      | rename title as index
      | eval maxDataSize = case(maxDataSize == "auto", 750, maxDataSize == "auto_high_volume", 10000, true(), maxDataSize)
      | table index updated currentDBSizeMB homePath.maxDataSizeMB maxDataSize maxHotBuckets maxWarmDBCount ]
| eval bucketSizePercent = round(100*(bucketSizeMB/maxDataSize))
| eval isSmallBucket = if(bucketSizePercent < 10, 1, 0)
| stats sum(isSmallBucket) as num_small_buckets count as num_total_buckets by index splunk_server
| eval percentSmallBuckets = round(100*(num_small_buckets/num_total_buckets))
| sort - percentSmallBuckets
| eval isViolation = if(percentSmallBuckets > 30, "Yes", "No")

After that I was able to see that main, metrics, and internal were in violation. But from there I am not sure how to determine which sourcetype is causing the issue or how to fix it.

index=main
| eval latency=_indextime-_time
| stats min(latency), max(latency), avg(latency), median(latency) by index sourcetype

The following command is for when you have determined which sourcetype is causing the issue:

index=abc sourcetype=def
| eval latency=_indextime-_time
| stats min(latency), max(latency), avg(latency), median(latency) by index sourcetype host
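The latency math in the queries above (latency = _indextime - _time) can be sketched outside SPL. A minimal Python illustration with hypothetical epoch-time pairs, not the poster's data:

```python
from statistics import mean, median

def latency_stats(events):
    """Indexing-latency stats from (index_time, event_time) epoch pairs."""
    latencies = [idx_t - ev_t for idx_t, ev_t in events]
    return {
        "min": min(latencies),
        "max": max(latencies),
        "avg": mean(latencies),
        "median": median(latencies),
    }

# Three hypothetical events indexed 5, 60, and 600 seconds late
stats = latency_stats([(1005, 1000), (2060, 2000), (3600, 3000)])
print(stats)
```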
Hi, I want to understand: when SHC is enabled, do the cluster members share the same encrypted pass4SymmKey in /opt/apps/splunk/etc/system/local/server.conf?

[shclustering]
conf_deploy_fetch_url = https://xxxxx:8089
disabled = 0
mgmt_uri = https://xxxxxx:8089
pass4SymmKey = $7$iExLu2lY4pfhmcXKg0bzvFrlGBiiz8ZsgeNOw8V/eggWX/UHplXMSX4=

Is the encrypted pass4SymmKey supposed to be the same on all cluster members? I am seeing a different pass4SymmKey on another member. Is that a problem?

[shclustering]
conf_deploy_fetch_url = https://xxxx:8089
disabled = 0
mgmt_uri = https://xxxxx:8089
pass4SymmKey = $7$ILXF6T0d2rhloLGdszMDKaL/H002O09I4zidU0PzN9aglnG5+wSnoWM=
shcluster_label = shcluster1
id = 2FCF8358-15EC-4F17-A119-63A6CEE4734C
I appended a CSV to an index, and right now my results pop up as the 100 lines of CSV, and then 30K events from the index. What I would like is to only return results IF a value in the fw field from the index MATCHES a value in the firewall_rule field from the 100 lines of the CSV... thoughts? I have a match in there currently, but it's showing no similarities (even though I manually checked, and there are many).

| from inputlookup:"firewall-exception-prod.csv"
| append [ search index=gcp_firewall ]
| rename data.jsonPayload.rule_details.reference as FW
| search FW = "network:prod*" OR firewall_rule=*
| rex field=FW "network:prod-corp/firewall:(?<fw>.*)"
| eval result=if(match(fw, firewall_rule),"yes", "no")
| table firewall_rule fw result

Do you know what I'm missing? Thank you!
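One thing worth checking: SPL's match() treats its second argument as a regular expression, so a rule name containing metacharacters (dots are common) can behave differently from a literal comparison. A rough Python sketch with hypothetical values, not the poster's data:

```python
import re

# Hypothetical values: a rule name containing a regex metacharacter
fw = "prod-rulexv1"
firewall_rule = "prod-rule.v1"   # "." is a regex wildcard

# Regex-style check, as match() would evaluate it: "." matches "x"
regex_match = bool(re.search(firewall_rule, fw))

# Literal comparison: stricter when the field holds a plain rule name
literal_match = firewall_rule == fw

print(regex_match, literal_match)
```

The regex check succeeds while the literal one fails, which is the kind of mismatch that makes match()-based comparisons look inconsistent with a manual eyeball check.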
We have Splunk DB Connect tailing a table in an Oracle DB. Everything is working fine, except that the events end up in Splunk with space characters padding the CHAR-type fields out to their configured length. For example:

2020-04-27 19:28:59.000, ACTION="replace ", STATUS="OK ", TYPE="FRAC "

Is it possible to configure DB Connect to remove these extra space characters? Or is it possible to add something to the SQL statement to accomplish this?
Apparently, the Splunk OS TAs don't capture a timestamp, so if there are index-time delays, _time is skewed and actually ends up being _indextime. For example, the output of df.sh is:

Filesystem  Type  Size  Used  Avail  UsePct  MountedOn
/           xxx   50G   18G   30G    37%     /
/yyyyy      xxx   600G  401G  186G   69%     /yyyyy
/zzzzz      xxx   50G   18G   30G    37%     /zzzzz

Is there anything we can do about it?
I am new to Splunk. I have tried to add CPU data locally in Splunk, and I am able to get data core-wise, but I need the average CPU utilization. How can I do this?

04/27/2020 23:09:19.414 +0530 11:09:19.414 PM collection=cpu12 object="TCPIP Performance Diagnostics (Per-CPU)" counter="TCP current connections" instance=CPU3 Value=7
host = * source = Perfmon:cpu12 sourcetype = Perfmon:cpu12

04/27/2020 23:09:19.414 +0530 11:09:19.414 PM collection=cpu12 object="TCPIP Performance Diagnostics (Per-CPU)" counter="TCP current connections" instance=CPU2 Value=6
host = * source = Perfmon:cpu12 sourcetype = Perfmon:cpu12

04/27/2020 23:09:19.414 +0530 11:09:19.414 PM collection=cpu12 object="TCPIP Performance Diagnostics (Per-CPU)" counter="TCP current connections" instance=CPU1 Value=7
host = * source = Perfmon:cpu12 sourcetype = Perfmon:cpu12

04/27/2020 23:09:19.414 +0530 11:09:19.414 PM collection=cpu12 object="TCPIP Performance Diagnostics (Per-CPU)" counter="TCP current connections" instance=CPU0 Value=8
host = * source = Perfmon:cpu12 sourcetype = Perfmon:cpu12

thanks
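The arithmetic behind averaging per-core utilization, sketched in Python with the sample Values above (8, 7, 6, 7); in SPL, a stats avg(Value) over the per-core events is one common way to get the same number:

```python
# Hypothetical per-core readings, taken from the sample events
per_core = {"CPU0": 8, "CPU1": 7, "CPU2": 6, "CPU3": 7}

# Average across all cores at one timestamp
avg_cpu = sum(per_core.values()) / len(per_core)
print(avg_cpu)  # 7.0
```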
I have some strings like the below returned by my Splunk base search:

"CN=aa,OU=bb,DC=cc,DC=dd,DC=ee"
"CN=xx,OU=bb,DC=cc,DC=yy,DC=zz"
"CN=ff,OU=gg,OU=hh,DC=ii,DC=jj"
"CN=kk,DC=ll,DC=mm"

Note: CN, OU, and DC can each occur zero or many times. My ultimate goal is to find all OUs, something like the below. (The combinations also need to be unique, and all blank lines can be excluded.)

e.g.:
bb (blank)
gg hh (blank) (blank)

The query I am using currently is very naive, and it is not generic. It works if at least one of my split results has 5 parts (0,1,2,3,4), but it gives blank results if none of them do, i.e. all of them split into fewer than 5 parts.

index=xx sourcetype=yy
| fields s
| rex field=s mode=sed "s/,DC=.*//g"
| eval temp=split(s,",OU=")
| eval a=mvindex(temp,1)
| eval b=mvindex(temp,2)
| eval c=mvindex(temp,3)
| eval d=mvindex(temp,4)
| dedup a b c d
| table a,b,c,d

How can I make it generic, i.e. get the count of splits and create fields up to the maximum split length?
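One generic approach that avoids hard-coding mvindex positions is to extract every OU= component with a repeated regex match (in SPL, rex with max_match=0 yields a multivalue field). A Python sketch of the extraction and deduplication, using the sample strings from the question:

```python
import re

dns = [
    "CN=aa,OU=bb,DC=cc,DC=dd,DC=ee",
    "CN=xx,OU=bb,DC=cc,DC=yy,DC=zz",
    "CN=ff,OU=gg,OU=hh,DC=ii,DC=jj",
    "CN=kk,DC=ll,DC=mm",
]

# One tuple of OUs per DN, however many there are; a set deduplicates
# the combinations. An empty tuple means the DN had no OU at all.
combos = {tuple(re.findall(r"OU=([^,]+)", dn)) for dn in dns}
print(sorted(combos))
```

This yields the unique combinations (), ('bb',), and ('gg', 'hh') regardless of how many OU components any single string contains.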
Hello all, I am new to regex and struggling to extract the "Actual value" field. I only need the number between the quotes, as the number can be smaller or larger than the one in the example event. Below is an example of what I was trying, with no luck. Any help is appreciated.

| rex "\svalue\":(?<value>\d+)"

Message={ "ApplicationId": "babe7022-5a00-4338-a519-0a5bbf5c64ee", "ApplicationName": "Lacerte 2019", "Measurement": "lacerteload_2019", "Description": "Measurement duration (59.615s) exceeded threshold of 40s (49.04%)", "Actual value": "59.615",
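Two things trip up the attempted pattern: the value sits inside quotes, and it contains a decimal point, which \d+ alone will not cross. A Python sketch of a pattern that handles both (the group name "actual" is my choice, not from the event):

```python
import re

event = ('"Description": "Measurement duration (59.615s) exceeded threshold '
         'of 40s (49.04%)", "Actual value": "59.615",')

# Anchor on the literal key, then capture digits with an optional
# decimal part inside the surrounding quotes
m = re.search(r'"Actual value":\s*"(?P<actual>\d+(?:\.\d+)?)"', event)
print(m.group("actual"))  # 59.615
```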
I have a CSV file with fields _time, success_count, and failed_count. We have data in these fields every 5 minutes, going back about 4 months. What I need is to compare the current data, every 5 minutes, against the data in the CSV to calculate a week-over-week difference. For example, today's success_count should be compared with the counts at the same time one and two weeks back in the CSV, and the differences calculated. I have data in the CSV from December through February, and now I want to compare my current (April) data with the value in the CSV from the same time of day exactly one or two weeks before today's date.
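The core of the week-over-week lookup is shifting the current timestamp back exactly 7 and 14 days and using those as keys into the CSV (in SPL, relative_time() with -7d offsets is the usual tool). A Python sketch with hypothetical counts and timestamps, not the poster's data:

```python
from datetime import datetime, timedelta

now = datetime(2020, 4, 27, 10, 5)   # hypothetical "current" 5-minute slot

one_week_ago = now - timedelta(weeks=1)
two_weeks_ago = now - timedelta(weeks=2)

# Hypothetical success_count history keyed by timestamp string,
# standing in for the CSV lookup
history = {
    "2020-04-20 10:05": 120,
    "2020-04-13 10:05": 95,
}
fmt = "%Y-%m-%d %H:%M"

current_count = 130                   # hypothetical live value
delta_1w = current_count - history[one_week_ago.strftime(fmt)]
delta_2w = current_count - history[two_weeks_ago.strftime(fmt)]
print(delta_1w, delta_2w)
```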
The following URL needs to be called for the Networker backup tool. Is there an option in Splunk Add-on Builder to return the current date and time in the format below (e.g. 2019-01-01T10:15:00) and append it to the URL?

https://networker-server-hostname:9090/nwrestapi/v3/global/backups?q=saveTime:['*2019-01-01T10:00:00*' TO '2019-01-01T10:15:00']

The first timestamp in the URL can be used as a checkpoint (2019-01-01T10:00:00). Any ideas?
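If the timestamps are built in Add-on Builder's Python code, strftime can produce that exact format. A sketch, with the checkpoint handling simplified and the wildcards around the first timestamp omitted; the host name and endpoint are taken from the question:

```python
from datetime import datetime, timedelta

def build_backup_url(end: datetime, window_minutes: int = 15) -> str:
    """Build the Networker REST query URL for a saveTime window."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    start = end - timedelta(minutes=window_minutes)
    return (
        "https://networker-server-hostname:9090/nwrestapi/v3/global/backups"
        f"?q=saveTime:['{start.strftime(fmt)}' TO '{end.strftime(fmt)}']"
    )

url = build_backup_url(datetime(2019, 1, 1, 10, 15))
print(url)
```

In practice end would be datetime.now() and start would come from the saved checkpoint; fixed values are used here so the output is deterministic.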
We are looking to deploy an intermediary forwarding tier consisting of 3 Universal Forwarders going to Splunk Cloud. The 3 UFs will be receiving data from 3 Heavy Forwarders, which will load-balance data across the intermediary forwarding tier. The intermediary tier has to be there due to networking constraints we cannot overcome, which prevent the Heavy Forwarders from forwarding to Splunk Cloud directly. What specs should we be looking at for the UFs of the intermediary forwarding tier, considering a license of 600GB/day? The license would be split across the 3 UFs, but in case of failure, each UF should be spec'd to forward the full load. Would something like 4 CPU cores and 8GB RAM be enough?
Hello everyone,

How can I resize the table width so that I can remove the horizontal scrolling and see all the fields at the same time, without scrolling right or left? I want to display the 'Area CP Name' field output in the format below so that the right/left scrolling issue is fixed.

Area CP Name
1. dcdio.dv. gethdhsdhthrd
2. dcdio.dv. gethdhsdhthrd

My XML code is:

<dashboard theme="dark">
  <label>Clone</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| inputlookup raj100|table ApName "Area Nmae Details" "Area CP Name" CLevel Date "Issue Description" "MD Name" PinID "Recommended Fix" "SC Title Name" Srate Task Title URL</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">4</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
I would like to find out the max indexing delay per index.

| tstats max(_indextime - _time) where index=* by index

throws the error:

Error in 'stats' command: The aggregation specifier 'max(_indextime' is invalid. The aggregation specifier must be in func_name format.
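The error occurs because tstats aggregation functions take a single field, not an arithmetic expression. As a cross-check on what the search should ultimately return, the per-index max-delay aggregation is simple to state outside SPL; a Python sketch with hypothetical (index, _indextime, _time) tuples:

```python
# Hypothetical events: (index, index_time, event_time) epoch values
events = [
    ("main",    1010, 1000),
    ("main",    2300, 2000),
    ("metrics", 1005, 1000),
]

# delay = index_time - event_time; keep the max per index
max_delay = {}
for idx, index_time, event_time in events:
    delay = index_time - event_time
    max_delay[idx] = max(delay, max_delay.get(idx, 0))

print(max_delay)  # {'main': 300, 'metrics': 5}
```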
I'm trying to ingest logs from client computers that are written to the localappdata folder of the user running the software. The logs are not being picked up, and I presume this is because the Splunk forwarder is not running in the user's context.
Hi, in my Splunk Cloud instance I am unable to find TCP options under Data Inputs. Please help me send data from MuleSoft to the Splunk Cloud instance using TCP.
Hello, I have a CSV file generated daily by a script at $SplunkHome\etc\apps\bin\'fuel_stations.csv'. I manually add that CSV file as a lookup table file using Settings > Lookups > Lookup table files > Add new, so I can use it in my Splunk search: | inputlookup fuel_station.csv. Now I want to automate updating the lookup file whenever the CSV file at the above path is updated. How do I get the lookup table to update automatically whenever the CSV file changes in that local path? Splunk v6.6.3. Thanks
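One common pattern, assuming the generating script can be extended: copy the fresh CSV into the app's lookups directory, which is where file-based lookups are read from at search time. A hedged Python sketch; the SPLUNK_HOME path and app name are placeholders, not from the question:

```python
import shutil
from pathlib import Path

def refresh_lookup(src: Path, dst: Path) -> None:
    """Copy the freshly generated CSV over the lookup file."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)

# Hypothetical paths; adjust SPLUNK_HOME and the app name to match
splunk_home = Path(r"C:\Program Files\Splunk")
src = splunk_home / "etc" / "apps" / "myapp" / "bin" / "fuel_stations.csv"
dst = splunk_home / "etc" / "apps" / "myapp" / "lookups" / "fuel_stations.csv"
```

Run at the end of the generating script (or as a scheduled task), this keeps the lookup in step with the CSV without touching the UI again.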
Hi, I am getting the error below while configuring mobile app monitoring.

Could not find com.appdynamics:appdynamics-gradle-plugin:20.3.1.0.
Searched in the following locations:
  - https://jcenter.bintray.com/com/appdynamics/appdynamics-gradle-plugin/20.3.1.0/appdynamics-gradle-plugin-20.3.1.0.pom
  - https://jcenter.bintray.com/com/appdynamics/appdynamics-gradle-plugin/20.3.1.0/appdynamics-gradle-plugin-20.3.1.0.jar

I followed the steps per the installation method --> Android Plugin, and get the error after running the build. Can someone help me understand why I am getting this error?

Regards, Praveen Mareddy
Hi, sorry if I am asking a duplicate question. I am looking for something like this:

1) I have a list of source IPs in a CSV file which I want to exclude from the results.
2) Then filter the results by different fields.

index=abc_splunk sourcetype=access_log uri!="/healthcheck"
| lookup Source_IPs.csv rIP OUTPUT rIP as RealIP
| where isnull(RealIP)
| stats count by uri, http_status

This works, but if I add "stats count by RealIP, uri, http_status" then it doesn't work. Do I need to use "fillnull" as well here? If yes, how can I use it for different fields?

Thanks, DD
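A note on why adding RealIP to the by-clause empties the results: stats drops events whose by-fields are null, and after the where isnull(RealIP) filter every surviving event has a null RealIP. A rough Python analogue with hypothetical rows, contrasting null-dropping grouping with filling the null first (the fillnull approach):

```python
from collections import Counter

# Hypothetical surviving events: RealIP is null after the where clause
rows = [
    {"RealIP": None, "uri": "/a", "http_status": 200},
    {"RealIP": None, "uri": "/b", "http_status": 404},
]

# Grouping that skips rows with a null key, as stats does by default
dropped = Counter(
    (r["RealIP"], r["uri"], r["http_status"])
    for r in rows if r["RealIP"] is not None
)

# Filling the null with a placeholder first keeps every row
filled = Counter(
    (r["RealIP"] or "unknown", r["uri"], r["http_status"]) for r in rows
)

print(len(dropped), len(filled))  # 0 2
```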
Hi all, I have a strange situation where multiple universal forwarders do not forward all configured inputs. We have nearly 40 Domain Controllers with the same deployment apps and configuration. Half of them are Server 2008 R2 and the other half are Server 2019. I use the Splunk_TA_windows version 7.0.0 on my indexer and as the deployment app for my DC forwarders. The 2008 R2 machines run forwarder version 7.2.9.1, and the Server 2019 DCs run 8.0.3. The DCs send their logs directly to my single indexer. The following inputs.conf is in the deployment app for those DCs:

[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
current_only = 0
blacklist1 = EventCode=%^(4658|5145|4661|4634|4656|4769|5140|4776|4768|4689|4688|4933|4932|4672|4648|4625|4770|4931|4662|4674|521|4673)$%
blacklist2 = EventCode="4771" Security_ID!="contoso\*"

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0

Now I'm facing the problem that on 3 of those DCs, all or some of the defined inputs do not work (mostly the Security log does not get collected). I find nothing in splunkd.log, either on the indexer or on the forwarders of my DCs. Right now it is 1 of the 2008 R2 DCs and 2 of the 2019 DCs that have problems. I also had the same problem on a fourth 2019 DC, which I worked on to try to solve it. I moved it to a different deployment group and changed the inputs.conf in that group to remove the blacklists under the Security log, but that did not solve my problem. After this I reinstalled the forwarder several times (still with my dev inputs.conf, which had the blacklists removed), but that did not solve my problem either. That was on Friday, and I was very mad. Because my forwarder was still sending Application and System logs, I decided to put it back into the default deployment group. Now what...
I enjoyed my weekend, and after I started this morning, my fourth DC was working like a charm. I have no idea what's going on, and I still have no idea how to solve my problem with the other 3 DCs. Sometimes a restart helps, sometimes not. Thanks in advance for your suggestions on how to identify or solve this problem. BR, vess
Phantom and Cherwell are integrated. I am planning to create a playbook that fetches the details of incidents assigned to a specific team in Cherwell. This playbook needs to run every 5 minutes to get the details from Cherwell. Any pointers are highly appreciated.