All Topics

Found this in the log: "ERROR:root:(501, b'5.1.7 Bad sender address syntax', 'SplunkHost@sh-i-0c796565e382fd186') while sending mail"
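A hedged sketch of one common fix, assuming the error comes from Splunk's built-in sendemail alert action: the sender address is being built from the local host name (here SplunkHost@sh-i-0c796565e382fd186), which the SMTP server rejects as invalid syntax, so setting an explicit, valid "from" address in alert_actions.conf (or in Settings > Server settings > Email settings) may resolve it. The addresses below are placeholders.

    # $SPLUNK_HOME/etc/system/local/alert_actions.conf
    # sketch only; use a sender and mail server your SMTP relay accepts
    [email]
    from = splunk-alerts@example.com
    mailserver = smtp.example.com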
Hi, I have the following error and I am not sure how to increase the _internal buckets:

Root Cause(s): The percentage of small buckets created (100) over the last hour is very high and exceeded the red thresholds (90) for index=_internal, and possibly more indexes, on this indexer.

Last 50 related messages:
03-10-2020 12:34:23.745 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_internal~4968~DD9E7122-0692-45B5-AA4C-0500D72BC7A9 idx=_internal from=hot_v1_4968 to=db_1547726203_1547726203_4968 size=40960 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
03-10-2020 11:53:10.742 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_internal~4967~DD9E7122-0692-45B5-AA4C-0500D72BC7A9 idx=_internal from=hot_v1_4967 to=db_1582194881_1582194881_4967 size=45056 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
03-10-2020 03:56:16.392 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_internal~4966~DD9E7122-0692-45B5-AA4C-0500D72BC7A9 idx=_internal from=hot_v1_4966 to=db_1582194881_1582194881_4966 size=40960 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
03-10-2020 01:00:25.190 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_internal~4965~DD9E7122-0692-45B5-AA4C-0500D72BC7A9 idx=_internal from=hot_v1_4965 to=db_1547726203_1547726203_4965 size=40960 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
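A hedged starting point for investigating, assuming this is the standard small-bucket health check: dbinspect shows how large the _internal buckets actually are and how often they roll, which usually points at frequent restarts or events with widely spread timestamps rather than at bucket sizing itself.

    | dbinspect index=_internal
    | eval sizeMB = round(sizeOnDiskMB, 2)
    | table bucketId state startEpoch endEpoch eventCount sizeMB
    | sort - startEpoch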
Below are sample entries from Splunk. I have highlighted the entries I want to list. Please suggest a Splunk query.

1) Please suggest a query pattern that lists the "(time=" value and the date. The output should look like:

2020-03-10 06:48:20 (time=451)
2020-03-10 06:48:20 (time=455)
2020-03-10 06:48:20 (time=492)

Sample events:

2020-03-10 06:48:20 [http-nio-7001-exec-7] INFO [5e6770737be8a35b5fef38f7be2a2635] [5fef38f7be2a2635] [] c.l.e.i.a.c.ItemAvailabilityControllerImpl - DeliveryMethod(sosItmNbr=null, fullMtdTyp=3, fullMtdMsg=Delivery, fullCarrier=null, fullCarrierSvc=null, fullTransitMode=null, fullLctNbr=0, restMsg=null, isAvlSts=false, reqStates=[], onhandQty=0, totalQty=0, itmLdTmAvlQty=0, itmLdTm=null, itmConsolidationDate=null, itmLdTmDays=null, itmLdTmDaysLow=null, fullPath=null)])]) (time=451)
2020-03-10 06:48:20 [http-nio-7001-exec-28] INFO [5e677073e64bd99b5997b5bd20c3c4e0] [5997b5bd20c3c4e0] [] c.l.e.i.a.c.ItemAvailabilityControllerImpl - Finished availability process; Response: IAResponse(locationItemData=[ResponseItem(lctNbr=6877, itemNbr=10000070, modID=1500040, omniID=null, vbuNbr=14692, itmTypCode=3, reqQty=17, itemAvailList=[DeliveryMethod(sosItmNbr=null, fullMtdTyp=1, fullMtdMsg=Parcel, fullCarrier=null, fullCarrierSvc=null, fullTransitMode=null, fullLctNbr=0, restMsg=null, isAvlSts=false, reqStates=[], onhandQty=0, totalQty=0, itmLdTmAvlQty=0, itmLdTm=null, itmConsolidationDate=null, (time=455)
2020-03-10 06:48:20 [http-nio-7001-exec-46] INFO [5e6770731c4e323f4cb875712bb0d8ee] [4cb875712bb0d8ee] [] c.l.e.i.a.c.ItemAvailabilityControllerImpl - Finised (time=492)
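A hedged sketch of one way to do this, assuming the date sits at the start of the raw event and the response time is the trailing "(time=NNN)"; the index and sourcetype are placeholders.

    index=your_index sourcetype=your_sourcetype "(time="
    | rex field=_raw "^(?<log_date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
    | rex field=_raw "\(time=(?<resp_time>\d+)\)\s*$"
    | eval output = log_date . " (time=" . resp_time . ")"
    | table output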
Hello, I have a situation where I evaluate the "All Time" logs initially and save the results to a CSV file:

... | outputtext usexml=false | fields - _raw | outputcsv StartupMinMaxAvg.txt ...

The base search takes quite a while, so I would like to run it only once and not touch the StartupMinMaxAvg.txt file afterwards. My idea was to create a report that executes the base search hourly and writes the results to a separate delta file:

... | outputtext usexml=false | fields - _raw | outputcsv StartupMinMaxAvg_Delta_Last_1h.txt ...

What I would need then is to append StartupMinMaxAvg_Delta_Last_1h.txt to StartupMinMaxAvg.txt. What would be the easiest way to do this?

Kind Regards, Kamil
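A hedged sketch of one option, assuming the hourly report can write straight into the master file: outputcsv supports an append flag, which would make the separate delta file unnecessary. File names follow the question; verify the flag's behaviour on your version before relying on it.

    ... | outputtext usexml=false
        | fields - _raw
        | outputcsv append=true StartupMinMaxAvg.txt

Alternatively, a scheduled search that reads both files with inputcsv, appends them, and writes the merged result back would achieve the same thing.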
I'm trying to find a way to track newly added software and any hardware changes using Nessus scan data. I am sure there is something already made for this, but I can't find it anywhere. Any ideas on how to do this?
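A hedged sketch of one approach, assuming the Nessus data is already indexed and carries a host field and a software/plugin name field (the field and index names below are placeholders): flag combinations first seen within the most recent scan window.

    index=nessus sourcetype=your_nessus_sourcetype
    | stats earliest(_time) AS first_seen latest(_time) AS last_seen BY dest, signature
    | where first_seen >= relative_time(now(), "-7d@d")
    | convert ctime(first_seen) ctime(last_seen)
    | table dest signature first_seen last_seen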
Hello, I am planning to move Splunk indexers to a different location. These indexers are also part of a multi-site indexer cluster. What would be the best way to proceed? Should I stop replication between the sites and then put the indexers to be moved into maintenance mode? Thank you.
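A hedged sketch of the commands typically involved, assuming one peer is moved at a time and the cluster master stays reachable; confirm the procedure against the official indexer cluster maintenance documentation for your version before running anything.

    # On the cluster master: pause bucket fix-ups while peers go down
    splunk enable maintenance-mode

    # On each indexer being moved: take it offline gracefully
    splunk offline

    # ... relocate the peer and bring it back up, then on the master:
    splunk disable maintenance-mode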
Hi, I want to add custom buttons to a dashboard for the following data. Can someone please help me with how to proceed? I am new to this.

EXAMPLE DATA:
macro
micro
alpa

When I click on the macro button it should show me the following result:

RESULT macro:
atom
electron

Similarly, when I click on the micro button it should give me only the micro result and not macro:

RESULT micro:
neutron
proton
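A hedged Simple XML sketch of one way to get button-like behaviour, assuming a link input whose selection is passed as a token into the panel search; the index and field names are placeholders and would need to match how the data is actually stored.

    <form>
      <label>Category dashboard</label>
      <fieldset>
        <input type="link" token="category">
          <label>Category</label>
          <choice value="macro">macro</choice>
          <choice value="micro">micro</choice>
          <default>macro</default>
        </input>
      </fieldset>
      <row>
        <panel>
          <table>
            <search>
              <query>index=your_index category="$category$" | table result</query>
            </search>
          </table>
        </panel>
      </row>
    </form>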
Hi all, I'm working on deploying index clustering in Kubernetes using the docker-splunk image and ran into the following issue: by default, when you configure indexer clustering, repFactor on the indexes is set to 0, which prevents replication between indexers. Will this be fixed in the image soon, or is it better to fix it on my own? I'm interested in how people who ran into this resolved it.
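A hedged sketch of the usual per-index fix, assuming you control the indexes.conf pushed from the cluster master (the index name is a placeholder): a clustered index only replicates when its repFactor is set to auto.

    # on the cluster master, e.g. master-apps/_cluster/local/indexes.conf, pushed to all peers
    [your_index]
    homePath   = $SPLUNK_DB/your_index/db
    coldPath   = $SPLUNK_DB/your_index/colddb
    thawedPath = $SPLUNK_DB/your_index/thaweddb
    repFactor  = auto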
Dear community,

I am trying to onboard the logs from my Cisco FMC (v6.4.0.7) to Splunk (7.3.3) using the app Cisco Firepower eStreamer eNcore (3.6.8). The connectivity is OK and I am able to collect some logs for a few minutes, then the eStreamer process stops/fails. After 15-30 minutes the process is able to collect some data events from the IDS again... and then fails again. I don't really know where or what to troubleshoot. Maybe it is the default setting "maxQueueSize": 100; this could be increased, as we have a lot of events. Thank you so much.

Message output for index=estreamer sourcetype="cisco:estreamer:log":

Starting process.
Starting process.
Starting process.
Starting Monitor.
Using TLS v1.2
Connecting to x.x.x.x:8302
Connection successful
Streaming info response
Response message=xxxxx
Receiving response message
Sending request message
Request message=0001000200000008ffffffff48900061
Creating request message
Using TLS v1.2
Connecting to xxxxx:8302
Creating connection
Check certificate
Settings: xxxxxxxx=
Processes: 4
Sha256: 3xxxxx
Platform version: Linux-3.10.0-1062.el7.x86_64-x86_64-with-redhat-7.7-Maipo
2020-03-10 11:14:28,556 Controller INFO Starting client (pid=25963). eNcore version: 3.6.8
Goodbye
Stopping Monitor.
Process 20330 (Process-4) exit code: 0
Exiting
Error state. Clearing queue
Stop message received
Process 20329 (Process-3) exit code: 0
Exiting
Error state. Clearing queue
Stop message received
Process 20328 (Process-2) exit code: 0
Exiting
Error state. Clearing queue
Stop message received
Process 20327 (Process-1) exit code: 1
Stopping...
Running. 0 handled; average rate 0 ev/sec;
Process subscriberParser is dead.
Starting. 0 handled; average rate 0 ev/sec;
Starting process.
Starting process.
Starting process.
Starting Monitor.
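A hedged first step for narrowing this down, assuming the sourcetype from the question: pull only the non-INFO lines around the time the workers exit, since the "Process 20327 (Process-1) exit code: 1" suggests one worker is dying and taking the others down; raising "maxQueueSize" in the app's eStreamer configuration, as mentioned above, is worth testing afterwards.

    index=estreamer sourcetype="cisco:estreamer:log" ("exit code: 1" OR ERROR OR WARNING OR Traceback)
    | sort - _time
    | table _time _raw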
Hi, I added these counters last week, but the Output Queue Length counter is still not showing up in my Splunk searches.

[perfmon://Network_Interface]
counters = Bytes Received/sec;Bytes Sent/sec; Output Queue Length; Bytes Total/sec;
instances = *
interval = 900
object = Network Interface
disabled = 0

Why is it not reflecting? I have reloaded the server class twice now.
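A hedged pair of checks, assuming the default sourcetype naming of Perfmon:<stanza name>: first confirm whether the counter is arriving at all, and if not, try the stanza without spaces after the semicolons and without the trailing semicolon, since counter names have to match the Performance Monitor names exactly.

    index=* sourcetype="Perfmon:Network_Interface" counter="Output Queue Length"
    | stats count BY host, instance

    [perfmon://Network_Interface]
    counters = Bytes Received/sec;Bytes Sent/sec;Output Queue Length;Bytes Total/sec
    instances = *
    interval = 900
    object = Network Interface
    disabled = 0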
Hi, I have a question regarding QlikSense. Is it possible to get the memory usage of the QlikSense Engine and Repository Services through inputs.conf? Or is there another way to get the QlikSense logs onboarded to Splunk Cloud?
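A hedged sketch of one way to get memory figures without touching QlikSense itself, assuming a Universal Forwarder on the Windows host; the instance names below are assumptions, so check the actual process instance names in Performance Monitor first.

    [perfmon://QlikSense_Memory]
    object = Process
    counters = Working Set;Private Bytes
    instances = Engine;Repository
    interval = 60
    disabled = 0

For the QlikSense log files themselves, a standard [monitor://...] stanza pointed at the QlikSense log directory (typically under C:\ProgramData\Qlik\Sense\Log, but check your installation) on a forwarder configured for Splunk Cloud should work.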
Hello! I have enabled Windows auditing on a Windows machine and mounted the directory where all logs are written on an Ubuntu machine where Splunk is installed. I am then monitoring the mounted audit file from the Splunk instance. The monitored file is in XML format, the events are single-line, and the last line in the XML file is always </Events>. Every new event is written before the last line, i.e. on the second-to-last line. The problem is that every time new events are written to the monitored XML file, Splunk re-indexes the entire file. When I search for "index=_internal sourcetype=splunkd component=watchedfile" I get the result "INFO WatchedFile - Checksum for seekptr didn't match, will re-read the entire file='/mnt/netapp_audit/audit/audit_splunk_audit_last.xml'". Other than that, the events are parsed correctly in Splunk. Why is the entire file re-indexed every time logs are written to the monitored XML file? Is it possible to get Splunk to only read events up to the second-to-last line?
While developing a Splunk dashboard, I need to display the result to 3 decimal places only if it has non-zero digits after the decimal point. For example, if I have 2 queries, case 1 has the result 99.789 and case 2 has the result 99.000: case 1 should display 99.789, but case 2 is displaying 99.000 instead of 99. I have tried a condition like the one below.

Global declaration for the input:

<set token="precisionVal">0.000</set>

Condition inside the search:

<set token="overall_pctg">$result.total_pctg$</set>
</condition>
<condition match="'$overall_pctg$' == 100">
  <set token="precisionVal">0</set>
</condition>

Then I apply the token in the input field as $precisionVal$, but I am still getting the value formatted as 0.[0-9][0-9][0-9].
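A hedged alternative that sidesteps the token logic, assuming the value arrives in a field called total_pctg (rename as needed): format the number inside the search itself, dropping the decimals only when they are all zero.

    ... | eval total_pctg = if(total_pctg == floor(total_pctg),
                               tostring(floor(total_pctg)),
                               tostring(round(total_pctg, 3)))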
How can I use cidrmatch or case with 2 conditions? Example: I only want to get the list of IPs where row_A is in 11.0.0.0/24 and row_B is in 8.8.8.0/24.

Current Table:

row_A      row_B
10.0.0.1   11.0.0.1
10.0.0.2   12.0.0.1
11.0.0.1   8.8.8.8
11.0.0.2   8.8.8.9
12.0.0.1   8.8.8.8
12.0.0.2   8.8.8.9

Target Result:

row_A      row_B
11.0.0.1   8.8.8.8
11.0.0.2   8.8.8.9

Thanks!
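A hedged sketch, assuming row_A and row_B are already extracted as fields in the search results:

    ... | where cidrmatch("11.0.0.0/24", row_A) AND cidrmatch("8.8.8.0/24", row_B)
        | table row_A row_B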
Hi.. Is there a way to send to multiple Slack webhook URLs and channels simultaneously with a single alert action? For example, like below:

https://hooks.slack.com/services/xxxxxxxxxxxxxx   #a-team
https://hooks.slack.com/services/yyyyyyyyyyyyyyy  #b-team
https://hooks.slack.com/services/zzzzzzzzzzzzzz   #c-team

Thank you.
Can you please let us know how long the retention period for user logon events is in Splunk?
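For context, a hedged way to check this yourself, assuming retention is governed by the frozenTimePeriodInSecs setting of whichever index holds the logon events (roughly 6 years by default on a new index):

    | rest /services/data/indexes
    | table title frozenTimePeriodInSecs maxTotalDataSizeMB
    | eval retention_days = round(frozenTimePeriodInSecs / 86400, 1)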
It seems the GitHub content for TA-microsoft-sysmon is available? Does anyone know anything about it? https://splunkbase.splunk.com/app/1914/#/details https://github.com/splunk/TA-microsoft-sysmon
Hi, below is the JSON snippet:

properties: {
    columns: [
        { name: PreTaxCost, type: Number }
        { name: UsageDate, type: Number }
        { name: Currency, type: String }
    ]
    nextLink: null
    rows: [
        [ 37.399436789282746, 20200301, USD ]
        [ 37.4605201027181, 20200302, USD ]

How can I extract the fields PreTaxCost and UsageDate?
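A hedged sketch of one extraction approach, assuming each event contains the whole JSON document and every inner row is ordered cost, date, currency as in the snippet; how the inner arrays come back from spath can vary, so the rex is written against the raw array text.

    ... | spath path=properties.rows{} output=row
        | mvexpand row
        | rex field=row "^\[?\s*(?<PreTaxCost>[\d.]+)\s*,\s*(?<UsageDate>\d{8})\s*,\s*\"?(?<Currency>[A-Z]+)"
        | table PreTaxCost UsageDate Currency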
Could you please let us know if analytics-agent version 4.5.13.951 is compatible with JDK 1.7? We are getting this error with JDK 1.7: "Unsupported major.minor version 52.0". If not, please let us know which analytics agent versions are compatible with JDK 1.7.
So I'm trying to do something that may or may not be possible. I want to first create a lookup table that maps IP addresses to host names. I then want to use metadata or tstats to pull a list of systems that haven't logged within a certain timeframe, and then convert those IP addresses to the corresponding hostnames from the lookup table. This will prove useful for personnel who need to look at a hostname and immediately know what host it is, without needing to know the IP address of each host on the network. I believe I have the right metadata and tstats commands, but I am not sure how to then run those results against the lookup table for the IP-address-to-hostname field conversion. This is ultimately going to be dumped into a table as a dashboard widget, and I'm not even sure if I can do all those things.
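A hedged sketch of the lookup step, assuming a CSV lookup (hypothetical name ip_to_hostname.csv with columns ip and hostname, uploaded as a lookup file) and that the host field in the metadata output holds the IP address; the 24-hour cutoff is illustrative.

    | metadata type=hosts index=*
    | where lastTime < relative_time(now(), "-24h")
    | lookup ip_to_hostname.csv ip AS host OUTPUT hostname
    | eval last_seen = strftime(lastTime, "%Y-%m-%d %H:%M:%S")
    | table host hostname last_seen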