All Topics


I'm using the Alert Manager alert action together with the email alert action in Splunk. Sometimes only the email action delivers the notification. When I check the _internal index, I find error logs like the following:

8/6/21 8:10:02.402 AM | 08-06-2021 08:10:02.402 +0800 ERROR sendmodalert - action=alert_manager STDERR - UnicodeEncodeError: 'latin-1' codec can't encode characters in position 171-177: Body ('文件完整性告警') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.
host = bj-vm-sec-searchhead-splunk-188 | index = _internal | sourcetype = splunkd | splunk_server = bj-vm-sec-searchhead-splunk-188

8/6/21 8:10:02.319 AM | 2021-08-06 08:10:02,319 INFO pid="86180" logger="alert_manager_suppression_helper" message="Checking for matching suppression rules for alert=/etc/passwd文件完整性告警" (SuppressionHelper.py:66)
host = bj-vm-sec-searchhead-splunk-188 | index = _internal | message = Checking for matching suppression rules for alert=/etc/passwd文件完整性告警 | sourcetype = alert_manager_suppression_helper-too_small | splunk_server = bj-vm-sec-searchhead-splunk-188

8/6/21 8:10:02.248 AM | 2021-08-06 08:10:02,248 INFO pid="86180" logger="alert_manager" message="Found job for alert '/etc/passwd文件完整性告警' with title 'HIDS passwd file monitorning'. Context is 'HIDS_all' with 1 results." (alert_manager.py:566)
host = bj-vm-sec-searchhead-splunk-188 | index = _internal | message = Found job for alert '/etc/passwd文件完整性告警' with title 'HIDS passwd file monitorning'. Context is 'HIDS_all' with 1 results. | sourcetype = alert_manager-too_small | splunk_server = bj-vm-sec-searchhead-splunk-188

8/6/21 8:10:01.733 AM | 08-06-2021 08:10:01.733 +0800 INFO sendmodalert - Invoking modular alert action=alert_manager for search="/etc/passwd文件完整性告警" sid="scheduler__splunk_SElEU19hbGw__RMD5bbb47a07bc26a359_at_1628208600_360" in app="HIDS_all" owner="splunk" type="saved"

So it seems that Alert Manager does not support Chinese characters.

What is the purpose of the alerts index on all the indexers? I installed the Alert Manager app on the search head and run the alert searches there. I can see alert log data in the alerts index on the search head, but there is no data in that index on the indexers, so I want to know what the alerts index on the indexers is used for. If I don't have an alerts index on the indexers, will that affect how Alert Manager runs?

I currently have Splunk running a Python script every minute with the following output: {"DEMO": 2700, "TEST": 0, "TEST-3": 5}. How can I visualize this data on the Visualization tab? The pie charts and similar visualizations seem to support only a single field, whereas I would like every field returned by the script to be added automatically, preferably as a pie chart or a graph that I can sort by value.
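One way to get there, as a minimal sketch: transpose the key/value pairs of the latest event into rows so the chart gets one category field and one numeric field. The index, the sourcetype, and the list of metadata fields stripped out below are assumptions about the environment:

index=main sourcetype=my_script_output
| head 1
| spath
| fields - _* punct linecount splunk_server host source sourcetype index
| transpose column_name=metric
| rename "row 1" as value
| sort -num(value)

With the result in metric/value form, a pie chart of value split by metric picks up every key the script emits without listing them explicitly; the fields command only strips Splunk's default metadata fields so that just the script's keys remain.
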
I have data in JSON format, and the following method does not give me the results I want when metricValue is itself a nested dictionary; I am not sure how to change the extraction.

index="huawei_fc" sourcetype="BW_HWFC:metric:host"
| rename value{}.* as *
| eval t = mvzip(metricId,metricValue)
| mvexpand t
| eval mId=mvindex(split(t,","),0), mValue=mvindex(split(t,","),1)
| stats values(mValue) as mValue by _time, urn, mId

With the SPL above I was unable to extract the data inside metricValue.
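Since the exact structure of the nested metricValue isn't shown, the following is only a sketch of one common pattern: zip with a delimiter that cannot occur in the data, then re-parse the nested dictionary with spath. The "#" delimiter and the assumption that metricValue holds valid JSON are both assumptions:

index="huawei_fc" sourcetype="BW_HWFC:metric:host"
| rename value{}.* as *
| eval t = mvzip(metricId, metricValue, "#")
| mvexpand t
| eval mId=mvindex(split(t,"#"),0), mValueRaw=mvindex(split(t,"#"),1)
| spath input=mValueRaw

The default comma delimiter of mvzip collides with the commas inside a JSON dictionary, which is why the zipped pairs are split on a different character before spath extracts the nested fields; the extracted fields can then be aggregated by _time, urn, and mId as in the original query.
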
Good day. As mentioned, I want to flatten a series of multivalue fields into single-row entries, where the type becomes "String" and not "Multivalue". To be clearer, here's my base search:

| makeresults
| eval a="this,is"
| eval b="an,example"
| eval c="group1,group2"
| makemv delim="," a
| makemv delim="," b
| makemv delim="," c
| stats values(a) as a, values(b) as b by c
| eval type_a=typeof(a)
| eval type_b=typeof(b)

The result of this leaves a and b as multivalue fields. What I want is a result like this:

c       a     b        type_a  type_b
group1  is    an       String  String
group1  this  example  String  String
group2  is    an       String  String
group3  this  example  String  String

When I add this to the base search:

| mvexpand a
| mvexpand b
| eval type_c=typeof(a)
| eval type_d=typeof(b)

the output does turn the entries into "String", but it creates unnecessary combinations (compared with my expected output), given that "a" and "b" are multivalue fields. I am not sure if I am stating this correctly, but what I want is to expand/remove the "grouping" nature while still outputting/displaying each entry as a single line/row, like in a CSV file. One option would be to output the results to a CSV or JSON file and do the processing outside Splunk, but doing everything inside Splunk is part of my requirement. Thanks a lot in advance, and as always, any ideas are greatly appreciated.
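A minimal sketch of one way to get the pairwise rows instead of the full cross-product, assuming the values of a and b are meant to pair up by position: zip the two fields into one before expanding.

| makeresults
| eval a="this,is", b="an,example", c="group1,group2"
| makemv delim="," a
| makemv delim="," b
| makemv delim="," c
| stats values(a) as a, values(b) as b by c
| eval pair=mvzip(a,b)
| mvexpand pair
| eval a=mvindex(split(pair,","),0), b=mvindex(split(pair,","),1)
| fields - pair
| eval type_a=typeof(a), type_b=typeof(b)

Because mvexpand runs on the single zipped field, each output row keeps one value of a together with its positionally matching value of b, and both come out with typeof() = "String".
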
I am currently using a Python API call to retrieve data from Splunk. When the time-range argument I pass covers 30 days or more, I get back only about one day of data, which is far less than what the Splunk console shows for the same search. Can someone help?

Hello Splunk Community, I'm working on an SPL query that gives the _time difference for a list of eventTypes according to a specific algorithm. Currently I'm using the query below:

index=apple source=datapipe AccountNumber=* eventType=newyork OR eventType=california OR eventType=boston OR eventType=houston OR eventType=dallas OR eventType=austin OR eventType=Irvine OR eventType=Washington OR eventType=Atlanta OR eventType="San Antonio" OR eventType=Brazil OR eventType=Mumbai OR eventType=Delhi
| fieldformat _time=strftime(_time,"%m/%d/%Y %I:%M:%S %p")
| sort AccountNumber, _time
| streamstats range(_time) as diff window=2
| eval DifferenceInTimeByEventtime=strftime(diff,"%M:%S")
| table AccountNumber eventType _time DifferenceInTimeByEventtime

The query works, but I need the time difference computed according to the algorithm below, NOT only against the previous event. The algorithm is as follows:

A  eventType=newyork
B  eventType=california    B-A
C  eventType=boston        C-B
D  eventType=houston       D-C
E  eventType=dallas        E-D
F  eventType=dallas        F-D
G  eventType=Irvine        G-E
H  eventType=Irvine        H-F
I  eventType=Atlanta       I-H
J  eventType=San Antonio   J-I
K  eventType=San Antonio   K-I
L  eventType=Mumbai        L-I
M  eventType=Delhi         M-I

I'm looking for the _time difference according to the algorithm above, plus Avg, Max, and Min columns added to the search. I would also appreciate any query optimization. Thanks in advance.
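For the Avg/Max/Min part of the ask, a minimal sketch that could be appended after the streamstats line of the existing query (grouping by AccountNumber is an assumption):

| eventstats avg(diff) as Avg, max(diff) as Max, min(diff) as Min by AccountNumber
| fieldformat Avg=strftime(Avg,"%M:%S")
| fieldformat Max=strftime(Max,"%M:%S")
| fieldformat Min=strftime(Min,"%M:%S")
| table AccountNumber eventType _time DifferenceInTimeByEventtime Avg Max Min

eventstats keeps the detail rows while adding the aggregate columns, so each event still shows its own difference alongside the per-account average, maximum, and minimum. The strftime-based formatting mirrors what the original query already does for diff and only displays cleanly for differences under one hour.
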
I am trying to get an alert when an exception error happens, but there are many hosts and services. In Splunk the services and hosts aren't organized, so I manually added the service names and hosts to a CSV file. Is there a way, or a similar condition, to get log events saying that a given service is getting an error on a given host, together with the message?
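A minimal sketch of the lookup-based pattern this usually takes; the index, the search terms, the lookup file name, and its column names (host, service_name) are all assumptions about the environment:

index=app_logs ("Exception" OR "ERROR")
| lookup service_hosts.csv host OUTPUT service_name
| where isnotnull(service_name)
| stats count as error_count, latest(_raw) as latest_message by service_name, host

Saved as an alert that triggers when results exist, this produces one row per service/host pair from the CSV, with the error count and the most recent message.
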
Hi guys, I have created a simple query with the stats command and I'm able to see the required results. If the same search is run by another user, he is not able to see any results, but if that user removes the commands from the search query, he is able to see the events. I checked that user's permissions and he has the same roles that I have, so I believe it's not a permission issue.

How would I write the props.conf configuration for the following events? Any help will be highly appreciated, thank you!

Thu, 01 Jul 2021 00:20:04 -0400|system|flush_vulns|INFO|-1|Removing old data in Repository
Thu, 01 Jul 2021 00:20:04 -0400|system|flush_vulns|INFO|-1|Successful removal of old data in Repository
Thu, 01 Jul 2021 00:20:05 -0400|system|flush_vulns|INFO|-1|Removing old data in Repository
Thu, 01 Jul 2021 00:20:05 -0400|system|flush_vulns|INFO|-1|Successful removal of old data in Repository
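A minimal props.conf sketch for single-line, pipe-delimited events with this timestamp format; the sourcetype name is an assumption, and the TIME_FORMAT matches "Thu, 01 Jul 2021 00:20:04 -0400":

[my_pipe_delimited_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %a, %d %b %Y %H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 32

Search-time extraction of the pipe-separated columns can be added separately, for example with a DELIMS-based transform or a rex, depending on which fields are needed.
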
Hello all, I am trying to clean up our indexes and their sizes to ensure that we are keeping the correct amount of data for each index. I have about 5 to 10 really busy indexes that bring in most of the data:

pan_logs ~200GB/day
syslog ~10GB/day
checkpoint (coming soon) ~250GB/day
wineventlog ~650GB/day
network ~180GB/day

So the question is: if I create an index configuration, for example for wineventlog:

[wineventlog]
homePath = volume:hot/wineventlog/db
homePath.maxDataSizeMB = 19500000
coldPath = volume:cold/wineventlog/colddb
coldPath.maxDataSizeMB = 58500000
thawedPath = /splunk/cold/wineventlog/thaweddb
maxHotBuckets = 10
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 78000000
disabled = 0
repFactor = auto

then 30 days of hot/warm would be 19.5TB, 90 days of cold data would be 58.5TB, and the total size would be 78TB. Those sizes would then be divided by the total number of indexers we have (20), so each indexer should host about 975GB of hot/warm and 2.925TB of cold data, and Splunk would start to roll data to frozen (/dev/null) when the total (hot/warm + cold) data reached 78TB. Is that correct? Do I need to specify maxTotalDataSizeMB if I am using the homePath and coldPath size settings?

Thanks,
Ed

Good afternoon. We are a SaaS client, and we're starting to get requests and requirements for historical AppDynamics information for projects such as telemetry data analysis, which call for up to 6 months of historical data from our various APM systems. We've never really had a requirement to persist our data anywhere until now, and since we have an Elastic Stack in-house it makes sense to export this information into an Elasticsearch index. Everything I've read points to on-prem installations, where you can put scripts/utilities on the servers that capture the information and then export it to Elasticsearch through a plug-in such as Logstash or an HTTP endpoint plugin. Since we are SaaS and don't have any physical machines we can use to house scripts or utilities, is there a similar or corresponding approach we can use? Thanks, Bill

I want to know the execution time of the scheduled alerts in the splunk_instrumentation app that are scheduled at 3 am. Nothing shows up when I open the "View Recent" option, and nothing shows in the Job Manager either. These are the alerts:

instrumentation.usage.smartStore.config
instrumentation.usage.workloadManagement.report
instrumentation.usage.authMethod.config
instrumentation.usage.healthMonitor.report
instrumentation.usage.passwordPolicy.config
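A minimal sketch of one way to see when those scheduled searches actually ran, using the scheduler's own logs in _internal; the field names used here (savedsearch_name, app, status, run_time, result_count) are the ones the scheduler normally logs:

index=_internal sourcetype=scheduler savedsearch_name="instrumentation.usage.*"
| table _time savedsearch_name app user status run_time result_count
| sort - _time

If nothing comes back, dropping the savedsearch_name filter and searching source=*scheduler.log* helps confirm whether those searches are being dispatched at all or are being skipped.
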
Does anyone have a sample stanza for inputs.conf for capturing Windows perfmon stats such as CPU utilization, memory utilization and disk utilization?  I was hoping the stanza would include the actual counters and such.  Just looking for the basics.  I could not find any good baseline samples. Thank you very much!
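A minimal inputs.conf sketch along the lines of what the Splunk Add-on for Windows ships for CPU, memory, and disk; the index name is an assumption, and the counter lists can be trimmed or extended:

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
index = windows_perfmon

[perfmon://Memory]
object = Memory
counters = Available MBytes; % Committed Bytes In Use
interval = 60
index = windows_perfmon

[perfmon://LogicalDisk]
object = LogicalDisk
counters = % Free Space; Free Megabytes; % Disk Read Time; % Disk Write Time
instances = *
interval = 60
index = windows_perfmon

Multiple counters are separated by semicolons, instances = * collects every instance of the object, and interval is in seconds.
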
I have two different groups of hosts: hostA-1, hostA-2, hostA-3, hostA-4, hostA-5 and hostB-5, hostB-6, hostB-7, hostB-8. I want to compare a specific value from the logs, a Token, which is unique, and find out whether the same value appears on both a hostA machine and a hostB machine, then build a table that shows the host names from group A and group B along with the matching token.
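A minimal sketch of the usual pattern for this, assuming the token is already extracted into a field called Token and that the index name below is a placeholder:

index=main (host=hostA-* OR host=hostB-*) Token=*
| eval host_group=if(like(host,"hostA-%"), "A", "B")
| stats values(host) as hosts, dc(host_group) as groups_seen by Token
| where groups_seen=2
| table Token hosts

dc(host_group)=2 keeps only the tokens that were seen on at least one host from each group, and the values(host) column lists the A and B hosts that share each token.
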
Hi, I need to create a dashboard panel merging two different search queries. I have the two queries below:

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" source="/logs/confluent/kafkaLogs/server.log" "Broker may not be available"
| rex field=_raw "(?ms)]\s(?P<Code>\w+)\s\["
| search Code="WARN"
| stats count
| eval mwgcb-ckbla02U.nam.nsroot.net=if(count=0, "Running", "Down")
| table mwgcb-ckbla02U.nam.nsroot.net

This gives me the status of the broker based on the presence of the indicator "Broker may not be available".

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" source="/logs/confluent/zookeeperLogs/*" "java.net.SocketException: Broken pipe" OR "ZK Down"
| rex field=_raw "(?ms)\]\s(?P<Code>\w+)\s"
| search Code="WARN"
| rex field=_raw "(?ms)\/(?P<IP_Address>(\d+\.){3}\d+)\:\d+"
| stats count
| eval mwgcb-ckbla02U.nam.nsroot.net=if(count=0, "Running", "Down")
| table mwgcb-ckbla02U.nam.nsroot.net

This gives me the status of ZooKeeper based on the presence of the indicators "java.net.SocketException: Broken pipe" OR "ZK Down".

Now I want to merge both search queries so that I get the status of both the broker and ZooKeeper in a tabular format, e.g. for the host mwgcb-ckbla02U.nam.nsroot.net:

Broker        Down
Zookeeper     Running

I tried creating a query as below:

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" source="/logs/confluent/kafkaLogs/server.log" OR source="/logs/confluent/zookeeperLogs/zookeeper.log" "Broker may not be available" OR "java.net.SocketException: Broken pipe" OR "ZK Down"
| stats count by source
| lookup component_lookup.csv "source"
| eval Status=if(count=0, "Running", "Down")
| table Component,Status

However, in any time range where the indicators are not present it returns "No results found", so I am not able to build the dashboard. Please help me get the output in the desired form. Thanks..!!
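One way around the "No results found" case, as a minimal sketch: seed a zero-count row for each component with append, so both rows exist even when no indicator events match in the chosen time range. The index, sources, and component names follow the queries above and may need adjusting:

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" ((source="/logs/confluent/kafkaLogs/server.log" "Broker may not be available") OR (source="/logs/confluent/zookeeperLogs/*" ("java.net.SocketException: Broken pipe" OR "ZK Down")))
| eval Component=if(like(source,"%kafkaLogs%"), "Broker", "Zookeeper")
| stats count by Component
| append [| makeresults count=2 | streamstats count as n | eval Component=if(n=1,"Broker","Zookeeper"), count=0 | fields Component count]
| stats sum(count) as count by Component
| eval Status=if(count=0, "Running", "Down")
| table Component Status

The appended makeresults rows guarantee one row per component with count=0, and the final stats sum collapses them with any real matches so the Status logic still works.
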
Hi, I have a dashboard that a user needs to fetch as a report (.pdf) every day. The trigger will be a background task on a Windows server calling an API, and the report should be generated via REST API or HEC token calls. I would like to know whether an HEC token or the REST API can be used as a solution for this requirement. Does it require any particular API endpoints? What would be the easiest way to accomplish this?

I receive this error when trying to save the settings. I am running the MITRE ATT&CK app on RHEL on AWS. Where do I get a new API key, please? Thank you.

Hello, I'm trying to connect SCOM with the "Splunk Addon for Microsoft SCOM" (version 4.0.0, on a Splunk Enterprise 7.3 heavy forwarder on Windows). The connection itself is working fine and I'm able to retrieve alerts from SCOM, e.g. via group=alert, which maps to the following PowerShell commands in "scom_command_loader.ps1":

"alert" = @('Get-SCOMAlert', 'Get-SCOMAlert | Get-SCOMAlertHistory');

The input looks like this:

& "$SplunkHome\etc\apps\Splunk_TA_microsoft-scom\bin\scom_command_loader.ps1" -groups "alert" -server "SCOM_DEV" -loglevel DEBUG -starttime "2021-08-01T00:00:00+02:00"

Now I don't want all the alerts produced in SCOM; instead I want to narrow it down to only the events with the name "*Windows Defender*". For this I created a new PowerShell v3 modular input as a copy of the existing one, but using the commands section of the script instead of a group - see the add-on documentation, section "Configure inputs through the PowerShell scripted input UI". The example there is:

& "$SplunkHome\etc\apps\Splunk_TA_microsoft-scom\bin\scom_command_loader.ps1" -commands Get-SCOMAlert, Get-SCOMEvent

So I tried to use this. The PowerShell command works on the shell when I connect directly to this SCOM system:

& "$SplunkHome\etc\apps\Splunk_TA_microsoft-scom\bin\scom_command_loader.ps1" -commands 'Get-SCOMAlert -Name "*Windows Defender*"' -server "SCOM_DEV" -loglevel DEBUG -starttime "2021-08-01T00:00:00+02:00"

The input is working fine and delivering the Windows Defender events to Splunk. BUT the problem now is that it does not create a checkpoint under the path "D:\Splunk\var\lib\splunk\modinputs\scom" like it does when a PowerShell command without a parameter (-Name "*Windows Defender*") is used. This can be seen in the DEBUG log files of the add-on (index=_internal source=*ta_scom.log):

2021-08-05 16:37:11 +02:00 [ log_level=WARN pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] End SCOM TA
host = ws006914.schaeffler.com | source = D:\Splunk\var\log\splunk\ta_scom.log | sourcetype = ms:scom:log:script
2021-08-05 16:37:11 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Get 13 objects by 'Get-SCOMAlert -Name "*Windows Defender*"'
2021-08-05 16:37:09 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> serialize(Get-SCOMAlert -Name "*Windows Defender*")
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Get object 'Get-SCOMAlert -Name "*Windows Defender*"' without checkpoint
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> executeCmd SCOM_DEV Get-SCOMAlert -Name "*Windows Defender*"
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Command list: Get-SCOMAlert -Name "*Windows Defender*"
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> getCommands (groups=, commands=[Get-SCOMAlert -Name "*Windows Defender*"])
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] splunk version 7.3.4
2021-08-05 16:37:02 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] New SCOMManagementGroupConnection success
2021-08-05 16:36:55 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> run (groups=, commands=[Get-SCOMAlert -Name "*Windows Defender*"], loglevel=DEBUG)
2021-08-05 16:36:55 +02:00 [ log_level=WARN pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Start SCOM TA

You can see it is calling the command correctly, but "without checkpoint". When using a default input, it looks like this:

GET checkpoint:
[ log_level=DEBUG pid=10384 input=_Splunk_TA_microsoft_scom_internal_used_Events_test ] Got checkpoint '07/26/2021 10:54:39.220' from file 'D:\Splunk\var\lib\splunk\modinputs\scom\###U0NPTV9ERVY=###Get-SCOMAlert' successfully.

SET checkpoint:
2021-07-26 14:00:28 +02:00 [ log_level=DEBUG pid=10384 input=_Splunk_TA_microsoft_scom_internal_used_Events_test ] Set checkpoint '07/26/2021 11:54:14.790' to file 'D:\Splunk\var\lib\splunk\modinputs\scom\###U0NPTV9ERVY=###Get-SCOMAlert' successfully.

So the problem will be duplicate data if I run this regularly. Does anybody have an idea how to fix this? I feel I have tried everything possible (different variations with " or ' in different positions). It also does not work without wildcards in the Name field. I guess it somehow cannot create the checkpoint file. I also tried modifying the "scom_command_loader.ps1" script with a new group containing my query, but that also does not create the checkpoint file. Thanks in advance, Michael

We are running Splunk Stream 7.3. In _internal, sourcetype=stream:log, we see the following warning messages: "NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 256 received for observation domain id xyz from device 172.x.y.z. Dropping flow data set of size xxxx..." NetFlow exporters are configured to send out their templates every so many seconds, so eventually the exporter sends the template and the warning messages stop. My question is whether that data is actually dropped or whether it is cached until the template is received. Am I losing that data? Similar applications that collect NetFlow (Cisco Stealthwatch, Wireshark) cache the data until they receive the template. This has implications when load balancing several hundred exporters across an array of Independent Stream Forwarders, in order to determine whether session persistence is necessary.