All Topics


Good afternoon, we are a SaaS client, and we are starting to get requests for historical AppDynamics data for projects such as telemetry data analysis; they are asking for up to six months of historical data from our various APM systems. We have never had a requirement to persist our data anywhere until now, and since we have an Elastic Stack in-house it makes sense to export this information into an Elasticsearch index. Everything I have read points to on-prem installations, where you can put scripts or utilities on the servers that capture the information and export it to Elasticsearch through a plug-in such as Logstash or an HTTP endpoint plugin. Since we are SaaS and do not have any physical machines on which to house scripts or utilities, is there a similar or corresponding approach that we can use? Thanks, Bill
I want to know the execution time of the scheduled alerts in the splunk_instrumentation app that are scheduled at 3 a.m. Nothing shows up when I open the "View Recent" option, and nothing appears in the Job Manager either. These are the alerts:
instrumentation.usage.smartStore.config
instrumentation.usage.workloadManagement.report
instrumentation.usage.authMethod.config
instrumentation.usage.healthMonitor.report
instrumentation.usage.passwordPolicy.config
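A minimal sketch of one way to check when these searches actually ran, assuming you can search the _internal index on the search head; the scheduler log fields used here (savedsearch_name, app, status, run_time, scheduled_time) are the ones normally present, but verify them in your environment:

index=_internal sourcetype=scheduler savedsearch_name="instrumentation.usage.*"
| table _time savedsearch_name app status run_time scheduled_time
| sort - _time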
Does anyone have a sample inputs.conf stanza for capturing Windows perfmon stats such as CPU utilization, memory utilization, and disk utilization? I was hoping the stanza would include the actual counters. I am just looking for the basics; I could not find any good baseline samples. Thank you very much!
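A minimal sketch of what such stanzas can look like, following the perfmon input conventions used by the Splunk Add-on for Windows; the counter and object names below are common defaults but vary by Windows version and locale, and the index name is an assumption, so verify them with Performance Monitor or typeperf first:

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
index = perfmon

[perfmon://Memory]
object = Memory
counters = % Committed Bytes In Use; Available MBytes
interval = 60
index = perfmon

[perfmon://LogicalDisk]
object = LogicalDisk
counters = % Free Space; % Disk Read Time; % Disk Write Time
instances = *
interval = 60
index = perfmon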
I have two groups of hosts: hostA-1, hostA-2, hostA-3, hostA-4, hostA-5 and hostB-5, hostB-6, hostB-7, hostB-8. I want to compare a specific value extracted from the logs, a Token that is unique per event, and find out whether the same token value appears on both a hostA host and a hostB host. I then want to build a table showing the hostA and hostB host names with the matching token below them.
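A minimal sketch of one way to do this, assuming a field named Token is already extracted (or can be extracted with rex) and treating the index name your_index as a placeholder:

index=your_index (host=hostA-* OR host=hostB-*)
| eval group=if(like(host, "hostA-%"), "A", "B")
| stats values(host) as hosts dc(group) as group_count by Token
| where group_count=2
| table Token hosts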
Hi, I need to create a dashboard panel merging two different search queries. I have the two queries below:

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" source="/logs/confluent/kafkaLogs/server.log" "Broker may not be available"
| rex field=_raw "(?ms)]\s(?P<Code>\w+)\s\["
| search Code="WARN"
| stats count
| eval mwgcb-ckbla02U.nam.nsroot.net=if(count=0, "Running", "Down")
| table mwgcb-ckbla02U.nam.nsroot.net

This gives me the status of the broker based on the presence of the indicator "Broker may not be available".

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" source="/logs/confluent/zookeeperLogs/*" "java.net.SocketException: Broken pipe" OR "ZK Down"
| rex field=_raw "(?ms)\]\s(?P<Code>\w+)\s"
| search Code="WARN"
| rex field=_raw "(?ms)\/(?P<IP_Address>(\d+\.){3}\d+)\:\d+"
| stats count
| eval mwgcb-ckbla02U.nam.nsroot.net=if(count=0, "Running", "Down")
| table mwgcb-ckbla02U.nam.nsroot.net

This gives me the status of ZooKeeper based on the presence of the indicators "java.net.SocketException: Broken pipe" OR "ZK Down".

Now I want to merge both search queries so that I get the status of both the broker and ZooKeeper in a tabular format, e.g. for the host mwgcb-ckbla02U.nam.nsroot.net:

Broker        Down
Zookeeper     Running

I tried creating a query as below:

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" source="/logs/confluent/kafkaLogs/server.log" OR source="/logs/confluent/zookeeperLogs/zookeeper.log" "Broker may not be available" OR "java.net.SocketException: Broken pipe" OR "ZK Down"
| stats count by source
| lookup component_lookup.csv "source"
| eval Status=if(count=0, "Running", "Down")
| table Component,Status

However, in any time range where the indicators are not present, it returns "No results found", so I cannot build the dashboard. Please help me get the output in the desired manner. Thanks!
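A minimal sketch of one way to keep a row per component even when no matching events exist in the time range, by appending zero-count placeholder rows with makeresults; the Component labels derived from the source path are an assumption, and your component_lookup.csv could be used for the same mapping instead:

index=int_gcg_nam_eventcloud_164167 host="mwgcb-ckbla02U*" (source="/logs/confluent/kafkaLogs/server.log" "Broker may not be available") OR (source="/logs/confluent/zookeeperLogs/*" ("java.net.SocketException: Broken pipe" OR "ZK Down"))
| eval Component=if(match(source, "kafkaLogs"), "Broker", "Zookeeper")
| stats count by Component
| append [| makeresults | eval Component="Broker", count=0 | fields Component count]
| append [| makeresults | eval Component="Zookeeper", count=0 | fields Component count]
| stats sum(count) as count by Component
| eval Status=if(count=0, "Running", "Down")
| table Component, Status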
Hi, I have a dashboard that some users need to fetch as a report (.pdf) every day. The trigger will be a background task on a Windows server calling an API, and the report should be generated via a REST API or HEC token call. I would like to know whether an HEC token or the REST API can be used as a solution for this requirement. Does it require any specific API endpoints? What would be the easiest way to accomplish this?
I receive this error when trying to save the settings. I am running the MITRE ATT&CK app on RHEL on AWS. Where do I get a new API key, please? Thank you.
Hello, I'm trying to connect SCOM with the "Splunk Add-on for Microsoft SCOM" (version 4.0.0, on a Splunk Enterprise 7.3 heavy forwarder on Windows). The connection itself is working fine and I'm able to retrieve alerts from SCOM, e.g. via group=alert, which maps to the following PowerShell commands in "scom_command_loader.ps1":

"alert" = @('Get-SCOMAlert', 'Get-SCOMAlert | Get-SCOMAlertHistory');

The input looks like this:

& "$SplunkHome\etc\apps\Splunk_TA_microsoft-scom\bin\scom_command_loader.ps1" -groups "alert" -server "SCOM_DEV" -loglevel DEBUG -starttime "2021-08-01T00:00:00+02:00"

Now I don't want all the alerts produced in SCOM; instead I want to narrow it down to only the events with the name "*Windows Defender*". For this I created a new PowerShell v3 modular input as a copy of the existing one, but using the commands option of the script instead of a group - see the add-on documentation, section "Configure inputs through the PowerShell scripted input UI". The example there is:

& "$SplunkHome\etc\apps\Splunk_TA_microsoft-scom\bin\scom_command_loader.ps1" -commands Get-SCOMAlert, Get-SCOMEvent

So I tried to use this. The PowerShell command works on the shell when I connect directly to this SCOM system:

& "$SplunkHome\etc\apps\Splunk_TA_microsoft-scom\bin\scom_command_loader.ps1" -commands 'Get-SCOMAlert -Name "*Windows Defender*"' -server "SCOM_DEV" -loglevel DEBUG -starttime "2021-08-01T00:00:00+02:00"

The input is working fine and delivers the Windows Defender events to Splunk. BUT the problem is that it does not create a checkpoint under the path "D:\Splunk\var\lib\splunk\modinputs\scom" the way it does when a PowerShell command without a parameter (-Name "*Windows Defender*") is used. This can be seen in the DEBUG log files of the add-on (index=_internal source=*ta_scom.log):

2021-08-05 16:37:11 +02:00 [ log_level=WARN pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] End SCOM TA
2021-08-05 16:37:11 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Get 13 objects by 'Get-SCOMAlert -Name "*Windows Defender*"'
2021-08-05 16:37:09 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> serialize(Get-SCOMAlert -Name "*Windows Defender*")
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Get object 'Get-SCOMAlert -Name "*Windows Defender*"' without checkpoint
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> executeCmd SCOM_DEV Get-SCOMAlert -Name "*Windows Defender*"
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Command list: Get-SCOMAlert -Name "*Windows Defender*"
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> getCommands (groups=, commands=[Get-SCOMAlert -Name "*Windows Defender*"])
2021-08-05 16:37:05 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] splunk version 7.3.4
2021-08-05 16:37:02 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] New SCOMManagementGroupConnection success
2021-08-05 16:36:55 +02:00 [ log_level=DEBUG pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] --> run (groups=, commands=[Get-SCOMAlert -Name "*Windows Defender*"], loglevel=DEBUG)
2021-08-05 16:36:55 +02:00 [ log_level=WARN pid=2956 input=_Splunk_TA_microsoft_scom_internal_used_Defender_Alerts_test_default_command ] Start SCOM TA

You can see it is calling the command correctly, but "without checkpoint". When using a default input, it looks like this:

GET checkpoint:
[ log_level=DEBUG pid=10384 input=_Splunk_TA_microsoft_scom_internal_used_Events_test ] Got checkpoint '07/26/2021 10:54:39.220' from file 'D:\Splunk\var\lib\splunk\modinputs\scom\###U0NPTV9ERVY=###Get-SCOMAlert' successfully.

SET checkpoint:
2021-07-26 14:00:28 +02:00 [ log_level=DEBUG pid=10384 input=_Splunk_TA_microsoft_scom_internal_used_Events_test ] Set checkpoint '07/26/2021 11:54:14.790' to file 'D:\Splunk\var\lib\splunk\modinputs\scom\###U0NPTV9ERVY=###Get-SCOMAlert' successfully.

So the problem will be duplicate data if I run this regularly. Does anybody have an idea how to fix this? I feel I have tried everything possible (different combinations of " and ' at different positions); it also does not work without wildcards in the Name field. I guess it somehow cannot create the checkpoint file. I also tried modifying the "scom_command_loader.ps1" script with a new group containing my query, but that also cannot create the checkpoint file. Thanks in advance, Michael
We are running Splunk Stream 7.3. In _internal sourcetype=stream:log we see the following warning messages: "NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 256 received for observation domain id xyz from device 172.x.y.z. Dropping flow data set of size xxxx..." NetFlow exporters are configured to send out their templates every so many seconds, so eventually the exporter sends the template and the warning messages stop. My question is whether that data is actually dropped or whether it is cached until the template is received - am I losing that data? Similar applications that collect NetFlow (Cisco Stealthwatch, Wireshark) cache the data until they receive the template. This has implications when load balancing several hundred exporters across an array of Independent Stream Forwarders, in order to determine whether session persistence is necessary.
When I configure INGEST_EVAL to replace _raw with something else, it duplicates the event. Splunk Enterprise version 8.2.1.

props.conf:
[source::http:splunk_hec_token]
TRUNCATE = 500000
SHOULD_LINEMERGE = false
KV_MODE = json
TRANSFORMS-fdz_event = fdz_event

transforms.conf:
[fdz_event]
INGEST_EVAL = _raw="Test"

Output:
Hello, I am trying to install and run Splunk Universal Forwarder v8.2.1 on a number of Solaris SPARC 11.3 servers, but I am getting this error message:

$ /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes
ld.so.1: splunk: fatal: relocation error: file /opt/splunkforwarder/bin/splunk: symbol in6addr_any: referenced symbol not found
Killed

The requirements at https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/Systemrequirements state the system should have SUNW_1.22.7 or later in the libc.so.1 library, and it does:

# pvs /usr/lib/libc.so.1
libc.so.1; SUNWpublic; SUNW_1.23; SUNW_1.22.7; SUNW_1.22.6; SUNW_1.22.5; SUNW_1.22.4; SUNW_1.22.3; SUNW_1.22.2; SUNW_1.22.1; SUNW_1.22; SUNW_1.21.3; SUNW_1.21.2; SUNW_1.21.1; SUNW_1.21; SUNW_1.20.4; SUNW_1.20.1; SUNW_1.20; SUNW_1.19; SUNW_1.18.1; SUNW_1.18; SUNW_1.17; SUNW_1.16; SUNW_1.15; SUNW_1.14; SUNW_1.13; SUNW_1.12; SUNW_1.11; SUNW_1.10; SUNW_1.9; SUNW_1.8; SUNW_1.7; SUNW_1.6; SUNW_1.5; SUNW_1.4; SUNW_1.3; SUNW_1.2; SUNW_1.1; SUNW_0.9; SUNW_0.8; SUNW_0.7; SISCD_2.3; SYSVABI_1.3; SUNWprivate_1.1;

Does anyone have any suggestions? Thanks
How do I pass a field from a subsearch to the main search and use it to search another source? I am trying to use the search below to take all the UUIDs returned from the subsearch on Path1 and search for them in Path2, but it is not working properly:

source="Path2" | eval id=[search source="Path1" "HTTP/1.1\" 500" OR "HTTP/1.1\" 400" OR "HTTP/1.1\" 404" | rex "universal-request-id- (?<UUID>.*?)\s*X-df-elapsed-time-ms" | return $UUID]

Please suggest where I am going wrong.
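A minimal sketch of the usual pattern, assuming the goal is to use the extracted UUIDs as raw search terms against Path2: subsearch results are substituted directly into the outer search rather than assigned with eval, and return $UUID emits just the values (the 1000 result cap is an assumption you can adjust):

source="Path2"
    [ search source="Path1" "HTTP/1.1\" 500" OR "HTTP/1.1\" 400" OR "HTTP/1.1\" 404"
      | rex "universal-request-id- (?<UUID>.*?)\s*X-df-elapsed-time-ms"
      | dedup UUID
      | return 1000 $UUID ]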
I have data in Splunk like this:

Fname    Lname    Country
fname1   lname1   USA
fname2   lname2   USA
fname3   lname3   USA

And I have a file on the Splunk server that contains one name per line, MyFile.csv:

Name
fname1
lname3
fname123

I want to show only the rows from my index where a name in the CSV equals either Fname or Lname. In my example the result needs to be:

Fname    Lname    Country
fname1   lname1   USA
fname3   lname3   USA

How can I do that?
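A minimal sketch of one approach, assuming MyFile.csv has been uploaded as a lookup table file in Splunk and treating the index name my_index as a placeholder:

index=my_index
| lookup MyFile.csv Name AS Fname OUTPUT Name AS fname_match
| lookup MyFile.csv Name AS Lname OUTPUT Name AS lname_match
| where isnotnull(fname_match) OR isnotnull(lname_match)
| table Fname Lname Country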
I have a scheduled search that outputs its results every 5 minutes to local disk using the outputcsv command. The file is stored with the name abc_dns.csv:

index=abc | fields _time _raw | fields - _indextime _sourcetype _subsecond | outputcsv abc_dns

Then I am forwarding that file to an external indexer.

inputs.conf:
[monitor:///opt/splunk/var/run/splunk/csv/abc_dns.csv]
index = abc_dns_logs
sourcetype = abc_dns
#crcSalt = <SOURCE>

props.conf:
[abc_dns]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = structured
TRANSFORMS-t1 = eliminate_header

transforms.conf:
[eliminate_header]
REGEX = ^"_time","_raw"$
DEST_KEY = queue
FORMAT = nullQueue

When I validate the results, I see the data being duplicated on the external indexer. I tried adding crcSalt = <SOURCE> to see if it made any difference; it seemed to at first, but after a while the data was being duplicated again. There are indeed some duplicate events in the original logs themselves, but overall I am also seeing data from the monitored file itself being duplicated. Can anyone please help with this?
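Not a definitive fix, but one possible cause to check: outputcsv rewrites the entire file on every run, and with crcSalt = <SOURCE> a rewritten file can be reindexed in full. A sketch of an alternative, under the assumption that the scheduled search runs every 5 minutes and only needs the most recent 5-minute window, so each run appends only new rows and the monitor picks up just the tail:

index=abc earliest=-5m@m latest=@m
| fields _time _raw
| fields - _indextime _sourcetype _subsecond
| outputcsv append=true abc_dns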
We encounter an error configuring the VMware Carbon Black Cloud application (vmware_app_for_splunk 1.1.1 with Splunk Common Information Model Splunk_SA_CIM 4.20.0) on Splunk Enterprise 8.2.1. In Application Configuration > API Token Configuration, when we select "+" we get the error messages "Something went wrong. TypeError: Cannot read property 'length' of undefined" and "vmware_app_for_splunk: pavo_message:UNKNOWN".
Hello, I performed a "fresh" installation of ES 4.6.1 on a search head cluster through the deployer. The Splunk version is 8.0.9. The ES apps were pulled from a repository solution to the deployer and then pushed to the search head cluster. When I try to open Content Management it gets stuck on a blank page, and Incident Review displays an "Operation Failed, Internal Error. __enter__" error. Is there a log file I might check or a permission I need to change? This behavior is quite strange. Thank you in advance
Hoping to find some physical copies of the Quick Reference Guide on card stock. I was hoping they would be available from the online Splunk store here: https://www.mypromomall.com/splunk but they are not. I usually pick them up at .conf, but with last year's being virtual I didn't have that opportunity. I'm working with a rotating base of junior folks who use the heck out of them; the cards have been an awesome aid in getting them up to speed.
Hi everyone, I am looking for any document that can help calculate log source volume. I have 10 different types of log sources, and I only have their descriptions and quantities. Now I have been told to calculate the estimated total volume per day. Can someone point me to a log source volume calculator?
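If any of these sources are already being indexed somewhere, actual daily volume can be measured from the license usage log rather than estimated. A minimal sketch, assuming access to the _internal index on the license master (b is bytes and st is sourcetype in that log):

index=_internal source=*license_usage.log type="Usage" earliest=-30d@d
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB by st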
Hi all, in our identity feed there are some instances where different identities are registered with the same email address. ES by default merges identities using the "key" fields and email. I want to disable this behaviour, but I cannot find how to do that. The documentation says "The key field is identity and the default merge convention is email." Does anyone know how I can change the default merge convention? Thanks, Mario
I have a requirement to list the most used indexes in the platform. For this I need to prepare a report showing when each index was last used and how it was used, e.g. whether a user queried it via an ad-hoc search or it is part of a scheduled saved search. I am looking into the Audit data model for this, but it does not list indexes when they are defined inside a macro. For example, if a scheduled saved search uses a macro containing the index and sourcetype definition, how can I extract that index name from the audit logs of the saved search execution? Any inputs please. Thank you!
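A minimal sketch of the usual starting point, pulling index usage out of the literal search strings in the _audit index; note this still misses indexes hidden inside macros unless the macro definitions from macros.conf are expanded separately, and the field names assumed here are search, user, and savedsearch_name:

index=_audit action=search info=completed
| rex max_match=20 field=search "index\s*=\s*\"?(?<used_index>[\w\*-]+)"
| mvexpand used_index
| stats latest(_time) as last_used count by used_index user savedsearch_name
| convert ctime(last_used)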