All Topics

I'm having some issues with the Application log on some of our Windows servers getting spammed with the following messages:

Faulting application name: splunk-winevtlog.exe, version: 1794.768.23581.39240, time stamp: 0x5c1d9d74
Faulting module name: KERNELBASE.dll, version: 6.3.9600.19724, time stamp: 0x5ec5262a
Exception code: 0xeeab5254
Fault offset: 0x0000000000007afc
Faulting process id: 0x3258
Faulting application start time: 0x01d787a1d9f141cd
Faulting application path: C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe
Faulting module path: C:\Windows\system32\KERNELBASE.dll
Report Id: 18687572-f395-11eb-8131-005056b32672
Faulting package full name:
Faulting package-relative application ID:

Always followed by a 1001 information event like so:

Fault bucket , type 0
Event Name: APPCRASH
Response: Not available
Cab Id: 0
Problem signature:
P1: splunk-winevtlog.exe
P2: 1794.768.23581.39240
P3: 5c1d9d74
P4: KERNELBASE.dll
P5: 6.3.9600.19724
P6: 5ec5262a
P7: eeab5254
P8: 0000000000007afc
P9:
P10:
Attached files:
These files may be available here:
C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_splunk-winevtlog_32b957db7bcb27fbdcdd5be64aea86e1b639666_0170a0ed_a993dd7e
Analysis symbol:
Rechecking for solution: 0
Report Id: 18687572-f395-11eb-8131-005056b32672
Report Status: 4100
Hashed bucket:

I've tried a lot of changes to the Universal Forwarder configuration, but nothing I do removes these messages. The only thing I've noticed that can help remove them is lowering the memory consumption on the server. So far, the servers I've seen with these messages in the Application log are running at 70% memory consumption or more. But 70% memory consumption seems to be normal, and I don't see why this should cause splunk-winevtlog.exe to crash (as often as every minute).

Our version of the Splunk Universal Forwarder is 7.2.3. I've checked the known issues on Splunk Docs but can't find anything related to memory issues for this version. I'm thinking about upgrading the Universal Forwarder to a newer version, but that's only because I can't think of anything else to try. Does anyone else experience this and know what can be done?

As a side note: Splunk's internal logs show absolutely nothing. There are no warnings or errors at all in the internal log on these servers, yet the event spamming (crashes) is still logged in the Windows Application log. Splunk itself does not seem to log or detect the crash.

index="performance" sourcetype="physical_cpu" | addtotals fieldname=CPU_SUM CPU_* | rex mode=sed field=_raw "s/ //g" | eval cpu_cnt=len(_raw)/5 | eval value=CPU_SUM/cpu_cnt | stats avg(value) as... See more...
index="performance" sourcetype="physical_cpu" | addtotals fieldname=CPU_SUM CPU_* | rex mode=sed field=_raw "s/ //g" | eval cpu_cnt=len(_raw)/5 | eval value=CPU_SUM/cpu_cnt | stats avg(value) as avg_val ,max(value) as max_val ,min(value) as min_val by _time host | eventstats max(value) as max_val by host | sort -max_val | where host="host" OR host="host1" OR host="host2" OR host="host3" OR host="host4" | sort max_val desc | table host,max_val,avg_val,min_val im using upper query by get below table, but i'd like to get max_value of host at the time how can i get the to-be table? AS-IS host max_val av_val min_val host1 111 0.111 0.01111 host2 222 0.222 0.02222 host3 333 0.333 0.03333 host4 444 0.444 0.04444 TO-BE time host max_val 2021-08-11 10:00:000 host1 111 2021-08-11 12:00:000 host2 222 2021-08-11 13:00:000 host1 333 2021-08-11 14:00:000 host3 444
Hey Splunk community, I need your help again. My data consists of events that report disturbances: "action=kommend" marks the start of a disturbance and "action=gehend" its end (action=0 => disturbance; action=1 => no disturbance). I have to consider one important condition: the reason of both events should be the same (Störung=X (action=kommend) => Störung=X (action=gehend)). Alternatively, there is the possibility to use "transaction", but the same problem exists there: how can I tell the search to produce time-connected events that hold the status? The actual result looks like the first picture below, but it should look like the second, so I can compare them in one line chart (human-reported disturbances versus machine-reported disturbances). Thank you very much and kind regards from Germany, Felix

Actual result

How it should look

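A sketch of one approach, assuming the action and Störung fields described above: map the start/end markers to a numeric status, then let timechart hold the last known status between events (the span value is illustrative):

... your base search ...
| eval status=if(action="kommend", 0, 1)
| timechart span=1m latest(status) as status by Störung
| filldown

filldown carries each Störung's last reported status forward, which produces the stepped, time-connected line of the second picture rather than isolated points.
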
Hi All, I want to make a phone call from a Unix shell script using the curl command. For that, I need to call a REST API. I did it via the Twilio REST API. Now I am looking for the same REST API code (GET) to do the same. Any help is highly appreciated. Thanks, Sumit

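For reference, a minimal sketch of the Twilio flavour of this from a shell script; note that Twilio's call-creation endpoint is a POST rather than a GET, and the SID, token, phone numbers, and TwiML URL below are all placeholders:

#!/bin/sh
# Place an outbound call via Twilio's REST API using curl.
# ACCOUNT_SID, AUTH_TOKEN, and both phone numbers are placeholders.
ACCOUNT_SID="ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AUTH_TOKEN="your_auth_token"
curl -X POST "https://api.twilio.com/2010-04-01/Accounts/${ACCOUNT_SID}/Calls.json" \
  --data-urlencode "To=+15551234567" \
  --data-urlencode "From=+15557654321" \
  --data-urlencode "Url=http://demo.twilio.com/docs/voice.xml" \
  -u "${ACCOUNT_SID}:${AUTH_TOKEN}"
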
What should I do to see the values of two counts at once? I want to see the number of clientips and destinations at the same time.

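A minimal sketch, assuming the fields are named clientip and dest (adjust to your data): stats accepts several aggregations in a single pass, so both distinct counts come back side by side:

... your base search ...
| stats dc(clientip) AS clientip_count, dc(dest) AS dest_count
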
Hi, I have a data stream arriving on the forwarder on port 514, and the data is indexed correctly. But I would like to extract/build some fields from _raw. On the search head I tried rex; it works, but it is too slow for users. So I want to do it on the forwarder, before indexing.

Example _raw:
<150> 2021-06-01: 00: 05: 12 localhost blue car=porsche,959 .....

To begin with, I want to build this field:
carbrand: porsche

inputs.conf:
[tcp://my_hostname_client:514]
index = car_park
sourcetype = sale

First way: props.conf only
[sale]
# I tried something
EXTRACT-testsale = ^.*car=(?<carbrand>.*)\,$

Second way: props.conf + transforms.conf
props.conf:
[sale]
REPORT-testsale = extract-cardata

transforms.conf:
[extract-cardata]
REGEX = ^.*car=(.*)\,$
FORMAT = carbrand::$1

So, is it possible to extract a field from _raw on the forwarder for the TCP 514 flow? If yes, where are my mistakes in my conf? Thanks for your help. Best regards.

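For context, a hedged sketch of the index-time route: EXTRACT- and REPORT- stanzas are search-time extractions, so they never run on a universal forwarder (which does not parse data at all); they are evaluated when the search runs. To build carbrand before indexing, the configuration would need to live on a heavy forwarder or the indexer and use an index-time transform along these lines (stanza names here are illustrative):

props.conf:
[sale]
TRANSFORMS-cardata = cardata_indexed

transforms.conf:
[cardata_indexed]
# Capture everything between "car=" and the next comma as an indexed field
REGEX = car=([^,]+),
FORMAT = carbrand::$1
WRITE_META = true

fields.conf:
[carbrand]
INDEXED = true

Index-time fields cost index space and cannot be changed after the fact, so deploying the search-time props to the indexer or search head (rather than the forwarder) is usually the lighter option.
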
Also, <protocol>://<host>:8088/services/collector/health is timing out.

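A quick hedged check from the command line (the host is a placeholder; -k skips certificate validation for self-signed certs):

curl -k https://<host>:8088/services/collector/health
# A healthy HEC listener typically answers with: {"text":"HEC is healthy","code":17}
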
Hello everyone.

I need to connect to a Firebird database (version 2.5) with DB Connect. I created db_connection_types.conf, but after adding a stanza, DB Connect stops working (it says error connecting to server). I have looked at:
https://community.splunk.com/t5/All-Apps-and-Add-ons/Adding-firebird-database-connection-to-Splunk-DB-Connect/m-p/158358
https://community.splunk.com/t5/All-Apps-and-Add-ons/DB-Connect-2-and-Firebird-SQL/m-p/323823
https://community.splunk.com/t5/All-Apps-and-Add-ons/DB-Connect-with-FireBird-SQL/m-p/443705
None of them worked for me.

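For comparison, a sketch of the kind of stanza that works for other third-party JDBC drivers in DB Connect 3, using the Jaybird driver class; the key names assume DB Connect 3's db_connection_types.conf schema, and the URL format and port are illustrative:

# All values below are illustrative; the stanza name is what connections reference
[firebird]
displayName = Firebird
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = org.firebirdsql.jdbc.FBDriver
jdbcUrlFormat = jdbc:firebirdsql://<host>:<port>/<database>
port = 3050

The matching Jaybird JAR (and any dependencies) would also need to be dropped into the DB Connect drivers directory before the connection type will validate.
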
I am trying to export the status code and Ajax error code shown in User Experience, from the browser snapshot data of the respective requests, for our analytics purposes. Is there any way to do this? I have tried multiple options but am not sure how I can export these. I used the Analytics API as an alternative to fetch these values, but I see the Ajax error value is always captured as null for all requests at the Analytics level. As you can see above, our requirement is just to capture the status code and Ajax error code for the respective API. Please help me with this.

I am very new to Splunk and need some help creating an alert that reports failed domain admin logins.

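A minimal sketch of the kind of search such an alert could run, assuming Windows Security events land in an index named wineventlog and that domain admin accounts follow a naming convention — both assumptions to adjust for your environment:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625 user="adm-*"
| stats count AS failures BY user, src
| where failures >= 3

EventCode 4625 is the Windows failed-logon event; the threshold of 3 is illustrative, and a lookup of actual Domain Admins group members would be more robust than a naming pattern.
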
Hi, currently our Angular application is configured as "User Experience", and we are facing the issues below:
1. We want to trigger an alert when there is a specific HTTP code in the response, e.g. 500. While configuring health rules, there is no option to select a specific error code.
2. Anomalies are not detected even though we enabled the option.
To overcome these issues, can we configure our application as "Application", so that it gives us more options for configuring the health rules?

Hello – I hope you can assist me with starting my AppD SaaS Pro trial. I enrolled in this program a week ago, and so far I have only received the welcome message via email, with a recommended path for exploring the product. I tried to follow the recommended steps from the "welcome" email, but after several attempts I always ended up landing on the page where I can download agents... and that is pretty much it. Kind regards, Andy

All, I've started seeing the following error message on Splunk 8.2.1 since installing the alert_manager app, and I'd like to clean it up.

- The error is from my deployment server, from btool checks
- Version: Splunk 8.2.1
- OS: CentOS 7
- I have /etc/deployment-apps/alert_manager/README/alert_manager.conf.spec in place, so I assume that's what it's looking for.

# Error
08-10-2021 13:23:41.948 -0700 WARN Application [28063 MainThread] - No spec file for: /opt/splunk/etc/deployment-apps/alert_manager/default/alert_manager.conf\n
08-10-2021 13:23:41.948 -0700 WARN Application [28063 MainThread] - Invalid key in stanza [alert_manager] in /opt/splunk/etc/deployment-apps/alert_manager/default/alert_actions.conf, line 12: param.urgency (value: low).\n

# alert_manager.conf, line 12, under the [settings] stanza
auto_close_info = false

# alert_manager.conf.spec, line 40, under the [settings] stanza
auto_close_info = [true | false]
* Configure if informational events are automatically resolved
* Defaults to false

Any ideas on how I'd troubleshoot this?

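To reproduce the warning on demand rather than waiting for the next restart, a small check could be run on the deployment server (the install path assumes the default location):

# Re-run btool's spec validation and filter for the app in question
/opt/splunk/bin/splunk btool check | grep -i alert_manager
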
I am struggling to follow the documentation to install the DFS manager app. Are there any better resources to follow? Currently I'm stuck figuring out why, when setting the java_home field in server.conf, I get an error upon restart saying it is an invalid key.

Hi all, I am totally new to Splunk. I am going through the free online class Splunk Fundamentals. I have uploaded the example data into Splunk exactly as directed by the instructions. However, when I log out and then log back in, switching from Admin to a Power user to search, the data is in Splunk but it says it is not indexed. Isn't the data automatically indexed upon uploading the files? The instructions do not say specifically, but after uploading the data, they expect you to see 239,625 events indexed. Yet for whatever reason, it is not indexing the data. What could cause that issue? I've redone the uploads 7 times now. I even wiped out everything (I am working on a VM), re-downloaded Splunk, and started from scratch, and still no indexing. What the heck am I missing? TIA

I run the following to get a list of saved/skipped searches through the Monitoring Console for my Splunk ES search head. I need a field added to show the reason for failure / why the searches were skipped. Thanks a million in advance for your help.

`dmc_set_index_internal` search_group=dmc_group_search_head search_group=* sourcetype=scheduler (status="completed" OR status="skipped" OR status="deferred")
| stats count(eval(status=="completed" OR status=="skipped")) AS total_exec, count(eval(status=="skipped")) AS skipped_exec by _time, host, app, savedsearch_name, user, savedsearch_id
| where skipped_exec > 0

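A sketch of one way to surface that, assuming the skipped scheduler events carry a reason field (recent Splunk versions log one): add a values() aggregation to the existing stats:

`dmc_set_index_internal` search_group=dmc_group_search_head search_group=* sourcetype=scheduler (status="completed" OR status="skipped" OR status="deferred")
| stats count(eval(status=="completed" OR status=="skipped")) AS total_exec,
        count(eval(status=="skipped")) AS skipped_exec,
        values(reason) AS skip_reason
  by _time, host, app, savedsearch_name, user, savedsearch_id
| where skipped_exec > 0
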
Hoping someone can help here....

We are currently running DNS services on our Windows Active Directory servers (we do not currently have tools/tech in place to stream or otherwise capture this data on the wire --- roadmap item). We are also running on Splunk Cloud with a Splunk HF (installed on a dedicated stand-alone system) and a Splunk UF (installed on the Active Directory server(s) with DNS services running). So the data flows as follows:

Splunk UF (AD Server) -> Splunk HF (dedicated box) -> Splunk Cloud

Using this approach, I am able to successfully get the data into Splunk Cloud. My issue revolves around parsing the necessary fields. I am most concerned about getting the DNS entry itself (as part of the initial query) as well as the IP address returned in the DNS response. Below I have included the raw data, the inputs.conf, props.conf, and transforms.conf. Please let me know what I am missing, as I am at a loss at this point.

======DNS Query Raw Data======
8/9/2021 7:19:32 AM 1750 PACKET 00000200616CA100 UDP Rcv ::1 1bf5 Q [0001 D NOERROR] A (27)vm3-proxy-pta-NCUS-CHI01P-2(9)connector(3)his(10)msappproxy(3)net(0)
UDP question info at 00000200616CA100
Socket = 828
Remote addr ::1, port 62839
Time Query=229843, Queued=0, Expire=0
Buf length = 0x0fa0 (4000)
Msg length = 0x004a (74)
Message:
XID 0x1bf5
Flags 0x0100
QR 0 (QUESTION)
OPCODE 0 (QUERY)
AA 0
TC 0
RD 1
RA 0
Z 0
CD 0
AD 0
RCODE 0 (NOERROR)
QCOUNT 1
ACOUNT 0
NSCOUNT 0
ARCOUNT 0
QUESTION SECTION:
Offset = 0x000c, RR count = 0
QTYPE A (1)
QCLASS 1
ANSWER SECTION: empty
AUTHORITY SECTION: empty
ADDITIONAL SECTION: empty

======DNS Response Raw Data======
8/9/2021 7:19:10 AM 1750 PACKET 000002006188FCC0 UDP Snd ::1 196c R Q [8081 DR NOERROR] A (27)vm3-proxy-pta-NCUS-CHI01P-2(9)connector(3)his(10)msappproxy(3)net(0)
UDP response info at 000002006188FCC0
Socket = 828
Remote addr ::1, port 58618
Time Query=229821, Queued=229822, Expire=229825
Buf length = 0x0200 (512)
Msg length = 0x00bb (187)
Message:
XID 0x196c
Flags 0x8180
QR 1 (RESPONSE)
OPCODE 0 (QUERY)
AA 0
TC 0
RD 1
RA 1
Z 0
CD 0
AD 0
RCODE 0 (NOERROR)
QCOUNT 1
ACOUNT 2
NSCOUNT 0
ARCOUNT 0
QUESTION SECTION:
Offset = 0x000c, RR count = 0
QTYPE A (1)
QCLASS 1
ANSWER SECTION:
Offset = 0x004a, RR count = 0
TYPE CNAME (5)
CLASS 1
TTL 241
DLEN 85
DATA
Offset = 0x00ab, RR count = 1
TYPE A (1)
CLASS 1
TTL 7
DLEN 4
DATA 20.80.38.248
AUTHORITY SECTION: empty
ADDITIONAL SECTION: Empty

======UF inputs.conf======
[monitor://c:\windows\system32\dns\dns.log]
disabled = 0
index = dns
sourcetype = windows:dns

======UF props.conf======
[windows:dns]
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
EXTRACT-Domain = (?i) .*? \.(?P<Domain>[-a-zA-Z0-9@:%_\+.~#?;//=]{2,256}\.[a-z]{2,6})
EXTRACT-src = (?i) [Rcv|Snd] (?P<source_address>\d+\.\d+\.\d+\.\d+)
EXTRACT-Threat_ID,Context,Int_packet_ID,proto,mode,Xid,type,Opcode,Flags_Hex,char_code,ResponseCode,question_type = .+?[AM|PM]\s+(?<Threat_ID>\w+)\s+(?<Context>\w+)\s+(?<Int_packet_ID>\w+)\s+(?<proto>\w+)\s+(?<mode>\w+)\s+\d+\.\d+\.\d+\.\d+\s+(?<Xid>\w+)\s(?<type>(?:R)?)\s+(?<Opcode>\w+)\s+\[(?<Flags_Hex>\w+)\s(?<char_codes>.+?)(?<ResponseCode>[A-Z]+)\]\s+(?<question_type>\w+)\s
EXTRACT-Authoritative_Answer,TrunCation,Recursion_Desired,Recursion_Available = (?m) .+?Message:\W.+\W.+\W.+\W.+\W.+AA\s+(?<Authoritative_Answer>\d)\W.+TC\s+(?<TrunCation>\d)\W.+RD\s+(?<Recursion_Desired>\d)\W.+RA\s+(?<Recursion_Available>\d)
SEDCMD-win_dns = s/\(\d+\)/./g

======HF inputs.conf======
[splunktcp://:5143]
connection_host = x.x.x.x (masking IP)
index = dns
disabled = 0

======HF props.conf======
[windows:dns]
EXTRACT-Domain = (?i) .*? \.(?<Domain>[-a-zA-Z0-9@:%_\+.~#?;//=]{2,256}\.[a-z]{2,6})
EXTRACT-windows_dns_000001 = (?<thread_id>[0-9A-Fa-f]{4}) (?<Context>[^\s]+)\s+(?<internal_packet_id>[0-9A-Fa-f]+) (?<protocol>UDP|TCP) (?<direction_flag>Snd|Rcv) (?<client_ip>[0-9\.]+)\s+(?<xid>[0-9A-Fa-f]+) (?<type>[R\s]{1}) (?<opcode>[A-Z\?]{1}) \[(?<flags>[0-9A-Fa-f]+) (?<flagAuthoritativeAnswer>[A\s]{1})(?<flagTrucatedResponse>[T\s]{1})(?<flagRecursionDesire>[D\s]{1})(?<flagRecursionAvailable>[R\s]{1})\s+(?<response_code>[^\]]+)\]\s+(?<query_type>[^\s]+)\s+(?<query_name>[^/]+)
EXTRACT-windows_dns_000010 = ([a-zA-Z0-9\-\_]+)\([0-9]+\)(?<tld>[a-zA-Z0-9\-\_]+)\(0\)$
EXTRACT-windows_dns_000020 = \([0-9]+\)(?<domain>[a-zA-Z0-9\-\_]+\([0-9]+\)[a-zA-Z0-9\-\_]+)\(0\)$
EXTRACT-windows_dns_000030 = \s\([0-9]+\)(?<hostname>[a-zA-Z0-9\-\_]+)\(0\)$
EVAL-domain = replace(domain, "([\(0-9\)]+)", ".")
EVAL-query_domain = ltrim(replace(query_name, "(\([\d]+\))", "."),".")
EVAL-type_msg = case(type="R", "Response", isnull(type), "Query")
EVAL-opcode_msg = case(opcode="Q", "Standard Query", opcode="N", "Notify", opcode="U", "Update", opcode="?", "Unknown")
EVAL-direction = case(direction_flag="Snd", "Send", direction_flag="Rcv", "Received")
EVAL-decID = tonumber(xid, 16)
REPORT-win_dns = dns_string_lengths, dns_strings
REPORT-extractdoms = extractdoms
REPORT-extractips = extractips

======HF transforms.conf======
[dns_string_lengths]
REGEX = \((\d+)\)
FORMAT = strings_len::$1
MV_ADD = true
REPEAT_MATCH = true

[dns_strings]
REGEX = \([0-9]+\)([a-zA-Z0-9\-\_]+)\([0-9]+\)
FORMAT = strings::$1
MV_ADD = true
REPEAT_MATCH = true

[extractdoms]
SOURCE_KEY = query_domain
REGEX = Name\s+\"(?<NewDomain>[a-zA-Z0-9\[\]\(\)\-\.\_]+\"\n)
FORMAT = strings::$1
MV_ADD = true
REPEAT_MATCH = true

[extractips]
REGEX = DATA\s+(?<Answers>[0-9\.]+\n)
FORMAT = strings::$1
MV_ADD = true
REPEAT_MATCH = true

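For the two fields called out above — the queried name and the answered IP — a minimal search-time sketch against the raw format shown; the rex patterns and output field names are illustrative, it assumes the (n) length markers are still present in the indexed raw, and it would run at search time on the search head rather than on the HF:

sourcetype="windows:dns"
| rex "\]\s+\w+\s+(?<query_name>\(\d+\)\S+)"
| rex "DATA\s+(?<answer_ip>\d+\.\d+\.\d+\.\d+)"
| eval query_name=trim(replace(query_name, "\(\d+\)", "."), ".")
| table _time, query_name, answer_ip

The first rex grabs the (27)name(9)...(0) token after the query type on the header line, the second grabs the A-record answer after DATA, and the eval converts the length markers into dots to yield a normal hostname.
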
I created a search head and an indexer, and the search head is acting as the master license server. I added the tutorial data to the search head (https://docs.splunk.com/Documentation/Splunk/8.2.1/SearchTutorial/GetthetutorialdataintoSplunk). The indexer is associated with the master license server. On the search head I am getting the license error "1 orphaned indexer reported by 1 indexer", with the message "this slave indexed data/sourcetype(s) without a corresponding license pool"; the indexer listed is the master license server and the category is orphan_slave. I read in other answers related to this that the sourcetypes need to be in a license pool. I added the two hosts, the search head and the indexer, to the license pool via the specific indexers option; however, I am still getting the same error. Will this error go away, or am I still in violation?

Hi All, please help me to solve this:

desc="Trigger App : Search [Abc_[qwert] asd] number"

I want to fetch "[Abc_[qwert] asd]" from the above string. Thanks

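A minimal sketch with rex, anchoring on the literal words around the bracketed value (the field name search_name is illustrative; the makeresults lines just reproduce the sample):

| makeresults
| eval desc="Trigger App : Search [Abc_[qwert] asd] number"
| rex field=desc "Search\s+(?<search_name>\[.+\])\s+number"
| table search_name

The greedy .+ runs to the last ] before " number", so the nested [qwert] brackets are kept intact.
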
Are the datasets that are included with Splunk Security Essentials updated dynamically, or are they static? For example, the ransomware_extensions_lookup.csv dataset.