Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello! We have an index with Cisco events and now need to parse some fields, such as device_mac and device_name. But we can't do it with a simple regex because the data we get from Cisco is unstructured (the fields are swapped). For example, in one log the device type comes first, followed by the MAC, and in the next the MAC comes first, followed by the device type. Could you please help me? How can I parse these fields? Thanks!
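One possible starting point (a sketch, not a confirmed answer): since a MAC address has a distinctive shape, you can extract it by pattern wherever it appears in the event, instead of relying on field order; the same idea applies to device_name if it has a recognizable pattern:

| rex field=_raw "(?<device_mac>(?:[0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2})"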
Dear Community, I have the following search query:

index="myIndex" host="myHost" source="mySource.log" 2021081105302743 "started with profile"

The above gives me the following result:

Progam has run, 2021081105302743 started with profile TEST_PROFILE_01

I would like to remove everything before TEST_PROFILE_01, giving me just the profile. I do not know beforehand which profile is used, so I guess what I want is: remove everything before "profile", also remove "profile" itself, and then display the profile in a Single Value panel.

I have used the line below in a table before, but now that I am using a Single Value panel I don't know which field to use. Also, if I use a string instead of the # below, it doesn't work.

| eval _raw = replace(_raw,"^[^#]*#", "")

I have 2 questions:
1. When using a Single Value panel, which field do I use in the search above in place of _raw? When I run the query at the top, the data is shown in the "Event" field. Is that the field I should use?
2. In place of the # I would like to use "profile", but I don't know how to edit the regex accordingly.

I could use some help on this matter. Thanks in advance.
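A minimal sketch of one approach, assuming the profile name is the last whitespace-delimited token after the literal "profile": extract it into its own field with rex instead of rewriting _raw, then feed that field to the Single Value panel (which displays the first field of the first result row):

index="myIndex" host="myHost" source="mySource.log" "started with profile"
| rex field=_raw "started with profile\s+(?<profile>\S+)"
| table profile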
Hello, when I search index=alfa_cisco_ice I see these errors:

AutoLookupDriver - Could not load lookup='LOOKUP-cisco_asa_ids_lookup' reason='Error in 'lookup' command: Must specify one or more lookup fields.'

Please help, how do I fix this problem? In the search job inspector I also see a lot of log lines like:

SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cisco_dest_ipv6'.
SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cisco_fw_connection'
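That "Must specify one or more lookup fields" error usually means an automatic lookup definition in props.conf names the lookup table but no field mapping. A hedged sketch of what a complete definition looks like (the sourcetype, lookup table, and field names here are hypothetical):

[cisco:asa]
LOOKUP-cisco_asa_ids_lookup = cisco_asa_ids_lookup ids_type OUTPUT ids_category ids_description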
Greetings! We cloned a working group in LDAP and expected the cloned group to show up on the Splunk LDAP page under the new LDAP group name. The LDAP (Windows) team has confirmed that the cloned group looks fine on their side. Is there a configuration I need to set so the newly cloned LDAP group becomes visible?
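One thing worth checking (an assumption, since the post does not show the config): Splunk only lists LDAP groups that sit under the configured groupBaseDN and match the groupBaseFilter in authentication.conf, so a clone created in a different OU will not appear. The strategy name and DNs below are hypothetical:

[MyLDAPStrategy]
groupBaseDN = OU=SplunkGroups,DC=corp,DC=example,DC=com
groupBaseFilter = (objectclass=group)
groupNameAttribute = cn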
I have a report with a maximum of 300000 results, but my SHC is failing to send the CSV by email. Please find my settings below. I tried changing them to 300000 but it still isn't working, and I restarted after the change. FYI: reports containing fewer than 175000 results work perfectly fine. Can someone help me with this?

$SPLUNK_HOME/etc/system/local/limits.conf

[scheduler]
max_action_results = 175000

[searchresults]
maxresultrows = 175000

$SPLUNK_HOME/etc/system/local/alert_actions.conf

[default]
maxresults = 175000
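A sketch of the three limits raised consistently to 300000, applied on the SHC members followed by a restart. One assumption worth verifying: the email action reads maxresults from the [email] stanza of alert_actions.conf, so setting only [default] may not be enough:

$SPLUNK_HOME/etc/system/local/limits.conf

[scheduler]
max_action_results = 300000

[searchresults]
maxresultrows = 300000

$SPLUNK_HOME/etc/system/local/alert_actions.conf

[email]
maxresults = 300000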
Hi community, I have the following tstats search:

| tstats count WHERE fromzone="*INTRANET*" index=*_*_* by index source getport

The getport field is always 5 digits long, with different values per index (e.g. index A has port 22001, index B has 25003, index C has 35002). Now I want to filter out all values of the getport field that do not have a "1" at the end. Thanks for your help!
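Assuming "without the 1 at the end" means you want to keep only the ports whose last digit is 1, a sketch (invert the condition if the opposite is intended):

| tstats count WHERE fromzone="*INTRANET*" index=*_*_* by index source getport
| where like(getport, "%1")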
Hello, I'm asking for your help to merge two indexes. The first index contains plain JSON documents. The second index is also made up of JSON documents, but each one holds an array of documents. For example:

First index:

{
  "field1": "value1",
  "field2": "value2"
}

Second index:

{
  ...other fields...
  "documents": [
    { "field1": "value1", "field2": "value2" },
    { "field1": "value1", "field2": "value2" }
  ]
}

I want to retrieve and flat-map the documents from the second index, then merge them with the first index so I can run stats operations. Thank you!
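A sketch using spath and mvexpand to flatten the array, with append for the merge; the index names first_index and second_index are hypothetical:

index=second_index
| spath path=documents{} output=doc
| mvexpand doc
| eval field1=spath(doc, "field1"), field2=spath(doc, "field2")
| table field1, field2
| append [ search index=first_index | spath | table field1, field2 ]
| stats count by field1, field2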
Hello Splunkers. We want to deploy a Splunk product in our environment to monitor infrastructure, along with some automations such as:

1. Predictive analysis: finding a defect before it actually appears in the infrastructure.
2. Minimizing MTTR, MTTD, and MTTI.
3. Executing scripts on the destination machine to resolve the defect, e.g. deleting garbage files to free up storage.

Thanks in advance.
Hi all, my question is about the Splunk Security Essentials (SSE) add-on. I have several Splunk instances, each with its own searches, and I ingested these into Security Essentials on each instance. Now I want to gather the content of these different SSE instances into one.

What I did was use the export-to-JSON function: from the manage snapshots page I pressed the export button and got a base64-encoded JSON output. This works! But now, when I search my bookmarks, I have to restore each snapshot individually to see its content. What I want is one snapshot containing all my content (all snapshots merged together). I tried merging the contents of the sse_bookmarks_backup exports, but then the restore button no longer works.
Hi Splunkers, I've been trying to read some data from MS SQL Server; the data is JSON-like. It works for a while, and then I get this message:

ERROR HttpInputDataHandler - Failed processing http input, token name=db-connect-http-input, channel=n/a, source_IP=127.0.0.1, reply=6, events_processed=802, http_input_body_size=11904838
ERROR HttpInputDataHandler - Parsing error : While expecting event's raw text: String value too long. valueSize=5246755, maxValueSize=5242880, totalRequestSize=11904838

After that, no data comes in. Is there any way to increase the maxValueSize, or does my problem originate elsewhere? Thanks in advance.
I'm having some issues with the application log on some of our Windows servers getting spammed with the following messages:

Faulting application name: splunk-winevtlog.exe, version: 1794.768.23581.39240, time stamp: 0x5c1d9d74
Faulting module name: KERNELBASE.dll, version: 6.3.9600.19724, time stamp: 0x5ec5262a
Exception code: 0xeeab5254
Fault offset: 0x0000000000007afc
Faulting process id: 0x3258
Faulting application start time: 0x01d787a1d9f141cd
Faulting application path: C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe
Faulting module path: C:\Windows\system32\KERNELBASE.dll
Report Id: 18687572-f395-11eb-8131-005056b32672
Faulting package full name:
Faulting package-relative application ID:

This is always followed by a 1001 information event like so:

Fault bucket , type 0
Event Name: APPCRASH
Response: Not available
Cab Id: 0

Problem signature:
P1: splunk-winevtlog.exe
P2: 1794.768.23581.39240
P3: 5c1d9d74
P4: KERNELBASE.dll
P5: 6.3.9600.19724
P6: 5ec5262a
P7: eeab5254
P8: 0000000000007afc
P9:
P10:

Attached files:

These files may be available here:
C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_splunk-winevtlog_32b957db7bcb27fbdcdd5be64aea86e1b639666_0170a0ed_a993dd7e

Analysis symbol:
Rechecking for solution: 0
Report Id: 18687572-f395-11eb-8131-005056b32672
Report Status: 4100
Hashed bucket:

I've tried a lot of changes to the Universal Forwarder configuration, but nothing I do removes these messages. The only thing I've noticed that helps is lowering the memory consumption on the server. So far, the servers I've seen with these messages in the application log are running at 70% or higher memory consumption. But 70% memory consumption seems to be normal, and I don't see why it should cause splunk-winevtlog.exe to crash (as often as every minute).

Our version of Splunk Universal Forwarder is 7.2.3. I've checked the known issues on Splunk Docs but can't find anything related to memory issues for this version. I'm thinking about upgrading the Universal Forwarder to a newer version, but only because I can't think of anything else to try. Does anyone else experience this and know what can be done?

As a side note: Splunk's internal logs show absolutely nothing; there are no warnings or errors at all in the internal log on these servers. But the event spamming (the crashes) is still logged in the Windows application log. It seems Splunk itself does not log or detect the crash.
index="performance" sourcetype="physical_cpu" | addtotals fieldname=CPU_SUM CPU_* | rex mode=sed field=_raw "s/ //g" | eval cpu_cnt=len(_raw)/5 | eval value=CPU_SUM/cpu_cnt | stats avg(value) as... See more...
index="performance" sourcetype="physical_cpu" | addtotals fieldname=CPU_SUM CPU_* | rex mode=sed field=_raw "s/ //g" | eval cpu_cnt=len(_raw)/5 | eval value=CPU_SUM/cpu_cnt | stats avg(value) as avg_val ,max(value) as max_val ,min(value) as min_val by _time host | eventstats max(value) as max_val by host | sort -max_val | where host="host" OR host="host1" OR host="host2" OR host="host3" OR host="host4" | sort max_val desc | table host,max_val,avg_val,min_val im using upper query by get below table, but i'd like to get max_value of host at the time how can i get the to-be table? AS-IS host max_val av_val min_val host1 111 0.111 0.01111 host2 222 0.222 0.02222 host3 333 0.333 0.03333 host4 444 0.444 0.04444 TO-BE time host max_val 2021-08-11 10:00:000 host1 111 2021-08-11 12:00:000 host2 222 2021-08-11 13:00:000 host1 333 2021-08-11 14:00:000 host3 444
Hey Splunk community, I need your help again. My data consists of events that report disturbances: "action=kommend" marks the start of a disturbance and "action=gehend" the end (action=0 => disturbance; action=1 => no disturbance). I have to take one important condition into account: the reason for both events should be the same (Störung=X with action=kommend belongs to Störung=X with action=gehend). Alternatively there is the possibility of using "transaction", but the same problem exists there: how can I get the search to produce time-connected events that hold the status? The actual result looks like the first picture below, but it should look like the second one, so I can compare human-reported disturbances with machine-reported disturbances in one line chart. Thank you very much and kind regards from Germany, Felix

[Screenshot 1: actual result]
[Screenshot 2: how it should look]
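A sketch with transaction, pairing start and end events by the Störung field so only events with the same reason are joined (the index name is hypothetical):

index=disturbance_reports
| transaction Störung startswith="action=kommend" endswith="action=gehend"
| table _time, duration, Störung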
Hi All, I want to make a phone call from a Unix shell script using the curl command. For that I need to call a REST API, and I got it working with the Twilio REST API. Now I am looking for the equivalent REST API call (GET) to do the same. Any help is highly appreciated. Thanks, Sumit
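For reference, a hedged curl sketch against Twilio's Calls resource; the POST places a call and the GET lists existing calls. The account SID, auth token, phone numbers, and TwiML URL are placeholders:

# Place a call (POST)
curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_SID/Calls.json" \
  --data-urlencode "To=+15551234567" \
  --data-urlencode "From=+15557654321" \
  --data-urlencode "Url=https://example.com/twiml.xml" \
  -u "$TWILIO_SID:$TWILIO_TOKEN"

# List calls (GET)
curl -X GET "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_SID/Calls.json" \
  -u "$TWILIO_SID:$TWILIO_TOKEN"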
What should I do to see the values of two counts at once? I want to see the number of client IPs and the number of destinations at the same time. What should I do?
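A sketch, assuming the fields are named clientip and dest in your data (adjust to your actual field names):

<your base search>
| stats dc(clientip) as client_count, dc(dest) as destination_count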
Hi, I have a data stream coming into the forwarder on port 514, and the data is correctly indexed. But I would like to extract/build some fields from _raw. On the search head I tried rex: it works, but it is too slow for users. So I want to do the extraction on the forwarder, before indexing.

Example _raw:

<150> 2021-06-01: 00: 05: 12 localhost blue car=porsche,959 .....

To begin with, I want to build this field:

carbrand: porsche

inputs.conf:

[tcp://my_hostname_client:514]
index = car_park
sourcetype = sale

First way, props.conf only:

[sale]
# trying something
EXTRACT-testsale = ^.*car=(?<carbrand>.*)\,$

Second way, props.conf plus transforms.conf. In props.conf:

[sale]
REPORT-testsale = extract-cardata

And in transforms.conf:

[extract-cardata]
REGEX = ^.*car=(.*)\,$
FORMAT = carbrand::$1

So, is it possible to extract fields from _raw on the forwarder for a TCP 514 flow? If yes, where are my mistakes in my conf? Thanks for your help. Best regards.
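One caveat that may explain the behaviour: EXTRACT- and REPORT- stanzas are search-time extractions, which run on the search head, not on a universal forwarder. To create a true index-time field (which requires a heavy forwarder or the indexer, since universal forwarders do not parse), the usual recipe is a TRANSFORMS- stanza with WRITE_META plus a fields.conf entry; a sketch reusing the names from the post:

props.conf:

[sale]
TRANSFORMS-cardata = extract-cardata

transforms.conf:

[extract-cardata]
REGEX = car=([^,]+),
FORMAT = carbrand::$1
WRITE_META = true

fields.conf:

[carbrand]
INDEXED = true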
The <protocol>://<host>:8088/services/collector/health endpoint is also timing out.
Hello everyone. I need to connect to a Firebird database (version 2.5) with DB Connect. I created db_connection_types.conf, but after adding a stanza DB Connect stops working (it reports an error connecting to the server). I looked at:

https://community.splunk.com/t5/All-Apps-and-Add-ons/Adding-firebird-database-connection-to-Splunk-DB-Connect/m-p/158358
https://community.splunk.com/t5/All-Apps-and-Add-ons/DB-Connect-2-and-Firebird-SQL/m-p/323823
https://community.splunk.com/t5/All-Apps-and-Add-ons/DB-Connect-with-FireBird-SQL/m-p/443705

None of them worked for me.
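For comparison, a hedged db_connection_types.conf stanza for Firebird using the Jaybird JDBC driver (the serviceClass shown is DB Connect's generic JDBC class; the URL format and driver jar placement are assumptions to verify against your DB Connect version):

[firebird]
displayName = Firebird
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = org.firebirdsql.jdbc.FBDriver
jdbcUrlFormat = jdbc:firebirdsql://<host>:<port>/<database>
port = 3050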
I am trying to export the status code and AJAX error code shown in User Experience from the browser snapshot data of the respective requests, for our analytics purposes. Is there any way to do this? I tried multiple options but could not find a way to export these values. I used the Analytics API as an alternative to fetch them, but the AJAX error value is always captured as null for every request at the analytics level. Our requirement is simply to capture the status code and AJAX error code for the respective API calls. Please help me with this.
I am very new to Splunk and need some help creating an alert that reports failed domain admin logins.
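A sketch of a possible alert search. It assumes Windows Security events live in an index called wineventlog and that domain admin accounts are listed in a lookup file named domain_admins.csv with a user column (all three names are hypothetical); EventCode 4625 is the Windows failed-logon event:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
    [ | inputlookup domain_admins.csv | fields user ]
| stats count by user, src, ComputerName

Save it as an alert that triggers when the result count is greater than zero.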