All Topics

I have a single value panel that only shows the Health Status as "UP" or "DOWN". If it is "UP" I want it to be green; if it is "DOWN" I want it to be red. How can I do this in the source code? This is the query that drives the panel:

    index=index_name
    | rename msg.event.healthStatus as healthStatus
    | dedup healthStatus
    | table healthStatus

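One common approach, as a sketch only: map the string onto a rangemap-style class and color the single value with the legacy classField option (the "low"/"severe" class names are the built-in green/red classes; verify this option against your Splunk version):

    <single>
      <search>
        <query>index=index_name
    | rename msg.event.healthStatus as healthStatus
    | dedup healthStatus
    | eval range=if(healthStatus="UP","low","severe")
    | table healthStatus range</query>
      </search>
      <option name="classField">range</option>
    </single>
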
I have a table with more than 50000 hostnames. I want to match a wildcard on the 5th and 6th characters of each hostname. My sample list:

    hostname
    SERVINBBB01
    SERRCNAAA01
    SERSSPBBC55
    SERRINAAC98
    SERWINSSS11

In my search result, I want the list of hosts with "IN" in the 5th and 6th positions. The results should be:

    hostname
    SERVINBBB01
    SERRINAAC98
    SERWINSSS11

With an asterisk (example: *IN*) I am not able to get these results, because the characters before and after the 5th and 6th positions are not always the same.

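Two minimal sketches that anchor the match to the character position instead of using a wildcard:

    ... | regex hostname="^.{4}IN"

or, with substr (start at position 5, length 2):

    ... | where substr(hostname,5,2)=="IN"
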
I am writing a query, and it all gets piped into stats at the end. There is a value that I want to use to remove rows from the stats output, as those line items are unnecessary. I understand that "by" will list every item, but I'm looking to remove particular rows based on a certain condition, as that will help clean up my data. What I'm looking to do is run a distinct_count on an item and then, for every row where the dc result is 0, remove it from my results.

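A minimal sketch (item and group_field are placeholders for your own field names): aggregate first, then filter the stats rows with where:

    ... | stats dc(item) as item_dc by group_field
    | where item_dc > 0
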
I am trying to make a dropdown dependent on another dropdown. However, my field name has a variable in it, as well as the field value. Example:

    experiments__12345=CONTROL
    experiments__56752=test-1
    experiments__12345=control

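A sketch of one way to cascade the inputs in Simple XML, assuming the first dropdown sets a token $exp_id$ holding the numeric suffix (index and token names are placeholders): token substitution is textual, so the variable part of the field name can come straight from the first dropdown in the second dropdown's populating search:

    <input type="dropdown" token="exp_value">
      <search>
        <query>index=your_index
    | rename experiments__$exp_id$ as exp_value
    | stats count by exp_value</query>
      </search>
      <fieldForLabel>exp_value</fieldForLabel>
      <fieldForValue>exp_value</fieldForValue>
    </input>
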
Hi Splunkerds, I have struggled with PowerShell for a while, and after all the great tips I got from you, I thought I'd repay you by posting a solution. The problem is that the documentation for the powershell section of inputs.conf offers little to no help in actually getting things right. Some things the documentation seems to miss:

- If your script returns strings, every line of output becomes a separate event, and this bypasses anything you do to reassemble the events. Forget SHOULD_LINEMERGE and LINE_BREAKER; they are simply ignored, as if the UF using the powershell input passed cooked events to Splunk.
- Consequently, as a result of the above: if you want to pass anything more complex than a single line, you have to pass the events as PowerShell objects.

Now for a practical, working example, let's pull the output of certutil.exe into Splunk using PowerShell. First, test the command in PowerShell to see the raw data:

    # certutil -view -out 'NotBefore,NotAfter,SerialNumber,RequestID,CertificateTemplate,DistinguishedName'

Now, for the solution, let's take a look at inputs.conf:

    [powershell://CertStore-LocalUser]
    disabled = 0
    script = . "$SplunkHome\etc\apps\<yourappname>\bin\Get-LocalUserCertificates.ps1"
    schedule = */30 * * * *
    sourcetype = windows:certs

Note the script line: it took me some time to get the invocation right, and the docs could be clearer. Here is the content of the Get-LocalUserCertificates.ps1 script, located in the <yourappname>/bin directory, as an example/template of how to take linear (string-based) input, iterate over the lines, cut them with regexes, create a new object for every Row or Entry section, assign the keys, and pass the objects to Splunk:

    $certutil = "$($env:SystemRoot)\system32\certutil.exe"
    $certutilOutput = Invoke-Expression "$certutil -view -out 'NotBefore,NotAfter,SerialNumber,RequestID,CertificateTemplate,DistinguishedName'"
    $currentKey = $null
    $currentValue = $null
    $certutilOutput -split [environment]::NewLine | foreach {
        switch -regex ($_) {
            '(Entry|Row) \d+:' {
                # New object found
                $currentObject = New-Object psobject
                $currentValue = $null
                continue
            }
            '(\s{2})(?<key>[\w\s]+):\s*(?<value>.*)' {
                # Add key/value pair to the object
                if ($currentObject) {
                    if (-not $matches.value) {
                        # Key without a value: remember it for a multiline value
                        $currentKey = $matches.key.Trim()
                        $currentValue = $null
                    } else {
                        $currentObject | Add-Member -MemberType NoteProperty -Name $($matches.key.Trim()) -Value $($matches.value.Trim(@("'",'"','`'))) -Force
                    }
                }
            }
            '^$' {
                # Blank line: if an object exists, pass it to Splunk
                if ($currentObject) {
                    if ($currentKey -and $currentValue) {
                        $currentObject | Add-Member -MemberType NoteProperty -Name $currentKey -Value $currentValue -Force
                    }
                    # pass the object to splunk
                    $currentObject
                    # reset the object after it has been passed
                    $currentObject = $null
                    $currentKey = $null
                    $currentValue = $null
                }
            }
            default {
                # Save values of multiline parameters if a current key exists
                if ($currentObject -and $currentKey) {
                    $currentValue += $_ + "`n"
                }
            }
        }
    }
    # return the last row object
    if ($currentKey -and $currentValue) {
        $currentObject | Add-Member -MemberType NoteProperty -Name $currentKey -Value $currentValue -Force
    }
    $currentObject

Note: if your command already returns a list of PowerShell objects, you can simply iterate over that list and emit every object it holds. Hope it helps somebody! Happy splunkin', Oliver

Hi, I just installed Splunk_TA_windows on my Windows 2016 server. The server is running the Splunk UF version 7.3.x, and this is a new install. I am getting this error message during startup of the Splunk UF and when I run the btool command:

    'C:\Program Files\splunkuniversalforwarder\bin\splunk.exe' btool check debug

    Checking: C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf
    Invalid key in stanza [user_account_control_property] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 10: external_cmd (value: user_account_control_property.py user AccountControl userAccountPropertyFlag).
    Invalid key in stanza [user_account_control_property] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 11: external_type (value: python).
    Invalid key in stanza [user_account_control_property] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 12: fields_list (value: userAccountControl,userAccountProperty Flag).
    Invalid key in stanza [dhcp_discard_headers] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 19: REGEX (value: ^(?:[^\d]+|\d+[^\d,])).
    Invalid key in stanza [dhcp_discard_headers] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 20: DEST_KEY (value: queue).
    Invalid key in stanza [dhcp_discard_headers] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 21: FORMAT (value: nullQueue).
    Invalid key in stanza [auto_kv_for_microsoft_dhcp] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 24: DELIMS (value: ",").
    Invalid key in stanza [auto_kv_for_microsoft_dhcp] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 25: FIELDS (value: msdhcp_id,date,time,description,ip,nt_host,mac ).
    Invalid key in stanza [msdhcp_signature_lookup] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 28: filename (value: msdhcp_signatures.csv).
    <......SNIP ...>
    Invalid key in stanza [dns_recordclass_lookup] in C:\Program Files\splunkuniversalforwarder\etc\apps\Splunk_TA_windows\default\transforms.conf, line 1267: filename (value: dns_recordclass_lookup.csv).
    Invalid key in stanza [geo_us_states] in C:\Program Files\splunkuniversalforwarder\etc\apps\search\default\transforms.conf, line 2: external_type (value: geo).
    Invalid key in stanza [geo_us_states] in C:\Program Files\splunkuniversalforwarder\etc\apps\search\default\transforms.conf, line 3: filename (value: geo_us_states.kmz).
    Invalid key in stanza [geo_countries] in C:\Program Files\splunkuniversalforwarder\etc\apps\search\default\transforms.conf, line 6: external_type (value: geo).
    Invalid key in stanza [geo_countries] in C:\Program Files\splunkuniversalforwarder\etc\apps\search\default\transforms.conf, line 7: filename (value: geo_countries.kmz).
    Invalid key in stanza [geo_attr_us_states] in C:\Program Files\splunkuniversalforwarder\etc\apps\search\default\transforms.conf, line 10: filename (value: geo_attr_us_states.csv).
    Invalid key in stanza [geo_attr_countries] in C:\Program Files\splunkuniversalforwarder\etc\apps\search\default\transforms.conf, line 13: filename (value: geo_attr_countries.csv).
    Invalid key in stanza [geo_hex] in C:\Program Files\splunkuniversalforwarder\etc\apps\search\default\transforms.conf, line 16: external_type (value: geo_hex).

It looks like there's a syntax error reported for every line in the default transforms.conf file. I upgraded from Splunk UF 7.3.3 to Splunk UF 7.3.9: same problem. This is a default Splunk UF install; no other application is deployed to this UF.

Hi Team, we are trying to deploy the Dell EMC ECS App for Splunk Enterprise v1.1.0 and we have an issue during the configuration. We are stuck in a loop on the "Setup default values" step, and afterwards we get a new page explaining that the configuration has not been fully completed yet. Could you help with this issue, please? Thanks in advance.

Hello everyone! I'm new to Splunk, but I'd like to monitor my Splunk Enterprise instance with Prometheus and Grafana. I'd like to get system statistics about how many messages are ingested by the splunk process (with the sources as labels), the internal health of the system (various message queues), processing time for messages, lookup times for various external sources, etc. I figured that one way of collecting such data would be to scrape various logs with grok_exporter and export the relevant metrics to a Prometheus instance. However, I'm new to Splunk and I don't know what the relevant logs are. Could you point me in the right direction? I'd also like to scrape various warnings/errors generated by Splunk (and also license usage messages) so that I can forward them to Alertmanager. In my searches I did find this project, https://github.com/ehershey/splunk-exporter, but it exports only the most basic data (Splunk status as up or down). Any advice is welcome! Regards, Adrian

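For orientation, the statistics described here live in $SPLUNK_HOME/var/log/splunk, chiefly metrics.log (ingestion thruput and pipeline/queue fill), splunkd.log (warnings and errors), and license_usage.log. Splunk also indexes these files into its own _internal index, so before writing grok patterns you can inspect what metrics.log carries with a sketch like:

    index=_internal source=*metrics.log group=queue
    | timechart avg(current_size_kb) by name

(group=thruput in the same file covers the per-source ingestion counters.)
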
Hello, I have a CSV file which contains multiple lines of data. When I upload it into Splunk, a few lines of data are merged into a single event. I tried to put a few blank lines between those lines, but the lines still end up in a single event. Example (CSV file data):

    First line in csv, first row, first word
    Second line in csv, second row, second word
    Third line in csv, third row, third word

When I upload this data into Splunk, each line should become its own event, but the second and third lines come in as one event. Then I updated the file with an extra blank line between the lines:

    First line in csv, first row, first word

    Second line in csv, second row, second word

    Third line in csv, third row, third word

But the result is the same: it does not produce 3 events; it still merges 2 lines into one event.

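A sketch of the usual fix (the sourcetype name is a placeholder; apply it on the instance that parses the file and select that sourcetype when uploading): disable line merging in props.conf so each line breaks into its own event:

    [your_csv_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
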
Hi, I'm sure I'm not the first to ask this question, but I can't seem to find an answer that covers what I am trying to achieve. I have an index which collects job stats: start, end, fail, success, etc. What I would like to do is create a table displaying all the jobs I am interested in in one column, then the start, end, and run times, and a status column, like this:

    Jobname    Start Time    End Time    Run Time    Status
    abc        08:00         08:01       1           Success

The search below gives me everything EXCEPT that I cannot calculate 'Run Time', because the events are separate. I've tried 'streamstats' and 'transaction' without any success.

    index=foo sourcetype=bar_prd "p-foo*" earliest=-6h
    | rex "JOB: (?<j>p-foo-[a-z\-]+)"
    | rex "STATUS: (?<s>\w+)\s"
    | eval ST=if(s="RUNNING",_time,"")
    | eval ET=if(s="SUCCESS",_time,"")
    | eval Status=if(s="SUCCESS","Success","")
    | eval ST=strftime(ST,"%Y-%m-%d %H:%M:%S.%Q")
    | eval ET=strftime(ET,"%Y-%m-%d %H:%M:%S.%Q")
    | stats values(ST) as "Start Time", values(ET) as "End Time", values(Status) by j

As ever, I'd be very grateful for assistance.

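One sketch of a way to get the run time: keep the epoch times numeric through the stats call and only format them afterwards (field names as in the question):

    index=foo sourcetype=bar_prd "p-foo*" earliest=-6h
    | rex "JOB: (?<j>p-foo-[a-z\-]+)"
    | rex "STATUS: (?<s>\w+)\s"
    | stats min(eval(if(s="RUNNING",_time,null()))) as ST
            max(eval(if(s="SUCCESS",_time,null()))) as ET by j
    | eval "Run Time"=ET-ST
    | eval Status=if(isnotnull(ET),"Success","")
    | eval "Start Time"=strftime(ST,"%Y-%m-%d %H:%M:%S.%Q")
    | eval "End Time"=strftime(ET,"%Y-%m-%d %H:%M:%S.%Q")
    | table j "Start Time" "End Time" "Run Time" Status
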
Hi, I'm trying to join data from the same index but with a different marker field, and with multiple values in the second record type. Example rows:

    TS=06/22/2021 08:50:39:390|Type=A|Ids=550
    TS=06/22/2021 08:51:39:390|Type=B|Ids=495,550,698

What I want is to merge each Type=A record with the matching Type=B record and get how much time passed (so TS from Type B minus TS from Type A). Records with Type=A will always have one value in Ids, while Type=B can have one or more. Any ideas what the best approach could be? selfjoin is not possible in this case, as Ids on Type=B is a multivalue field.

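A sketch that avoids the join altogether: split and expand the Ids list, then pair the two types per Id with stats (this assumes _time carries the TS value, e.g. after a strptime on TS):

    ... | eval Ids=split(Ids,",")
    | mvexpand Ids
    | stats min(eval(if(Type="A",_time,null()))) as timeA
            max(eval(if(Type="B",_time,null()))) as timeB by Ids
    | eval seconds_elapsed=timeB-timeA
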
We have a SHC at version 8.1.3. When we use "earliest" and "latest" in a search, we get results based on the earliest and latest; however, it searches events based on the time picker. I.e., if I create a search "index=main earliest=-15m latest=now" and the time picker is set to "24 hours", the search scans all the events from the past 24 hours yet only displays the results for the last 15 minutes. If I test the same search outside of our SHC, on a standalone instance, using "-15m" in the search, I get back the last 15 minutes of events and ONLY the last 15 minutes of events are searched; the search does not care what is selected in the time picker. Also, in the job inspector I see the "Your time range was substituted based on your search string" message, as I would expect. In the SHC, I do not see this message. To add to the weirdness: if I include a sourcetype in my search, "index=main sourcetype=stuff earliest=-15m latest=now", it works as expected and I see the message about substituting the time range in the job inspector. However, if I include more than one sourcetype, then it does NOT substitute the time range.

Can I have an Index Cluster running on both RHEL 7 and RHEL 8? We are looking to migrate our Splunk estate from RHEL 7 over to RHEL 8. As we have an existing Index Cluster, the plan is:

Start - RHEL 7 Index Cluster
- Build the new RHEL 8 indexers
- Put the cluster into maintenance mode
- Add the RHEL 8 indexers into the existing RHEL 7 Index Cluster
- Disable maintenance mode
- Rebalance data across the RHEL 7 / 8 cluster
- Run splunk offline --enforce-counts on the RHEL 7 indexer(s) to be removed
End - RHEL 8 Index Cluster

We have two Splunk Cloud instances, dev and production. Developers create a Splunk app, e.g. ap123456, with dashboards and other knowledge objects in the dev instance against all the ingested dev data. Everything tests out well and is ready to be promoted to production. How can we export the application? We tried to use the REST API "rest /services/apps/local/AP123456/package", but it is not a supported option. Given the amount of test data and the number of developers, we consciously decided Enterprise is not a viable option. Is there an alternative way to export the application from Splunk Cloud? Any ideas or suggestions are truly welcome. Thanks, Jeffrey

Context is structured sourcetypes such as JSON. First: does use of TIMESTAMP_FIELDS require INDEXED_EXTRACTIONS? (The Web UI suggests so.) In Bug: Duplicate values with INDEXED_EXTRACTION?, @badrinath_itrs referred to an intense case study, The Indexed Extractions vs. Search-Time Extractions Splunk Case Study, regarding INDEXED_EXTRACTIONS: "To summarize, Indexed Extractions should be used with caution. Splunk gives a pretty fair warning against using them in almost any doc that references Indexed Extractions, including their definition on Splexicon." Then I realized that for JSON documents whose timestamp field falls beyond 128 characters, it is better to set INDEXED_EXTRACTIONS=json in conjunction with TIMESTAMP_FIELDS. (There is an index-time penalty to setting MAX_TIMESTAMP_LOOKAHEAD too large.) INDEXED_EXTRACTIONS=json then causes duplicate values at search time unless KV_MODE is set to none on the search head. Given Splunk's extraordinary search-time capabilities, if I could use TIMESTAMP_FIELDS in conjunction with INDEXED_EXTRACTIONS=none, the problem would be solved without touching KV_MODE. Is this possible? Secondly, because INDEXED_EXTRACTIONS=json nearly demands KV_MODE=none, wouldn't it be useful for the Web GUI to automatically set KV_MODE=none when the "Indexed Extractions" selector points to a structured sourcetype? The user could still override it in the Advanced view, but the presence of this default would save lots of headaches for people like me.

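For reference, a sketch of the combination discussed above (the sourcetype name and the dotted field path are placeholders): props.conf on the instance that first parses the data,

    [my_json_sourcetype]
    INDEXED_EXTRACTIONS = json
    TIMESTAMP_FIELDS = event.created

plus, to suppress the duplicate values, props.conf on the search head:

    [my_json_sourcetype]
    KV_MODE = none
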
Hello, I am running into an issue with some spath and mvexpand functions in Splunk. I get the following error: "output will be truncated at 3700 results due to excessive memory usage." After searching here, I found a few previous answers that worked for others; however, none of them is working out for me. Here is my search:

    index=ehub-loop
    | rex "(?:((?:\[BEGIN LOGGING AT (?<Event_Timestamp>.*)\]\n)?)((?:(?P<Event_log_entry>(?s).*)\n)?)((?:\[END LOGGING])?))" offset_field=_extracted_fields_bounds
    | rex field=Event_log_entry max_match=0 "^(?<single_log_entry>.+)\n*" offset_field=_extracted_fields_bounds
    | mvexpand single_log_entry
    | rex field=single_log_entry "(?P<log_Timestamp>\d{4}\-\d{2}\-\d{2}\s\d{2}:\d{2}:\d{2}\,\d{3})\s+(?P<log_level>[^ ]+)\s+\[(?P<Thread_Number>[^ ]+)\]\s+(?P<Class_Name>[^ ]+)\s+\-\s+(?P<log_msg>(?s).*)" offset_field=_extracted_fields_bounds
    | stats count(Class_Name) as Error_Count by Class_Name, log_level, log_msg

Each event will look like:

    [BEGIN LOGGING AT 2021-05-20 21:00:12,505]
    2021-05-21 12:09:40,460 Loglevel [Threadid] Classname - logmsg
    2021-05-21 12:09:40,476 Loglevel [Threadid] Classname - logmsg
    2021-05-21 12:09:40,507 Loglevel [Threadid] Classname - logmsg
    2021-05-21 12:09:40,507 Loglevel [Threadid] Classname - logmsg
    2021-05-21 12:09:40,522 Loglevel [Threadid] Classname - logmsg
    [END LOGGING]

Please help me out.

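A sketch of the usual mitigation: mvexpand copies every field on the event, including _raw, onto each expanded row, so dropping everything you don't need just before the expansion shrinks its memory footprint:

    ...
    | rex field=Event_log_entry max_match=0 "^(?<single_log_entry>.+)\n*"
    | fields single_log_entry
    | fields - _raw
    | mvexpand single_log_entry
    | ...

If the truncation persists, the cap that produces the message is max_mem_usage_mb in limits.conf.
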
Hi. I want to move email from a primary mailbox to its online archive. To achieve that I tried to use the 'move email' method provided by the EWS app, but I don't know how to specify the folder parameter for the archive mailbox. Thanks for any help.

I am upgrading our TA-Canary app (Thinkst Canary Add-on for Splunk | Splunkbase). By upgrading I mean completely removing the old version (which is working fine, by the way) and then installing the new TA-Canary on our heavy forwarder. The first thing you need to perform is the setup page: you enter your API URL and Console API auth token (I use the same values as in the old, working app!). We get the error:

    ERROR: Delete user failed, cannot find the credential information with id : credential:ta_canary_settings_realm:admin:

How can we get this app to work?

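The message points at a stored credential, so one thing worth checking, as a sketch only (storage/passwords is the standard Splunk REST endpoint; the credential name is taken from the error text, and the admin credentials are placeholders), is whether a stale entry survived the uninstall:

    curl -k -u admin:<password> https://localhost:8089/services/storage/passwords

If the ta_canary entry is still listed, deleting it before re-running the setup may clear the error (the colons in the entry name may need URL-encoding as %3A):

    curl -k -u admin:<password> -X DELETE "https://localhost:8089/services/storage/passwords/ta_canary_settings_realm:admin:"
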
I am trying to access my dashboard definition as an XML file, for which I'm using the Splunk REST API, but I always get error 404 object not found / action forbidden:

    curl -ku "<username>:<password>" https://<host>/servicesNS/<adminof app>/<appname>/data/ui/views/<dashboardname>

Even if I generate a session key and attach it in the header field of the calls, it throws the same error. What could be the possible reasons?

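Two details worth double-checking, as a sketch (placeholders as in the question): the REST API listens on the management port, 8089 by default, not the web port, and the owner segment of servicesNS must be the dashboard's owner, or "-" to match any owner:

    curl -k -u "<username>:<password>" "https://<host>:8089/servicesNS/-/<appname>/data/ui/views/<dashboardname>"

A 404 also appears when the app name or dashboard ID is misspelled, and "action forbidden" when the account lacks read access to the view.
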
Hi @gcusello, can you please help me design a Splunk query to show whether a particular user has been coming into the office at Mascot (and/or Erskine Park), or has otherwise been working from home (or elsewhere)? I'd like to structure the results to show a table listing logon time and IP address, like this:

    Workstation    Last Login           User
    10.11.12.13    15-11-01 10:00:00    user1
    10.12.13.14    15-11-01 15:34:02

Regards, Rahul

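A sketch of the general shape, assuming Windows logon events (EventCode 4624) are being collected; the index and field names are placeholders to adapt to your data:

    index=wineventlog EventCode=4624 user="user1"
    | stats latest(_time) as last_login by src_ip, user
    | eval "Last Login"=strftime(last_login, "%y-%m-%d %H:%M:%S")
    | rename src_ip as Workstation, user as User
    | table Workstation, "Last Login", User

Mapping each Workstation/IP to a site (Mascot, Erskine Park, or remote) would then be a lookup or a case() over the office subnets.
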