All Topics

Hi, I'm interested in learning more about RBA Navigator. Does anyone have a way to contact Matt Snyder, the app's creator? I would like more information about the list of available features, use cases (if possible), and an installation guide. Thanks.
Hi, I am trying to use a custom JavaScript file to customize some button actions in my dashboard, but it doesn't work and I don't know why. I'm using the latest version of Splunk Enterprise. My custom script is in the folder $SPLUNK_HOME/etc/apps/app_name/appserver/static/. I have tried restarting Splunk web and using the bump button, but nothing works. Can anyone help me?

Simple XML dashboard code:

<form version="1.1" theme="dark" script="button.js">
  <search>
    <query>| makeresults | eval field1="test", field2="test1", field3="lll", field4="sgsgsg"</query>
    <earliest></earliest>
    <latest>now</latest>
    <done>
      <set token="field1">$result.field1$</set>
      <set token="field2">$result.field2$</set>
      <set token="field3">$result.field3$</set>
      <set token="field4">$result.field4$</set>
    </done>
  </search>
  <label>stacked_inputs</label>
  <fieldset submitButton="false" autoRun="true"></fieldset>
  <row>
    <panel>
      <title>title</title>
      <input id="test_input1" type="text" token="field1">
        <label>field1</label>
        <default>$field1$</default>
        <initialValue>$field1$</initialValue>
      </input>
      <input id="test_input2" type="text" token="field2">
        <label>field2</label>
        <default>$field2$</default>
        <initialValue>$field2$</initialValue>
      </input>
      <html>
        <style>
          #test_input2 {
            padding-left: 30px !important;
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <input id="test_input3" type="text" token="field3">
        <label>field3</label>
        <default>$field3$</default>
        <initialValue>$field3$</initialValue>
      </input>
      <input id="test_input4" type="text" token="field4">
        <label>field4</label>
        <default>$field4$</default>
        <initialValue>$field4$</initialValue>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <form>
          <div>
            <div>
              <label>Password</label>
              <input type="text" value="$field4$"/>
              <br/>
              <input type="password" id="exampleInputPassword1" placeholder="Password"/>
            </div>
          </div>
          <button type="submit" class="btn btn-primary">Submit</button>
        </form>
        <button onclick="test()">Back</button>
        <button onclick="test1()">Back1</button>
        <button id="back" data-param="test">Back2</button>
      </html>
    </panel>
  </row>
</form>

JavaScript code: as you can see, I have tried different methods.

Thanks for your help.
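A minimal sketch of what button.js could look like, assuming the goal is to wire up the Back2 button. Dashboards load script="..." files as RequireJS modules, so functions defined in them are not global, which is why inline onclick="test()" handlers in the XML cannot find them. Binding the handler inside the module avoids that (the selector matches the XML above; the log message is illustrative):

require([
    "jquery",
    "splunkjs/mvc/simplexml/ready!"
], function($) {
    // Functions defined in this module are NOT global, so inline
    // onclick="test()" attributes in the XML cannot reach them.
    // Bind the click handler here instead, once the dashboard is ready.
    $("#back").on("click", function() {
        console.log("Back2 clicked, data-param =", $(this).data("param"));
    });
});

After changing the file, bumping the static assets (/en-US/_bump) or a hard browser refresh is still needed so the new version of the script is actually served.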
Hello, I have a WSUS server that uses the Windows Internal Database (WID). I would like to ingest the WSUS service logs into Splunk, store them, and then parse them for further analysis. Could someone guide me on the best approach? Specifically:

- What is the best way to configure Splunk to collect logs from the WSUS service (and the database, if necessary)?
- Are there any best practices or recommended add-ons for parsing and indexing WSUS logs in Splunk?

Thanks in advance for your help!
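As a starting sketch rather than a vetted best practice: WSUS writes a plain-text service log that a Universal Forwarder on the WSUS host can tail with a monitor stanza. The path assumes a default WSUS installation, and the index and sourcetype names are hypothetical:

# inputs.conf on the Universal Forwarder (index/sourcetype names are examples)
[monitor://C:\Program Files\Update Services\LogFiles\SoftwareDistribution.log]
index = wsus
sourcetype = wsus:softwaredistribution
disabled = 0

Pulling data out of the WID itself would be a separate exercise, for example with Splunk DB Connect, if the flat log files turn out not to be enough.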
I have a working dashboard that displays a number of metrics and KPIs for the previous week. Today, I was asked to expand that dashboard to include a dropdown of all previous weeks over the last year.

Using this query I was able to fill in my dashboard dropdown pretty easily:

| makeresults
| eval START_EPOCH = relative_time(_time,"-1y@w1")
| eval END_EPOCH = START_EPOCH + (60 * 60 * 24 * 358)
| eval EPOCH_RANGE = mvrange(START_EPOCH, END_EPOCH, 86400 * 7)
| mvexpand EPOCH_RANGE
| eval END_EPOCH = EPOCH_RANGE + (86400 * 7)
| eval START_DATE_FRIENDLY = strftime(EPOCH_RANGE, "%m/%d/%Y")
| eval END_DATE_FRIENDLY = strftime(END_EPOCH, "%m/%d/%Y")
| eval DATE_RANGE_FRIENDLY = START_DATE_FRIENDLY + " - " + END_DATE_FRIENDLY
| table DATE_RANGE_FRIENDLY, EPOCH_RANGE
| reverse

Using this I get a dropdown with values such as:

10/07/2024 - 10/14/2024
09/30/2024 - 10/07/2024

and so on, going back a year. Adding it to my search as a token has been more challenging, though. Here's what I'm trying to do:

index=someIndex earliest=$token_epoch$ latest=$token_epoch$+604800

Doing this I get "Invalid latest_time: latest_time must be after earliest_time."

I've seen some answers around here that involve running the search and then using WHERE to apply earliest and latest. I'd like to avoid that, because the number of records that would have to be pulled before I could filter on earliest and latest is in the many millions. I've also considered using the timepicker, but my concern there is that the users of this dashboard will pick the wrong dates. I'd like to limit that by hardcoding the first and last days of the search via the dropdown. Is there a way to accomplish relative earliest and latest dates/times like this?
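One sketch that stays in Simple XML: earliest/latest will not evaluate the "+604800" arithmetic, but the dropdown's <change> handler can compute a second token with <eval>, so both bounds arrive in the search as plain epoch values. Token names follow the post; the surrounding stanza is illustrative:

<input type="dropdown" token="token_epoch">
  <label>Week</label>
  <fieldForLabel>DATE_RANGE_FRIENDLY</fieldForLabel>
  <fieldForValue>EPOCH_RANGE</fieldForValue>
  <search>
    <query><!-- the dropdown-populating query from the post --></query>
  </search>
  <change>
    <!-- Compute the end of the selected week once, at selection time -->
    <eval token="token_epoch_end">$value$ + 604800</eval>
  </change>
</input>

<query>index=someIndex earliest=$token_epoch$ latest=$token_epoch_end$ ...</query>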
As in the subject: I am looking at events with different fields and delimiters. I want to say: if the event contains thisword, then rex blah blah blah; else if the event contains thisotherword, then rex blah blah blah. I suspect this is simple, but thought I would ask.
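rex cannot be wrapped in an if(), but it also only extracts when its pattern matches, so a common sketch is one rex per format followed by a coalesce (the patterns and field names here are placeholders):

| rex "thisword\s+(?<val_a>\S+)"
| rex "thisotherword\s+(?<val_b>\S+)"
| eval value = coalesce(val_a, val_b)

Each rex quietly does nothing on events that do not match its pattern, so the chain behaves like an if/elseif.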
I have many Dashboard Studio dashboards by now. Those created earlier function normally. But starting a few days ago, newly created dashboards cannot use the "Open in Search" function any more; the magnifying glass icon is greyed out. "Show Open In Search Button" is checked. Any insight? My server is on 9.2.2 and there has been no change on the server side.
I have a question about breaking up a single line of data to send to the Splunk indexer. We are sending data which can have over 50,000 characters on a single line. I would like to know if there is a way to break up the data on the source server with the universal forwarder before sending it to the indexer, and then reassemble it after it arrives at the indexer. We would like to know if this is possible, rather than having to increase the TRUNCATE size on the indexer to take all the data at once.
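For reference, the alternative the post wants to avoid is a one-line props.conf change on the indexer (sourcetype name hypothetical). There is no built-in split-and-reassemble mechanism in the Universal Forwarder, so raising or disabling TRUNCATE is the usual route:

# props.conf on the indexer (or on a heavy forwarder doing the parsing)
[my_long_sourcetype]
# Default is 10000 bytes; 0 disables truncation entirely
TRUNCATE = 60000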
I have a working query that gives me a list of all printers, total job count, and total page count, and shows the location of each printer using a lookup. Sample data, lookup, and query are below.

Sample data (print logs from index=printer):

prnt_name   jobs   pages_printed   size_paper
CS001       1      5               letter
CS001       1      10              11x17
CS002       1      20              11x17
CS003       1      10              letter
CS003       1      15              11x17

Lookup data (from printers.csv):

prnt_name   location
CS001       office
CS002       dock
CS003       front

Splunk query:

index=printer
| stats count sum(pages_printed) AS tot_prnt_pgs by prnt_name
| lookup printers.csv prnt_name AS prnt_name OUTPUT location
| stats sum(count) AS print_jobs by prnt_name
| table prnt_name, location, count, tot_prnt_pgs

Splunk query results:

prnt_name   location   count   tot_prnt_pgs
CS001       office     2       15
CS002       dock       1       20
CS003       front      2       25

I have been trying to use a count(eval(if(...))) clause, but am not sure how to implement it, or whether that is the correct way to get the results I am after. I have been using various arguments from other Splunk posts but can't seem to make it work. Below is the output I am trying to get ("ltr" represents letter and "lgl" represents 11x17):

prnt_name   location   count   tot_prnt_pgs   ltr_count   ltr_tot_pgs   lgl_count   lgl_tot_pgs
CS001       office     2       15             1           5             1           10
CS002       dock       1       20             0           0             1           20
CS003       front      2       25             1           10            1           15

Appreciate any time given on this.
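A sketch of the eval-based aggregation the post is reaching for, assuming size_paper is already an extracted field with the values "letter" and "11x17". Doing the lookup before the stats lets location ride along as a grouping field, so a second stats pass is not needed:

index=printer
| lookup printers.csv prnt_name OUTPUT location
| stats count AS count
        sum(pages_printed) AS tot_prnt_pgs
        count(eval(size_paper="letter")) AS ltr_count
        sum(eval(if(size_paper="letter", pages_printed, 0))) AS ltr_tot_pgs
        count(eval(size_paper="11x17")) AS lgl_count
        sum(eval(if(size_paper="11x17", pages_printed, 0))) AS lgl_tot_pgs
        by prnt_name location
| table prnt_name location count tot_prnt_pgs ltr_count ltr_tot_pgs lgl_count lgl_tot_pgs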
Hi, I am trying to ingest macOS logd into Splunk Cloud. When I enable the logd input, it doesn't work. Based on the logs, it builds the "log show" command incorrectly:

log show --style ndjson --no-backtrace --no-debug --no-info --no-loss --no-signpost --predicate 'subsystem == "com.apple.TimeMachine" && eventMessage CONTAINS[c] "backup"' --start 2024-10-18 16:47:55 --end 2024-10-18 16:48:25

It should be:

log show --style ndjson --no-backtrace --no-debug --no-info --no-loss --no-signpost --predicate 'subsystem == "com.apple.TimeMachine" && eventMessage CONTAINS[c] "backup"' --start "2024-10-18 16:47:55" --end "2024-10-18 16:48:25"

Has anyone else noticed this? Is there a fix for it, or should I just create a support ticket?

r. Ismo
Hello team, I am confused to see multiple Carbon Black apps for SOAR. Can you please suggest which one is preferable for which use case?
I am using Splunk to generate the table below. It is run over a two-day date range, where I am trying to compare the counts:

ClassName   16-Oct-24   17-Oct-24
ClassA      544         489
ClassB      39          47
ClassC      1937        2100

My Splunk query is as follows:

index=myindex RecordType=abc ClassName IN ("ClassA", "ClassB", "ClassC")
| bucket _time span=1d
| stats avg(cpuTime) as avgCpuTime by ClassName _time
| xyseries ClassName _time avgCpuTime

I need the output below, which has an extra column giving the comparison. How can I tweak this query? Is there another way to achieve this in a more visually appealing manner?

ClassName   16-Oct-24   17-Oct-24   %Reduction
ClassA      544         489         10%
ClassB      39          47          -21%
ClassC      1937        2100        -8%
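One sketch: format _time into the day labels before xyseries, then compute the delta from the two resulting columns. The hard-coded column names assume the same fixed two-day range as the example; a rolling date range would need something like foreach instead:

index=myindex RecordType=abc ClassName IN ("ClassA", "ClassB", "ClassC")
| bucket _time span=1d
| stats avg(cpuTime) as avgCpuTime by ClassName _time
| eval day = strftime(_time, "%d-%b-%y")
| xyseries ClassName day avgCpuTime
| eval "%Reduction" = round((('16-Oct-24' - '17-Oct-24') / '16-Oct-24') * 100) . "%"

This matches the example rows, e.g. ClassA: (544 - 489) / 544 rounds to 10%.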
Hello All,

We are encountering an issue with the Splunk update-password API. When we make a request to update a user's password, the API returns a 200 OK status code, but instead of the expected JSON response we receive an HTML response. Additionally, despite the successful status code, the password is not updated on the server. This worked earlier, and we have verified that the issue occurs on both on-prem and Cloud instances.

Splunk Enterprise version: 9.3.1.0

We followed the official documentation: https://docs.splunk.com/Documentation/Splunk/9.3.1/RESTREF/RESTaccess#authentication.2Fusers
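For comparison, a minimal request against the documented endpoint might look like this (host, credentials, and username are placeholders). One common cause of a 200 response with an HTML body is that the request reached the web UI port (8000) rather than the management port (8089), which would also explain why nothing changes:

# POST to the management port (8089), not the web port (8000)
curl -k -u admin:yourpassword \
    https://localhost:8089/services/authentication/users/testuser \
    -d password=NewPassw0rd \
    -d output_mode=json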
Hi,

We are just starting to use Splunk Infrastructure Monitoring, and have added the "Splunk Infrastructure Monitoring Add-on". We created the connection to Splunk Observability without any problems, and admins can run the search command "| sim flow query=..." without any problems.

The problem arises when a normal user tries the same command, and we get the following error message: "Error in "sim" command: Splunk Infrastructure Monitoring API Connection not configured."

I have reviewed all the permissions I could think of, and except for a couple of views, there is public access. I can't find any new capabilities that might need to be added.

If anyone can point me in the right direction, it would be greatly appreciated.

Kind regards
Hi,

I'm trying to ingest CSV data (without a timestamp) using a Universal Forwarder (UF) running in a fresh container. When I attempt to ingest the data, I encounter the following warning in the _internal index, and the data ends up being ingested with a timestamp from 2021. This container has not previously ingested any data, so I'm unsure why it defaults to this date.

10-18-2024 03:42:00.942 +0000 WARN DateParserVerbose [1571 structuredparsing] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Jan 13 21:06:54 2021). Context: source=/var/data/sample.csv|host=splunk-uf|csv|6215

Can someone explain why this date is being applied, and how I can prevent this issue?
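A sketch of the usual prevention, assuming the file genuinely has no timestamp column: tell the parser not to hunt for a date at all, so events are stamped with the time of ingestion. Because this is a structured (csv) input, parsing happens on the forwarder, so the stanza belongs in props.conf on the UF itself (sourcetype name hypothetical):

# props.conf on the UF (structured parsing happens at the forwarder)
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
# Stamp each event with the current time instead of falling back to
# the "timestamp of previous event" chain seen in the warning
DATETIME_CONFIG = CURRENT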
I have a subquery result of:

host1
host2
host3

and I want to put this whole host result into the main query as host=*.

1. Subquery:

| inputlookup test.csv
| search cluster="cluster1"
| stats count by host
| fields - count

2. Main query using the subquery:

index=abc host="*"

where host="*" is the subquery result. Or is there any way to express the subquery result as host IN (host1, host2, host3) in the main query?
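A sketch of the standard pattern: when a subsearch returns a field named host, Splunk automatically rewrites its results into (host="host1") OR (host="host2") OR ..., so it can be dropped straight into the base search:

index=abc [ | inputlookup test.csv | search cluster="cluster1" | stats count by host | fields host ]

Running the subsearch on its own with | format appended shows the exact expression that gets substituted into the outer search.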
I had some basic queries: Can Splunk be deployed as a CNF on the Red Hat OpenShift Container Platform? Also, has the same deployment been tested with Cisco UCS servers? I am starting out with Splunk, so any quick references for deployment selection and scaling guides would be helpful.
I am having some issues getting this to work correctly. It does not return all the results. I have different records in different sourcetypes under the same index.

sourcetypeA:

eventID = computerName.sessionID
infoIWant1 = someinfo1
infoIWant2 = someinfo2

SourcetypeB's records are broken into events that I need to correlate:

event1:
sessionID = sessionNo1
direction = receive

event2:
sessionID = sessionNo1
direction = send

I attempted the search below, using the transaction command to correlate the records in sourcetypeB:

index=INDEX sourcetype=sourcetypeA
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| do some filter on infoIWant fields here
| join type=inner sessionID
    [ search index=INDEX sourcetype=sourcetypeB
      | transaction sessionID
      | where eventcount==2
      | fields sessionID duration ]
| chart count by duration
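join subsearches are silently capped (50,000 rows and 60 seconds by default), which is a common reason for missing results. A join-free sketch that correlates everything in one pass, assuming sessionID is already an extracted field in sourcetypeB (the original rex fills it in for sourcetypeA, where it does nothing on B's events):

index=INDEX (sourcetype=sourcetypeA OR sourcetype=sourcetypeB)
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| stats values(infoIWant1) AS infoIWant1
        values(infoIWant2) AS infoIWant2
        range(eval(if(sourcetype=="sourcetypeB", _time, null()))) AS duration
        count(eval(sourcetype=="sourcetypeB")) AS b_events
        by sessionID
| where b_events == 2
| chart count by duration

The filtering on the infoIWant fields can then be applied after the stats, since values() carries them through to the correlated rows.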
Hello all,

I recently discovered Linux capabilities, in particular the CAP_DAC_READ_SEARCH option of the AmbientCapabilities parameter in services, and realised it is actually implemented in Splunk UF 9.0+. I was happy to see this included in the UF's service, but I then found it is not enabled by default on Splunk Enterprise (I was using 9.3.1), so I attempted to create an override for the service that includes the aforementioned parameter. Unfortunately, I was still unable to ingest logs for which the user running Splunk did not have permissions.

Funnily enough, I tried to set up monitoring of /var/log/messages through the GUI; I was able to see the logs when selecting the sourcetype, but then got an error "Parameter name: Path is not readable" when submitting the conf. I also get an insufficient-permission message in the internal logs when forcing the monitoring of /var/log/messages via an inputs.conf. I read in an older post that this behaviour comes from the use of an inappropriate function when checking the permissions on the file...

So my questions to the community and Splunk employees are:

- Are capabilities in services supported for Splunk Enterprise? If so, how can I set them up? If not, will they be supported at some point?
- How would you collect logs on a HF or standalone instance where the user running Splunk has no rights on the logs to ingest?

Thanks
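For reference, the override attempt described above would look roughly like this, assuming the unit is named Splunkd.service (as created by splunk enable boot-start with systemd management); whether splunkd on Enterprise actually honours the capability is exactly the open question here:

# systemctl edit Splunkd.service
# creates /etc/systemd/system/Splunkd.service.d/override.conf
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH

# then apply it:
#   systemctl daemon-reload && systemctl restart Splunkd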
I have a multiselect dropdown menu with field names as values. When I select one or more values from the dropdown menu, those fields/columns need to be totalled. I tried the code below, as suggested by Meta AI, but it is not producing any result. Please help me.

<dashboard>
  <label>Sum Selected Fields</label>
  <row>
    <panel>
      <input type="dropdown" token="selected_fields">
        <label>Select Fields</label>
        <choice value="field1">Field 1</choice>
        <choice value="field2">Field 2</choice>
        <choice value="field3">Field 3</choice>
      </input>
      <chart>
        <search>
          | eval sum_fields="$selected_fields$"
          | stats sum(eval(split(sum_fields, ","))) as Total by Jobname
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
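A sketch of a working variant: a multiselect input can decorate each selected value and join them with a delimiter, so the token itself expands to a valid eval expression such as 'field1' + 'field2'. The choice values follow the post; the base search and Jobname grouping are assumed from context:

<input type="multiselect" token="selected_fields">
  <label>Select Fields</label>
  <choice value="field1">Field 1</choice>
  <choice value="field2">Field 2</choice>
  <choice value="field3">Field 3</choice>
  <!-- Turns the selections into: 'field1' + 'field2' + ... -->
  <valuePrefix>'</valuePrefix>
  <valueSuffix>'</valueSuffix>
  <delimiter> + </delimiter>
</input>

<query>index=... | eval Total = $selected_fields$ | stats sum(Total) by Jobname</query>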
I have the following fabricated search, which is a pretty close representation of what I actually want to do, and it gives me the results I want:

(index=_audit (action=search OR action=GET_PASSWORD)) OR
(index=_internal
    [ search index=_audit (action=search OR action=GET_PASSWORD)
      | dedup user
      | table user ])
| stats count(eval(index="_audit")) as count, values(clientip) as clientip, count(eval(index="_internal")) as internalCount by user

i.e. for everyone who has performed a search or GET_PASSWORD in one index, I want to know something about them, gathered from both indexes. I can't get past the feeling that I shouldn't need to repeat the "index=_audit (action=search OR action=GET_PASSWORD)" search, which in the actual search is a whole lot of SPL, so duplicating it makes things untidy. Macros aside, can anyone come up with a more elegant solution?
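One sketch that avoids the repetition: pull both indexes, flag the qualifying _audit events once with eventstats, and discard users who never produced one. The trade-off is that this reads all of _internal up front rather than pre-filtering it by user, so it may be slower over large time ranges:

(index=_audit (action=search OR action=GET_PASSWORD)) OR index=_internal
``` flag each user who has at least one qualifying _audit event ```
| eventstats count(eval(index="_audit")) AS auditCount by user
| where auditCount > 0
| stats count(eval(index="_audit")) AS count
        values(clientip) AS clientip
        count(eval(index="_internal")) AS internalCount
        by user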