All Topics

I have a lookup table file that changes periodically. Replacing the lookup table this time around resulted in multiple errors. I have tried a few things to get around them, including deleting the lookup table file, the lookup definition, and the automatic lookup. While recreating the lookup table file, definition, and automatic lookup, I encountered the errors below.

The lookup table file uploaded successfully, with global permissions. When creating the lookup definition, I got this error message: "Encountered the following error while trying to save: An object with name=[myfile] already exists". To get around the error, I added '06' in front of the file name, which worked. I ensured permissions are global.

When creating the automatic lookup, I got this error message: "Your entry was not saved. The following error was reported: SyntaxError: Unexpected token '<', "<p class=""... is not valid JSON." I cancelled out, went back in, and was able to create the automatic lookup. I ensured permissions are global.

Unfortunately, the lookup is not working in my searches, so I suspect there are still latent issues with Splunk reconciling the lookup. With all of these errors, do I need to clear out some config file to get the lookup table working again? If so, how do I do that?
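A minimal sketch of how one might hunt for leftover configuration before clearing anything by hand, assuming command-line access to the search head ("myfile" is the example name from the error above):

    # List every transforms.conf lookup stanza and the file it comes from;
    # a leftover [myfile] stanza would explain the "already exists" collision.
    $SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i myfile

    # Automatic lookups live in props.conf as LOOKUP-<name> settings.
    $SPLUNK_HOME/bin/splunk btool props list --debug | grep -i lookup

    # The uploaded CSV itself sits under an app's lookups directory.
    ls $SPLUNK_HOME/etc/apps/*/lookups/ | grep -i myfile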
I have a Splunk table where a few columns have dropdowns (e.g. ddcol1, ddcol2) and the last column has a button (submit). The button should set all the dropdowns in that row to a default value ("ok"). I have written the JS below, but the dropdowns don't update when the button is clicked.

    require([
        'underscore', 'jquery', 'splunkjs/mvc', 'splunkjs/mvc/tableview',
        'splunkjs/mvc/simplexml/ready!'
    ], function(_, $, mvc, TableView) {
        // Add dropdown to table
        var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
            canRender: function(cell) {
                return _(["ddcol1", "ddcol2", "submit"]).contains(cell.field);
            },
            render: function($td, cell) {
                console.log(cell.field);
                if (cell.field == "ddcol1" || cell.field == "ddcol2") {
                    if (cell.value == "dd") {
                        var strHtmlInput = `
                            <select>
                                <option value="ok">ok</option>
                                <option value="not ok">Not Ok</option>
                            </select>`;
                        $td.append(strHtmlInput).on("change", function(e) {
                            console.log(e.target.value);
                        });
                    } else {
                        $td.append(cell.value);
                    }
                }
                if (cell.field == "submit") {
                    console.log(cell.field);
                    var strHtmlInput1 = `<input type='button' class='table-button' value='All Ok'></input>`;
                    $td.append(strHtmlInput1);
                }
            }
        });

        $(document).on("click", ".table-button", function() {
            console.log("clicked");
            console.log($(this).parents("tr").find("td[data-cell-index='6'").find(":selected").text());
            $(this).parents("tr").find("td[data-cell-index='6']").find(":selected").value = "ok";
            console.log('selected');
            console.log($(this).parents("tr").find("td[data-cell-index='6'").find(":selected").text());
        });

        var sh = mvc.Components.get("tbl1");
        if (typeof(sh) != "undefined") {
            sh.getVisualization(function(tableView) {
                // Add custom cell renderer and force re-render
                tableView.table.addCellRenderer(new CustomRangeRenderer());
                tableView.table.render();
            });
        }
    });

@niketn @kamlesh_vaghela
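For what it's worth, a likely culprit in the click handler: `.find(":selected")` returns a jQuery object, and assigning to its `.value` property does nothing to the DOM; also, two of the selectors are missing a closing bracket (`td[data-cell-index='6'`). A minimal sketch of a handler that sets every dropdown in the clicked row to "ok", assuming that is the intent (not just cell index 6):

    $(document).on("click", ".table-button", function() {
        // Walk every <select> in the clicked row...
        $(this).closest("tr").find("td select").each(function() {
            // ...and use .val(), which updates the element and the UI;
            // then fire "change" so any existing listeners see the new value.
            $(this).val("ok").trigger("change");
        });
    });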
Hi, I'm trying to join two searches where the first search includes a single field with multiple values. The matching field in the second search only ever contains a single value. The join only returns matches when there are identical values in search 1 and search 2. In other words, if search 1 has a field named id containing the multiple values a and b, and the second search contains id=b, the results aren't as expected: the search only returns results if search 1 contains a single value for the field. Any suggestions on how to join a search with multiple values?
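A minimal sketch of one common workaround, assuming the multivalue field is named id as in the example above: expand it into one event per value before joining, so each single value can match.

    search1
    | mvexpand id
    | join type=inner id
        [ search search2 ]

Alternatively, appending both searches and aggregating with stats by id avoids join entirely and tends to scale better.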
I would like to use the "Log Event" alert action to store all fields from the search result into an index. Is there a token (like $result.fieldname$) that returns not just one specified field of the result, but all fields?
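I'm not aware of a documented wildcard token for all result fields; a common workaround, sketched below, is to skip the alert action and have the search itself write its full results to an index with collect (my_alert_index is a hypothetical index name):

    ... your alert search ...
    | collect index=my_alert_index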
I have an index called advanced_hunting, and in this index there is a field called category with several values, such as:

AdvancedHunting-DeviceFileEvents
AdvancedHunting-DeviceNetworkEvents
etc.

I'm looking into the AdvancedHunting-DeviceFileEvents category to see which files have been deleted, but the logs in that category don't include properties.AccountName (i.e. the person); they do, however, have a field called properties.DeviceName. The AdvancedHunting-DeviceLogonEvents category has both properties.AccountName and properties.DeviceName. So I was wondering if it's possible to connect properties.AccountName and properties.DeviceName together so I can see who has deleted something.
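A minimal sketch of bridging the two categories by device, assuming properties.FileName is the (hypothetical) file field in the file events; note this attributes every account seen logging onto the device, not necessarily the actual deleter:

    index=advanced_hunting category="AdvancedHunting-DeviceFileEvents" OR category="AdvancedHunting-DeviceLogonEvents"
    | stats values(properties.AccountName) as AccountName values(properties.FileName) as Files by properties.DeviceName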
Hi, I am new to Splunk and trying to upload data for practising. I am using the data from the link below.

https://docs.splunk.com/Documentation/Splunk/9.0.5/SearchTutorial/GetthetutorialdataintoSplunk

When I try to upload tutorialdata.zip, it loads for a long time and I get a read timeout. When I extract tutorialdata.zip and upload a single log file, there are no issues uploading, but when I click Next, the Set Source Type page is empty and does not display the raw data (as shown in the screenshot). If I continue to the end, submit, and search for the index/source, I get 0 events found. Could you please suggest if I am missing anything? I am currently using the free license option.
Hi Team,

We have now migrated to Splunk Cloud Platform version 9.0.x, and we collect logs from different log sources using universal forwarders on version 8.1. Due to a vulnerability report, we would like to upgrade the Splunk Universal Forwarder to 9.0.x or 9.4.x. Could you please confirm whether we can upgrade directly from 8.1 to 9.4.x? Will it have any impacts? If yes, what prerequisites need to be looked into? Please also share the steps/procedure. Thank you!!
Hi Everyone, I need one piece of help. I am trying to drill down from a single value visualization built in Dashboard Studio to a Simple XML dashboard, with values passed down as a token. But here is the complex part: the search populating the single value in Dashboard Studio returns two fields, 1) count_id and 2) list_of_id. I want to show count_id on the single value and pass list_of_id on drilldown. I have tried $row.list_of_id$ and also $row.list_of_id.value$, without success.
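A sketch of what the Dashboard Studio source might look like, assuming the single value's search keeps both fields in the same row (the target dashboard path and form token name here are hypothetical):

    "eventHandlers": [
      {
        "type": "drilldown.customUrl",
        "options": {
          "url": "/app/search/my_simplexml_dashboard?form.ids=$row.list_of_id.value$",
          "newTab": true
        }
      }
    ]

One caveat: $row.<field>.value$ can only resolve fields that survive into the visualization's data, so the search feeding the single value must still emit list_of_id as a column.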
Hi, I need to combine events that share the same value in one field into a single row.

Query:

    index=webmethods_dev5555_index 0000000001515185
    | rex field=_raw "(?<wmDateTime>[\d\-:\s]+) .*"
    | rex field=messageId "(?<docNum>\d+)\|\|(?<whoNum>.*)"
    | eval wmcreateDateTime = if(like(message, "%request from EWM%"), wmDateTime, "")
    | eval wmconfirmDateTime = if(like(message, "%request sent to Exacta successfully%"), wmDateTime, "")
    | eval wmsentDateTime = if(like(message, "%ready to send to Exacta%"), wmDateTime, "")
    | lookup wminterface_mapping.csv wmInterface as interface OUTPUT Interface
    | table whoNum, Interface, wmcreateDateTime, wmconfirmDateTime, wmsentDateTime

My current output and the desired format were shown in screenshots (omitted here); please suggest a query to get the desired output.
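A minimal sketch of the usual pattern, appended to the query above: collapse rows sharing a key with stats, keyed here by whoNum (max() on these string timestamps simply picks the non-empty value, assuming at most one event populates each timestamp column per whoNum):

    | stats values(Interface) as Interface
            max(wmcreateDateTime) as wmcreateDateTime
            max(wmconfirmDateTime) as wmconfirmDateTime
            max(wmsentDateTime) as wmsentDateTime
            by whoNum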
I have my report where I have written something like this:

    | dbxquery query="select COLUMN1,COLUMN2,COLUMN3,COLUMN4 FROM TABLE1 WHERE COLUMN1 IN ('xyz') ",connection="XXX"

I have added this to my dashboard using this query:

    | savedsearch rep1
    | chart values(column3) AS Status BY column2 column4
    | fillnull value="-"
    | table column1 column2 ......

In my dashboard, the values of COLUMN1 are not displayed. Is there any way I can display the value of column1, i.e. 'xyz'?
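chart keeps only the fields named in its aggregation and BY clauses, which is why COLUMN1 disappears. A minimal sketch of one way around it, assuming every row carries a column1 value and a stats-style layout suits the panel:

    | savedsearch rep1
    | stats values(column3) as Status by column1 column2 column4
    | fillnull value="-"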
Hello all,

I need help understanding the difference between two fields: run_time (fetched from index=_internal) and total_run_time (fetched from index=_audit). I executed searches for the same id, and for events with the same timestamp the two searches showed different values for the two fields. Any guidance or information would be very helpful.

Thank you,
Taruchit
Hello Community, I have a table:

    Filename  Status
    file1     1
    file2     0

I tried:

    | eval Status=if(where Status = 0, "missing file", Status)

If Status = 0 I want to replace 0 with "missing file":

    Filename  Status
    file2     missing file

What is the best way to do this? Thanks in advance.
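A minimal sketch of the corrected eval: if() takes a boolean condition directly, with no embedded where:

    | eval Status=if(Status=0, "missing file", Status)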
{"log":"{\\"instanceId\\":\\"abc-fdh-48f-4432\\",\\"requestType\\":\\"ABC\\"} Using the above sample log, how to extract the request type and instanceId fields values?
Hi, I'm trying to use spath to break down a JSON log, but it duplicates the two fields "time" and "@timestamp" when I create a table, while the raw logs contain only one of each timestamp!

Here is my query:

    index="myindex"
    | spath input=_raw
    | dedup time
    | table time _time @timestamp _raw

Here is the output:

    time                        _time                @timestamp
    2023-06-16T12:27:54.907Z    2023-06-18 15:55:30  2023-06-18T12:23:01.109495047Z
    2023-06-16T12:27:54.907Z                         2023-06-18T12:23:01.109495047Z

Here is the raw log:

    {"server":"mysrv","tags":["_dateparsefailure"],"results":{"statement_id":0},"uniq":"026","@timestamp":"2023-06-18T12:23:01.109495047Z","@version":"1","success":"true","type":"in","http_poller_metadata":{"input":{"http_poller":{"response":{"status_code":200,"headers":{"date":"Sun, 18 Jun 2023 12:27:54 GMT","x-influxdb-build":"OSS","x-request-id":"8cae1609-0dd3-11ee-8ace-005056b7dda2","request-id":"8cae1609-0dd3-11ee-8ace-005056b7dda2","x-influxdb-version":"1.7.8","transfer-encoding":"chunked","content-type":"application/json"},"elapsed_time_ns":4031,"status_message":"OK"},"request":{"retry_count":0,"name":"cpu","host":{"hostname":"logsrv"},"original":{"url":"https://192.168.1.1:8086/query?pretty=true&db=mydb&q=SELECT%20*%20FROM%20%22msg%22%20WHERE%20time%20%3E%20now()%20-%202d%20limit%203600","headers":{"Authorization":"Token mytoken"},"method":"get"}}}}},"time":"2023-06-16T12:27:54.907Z","name":"msg","count":1,"connectionname":"myconnection"}
Hello All,

I tried to extract data from DOORS Next Gen. After importing the data, I found that a few fields are missing.

The search I ran:

    index=2w_triumph_7inch_doorsnext_export | search ModuleID=121994 | chart count by "RB_Status"

"RB_Status" is a field, but the "Artifact type" field is missing from the field list, and no results are found when attempting to search on "Artifact type". Similarly, empty values are dropped: e.g. if there are 10 empty values in "RB_Status", they are not counted when the chart command runs.

Thanks in advance.
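On the empty values: chart drops null groups, but they can be made countable first. A minimal sketch, assuming "(blank)" is an acceptable placeholder label:

    index=2w_triumph_7inch_doorsnext_export ModuleID=121994
    | fillnull value="(blank)" RB_Status
    | chart count by RB_Status

For "Artifact type", since the field name contains a space it needs quoting everywhere it is referenced; if it still returns nothing, the field is likely not being extracted at import time.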
Hi everyone,

I have a pretty huge multisearch query with multiple inputlookups, untangling the spaghetti monster that is my Kafka environment and multiple applications' usage thereof across a huge number of microservices. The query calculates latency based on a combination of metrics at each point (CDC of the source db, and Prometheus details of producer-to-consumer metrics, on to the final destination db) to give a source-to-destination latency for a huge number of topics. This makes up the base search of a dashboard that provides latency and then SLA% per app, so people can select from a dropdown and see: Kafka for Application X is running at an average 1.5s latency, the SLA threshold for AppX is 2s, and over the past 24 hours its SLA% is 99.98.

This is pretty great considering the heavy lifting the main query is doing in Splunk, and it gives quick real-time or hourly/daily SLA stats output. However, even though it is for a health metric and problem detection/resolution, it is SLA, and ultimately there's a desire for some level of historical tracking approaching reporting level, with visibility over longer periods like weekly, monthly, and quarterly SLA performance. This is where things start to slow, and the number of applications and the complexity of Kafka this query targets is only going to scale further.

So this is a bit of an esoteric question, but I'm wondering if there are any Splunk dashboard options or approaches to optimize something like this. Traditionally, with things like network traffic in Cacti or another tool suited to that, this kind of thing would pull results from a database, reducing the heavy crunching each time. Is there any approach I could use for a dashboard that saves results or data to make this more efficient? Could the SLA% results start being saved or indexed? Or, say, upon first load it crunches the latency results going back from now to -1 month in the background, and the other dashboard searches use those results to report on the daily/weekly numbers? I'm open to any ideas; aside from base-search referencing in dashboard panels, I haven't approached anything like this with Splunk as yet.
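One standard answer to this is summary indexing: a scheduled search does the heavy crunch once (say hourly), writes its small result set to a summary index, and the weekly/monthly/quarterly panels read only the pre-digested rows. A minimal sketch, where kafka_sla_summary, app, latency, and sla_threshold_s are hypothetical names standing in for the real query's fields:

    ... heavy multisearch latency calculation ...
    | stats avg(latency) as avg_latency_s by app
    | eval sla_met=if(avg_latency_s<=sla_threshold_s, 1, 0)
    | collect index=kafka_sla_summary

The long-range dashboard panels then search index=kafka_sla_summary and aggregate over weeks or months cheaply instead of re-crunching the raw metrics every load.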
My indexer cluster with SmartStore was working fine with the config below:

    [default]
    remotePath = volume:remote_store/$_index_name
    repFactor = auto

    # Configure the remote volume.
    [volume:remote_store]
    storageType = remote
    path = s3://splunk_data/

However, it is suddenly unable to start; all indexers report the following error on startup, and Splunk fails to come up:

    Problem parsing indexes.conf: Cannot load IndexConfig: Unable to load remote volume "remote_store" of scheme "s3" referenced by index "_audit": Could not get s3 region from the metadata endpoint
    Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

To fix this, I added the attribute below to indexes.conf in the hope that it would pick up the region:

    remote.s3.endpoint = https://s3.<region_name>.amazonaws.com

But after that I get the error below instead:

    Problem parsing indexes.conf: Cannot load IndexConfig: Unable to load remote volume "remote_store" of scheme "s3" referenced by index "_audit": Could not find access_key and/or secret_key in a configuration file, in environment variables or via the AWS metadata endpoint.
    Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

Can someone please help fix this issue?
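For what it's worth, a sketch of the volume stanza with the region pinned explicitly, assuming us-east-1 as a stand-in region; remote.s3.auth_region avoids the metadata-endpoint region lookup, and the key settings are only needed when no IAM instance role is reachable (the second error suggests the instance metadata service is not answering, so the IAM role / IMDS configuration on the indexers is worth checking too):

    [volume:remote_store]
    storageType = remote
    path = s3://splunk_data/
    remote.s3.auth_region = us-east-1
    remote.s3.endpoint = https://s3.us-east-1.amazonaws.com
    # Only if an instance role is unavailable:
    remote.s3.access_key = <access_key>
    remote.s3.secret_key = <secret_key>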
I'm trying to create an alert for

    index="abc*" | spath "LogMessage.message.msg" | search "LogMessage.message.msg"="status code = 500"

that fires:

Per second - if there are more than 2 events consecutively for 5 minutes
Per minute - if there are more than 5 events consecutively for 15 minutes
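A minimal sketch of the per-second variant, assuming "consecutively for 5 minutes" means every one-second bucket in a rolling 300-second window exceeds 2 events (the per-minute case is the same shape with span=1m, window=15, and threshold 5):

    index="abc*" | spath "LogMessage.message.msg" | search "LogMessage.message.msg"="status code = 500"
    | timechart span=1s count
    | streamstats window=300 min(count) as floor_count
    | where floor_count > 2

Note that timechart fills empty buckets with zero, so any quiet second resets the window, which matches the "consecutively" requirement.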
We have a dashboard that recently started alerting us on a risky command; we were using the fit command.

Following the docs, I added the following line to a newly created commands.conf that I put in the app's local folder to push:

    [fit]
    is_risky = false

According to the docs, I assumed this would just disable the warning for using that command. After I put it into the specified app's local folder, /export/opt/splunk/etc/shcluster/apps/<app-name>/local, I pushed the bundle, and it landed in the app's default folder on the search heads. But the biggest issue is that once I pushed that bundle, Splunk no longer recognizes the fit.py file. I tested putting that commands.conf in the app's default folder: same thing. I tested this a few times, and while I'm glad the bundle pushes were working, I'm a bit confused as to why Splunk no longer recognizes the command even though I'm only setting is_risky=false, which should only stop the warning.

Any help on this matter would be appreciated. And as a bonus, could you also explain why the local file in the app's directory is pushed to the app's default folder on the search heads? Thank you.
Hi guys,

I am making a dashboard for a leaving employee that follows their mail traffic. The dashboard works fine; I want to make it dynamic so anyone else can insert a new employee id and the dashboard will load the new data. I have tried using a dropdown box but couldn't make it work as I intended. This is my SPL:

    index=****** FromUser="Adam.Levin"
    | eval Data=RecipientUser+"@"+RecipientDomain+","+HasAttachments
    | eval MegaBytes=round((BytesSent/1024)/1024,2)
    | table Data MegaBytes HasAttachments
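A minimal sketch of the usual approach, assuming a Simple XML dashboard: a text input (a text box suits free-form employee ids better than a dropdown) that sets a token the search consumes; "employee" is a hypothetical token name:

    <fieldset submitButton="true">
      <input type="text" token="employee">
        <label>Employee (FromUser)</label>
        <default>Adam.Levin</default>
      </input>
    </fieldset>

and in the panel search:

    index=****** FromUser="$employee$"
    | eval Data=RecipientUser+"@"+RecipientDomain+","+HasAttachments
    | eval MegaBytes=round((BytesSent/1024)/1024,2)
    | table Data MegaBytes HasAttachments

A dropdown also works if it is populated dynamically from a search over recent FromUser values.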