All Posts


@lawrence_magpoc , I am running Splunk Universal Forwarder 9.0.2 on one of my Linux client machines. For the past couple of days I have been getting the following events in the internal logs; it looks like the forwarder crashes and the service then restarts automatically.

[build 17e00c557dc1] 2024-02-08 05:26:15 Received fatal signal 6 (Aborted) on PID 1908113.
Cause: Signal sent by PID 1908113 running under UID 9991.
Crashing thread: TcpOutEloop
Registers: RIP: [0x00007F65EB39AACF] gsignal + 271 (libc.so.6 + 0x4EACF)

ERROR TcpOutputQ [1908232 TcpOutEloop] - Unexpected event id=30
ERROR TcpOutputQ [1908232 TcpOutEloop] - Unexpected event id=29

How do I fix this issue, and in which config file on the client machine running the UF do I need to add autoBatch=false?
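For the question above about where autoBatch=false goes: it is an outputs.conf setting on the client machine running the UF (for example $SPLUNK_HOME/etc/system/local/outputs.conf). A minimal sketch; the group name primary_indexers and the server address are placeholders, so keep whatever tcpout stanza the forwarder already uses and add only the one setting:

```ini
[tcpout]
defaultGroup = primary_indexers

# Keep your existing output group; only the autoBatch line is new
[tcpout:primary_indexers]
server = idx1.example.com:9997
autoBatch = false
```

Restart the forwarder for the change to take effect.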
Have you defined a source type (props.conf) as well?
Hi Community, I have to upgrade the SAP agents, and I would like to know whether the HTTP SDK instances run directly on the SAP application servers, or whether the HTTP SDK instances and the SDK manager run on separate Linux machines. How do I identify this? Thanks
OR run <splunkweb>/en-US/debug/refresh
Hi, I have configured the input below:

[monitor://C:\path\file.csv]
disabled = false
sourcetype = test
index = somename

The data is getting ingested, but it is not being extracted as fields. Can you please help?
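One common cause of a monitored CSV yielding no fields is that the custom sourcetype has no structured-extraction settings. A minimal props.conf sketch for the test sourcetype above, assuming the first line of the file is a header row (note that INDEXED_EXTRACTIONS must be deployed to the forwarder that reads the file, not only to the indexers):

```ini
[test]
# Parse the file as CSV and take field names from line 1
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
```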
How have you defined the ingestion? Please share the configuration files (anonymised of course)
Hi, I have a customer who has a 50GB Enterprise license on one network, and he wants to add SIEM, but only for a separate network which has a measly 5GB of daily volume. He understandably feels very strongly about being forced to purchase an equivalent 50GB SIEM license when all he needs is 5GB, and it's even on a completely separate network. Is it possible to have a separate Enterprise + SIEM license for a second network on the same site? I have heard claims that this is not permitted as far as Splunk is concerned; is there any basis to those claims? Thanks in advance for your responses.
A common search for both Linux and Windows:

index=_internal sourcetype=splunkd component=BucketMover "Will attempt to freeze"
| rex field=_raw "(/|\\\)splunk(/|\\\)(?P<index_name>[^\/]+)(/|\\\)(db|colddb)(/|\\\)db_(?P<latest_event>[\d]+)_(?P<earliest_event>[\d]+)_(?P<bucket_number>[^\']+)\' (?P<reason>.*)"
| convert ctime(earliest_event) as earliest_event
| convert ctime(latest_event) as latest_event
| convert ctime(_time) as Log_TimeStamp
| table Log_TimeStamp, index_name, bucket_number, earliest_event, latest_event, reason
| sort - Log_TimeStamp
Now it's 2024, did you find a solution? I'm facing the same problem right now.
The chart command will sort strings lexicographically, so change your rangemap to deliver numbers (3, 5, 15, 30, 100), then convert them to strings after the chart command:

|inputlookup acn_ticket_unresolved_dessertholdings_kv
| eval age=((now() - epoc_time_submitted)/86400), total_age=round(age,2)
| rangemap field=total_age "3"=0-3.00 "5"=3.01-15.00 "15"=15.01-30.00 "30"=30.01-100.00 "100"=100.01-1000.00
| chart count as count1 over range by priority
| eval range=">".range." days"
| rename priority as Priority
Updated Answer for this thread. @willtseng0217 Please try the full sample code below for your requirement.

collections.conf

[my_status_data]
enforceTypes = true
field.status = string
field.unique_id = string

transforms.conf

[my_status_data_lookup]
external_type = kvstore
collection = my_status_data
fields_list = _key, status, unique_id

XML

<dashboard version="1.1" theme="dark" script="test.js">
  <label>js for button on table cell</label>
  <row>
    <panel>
      <table id="table1">
        <search id="SearchA">
          <query>| makeresults count=10
| eval A=1
| accum A
| eval B=random()
| lookup my_status_data_lookup unique_id as A output status
| eval action = case(isnull(status),"Ack", status=="Ack","Unack", 1==1,"Ack")."|".A
| eval status = if(isnull(status),"",status)."|".A</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>

test.js

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc, TableView) {
    console.log("Hie 65100");
    var SearchA = mvc.Components.get("SearchA");

    let CustomCellRenderer = TableView.BaseCellRenderer.extend({
        canRender: function (cell) {
            // Enable this custom cell renderer for the action and status fields
            return _(["action", "status"]).contains(cell.field);
        },
        render: function ($td, cell) {
            if (cell.field == "action") {
                const cell_value = cell.value.split("|")[0];
                let unique_id = cell.value.split("|")[1];
                let button_id = "action_btn_" + unique_id;
                let div_id = "status_div_" + unique_id;
                console.log(cell_value);
                let button = $('<button />', {
                    value: cell_value,
                    id: button_id,
                    on: {
                        click: function () {
                            console.log(unique_id, button_id);
                            console.log(div_id);
                            let div_value = $('#' + div_id).html();
                            let new_status = "";
                            if (div_value == "Ack") {
                                $('#' + div_id).html("Unack");
                                $('#' + button_id).html("Ack");
                                new_status = "Unack";
                            } else {
                                $('#' + div_id).html("Ack");
                                $('#' + button_id).html("Unack");
                                new_status = "Ack";
                            }
                            update_collection_data(unique_id, new_status);
                        }
                    }
                }).addClass("extend_expiry btn-sm btn btn-primary").html(cell_value);
                $td.html(button);
            }
            if (cell.field == "status") {
                console.log(cell.value);
                const cell_value = cell.value.split("|")[0];
                const cell_id = cell.value.split("|")[1];
                console.log(cell_value);
                console.log(cell_id);
                let div_id = "status_div_" + cell_id;
                let html = '<div id="' + div_id + '">' + cell_value + '</div>';
                $td.html(html);
            }
        }
    });

    function update_collection_data(unique_id, status) {
        var record = {
            status: status,
            unique_id: unique_id,
            _key: unique_id
        };
        let collection = "my_status_data";
        $.ajax({
            url: '/en-US/splunkd/__raw/servicesNS/nobody/search/storage/collections/data/' + collection + '/' + unique_id,
            type: "POST",
            async: true,
            contentType: "application/json",
            data: JSON.stringify(record),
            success: function (returneddata) {
                console.log("Updated!", returneddata);
            },
            error: function (xhr, textStatus, error) {
                console.error("Error Updating!", xhr, textStatus, error);
                // Update failed (record not found), so create it instead
                $.ajax({
                    url: '/en-US/splunkd/__raw/servicesNS/nobody/search/storage/collections/data/' + collection,
                    type: "POST",
                    async: true,
                    contentType: "application/json",
                    data: JSON.stringify(record),
                    success: function (returneddata) {
                        console.log("Added!", returneddata);
                    },
                    error: function (xhr, textStatus, error) {
                        console.error("Error Adding!", xhr, textStatus, error);
                    }
                });
            }
        });
    }

    let sh = mvc.Components.get("table1");
    if (typeof (sh) != "undefined") {
        sh.getVisualization(function (tableView) {
            // Add custom cell renderer and force re-render
            tableView.table.addCellRenderer(new CustomCellRenderer());
            tableView.table.render();
        });
    }
});

I hope this will help you. Thanks, KV. If any of my replies helps you solve a problem or gain knowledge, an upvote would be appreciated.
Hi, I have ingested a CSV file by creating an input on a Windows server. The challenge is that the logs are not being extracted as fields, and I want the data extracted into fields. Can someone please help me extract all the fields from the log? Thank you
There is no simple answer to this. You can use the REST interface to find all the views (dashboards) and look through the code to find the searches, but even then, indexes may be obfuscated through the use of macros, etc. Having found dashboards with definitions that reference indexes, you might want to check whether anyone actually uses those dashboards. The same goes for reports, alerts, etc. Perhaps you need to narrow down your question. Are you interested in whether a particular index is used? What is your ultimate aim?
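The REST approach described above can be sketched like this; my_index is a placeholder for the index you are checking:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*my_index*"
| table eai:acl.app, title
```

A similar search against /servicesNS/-/-/saved/searches (matching on the search field) covers reports and alerts, though neither search will catch index references hidden inside macros.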
Hi Ryan, I have followed the documentation, but none of the predefined templates could pull the required data from synthetic events. These are the properties present in the Event Details tab:

event.correlationId
event.errorCode
event.errorDesc
event.id
event.measurementId
event.name
event.scheduleId
event.scriptExitCode
event.sessionStatus

Can you help me find the right predefined variable that can fetch these details? Thank you, Jahnavi
@yuanliu , My goal is to identify correlation searches whose status is enabled and that have triggered notables within the past 30 days.
Hi @Polarbear, Please see here on how to troubleshoot further: Re: Install issue on Server 2016 - Splunk Community. Cheers,    - Jo.  
Is there a good way to find whether an index is used in any of the saved searches, alerts, reports, or dashboards?
Hi @leobsksd , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @manas, good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated