All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Considerations for upgrading from Enterprise 9.1.1 to 9.4.2 while it is also a deployment server.
index=*sap sourcetype=FSC*
| fields _time index Eventts ID FIELD_02 FIELD_01 CODE ID FIELD* source
| rex field=index "^(?<prefix>\d+_\d+)"
| lookup lookup_site_ids.csv prefix as prefix output name as Site
| eval name2=substr(Site,8,4)
| rex field=Eventts "(?<Date>\d{4}-\d{2}-\d{2})T(?<Time>\d{2}:\d{2}:\d{2}\.\d{3})"
| fields - Eventts
| eval timestamp = Date . " " . Time
| eval _time = strptime(timestamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N"), Condition="test"
| eval Stamp = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| lookup Stoppage.csv name as Site OUTPUT Condition Time as Stamp
| search Condition="Stoppage"
| where Stamp = Time
| eval index_time = strptime(Time, "%Y-%m-%d %H:%M:%S.%3N")
| eval lookup_time = strftime(Stamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval CODE=if(isnull(CODE),"N/A",CODE), FIELD_01=if(isnull(FIELD_01),"N/A",FIELD_01), FIELD_02=if(isnull(FIELD_02),"N/A",FIELD_02)
| lookup code_translator.csv FIELD_01 as FIELD_01 output nonzero_bits as nonzero_bits
| eval nonzero_bits=if(FIELD_02="ST" AND FIELD_01="DA",nonzero_bits,"N/A")
| mvexpand nonzero_bits
| lookup Decomposition_File.csv Site as name2 Alarm_bit_index as nonzero_bits "Componenty_type_and_CODE" as CODE "Component_number" as ID output "Symbolic_name" as Symbolic_name Alarm_type as Alarm_type Brief_alarm_description as Brief_alarm_description Alarm_solution
| eval Symbolic_name=if(FIELD_01="DA",Symbolic_name,"N/A"), Brief_alarm_description=if(FIELD_01="DA",Brief_alarm_description,"N/A"), Alarm_type=if(FIELD_01="DA",Alarm_type,"N/A"), Alarm_solution=if(FIELD_01="DA",Alarm_solution,"N/A")
| fillnull value="N/A" Symbolic_name Brief_alarm_description Alarm_type
| table Site Symbolic_name Brief_alarm_description Alarm_type Alarm_solution Condition Value index_time Time _time Stamp lookup_time
I have upgraded Splunk Enterprise from 9.3.1 to 9.4.2 and already restored /etc, but now Forwarder Management does not show any universal forwarders phoning home.
Hello, I am trying to change the timechart span within the search itself, so that if the hour is, say, greater than 7 and less than 19, the span is 10m, otherwise 1h. Example:
| eval hour=strftime(_time,"%H")
| eval span=if(hour>=7 AND hour<19,"10m","1h")
| timechart span=span count(field1), count(field2) by field3
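timechart does not accept a field value for span, so a sketch of one common workaround (assuming the 7-19 business-hours rule above): bin everything at the finer 10-minute granularity, then snap the off-hours buckets back to the start of their hour before aggregating.
... base search ...
| eval hour=tonumber(strftime(_time,"%H"))
| bin _time span=10m
| eval _time=if(hour>=7 AND hour<19, _time, relative_time(_time,"@h"))
| stats count(field1) AS field1_count, count(field2) AS field2_count by _time, field3
The result is stats output rather than a timechart, so pipe it through xyseries (or chart) afterwards if you need one series column per field3 value; the off-hours rows simply collapse onto hourly buckets.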
Hi Everyone! I wrote a search query to get the blocked count of emails for the last 6 months. Below is my query:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time
| eval Source="Email"
| eval Month=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source Month
| eventstats sum(Blocked) as Total by Source
| appendpipe [ stats values(Total) as Blocked by Source | eval Month="Total" ]
| xyseries Source Month Blocked
| fillnull value=0

The only issue is that in the output the Month field is not sorted chronologically; it is alphabetical. I intend to sort it chronologically. I tried the below query as well to achieve the desired output, but no luck:

| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| eventstats sum(Blocked) as Total by Source
| appendpipe [ stats values(Total) as Blocked by Source | eval MonthNum="9999-99", MonthName="Total" ]
| sort MonthNum
| eval Month=MonthName
| table Source Month Blocked

Could someone please help here! Thanks in advance.
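A sketch of one way to get chronological columns, assuming the tstats portion stays as-is: xyseries emits its columns in the order the rows arrive, so keep a sortable key (%Y-%m), sort on it, and only then run xyseries on the display name.
| eval Source="Email", MonthKey=strftime(_time, "%Y-%m"), Month=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthKey Month
| eventstats sum(Blocked) as Total by Source
| appendpipe [ stats values(Total) as Blocked by Source | eval MonthKey="9999-99", Month="Total" ]
| sort 0 MonthKey
| xyseries Source Month Blocked
| fillnull value=0
With a 6-month window the %b names are unique, so no two months collide, and the "Total" column lands last because of the 9999-99 key.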
Is there a special login for the Veterans Workforce Program? Am I currently signed in as a regular user? I signed up for the Veterans Workforce Program a while back and thought I got a confirmation, but now I can't find it. Under that program, is there a free course for Splunk Enterprise Security? When I find it under this login, there is a price for that course. That course is pre-approved by CompTIA for PDUs to renew my SecurityX, so that's why I want to take it. Any help would be appreciated. Ralph P Steen Jr
When importing playbooks from the Splunk Research repository https://research.splunk.com/playbooks/, the imported playbooks appear with "Input" status and cannot be activated through the standard interface. Additionally, attempts to delete these inactive playbooks result in errors or incomplete deletion processes. My questions are:
1. Is there a recommended way to import and activate them? (They still need configuration, such as an API.)
2. Why can't I delete them from the playbook list even though I am logged in with an admin-privileged account?
I need to know why I am getting the below error while adding data to Splunk. The sample data looks like this:

{
  "version": "200",
  "predictions": [
    {
      "ds": "2023-01-01T01:00:00",
      "y": 25727,
      "yhat_lower": 23595.643771045987,
      "yhat_upper": 26531.786203915904,
      "marginal_upper": 26838.980030149163,
      "marginal_lower": 23183.715141246714,
      "anomaly": false
    },
    {
      "ds": "2023-01-01T02:00:00",
      "y": 24710,
      "yhat_lower": 21984.478022195697,
      "yhat_upper": 24966.416390280523,
      "marginal_upper": 25457.020250925423,
      "marginal_lower": 21744.743048120385,
      "anomaly": false
    },
    {
      "ds": "2023-01-01T03:00:00",
      "y": 23908,
      "yhat_lower": 21181.498740796877,
      "yhat_upper": 24172.09825724038,
      "marginal_upper": 24449.705257711226,
      "marginal_lower": 20726.645610860345,
      "anomaly": false
    },
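The actual error text is cut off above, so this is only a guess: with a single large JSON object like this, the usual suspects are event truncation and timestamp recognition. A minimal props.conf sketch along those lines (the sourcetype name is hypothetical, and the ds-based timestamp rule is an assumption):
[json_predictions]
INDEXED_EXTRACTIONS = json
KV_MODE = none
TRUNCATE = 0
TIME_PREFIX = "ds"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
Note the sample above is also cut off mid-array; if the file itself ends like that, the parse failure is in the data rather than the config.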
Hi Team, I need help integrating the ManageEngine ticketing tool with Splunk. I have researched on Google but did not find any exact documentation; please share your inputs if anyone has integrated these.

Goals:
1) CMDB integration
2) Automatically create a ticket for each Splunk Enterprise Security alert
Hi, I've had problems with my searches for a few days. I really don't know what happened, and no one changed the configuration.

In searches or dashboards for Cisco Network, I get the error "Eventtype 'wineventlog_security' does not exist or is disabled" for every search.

Search example: index=firewall

The question is: why, when I search an index completely unrelated to Windows, does it show the error for an eventtype from the Windows TA? It also doesn't show any results.
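That message usually means some knowledge object visible to the search (a tag, an eventtype, or something else shipped by the Windows TA or an app that references it) still points at an eventtype whose definition is missing or disabled, and the error then shows up no matter which index you search. A minimal sketch of re-defining and re-enabling it under the stanza name from the error (the search string below is an assumption; the Splunk Add-on for Microsoft Windows normally ships its own definition):
# eventtypes.conf in a local/ override of the app that needs it
[wineventlog_security]
search = source="*WinEventLog:Security"
disabled = 0
It is also worth checking where the current definition lives and which app disabled it, for example with: splunk btool eventtypes list wineventlog_security --debug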
Hello, I would like to create a timechart that counts the number of tests with different statuses (e.g. 'OK', 'ERROR', 'WARN', etc.) for the last 30 days, per day. The problem is that it should take only the latest log with a status per test (e.g. I have a Login test (id 151) with a couple of events/logs with different statuses, and I would like to take only the last event with the latest status for that test).

I have a problem combining 'latest' and 'distinct_count' with timechart. When I do the following search, I get duplicate logs per test (I should have a count of 62 tests across all statuses every day):

basesearch | timechart span=1d distinct_count(test) as tests by status

e.g. on 2025-05-26 the test 'Login test (id 151)' has one event with status 'OK' and another with status 'Blad', and the duplicate shows up here. When I try to combine 'latest' with timechart, I get distinct_count results only for the last day:

basesearch | stats latest(status) as statuses latest(test) as tests latest(_time) as myTime by test | eval _time=myTime | timechart span=1d distinct_count(tests) by statuses

I would appreciate help on how to combine timechart, distinct_count and latest together.
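A sketch of one way to do this (field names taken from the searches above): create a per-day bucket without overwriting _time, take the latest status per test per day, and only then count distinct tests per status.
basesearch
| bin _time span=1d as day
| stats latest(status) as status by test day
| eval _time=day
| timechart span=1d distinct_count(test) by status
Days where a test logs nothing will simply have no row for it; carrying the last known status forward across empty days would need an extra step (e.g. filldown over a generated day series), which isn't shown here.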
Hi Team, I have added red and green colors to the Status column, and I want to add the same to the Severity column as well. Can someone suggest the options to use?

I have used the below options to add color to the Status field:

<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
<format type="color" field="Status">
  <colorPalette type="map">{"failed":#D93F3C,"finished":#31A35F,"Critical":#D93F3C,"Informational":#31A35F}</colorPalette>
</format>
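A sketch, assuming the table column is literally named "Severity" and uses the Critical/Informational values that already appear in the Status palette above (adjust the map keys to the actual severity values):
<format type="color" field="Severity">
  <colorPalette type="map">{"Critical":#D93F3C,"Informational":#31A35F}</colorPalette>
</format>
Each format block applies to one field, so it sits alongside the existing Status block inside the same table element.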
Hello, I'm on Splunk Enterprise. I'm facing this error on my dashboard: "Failed to load source for JointJS Diagram visualization". Do you have any idea about this? Regards.
I have a serious problem, please help me. We have an HAProxy server that receives logs from various clients and forwards them to a Splunk heavy forwarder. The problem is that HAProxy replaces the client's IP address with its own (this is plain TCP). The question is: how can we preserve each log's original client IP address on the Splunk heavy forwarder?
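Since this is raw TCP (no HTTP header to carry the client address), one option is to run the HAProxy backend in transparent mode so the connections to the heavy forwarder keep the client's source IP. A rough sketch only, assuming HAProxy built with TPROXY support on Linux plus the required iptables/ip rule routing (backend name and address are placeholders):
backend splunk_hf
    mode tcp
    # bind outgoing connections with the original client IP (needs TPROXY)
    source 0.0.0.0 usesrc clientip
    server hf1 10.0.0.10:9997 check
The alternative is to avoid the rewrite altogether and have clients send to the heavy forwarder (or an intermediate forwarder) directly, since Splunk's TCP input only ever sees the peer address of the incoming connection.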
require([
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!',
    'jquery'
], function(mvc, SearchManager, TableView, ignored, $) {

    // Define a simple cell renderer with a button
    var ActionButtonRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'rowKey';
        },
        render: function($td, cell) {
            $td.addClass('button-cell');

            var rowKey = cell.value;
            var $btn = $('<button class="btn btn-success">Mark Solved</button>');

            $btn.on('click', function(e) {
                e.preventDefault();
                e.stopPropagation();

                var searchQuery = `| inputlookup sbc_warning.csv
                    | eval rowKey=tostring(rowKey)
                    | eval solved=if(rowKey="${rowKey}", "1", solved)
                    | outputlookup sbc_warning.csv`;

                var writeSearch = new SearchManager({
                    id: "writeSearch_" + Math.floor(Math.random() * 100000),
                    search: searchQuery,
                    autostart: true
                });

                writeSearch.on('search:done', function() {
                    console.log("Search completed and lookup updated");
                    var panelSearch = mvc.Components.get('panel_search_id');
                    if (panelSearch) {
                        panelSearch.startSearch();
                        console.log("Panel search restarted");
                    }
                });
            });

            $td.append($btn);
        }
    });

    // Apply the renderer to the specified table
    var tableComponent = mvc.Components.get('sbc_warning_table');
    if (tableComponent) {
        tableComponent.getVisualization(function(tableView) {
            tableView.table.addCellRenderer(new ActionButtonRenderer());
            tableView.table.render();
        });
    }
});

In this, I want the name of the button to be "Unsolved" initially, and when somebody clicks it, the name should change to "Solved".
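A sketch of the minimal changes inside the render and click handlers above (the btn-danger/btn-success classes are just a suggestion): create the button as "Unsolved" and flip its label when the outputlookup search completes.
// start as "Unsolved"
var $btn = $('<button class="btn btn-danger">Unsolved</button>');

// inside the existing click handler, after writeSearch is created:
writeSearch.on('search:done', function() {
    $btn.text('Solved')
        .removeClass('btn-danger')
        .addClass('btn-success')
        .prop('disabled', true);   // optional: prevent repeat clicks
    var panelSearch = mvc.Components.get('panel_search_id');
    if (panelSearch) {
        panelSearch.startSearch();
    }
});
Note that if the panel search re-renders the table, the cell renderer runs again and rebuilds the button, so keeping the label after a refresh means rendering "Solved" whenever the row's solved flag is already 1 (that flag would have to be part of the table's search results).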
I'm doing a performance/stress test using the Enterprise Trial license. I wonder if there is a way to get rid of the 500 MB/day limit. If not, what would be a good practice for running the test with a larger limit? Does a new installation of Splunk reset the limit?
Hello AppDynamics Support,

We are experiencing a persistent issue integrating the PHP Agent on a Red Hat 9.5 server running PHP 8.3. Below are the technical details and the steps we've already taken.

Technical Information
PHP Agent version: 24.11.0.1340
PHP version: 8.3
OS: Red Hat Enterprise Linux 9.5
Apache MPM: event
Controller: xxx:443
HTTP Proxy: xxx:3128

Problem Description
The agent initializes properly and correctly detects all necessary settings (controller host, account name, node, etc.). However, the following error is always present in the logs:
[config.ConfigChannel] could not send config request
This prevents the agent from registering or communicating with the controller.

Troubleshooting Steps Already Taken
- controller-info.xml file is valid and in place
- DNS resolution of the controller is working
- HTTPS connection via proxy tested and successful
- SSL certificate is valid
- Ports (443) are open and reachable via proxy
- Apache/PHP-FPM restarted cleanly
Hi, I'm trying to rewrite a given query and then execute it. I need to do some complex lookups which can't be done with a regular macro, so I thought about having a Python command that fetches the query and reconstructs it. The issue I'm having is how to execute the new query. I've tried the SDK, but the run time is much higher and the results come back on the statistics page. I've tried injecting the query into a field and then using map, but that wasn't successful either. Any ideas that would work? Maybe something I didn't try, or do you know whether one of those methods should work? Thanks.
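One approach worth sketching is a generating custom search command: the rewriting and the execution both happen inside the command, and the rows it yields land in the invoking search rather than on a separate job's statistics page. This is only a sketch under assumptions; it needs the usual custom-command packaging (an app with commands.conf and splunklib bundled), and rewrite_query below is a hypothetical placeholder for your lookup-driven reconstruction logic.
# rewritequery.py -- sketch of a generating custom search command
import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option
import splunklib.results as results


def rewrite_query(original):
    # hypothetical placeholder: apply your complex lookups / reconstruction here
    return original


@Configuration()
class RunRewrittenCommand(GeneratingCommand):
    query = Option(require=True)

    def generate(self):
        # the rewritten string must be a complete search, e.g. "search index=..." or "| tstats ..."
        spl = rewrite_query(self.query)
        # self.service carries the caller's session key, so the rewritten query
        # runs with the user's permissions; oneshot blocks until it finishes.
        # (older SDKs: use results.ResultsReader instead of JSONResultsReader)
        reader = results.JSONResultsReader(
            self.service.jobs.oneshot(spl, count=0, output_mode="json"))
        for item in reader:
            if isinstance(item, dict):      # skip informational Message objects
                yield item


dispatch(RunRewrittenCommand, sys.argv, sys.stdin, sys.stdout, __name__)
It would then be invoked as | runrewritten query="index=..." at the start of a search. Note this still runs the rewritten SPL as a blocking one-shot job server-side, so it isn't faster than the query itself; it just returns the results in-line.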
Hi, we are trying to get some of the pretrained models from the Splunk ESCU app running, but without success so far. When I run any of their searches, it fails due to a missing response, and the search job indicates that the connection eventually times out. In our firewall logs I can clearly see that the connections are being dropped with a (seemingly odd) message saying "Invalid TCP packet - source / destination port 0". I verified that message by running a tcpdump on the corresponding search head and re-initiated the apply command; indeed, it attempts to connect on port 0. I also verified the YAML of the service in the Network section of OpenShift, and it correctly points to api / tcp 5000. I can also connect to the exposed API of the pod via curl. At this point I'm not sure where and what exactly is going wrong. Any hints would be greatly appreciated. KR
Hi Everyone,  I encountered an error in UBA, specifically related to the 'caspida-outputconnector'. While the issue can be resolved by restarting UBA, I would like to understand the root cause. I have already reviewed the configuration file at '/etc/caspida/local/conf/uba-site.properties' and confirmed that everything appears to be correct. I have also tested the HEC token, and it is functioning properly. Does anyone have experience or guidance on how to troubleshoot and identify the root cause of this issue?