All Topics

Hi all, we have 3 panels, each containing a table (ids table1, table2, table3) with the same header. While implementing the row-expansion JavaScript in the dashboard, we only get results in the first panel. Below is the JavaScript we are using:

require([
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/chartview',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc',
    'underscore',
    'splunkjs/mvc/simplexml/ready!'
], function(TableView, ChartView, SearchManager, mvc, _) {
    var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({
        initialize: function(args) {
            // initialize will run once, so we set up a search and a table to be reused.
            this._searchManager = new SearchManager({
                id: 'details-search-manager',
                preview: false
            });
            this._chartView = new TableView({
                managerid: 'details-search-manager',
                pageSize: "50",
                'charting.legend.placement': 'none'
            });
        },
        canRender: function(rowData) {
            // Since more than one row-expansion renderer can be registered, each decides
            // whether it can handle the data. Here we always handle it.
            return true;
        },
        render: function($container, rowData) {
            // rowData contains information about the expanded row: its cells, fields, and values.
            // Find the Site cell so we can use its value.
            var sourcetypeCell = _(rowData.cells).find(function(cell) {
                return cell.field === 'Site';
            });
            // Update the search with the Site that we are interested in.
            this._searchManager.set({
                search: 'index=centralstauts Site=' + sourcetypeCell.value + ' Parameter=Estado Quality=192 | eventstats latest(DateTime) as LastUpdated by Turbine, Quality | stats latest(VALUE) as Status by Turbine, LastUpdated | eval Status=case(Status="100.000", "Running", Status="75.000", "Ready", Status="50.000", "Pause", Status="25.000", "Stop", Status="0.000", "Emergency", Status="125.000", "ManualStop", (Status="150.000") OR (Status="-150.000"), "CommunicationFailure") | table LastUpdated, Turbine, Status',
                earliest_time: "-3d@d",
                latest_time: "now",
                preview: true,
                cache: true
            });
            // $container is the jQuery object where we can put our content.
            // Render the detail table and add it to the $container.
            $container.append(this._chartView.render().el);
        }
    });
    var tableElement = mvc.Components.getInstance("table1");
    tableElement.getVisualization(function(tableView) {
        // Add the row-expansion renderer; the table will re-render automatically.
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
    });
    tableElement = mvc.Components.getInstance("table2");
    tableElement.getVisualization(function(tableView) {
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
    });
    tableElement = mvc.Components.getInstance("table3");
    tableElement.getVisualization(function(tableView) {
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
    });
});
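The SPL case() inside the search above is a plain value-to-label mapping. As an illustrative sketch only (values copied from the search above; Python used just to show the mapping logic, not anything Splunk runs):

```python
# Status-code-to-label mapping, mirroring the SPL case() in the search above.
STATUS_LABELS = {
    "100.000": "Running",
    "75.000": "Ready",
    "50.000": "Pause",
    "25.000": "Stop",
    "0.000": "Emergency",
    "125.000": "ManualStop",
    "150.000": "CommunicationFailure",
    "-150.000": "CommunicationFailure",
}

def status_label(value):
    # An unmatched value yields None, as an unmatched case() yields null in SPL.
    return STATUS_LABELS.get(value)
```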
Hello, we have a problem with the mrollup procedure for metrics indexes. We have set up a daily rollup for a metrics index, with raw data at 5 minutes. We see some metrics not rolled up, as if some kind of "limit" had been exceeded and Splunk was not able to complete its work. Any suggestion about the parameter to be set, or the reason for this behaviour? Thank you.
Hello there. While troubleshooting a completely different issue, I noticed that if I try to send data to a HEC input, every connection ends with RST. The connection looks like this:
1) Normal TCP handshake
2) TLS negotiation
3) Huge chunk of encrypted data from the client to the HEC input (surely the HTTP request)
4) Small chunk of encrypted data from Splunk to the client (apparently the HTTP response)
5) And here is where it gets weird: RST from the client to the Splunk server.
At first I suspected the client, but tests showed it's not specific to any particular client: I used rsyslog with omhttp, I used curl; same result. It seems to be connected to using TLS over the connection, because curl-ing over non-TLS HTTP to HEC inputs ends properly with FIN/ACK. On the other hand, if I open the connection with openssl s_client and perform the request manually, the connection stays open until I close it manually (then s_client sends a proper FIN). So I'm a bit lost. The RSTs are most peculiar, because you'd expect either a keep-alive with more requests over the same connection (OK, with a single request from curl you don't need keep-alive) or a proper FIN/ACK. It happens regardless of whether it's just a server-supplied certificate or a mutual-auth setup. I checked on 8.1.2 at home as well as 7.3.8 at a customer's site; same result. So I'm a bit puzzled: is it some bug in the HTTPS implementation, such that the session somehow gets invalidated and the client resets it, or does it have something to do with the configuration? Has anyone encountered a similar problem?
Hello, I am currently experimenting with AppDynamics. I'm trying to install the .NET agent for Linux in a container. I followed the steps from the 'Use a Dockerfile' section in this tutorial (https://docs.appdynamics.com/display/PRO21/Install+the+.NET+Agent+for+Linux+in+Containers). I built the image, and I got HTTP error 401 when running the container. I did some googling and think that I am missing the cert, since I omitted that part of the Dockerfile. How do I get the 'on-premises cert'?
My task:
1. Prepare a new dashboard that shows InstanceID, InstanceType, and CPUCreditBalance.
2. Write a Splunk alert on a condition: if CPU credit balance < 30, send an alert email.
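The alert condition in the task is a simple threshold test per instance. A minimal sketch of that logic, with made-up instance data (field names taken from the task; the IDs and values are hypothetical):

```python
# Hypothetical rows, shaped like the fields the dashboard should show
instances = [
    {"InstanceID": "i-0abc", "InstanceType": "t3.micro", "CPUCreditBalance": 12.5},
    {"InstanceID": "i-0def", "InstanceType": "t3.small", "CPUCreditBalance": 140.0},
]

# The alert would fire for any instance whose credit balance is below the threshold
THRESHOLD = 30
breached = [i for i in instances if i["CPUCreditBalance"] < THRESHOLD]
```

In Splunk itself this would typically be a scheduled search with a `where CPUCreditBalance < 30` clause and an email alert action.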
Hi, I have a Linux machine running CentOS 6.10 with a quad-core processor (16 threads). On Splunk, is there a way to monitor each of its threads? Sometimes the threads reach 100% and my system stops working until the threads are back down to their usual percentage values, as in the screenshot above. Any degree of help will be appreciated. Regards.
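Per-thread (per-logical-CPU) utilization is generally derived from two samples of idle/total jiffy counters, on Linux read from /proc/stat. A sketch of just that calculation, independent of whichever collection agent feeds Splunk:

```python
def cpu_util_percent(prev, cur):
    """prev/cur are (idle, total) jiffy counters for one logical CPU,
    e.g. parsed from two consecutive reads of /proc/stat."""
    idle = cur[0] - prev[0]
    total = cur[1] - prev[1]
    # Busy share over the sampling interval, as a percentage
    return 100.0 * (1 - idle / total) if total else 0.0
```

The Splunk Add-on for Unix and Linux collects per-CPU data along these lines; the function above only illustrates the arithmetic behind the percentage.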
Hi AppDynamics team, greetings for the day! I'm trying to configure a Windows service application for monitoring using the .NET agent, following the link below: https://docs.appdynamics.com/display/PRO20X/Configure+the+.NET+Agent+for+Windows+Services+and+Standa...

Here is the part of config.xml which I have added:

<standalone-applications>
  <standalone-application executable="FULL PATH OF EXECUTABLE" command-line="-x">
    <tier name="Windows Service"/>
    <profiler>
      <instrument-defaultdomain enabled="true"/>
    </profiler>
  </standalone-application>
</standalone-applications>

but I am unable to get the transactions. Please let me know if I missed any configuration steps. I have seen the following messages in the agent log:

Info SystemInformation Environment variable 'APP_POOL_ID' is not set. Trying to extract App Pool ID from managed command line
2021-03-31 15:31:10.2771 15488 Service 1 3 Info SystemInformation Managed command line did not have 'APP_POOL_ID'. Trying to extract App Pool ID from native command line
2021-03-31 15:31:10.2771 15488 Service 1 3 Info SystemInformation The app pool ID is
2021-03-31 15:31:10.2771 15488 Service 1 3 Info Configuration appDomainName="EXE NAME" appDomainId=1 iis-app=null site=null port=null appPoolId=null commandLine=""
2021-03-31 15:31:10.2911 15488 Service 1 3 Info AgentCommunicator named pipe = \\.\pipe\AppDynamicsAgentIPC
2021-03-31 15:31:10.4096 15488 Service 1 3 Warn Configuration Problem loading the configuration: System.NullReferenceException: Object reference not set to an instance of an object. at AppDynamics.Agent.Configuration.ValidateAndFixConfig(AppAgentConfiguration appAgentConfiguration) at AppDynamics.Agent.Configuration..ctor(StandaloneAgentConfiguration standaloneAgentConfig, SensitiveDataScrubber dataScrubber)

I am able to get the transactions for IIS applications. Please help me resolve the issue. Thanks in advance.
Hi, how does a summary index actually work in relation to time-based searches? Could the summary index have no time value on each record? We are replacing a lookup with a summary index.

We have 2000 entries in the lookup --> those entries will be pushed to the summary index via a scheduled search. The lookup will be updated daily --> the updated data will go to the summary index. What will happen to the old data that is already in the summary index?
What measures, steps, or SPL searches do you use on a daily basis to defend against ransomware?
Hello, we have implemented multiple NEAPs which have to be applied depending on various conditions. How can I write an SPL query to find which episode/incident used which NEAP? Regards, Sanghamitra Mitra
Hi All,   I have a panel displaying tabular data. I want to display the top 3 rows from that table in my template block. How do we achieve this?
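Taking the top 3 rows of an already ordered table is just a head operation; a trivial sketch with hypothetical row contents, only to pin down the intended behaviour:

```python
# Rows assumed to be in the order the panel displays them
rows = [{"rank": i, "value": 100 - i} for i in range(1, 6)]

top3 = rows[:3]   # equivalent of `head 3` on the panel's ordered results
```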
Hello, I need to find the duration between two events. I went over the solutions on Splunk Answers, but still can't get the calculation. Both sentToSave and saveDoc have the timestamp already formatted, which is why I used the case function. I am able to see the fields populate with their timestamps, but I am not able to get the Duration field to populate; it simply does not populate at all. I need some help on how to get the Duration. Any advice? Below is my query:

(index=souce1 dept=qvc event="sentToSave") OR (index=source dept=save area=saveDoc)
| eval saveDocTime=case(area="saveDoc", TimeStamp), sentToSaveTime=case(event="sentToSave", TimeStamp)
| eval Duration=saveDocTime-sentToSaveTime
| stats values(Duration) as Duration earliest(sentToSaveTime) as sentToSaveTime latest(saveDocTime) as saveDocTime by emailRequest
| where isNotNull(sentToSaveTime) AND isNotNull(saveDocTime)
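A possible cause, stated as an assumption rather than a confirmed diagnosis: eval subtraction needs numeric (epoch) values, so if TimeStamp is a formatted string, saveDocTime-sentToSaveTime yields null. The intended calculation would first convert both strings to epoch (in SPL, via strptime) and then subtract. The same logic sketched in Python, with a hypothetical timestamp format since the post does not show the actual one:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"   # hypothetical format; substitute the real TimeStamp layout

def duration_seconds(sent_to_save, save_doc):
    # Parse both formatted strings to datetimes, then subtract to get seconds
    return (datetime.strptime(save_doc, FMT)
            - datetime.strptime(sent_to_save, FMT)).total_seconds()
```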
Does anybody know what my inputs.conf setting for filter_data must look like for a key-value like the one below? I cannot find any detailed documentation on filter_data, and I see the same error when I try to use URL encoding.

[snow://pa_job_logs4_javascript]
table = pa_job_logs
filter_data = run_time>javascript&colon;gs.getDurationDate('0 3:0:0')&sys_created_onONLast 3 months@javascript&colon;gs.beginningOfLast3Months()@javascript&colon;gs.endOfLast3Months()

2021-03-31 18:04:32,776 ERROR pid=22626 tid=MainThread file=snow.py:_check_filter_data:74 | Got unexpected value run_time>javascript&colon;gs.getDurationDate('0 3:0:0')&sys_created_onONLast 3 months@javascript&colon;gs.beginningOfLast3Months()@javascript&colon;gs.endOfLast3Months() of 'filter_data' field for input 'pa_job_logs4_javascript'. Enter filter parameters in key1=value1&key2=value2 format. You can either change it in inputs.conf file or edit 'Filter parameters' on Inputs page.

The table filter above seems to work with the REST API Explorer in ServiceNow, i.e. sysparm_query=run_time.... Is there some trick to simplify it, or a workaround? Splunk Add-on for ServiceNow version 4.0.0.
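The add-on's error message demands key1=value1&key2=value2, which is plain query-string syntax; a filter value can be checked against that shape with a standard parser. This is only a format check, an illustration of what the add-on appears to require; it does not validate ServiceNow operators like > or ON:

```python
from urllib.parse import parse_qsl

def parse_filter(filter_data):
    # strict_parsing raises ValueError on fragments that are not key=value pairs,
    # which is roughly the check the add-on's error message describes
    return dict(parse_qsl(filter_data, strict_parsing=True))
```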
The goal is to parse events from this source value into multiline events, split each time a new date is encountered. Currently the events are not splitting correctly on the date, and the AM/PM marker is not being recognized, causing all of the morning and afternoon data to overlap in the first 12 hours of the _time field. Timestamp from the log: <31-Mar-2021 3:18:04 o'clock AM MDT> <Error> <Default>

[source::/fmw/product/.../servers/logs/domain-1.out]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <\d+-\w+
TIME_PREFIX = <
TIME_FORMAT = %d-%b-%Y %I:%M:%S o'clock %p %Z
MAX_TIMESTAMP_LOOKAHEAD = 40

[source::/fmw/product/.../servers/logs/domain-2.out]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <\d+-\w+
TIME_PREFIX = <
TIME_FORMAT = %d-%b-%Y %I:%M:%S o'clock %p %Z
MAX_TIMESTAMP_LOOKAHEAD = 40
| dbxquery connection=Realtime shortnames=t query="select * from table_a a where a.id = ? and a.create_dt_tm <= trunc(sysdate, 'MI') - interval '60' minute" params="1234567"

For the above query, how can the create_dt_tm qualification be made a bind variable?
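dbxquery's params appear to be bound positionally to the ? placeholders, so the timestamp qualification would become a second placeholder with a second param. The general positional-bind pattern, sketched against SQLite rather than Oracle (so trunc/sysdate are replaced by a precomputed cutoff; the table contents are hypothetical, and this is not dbxquery syntax itself):

```python
import sqlite3

# In-memory stand-in for table_a with one matching row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (id INTEGER, create_dt_tm REAL)")
conn.execute("INSERT INTO table_a VALUES (1234567, 100.0)")

cutoff = 200.0  # stands in for trunc(sysdate, 'MI') - interval '60' minute
rows = conn.execute(
    "SELECT * FROM table_a a WHERE a.id = ? AND a.create_dt_tm <= ?",
    (1234567, cutoff),
).fetchall()
```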
Hi, I have a heavy forwarder configured this way:

inputs.conf
[udp://514]
sourcetype = syslog
index = abc
connection_host = ip
disabled = 0
_TCP_ROUTING = MyRoute

So any server sending syslog on UDP 514 hits my heavy forwarder and gets sent to my indexers. I want to block events from a set of servers, as I don't want them to be indexed; they are consuming my license. I am trying to do this in props.conf and transforms.conf:

props.conf
[host::XX.XX.XXX.XXX]
TRANSFORMS-abc = drop_events

transforms.conf
[drop_events]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But this is not dropping the events from this server; I can still see events being indexed in real time when I search. What am I doing wrong? I have a few more servers that I need to block this way. Any help is greatly appreciated.
How can I change the default color of a bar chart, and how can I use a different color for each value?
When I upload the file access_30DAY.log, I always get "Unspecified upload error. Refresh and try again." I've tried everything suggested in the blogs, to no avail. I refreshed. I tried Safari. I tried incognito mode. I chmod'd the file to 777. I tried uploading the original Splunk_f1_Data.zip file. (Zipping gets past the "Unspecified upload" error, but when I get to the Uploading File modal, it stays at "Processing" forever. Similarly, I zipped access_30DAY.log into access_30DAY.log.zip, but still couldn't proceed.) What does "Unspecified upload error. Refresh and try again." actually mean? Could the code be updated to more accurately reflect the problem and how to resolve it?
Hello all, I'm attempting to convert a Splunk instance from Windows to Linux but am having a hard time understanding the process fully. These are the steps: https://docs.splunk.com/Documentation/Splunk/latest/Installation/MigrateaSplunkinstance

Windows has Splunk 7.2.5 with ITSI installed. I've used 7-Zip to tar the Splunk home dir into a file called splunk.tar.gz. Am I supposed to have Splunk 7.2.5 installed on Linux the same way you would for any installation, then copy splunk.tar.gz over from the Windows server and extract its contents over the out-of-the-box 7.2.5 Linux instance? Or am I supposed to extract the Windows splunk.tar.gz to /opt/splunk and then install Splunk 7.2.5? Extracting splunk.tar.gz on Linux does put the bits in /opt/splunk, so at first glance everything looks fine, but running "splunk start" from the bin directory says "splunk start" isn't a known command. So somewhere along the line this fairly easy process got confused. Can someone please point me in the right direction: what am I doing wrong? What's out of sequence?

Edit: I see I missed step 3, which details installing Splunk on the Linux node. What's really confusing is that it doesn't tell you that you have to grab pieces of the Windows install to copy over top of the Linux dirs. My assumption was that you copy the entire Windows home directory over top to /opt/splunk on Linux, and that doesn't appear to be the case. The instructions are not clear here, and that's where the confusion is.
I ran into a time-formatting issue with some of my logs: because the events start with the following time format, months of events were indexed as if on a 24-hour clock (AM and PM events both land in the same 12-hour window).

"<26-Mar-2021 12:59:56 o'clock AM MDT> <Error>......"

I am trying to run a search from a dashboard panel that uses makeresults to gather index times ±12 hours around the event I am searching; I then use rex and formatting to correct _time to proper AM/PM values. This appears to work up to the point just before the "| search" command: I am unable to use earliest/latest values to run a second, time-bounded search on the corrected timestamps. I am hoping someone can offer advice on how to fix the timestamps to properly reflect _time values, and then search those results for a subset of events to report on. The .conf files have already been updated to fix _time at index time moving forward; I am just stuck evaluating the older data. Data/duration are example values (these are populated by drilldown options via a dashboard).

index=myindex source=mysource
    [ | makeresults
      | eval duration="00:16:35"
      | eval earliest=round(strptime("2021/03/24 11:23:90","%Y/%m/%d %H:%M:%S")-43200)
      | rex field=duration "(?<hour>\d*?):(?<min>\d*?):(?<sec>\d*)"
      | eval latest=earliest+(hour*3600)+(min*60)+sec+43200
      | dedup earliest,latest
      | table earliest,latest
      | format ]
| rex "^<(?<raw_time>.*?)>.*"
| eval _time=strptime(raw_time,"%d-%b-%Y %I:%M:%S o'clock %p %Z")
| search
    [ | makeresults
      | eval duration="00:16:35"
      | eval earliest=round(strptime("2021/03/24 11:23:90","%Y/%m/%d %H:%M:%S"))
      | rex field=duration "(?<hour>\d*?):(?<min>\d*?):(?<sec>\d*)"
      | eval latest=earliest+(hour*3600)+(min*60)+sec
      | dedup earliest,latest
      | table earliest,latest
      | format ]