All Topics

Hi everyone, I am new to Splunk, but here is my question.

Sample JSON data:

{id: 1, executor: "executor1", timestamp: 2020-07-16T02:02:02.566}
{id: 1, executor: "executor2", timestamp: 2020-07-16T02:02:02.570}

My requirement is to group and list the data by id, and also to calculate the timestamp difference between executor1 and executor2 (they are sequential steps, and logging is also done sequentially). So I did:

stats list(executor) as executors, list(timestamp) as logtime by id

and the table comes out like this:

id | executors | logtime
1  | executor1 | 2020-07-16T02:02:02.566
   | executor2 | 2020-07-16T02:02:02.570

Now I want to calculate the difference between the logtime (timestamp) values of the executors, and apply it within the stats command only. P.S. the number of executors can increase dynamically.

Required result:

id | executors | logtime                 | time difference
1  | executor1 | 2020-07-16T02:02:02.566 | 0
   | executor2 | 2020-07-16T02:02:02.570 | 0.004

P.S. the above is the description of one row only, with four columns. Thanks in advance.
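The per-id difference the poster describes (each row's timestamp minus the previous row's, within the same id) can be sketched outside Splunk in Python, using the sample data from the question:

```python
from datetime import datetime
from itertools import groupby

events = [
    {"id": 1, "executor": "executor1", "timestamp": "2020-07-16T02:02:02.566"},
    {"id": 1, "executor": "executor2", "timestamp": "2020-07-16T02:02:02.570"},
]

def parse(ts):
    # Parse the millisecond ISO timestamps used in the sample data
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f")

rows = []
# Group events by id (after sorting, as groupby requires), then take
# the difference between each timestamp and the previous one in the group
for _id, group in groupby(sorted(events, key=lambda e: (e["id"], e["timestamp"])),
                          key=lambda e: e["id"]):
    prev = None
    for e in group:
        t = parse(e["timestamp"])
        diff = (t - prev).total_seconds() if prev else 0.0
        rows.append((_id, e["executor"], e["timestamp"], diff))
        prev = t

for r in rows:
    print(r)
```

The first executor in each group gets 0, matching the required result; later executors get the gap in seconds (0.004 here).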
Hi,

Q1. We are trying to push data using the Splunk SDK for Java, using attachWith() to ingest the data. How can we override the host, sourcetype, and time fields with it?

Q2. attach() takes an Args argument with which we can override host and sourcetype, but time is again not available. As mentioned in the documentation, our preference is also to use attachWith(). Please suggest the right way to do this.

myIndex.attachWith(new ReceiverBehavior() {
    public void run(OutputStream stream) throws IOException {
        Writer out = new OutputStreamWriter(stream, "UTF8");
        for (int i = 0; i < 2; i++) {
            out.write("This is my msg");
        }
        out.flush();
    }
});

We are referring to https://dev.splunk.com/enterprise/docs/java/sdk-java/howtousesdkjava/howtogetdatasdkjava
Hi there,

We presently have a setup where error codes are extracted and put into their own field. The way it's been set up, we extract all instances matching a certain regex pattern (e.g. "ERROR-" -> ERROR-1234, ERROR-5869, etc.), meaning that when the error code is repeated within an event we get multiple instances of the same item in the field (e.g. ErrorField = ERROR-1234, ERROR-1234, ERROR-5869).

Doing 'stats count by ErrorField' seems to count all instances, even the repeated ones: we'd get ERROR-1234 = 700 on the stats count, but a simple search with ErrorField=ERROR-1234 returns only, say, 300 events.

I've tried to come up with a way to filter out the duplicates in the search but have so far come up short. My two candidate approaches are creating a table of field values and using that to launch a second search, or performing a subsearch to capture the unique values that are then used to count the events by field. What I have so far is as follows:

index=index ErrorField="*"
| stats count by ErrorField
| replace ERROR-X-* with ERROR-X- in ErrorField
| where count > 200 AND ErrorField!="ERROR-X-"
| table ErrorField

In short, I've got a search that counts, then uses replace/where to weed out values that fall below a certain threshold or don't match certain criteria, and puts what remains in a table. It's the results from this table I want to use in the next search, counting the total events for each item in the table, but I have so far failed to do this.

Is this possible, and is there a right way to do it, either continuing with the method above or using a subsearch? Alternatively, is it possible to remove duplicates in the original field extraction so this isn't necessary? That option may not be possible, though, as the field extraction isn't handled by ourselves and I can't say too much about it. Thank you!
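The discrepancy between the two counts comes down to counting value instances versus counting events. A sketch of both counts in Python, with made-up events shaped like the question's example:

```python
from collections import Counter

# Hypothetical multivalue ErrorField per event (values are illustrative)
events = [
    ["ERROR-1234", "ERROR-1234", "ERROR-5869"],
    ["ERROR-1234"],
    ["ERROR-5869", "ERROR-5869"],
]

# Counting every extracted instance: repeated codes within one event
# are counted multiple times (the inflated 700-style count)
instance_counts = Counter(code for ev in events for code in ev)

# Counting each event at most once per code: deduplicate within the
# event first (the 300-style count a plain search returns)
event_counts = Counter(code for ev in events for code in set(ev))

print(instance_counts)
print(event_counts)
```

ERROR-1234 shows up 3 times as an instance but appears in only 2 events; the per-event deduplication is what closes the gap.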
I was interested in SBF and was browsing the link below.

https://docs.splunk.com/Documentation/SBF/-Latest-/AdminManual/Install

However, I noticed that the following message is displayed:

>Splunk Business Flow is no longer available for purchase as of June 20, 2020. Customers who have already purchased Business Flow will continue to have support and maintenance per standard support terms for the remainder of contractual commitments.

Does this mean that I can no longer install new SBF instances in the future?
Hi there,

I have a search that generates a lookup table, yet the table has some custom columns that have no values by default, and I add custom values to them manually.

The columns I want to update via SPL are:
host, index, firstTime, lastTime, totalCount

The columns whose values I want to update manually are:
accepted_delta, is_expected

Below is a screenshot of this table. [screenshot not included] Currently we are using the following daily SPL to update the table, but I am really bad at SPL. The bug is that it just overwrites everything in the lookup table, so all my custom values (the accepted_delta and is_expected columns) are gone:

| tstats count as totalCount earliest(_time) as firstTime latest(_time) as lastTime where index!="main" by index host
| fields host index firstTime lastTime totalCount accepted_delta is_expected
| fillnull value=1000000000 accepted_delta
| fillnull value="false" is_expected
| outputlookup nonforwarderhost.csv
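The usual fix in SPL is to read the existing lookup back in before writing it out, so the manually maintained columns can be carried over instead of filled with defaults. The merge logic itself, sketched in Python with illustrative rows and the question's column names:

```python
# Existing lookup rows, including the manually maintained columns
existing = [
    {"host": "h1", "index": "web", "firstTime": "1", "lastTime": "2",
     "totalCount": "10", "accepted_delta": "500", "is_expected": "true"},
]

# Freshly computed stats (what the tstats part produces) without the manual columns
fresh = [
    {"host": "h1", "index": "web", "firstTime": "1", "lastTime": "9", "totalCount": "42"},
    {"host": "h2", "index": "mail", "firstTime": "3", "lastTime": "4", "totalCount": "7"},
]

manual_defaults = {"accepted_delta": "1000000000", "is_expected": "false"}
by_key = {(r["host"], r["index"]): r for r in existing}

merged = []
for row in fresh:
    old = by_key.get((row["host"], row["index"]), {})
    out = dict(row)  # take the freshly computed stats columns
    for col, default in manual_defaults.items():
        # keep the manually entered value if the row existed before,
        # otherwise fall back to the fillnull-style default
        out[col] = old.get(col, default)
    merged.append(out)

for row in merged:
    print(row)
```

The stats columns get the fresh values while accepted_delta and is_expected survive for rows already in the lookup; new hosts get the defaults.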
Hello,

I have two application servers running the same application. In Splunk, I would like to handle their logs separately, so I collect the data into separate indexes. Splunk has a nice app for this application, with a lot of dashboards and reports.

I have two operators, but each has access to only one of the indexes. What should I do if I want them both to be able to use the Splunk app, but only against the index they have access to? If I set the app's search macro to index A, index B is left out. Adding a new search macro for index B means I would need to edit all the existing searches, dashboards, and reports. Can I install the app twice with two different settings, pointing at the different indexes?
I have installed the Splunk_TA_nix add-on on my universal forwarder to send Linux logs. What is the difference between forwarding the logs through the add-on and forwarding them through /etc/system/local/inputs.conf? Will both do the same thing? Will the Splunk_TA_nix add-on extract fields from the Linux logs (/var/log/messages, /var/log/maillog) so that they are CIM-compatible?
Hi all,

I have recently deployed the Splunk Cisco ISE app into production, and it's working well for normal users. But as an admin, I'm unable to see the dashboard panels populating. All the panel searches are based on an eventtype, and when I run those searches as an admin I get no results, but when I log in as a normal test user and run the same searches, they work fine.

E.g.: eventtype=cisco-ise | timechart usenull=f count by MESSAGE_CLASS

This search works for a normal user but not with the admin login. I'm really surprised by this issue. Please let me know where I need to troubleshoot to resolve it. Thanks.
After spending two days reading almost all the forum posts related to this error message, including translating questions from Chinese to Spanish, I finally found the cause.

The error message:

Search results might be incomplete: the search process on the peer:indexador1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
Search results might be incomplete: the search process on the peer:indexador2 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
[indexador1] Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.
[indexador2] Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.

The cause:

"If you want to set up a trial Splunk Enterprise distributed deployment consisting of multiple Splunk Enterprise instances communicating with each other, each instance must use its own self-generated Enterprise Trial license. This differs from a distributed deployment running a Splunk Enterprise license, where you will configure a license master to host all licenses."

https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/TypesofSplunklicenses
Why are we seeing logs from a year ago even though we use summariesonly=t?

| tstats summariesonly=t earliest(_time) as EarliestDateEpoch from datamodel=Authentication where earliest=-8mon
| eval EarliestDate=strftime(EarliestDateEpoch,"%m-%d-%Y")

Even though the summary range is one month, I just want to get the earliest date in the summaries.
Hi,

I have configured an SQS-Based S3 input to get CloudTrail logs, but the SQS Batch Size is limited to a value between 1 and 10. I have also configured Interval (in seconds) to zero, but my queue is still growing. Can I get around the limitation, or is there another approach to getting CloudTrail logs from S3? I am using the architecture below (I have more than one hundred accounts). [diagram not included]

Thanks
I can't work out where to go to update billing information in Splunk Cloud. There doesn't appear to be any option for this under billing. Should I be looking somewhere else?
Hi,

The alert is getting triggered, and sendmail works fine, but the webhook is not working. If I search index=_internal action=webhook, I see this error:

ERROR sendmodalert - action=webhook - Execution of alert action script failed
INFO sendmodalert - action=webhook STDERR - Sending POST request to url=http://XXXXXXXX/ with size=448 bytes payload

And in splunkd.log I see these errors:

07-15-2020 18:35:41.311 +0200 WARN ScriptRunner - Killing script, probably timed out, grace=5sec, script="bla/bla/splunk/etc/apps/alert_webhook/bin/webhook.py --execute"
07-15-2020 18:35:41.314 +0200 ERROR sendmodalert - action=webhook - Execution of alert action script failed
07-15-2020 18:35:41.314 +0200 ERROR sendmodalert - Error in 'sendalert' command: Alert script execution failed.
07-15-2020 18:35:41.314 +0200 ERROR SearchScheduler - Error in 'sendalert' command: Alert script execution failed., search='sendalert webhook results_file=

Do I have to pass a token along with the URL on the webhook configuration page? Currently, under Trigger Actions -> Webhook -> URL, I have just added the client URL like this: http://IPoftheclientmachine:port/. Do I have to append a token or something else at the end of the URL?
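Since the script is being killed on a timeout, one way to rule out the receiver side is to point a manual POST at a throwaway listener you control and confirm it answers quickly. A minimal sketch in Python; the payload and port selection are illustrative, not what Splunk actually sends:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Record the body and answer immediately with 200
        length = int(self.headers.get("Content-Length", 0))
        received.append(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
payload = json.dumps({"search_name": "my_alert"}).encode()  # made-up payload
req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status)

server.shutdown()
```

If a plain POST like this times out against the real endpoint, the problem is reachability or a slow receiver, not the alert configuration.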
Hi, I have a question about the UF.

1. From the capture below, it seems that the UF has a parsingQueue. As I understand it, the UF does not parse; parsing is the HF's or indexer's role. Am I wrong? Why is there a parsingQueue inside the UF pipeline? (Let's say I just collect log data, not a structured CSV file.)
2. If it is correct that the UF has a parsingQueue, how do I control its size? Is it related to maxQueueSize in outputs.conf or [queue] in limits.conf?
3. From the image below, what is the difference between the parsingQueue and the tcpout_queue, and how do I control the size of each?
Hello,

I'm building a table from a summary index, grouped by month, and I need help with the values of the month field. The months come out in the wrong order. [screenshot not included] My query is:

index=mysumarizedindex source=mysumarizedreport
| bin _time span=1mon
| rename _time as fecha attack as "Tipo de ataque"
| eval fecha=strftime(fecha, "%B-%Y")
| chart count by "Tipo de ataque" fecha

Thanks
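The misordering happens because strftime turns each month into a plain string, and strings sort alphabetically rather than chronologically. The same effect reproduced in Python:

```python
from datetime import datetime

months = ["April-2020", "February-2020", "January-2020", "March-2020"]

# Sorting the formatted strings is alphabetical: April comes first
print(sorted(months))

# Sorting by the parsed date restores chronological order
chronological = sorted(months, key=lambda m: datetime.strptime(m, "%B-%Y"))
print(chronological)
```

The usual workaround is to keep an epoch (or sortable) form of the month alongside the pretty label and sort on that before displaying.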
Hi team,

I have extracted a field called Response, which contains a response string. From that response I need only certain values to be displayed; is that possible? For example, the Response field contains the text below. Sometimes there will be multiple effectrate values in the Response. Please advise.

de":null,"AABBCCDDDEEDDDoProcess": &&&&&&&&&&&,"effectrate":1.0", "Test"

From this I want to display only "effectrate:1.0":

No | Response
1  | effectrate:1.0
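In SPL this kind of extraction maps to rex (with max_match=0 for repeated values); the extraction logic itself, shown in Python on the sample text, with a pattern that is a guess at the surrounding quoting:

```python
import re

# Sample Response text copied from the question
response = 'de":null,"AABBCCDDDEEDDDoProcess": &&&&&&&&&&&,"effectrate":1.0", "Test"'

# Grab every effectrate value; findall returns all matches, so repeated
# occurrences in one Response would all be captured
rates = re.findall(r'"effectrate":([0-9.]+)', response)
print(rates)
```

Each capture can then be displayed as effectrate:VALUE, one row per match.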
Hi guys,

I am trying to find changes in the Office 365 IP addresses and URLs using SPL, by comparing today's results to yesterday's. There is probably a more efficient way of doing this too!

Search:

index=dp source="rest://Query" earliest=-1d@d latest=now
| stats values(tcpPorts) as tcpPorts_t values(udpPorts) as udpPorts_t values(ips{}) as ips_t by urls{}
| appendcols
    [search index=dp source="rest://Query" earliest=-2d@d latest=-1d@d
    | stats values(tcpPorts) as tcpPorts_y values(udpPorts) as udpPorts_y values(ips{}) as ips_y by urls{}]
| eval change=if("tcpPorts_t"="tcpPorts_y" OR "udpPorts_t"="udpPorts_y" OR "ips_t"="ips_y", "Change", "No Change")
| join type=left change
    [search index=dp source="rest://Query" earliest=-1d@d latest=now
    | stats values(tcpPorts) as tcpPorts_t values(udpPorts) as udpPorts_t values(urls{}) as urls{}_t by ips{}
    | appendcols
        [search index=dp source="rest://Query" earliest=-2d@d latest=-1d@d
        | stats values(tcpPorts) as tcpPorts_y values(udpPorts) as udpPorts_y values(urls{}) as urls{}_y by ips{}]
    | eval change=if("tcpPorts_t"="tcpPorts_y" OR "udpPorts_t"="udpPorts_y" OR "urls{}_t"="urls{}_y", "Change", "No Change")]
| table change tcpPorts_t tcpPorts_y udpPorts_t udpPorts_y ips_t ips_y urls{}_t urls{}_y
| sort - change

IP addresses are appearing OK, but I'm getting just one value for the URL. I'm not sure if makemv would help here?
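The underlying comparison is a per-URL set difference between the two days' snapshots. An illustrative sketch in Python (the URLs and CIDR values are made up):

```python
# Two illustrative snapshots of the endpoint data, keyed by URL
today = {
    "outlook.office365.com": {"13.107.6.152/31", "13.107.18.10/31"},
    "smtp.office365.com": {"40.92.0.0/15"},
}
yesterday = {
    "outlook.office365.com": {"13.107.6.152/31"},
    "smtp.office365.com": {"40.92.0.0/15"},
}

changes = {}
# Union of keys so URLs that appear or disappear entirely are also caught
for url in today.keys() | yesterday.keys():
    added = today.get(url, set()) - yesterday.get(url, set())
    removed = yesterday.get(url, set()) - today.get(url, set())
    if added or removed:
        changes[url] = {"added": sorted(added), "removed": sorted(removed)}

print(changes)
```

Only the URL with a genuine difference shows up, with the specific IPs that were added or removed.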
Hi,

I need to replace the value of _time with a timestamp extracted from the log event. I am using this search, but it's not working.

Log event: 20200625_22:44:35.090 (thread=1): User ID: xxxx

... | rex field=_raw "^(?P<newtime>[^ ]+)"
| eval newtime=strptime(newtime, "%m/%d/%Y")
| eval _time='newtime'
| table newtime _time

I am trying to capture the time "20200625_22:44:35.090" in newtime and put that value into _time. Please help.
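The strptime format in that search doesn't match the extracted string: "%m/%d/%Y" expects slashes, while the log uses 20200625_22:44:35.090. A format that does match it field by field, demonstrated in Python (Splunk's strptime may differ slightly in how it spells the fractional-seconds code):

```python
from datetime import datetime

raw = "20200625_22:44:35.090 (thread=1): User ID: xxxx"
newtime = raw.split(" ", 1)[0]  # same idea as rex "^(?P<newtime>[^ ]+)"

# "%m/%d/%Y" raises ValueError on this string; this format matches it
parsed = datetime.strptime(newtime, "%Y%m%d_%H:%M:%S.%f")
print(parsed.isoformat())  # 2020-06-25T22:44:35.090000
```

With a matching format string, strptime returns a valid epoch instead of null, which is the usual reason the _time assignment appears to do nothing.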
I am trying to make my own COVID-19 dashboard and was wondering if I could use the code provided on the CDC website. You can find this code by visiting https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html, scrolling down to "Deaths by Jurisdiction", and clicking "Add US Map to your website":

<div data-post-id="50079" data-site-id="499" data-site-root-folder="coronavirus" class="mb-3" data-host="www.cdc.gov" data-theme="theme-cyan" data-cdc-widget="cdcMaps" data-config-url="/coronavirus/2019-ncov/cases-updates/us-case-count-maps-charts/map-cases-us-DeathOnly_Desktop.json"> </div>
<script src="//tools.cdc.gov/TemplatePackage/contrib/widgets/tp-widget-external-loader.js"></script>

What is wrong with this piece of code as I added it to my dashboard? There are no errors, but I don't see any panel on the dashboard.

<row>
  <panel>
    <html>
      <title>Deaths by Jurisdiction</title>
      <div data-post-id="50079" data-site-id="499" data-site-root-folder="coronavirus" class="mb-3" data-host="www.cdc.gov" data-theme="theme-cyan" data-cdc-widget="cdcMaps" data-config-url="/coronavirus/2019-ncov/cases-updates/us-case-count-maps-charts/map-cases-us-DeathOnly_Desktop.json"> </div>
      <script src="//tools.cdc.gov/TemplatePackage/contrib/widgets/tp-widget-external-loader.js"></script>
    </html>
  </panel>
</row>
The .csv file that I am using as input has a column name that begins with a percent sign ("% Complete"). I just noticed that the indexer has been ignoring this column. Even when I updated the sourcetype to use the first row as column names and it recognized the column's existence (renaming it to simply "Complete"), it still won't show up as available to use in reports/dashboards. Is there a way to get around this that does not involve changing the column name? Thanks.
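For what it's worth, a percent sign is perfectly legal in a CSV header itself, so the renaming happens when Splunk turns the header into a field name, not in the file. A quick check in Python with made-up data:

```python
import csv
import io

# Illustrative CSV whose header starts with a percent sign
sample = "Task,% Complete\nbuild,80\ntest,45\n"

rows = list(csv.DictReader(io.StringIO(sample)))
# The raw file preserves the header verbatim, percent sign included
print(rows[0]["% Complete"])
```

Since the file is fine, any workaround has to address how the sourcetype's header-based field extraction names the column on the Splunk side.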