Hello! I am new to Splunk search. I am using this query to find all hosts on which a specific update was installed:

source="WinEventLog:System" | search "KB4579311" | stats last(Keywords) as lastStatus by _time, host | search lastStatus="Installation, Failure"

But I need a query that produces a table of the hosts on which this update wasn't installed. In other words, I need to display all hosts that were not found by the search above. Need help with it. Thank you!
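One possible approach (a sketch only — it assumes the same source contains events from every host you care about; adjust index and source to your environment) is to list all hosts and exclude the ones that reported the update, using a NOT subsearch:

```
source="WinEventLog:System"
| dedup host
| search NOT [ search source="WinEventLog:System" "KB4579311" | dedup host | fields host ]
| table host
```

The subsearch returns a list of host=&lt;value&gt; clauses which the outer NOT excludes. Be aware of the default subsearch result limit (10,000) in large environments.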
Windows add-on 8.0.0, Splunk 8.0.4. No matter the interval settings in inputs.conf, the inputs seem to run at random times. For example, on one host alone the "service" checker ran 9 times in one hour, even though the stanza is set to once a day (interval = 86400). I've tried setting it to other values; nothing seems to matter. The same is happening on all other inputs (sourcetype=WinHostMon) that have an interval setting. Disk, for example (also set to interval = 86400), is running 2-16 times per host in one hour. I've searched for this and seen the explanations about scripts "taking a long time to run", yadda, yadda... but come on, not all of them... and these aren't scripts (and we have arguably overpowered hardware running this). This is generating a *lot* of entries for our small test group of only 200 hosts. Thoughts? Thanks. Mike
It appears that, according to the documentation, I should be able to configure Trend to forward all module events to Splunk via syslog, store them in a file, and monitor that file with Splunk as an input, with the sourcetype set to "deepsecurity". The "details" page on Splunk's website for the app states: "It is highly recommended that you follow the Splunk best practices for syslog, and that you configure rsyslog or syslog-ng to write syslog output to a file which can then be collected by a Splunk forwarder and sent to the Splunk server. You need to ensure that the Splunk forwarder sets the sourcetype to deepsecurity when forwarding events to a Splunk receiver." I have an on-prem HF and a Splunk Cloud Search Head. The app is installed on the Search Head, and the HF is storing the Trend data with the "deepsecurity" sourcetype. Then, to my understanding, the app is supposed to rewrite the sourcetype field to the appropriate module. Ex: deepsecurity -> deepsecurity-antimalware. The problem: the Search Head gets the data but the app never rewrites the sourcetype, so everything is seen as "deepsecurity" and none of the dashboard panels populate. What am I doing wrong? Do I need the Trend app on my HF as well? The HF doesn't index anything.
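For what it's worth, sourcetype rewriting happens at parse time, so any props/transforms that rename deepsecurity to the per-module sourcetypes have to live on the first full Splunk instance the data passes through — in this setup, the HF. A minimal sketch of what such a rewrite looks like (the stanza name and regex here are illustrative assumptions, not the actual contents of the Trend app):

```
# props.conf (on the HF)
[deepsecurity]
TRANSFORMS-dsmodule = dsmodule_antimalware

# transforms.conf (on the HF)
[dsmodule_antimalware]
REGEX = Anti-Malware
FORMAT = sourcetype::deepsecurity-antimalware
DEST_KEY = MetaData:Sourcetype
```

If the Trend app ships stanzas like these, installing it (or at least its props/transforms) on the HF would let the rewrite run before indexing.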
Hi All, I have used the query below to check the usage of our dashboards, but I am not able to get all the users. Can someone guide me on that?

index=_internal sourcetype=splunkd_ui_access Infrastructure NOT splunkd user!="-" | rex field=uri "^/[^/]+/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)" | search NOT dashboard IN (alert alerts dashboards dataset datasets data_lab home lookup_edit reports report search splunk) | stats count by app dashboard user
Thanks in advance for any responses. Scenario: I have a Heavy Forwarder installed in my AWS environment sending my data to Splunk Cloud. This works fine for any servers in AWS with a UF sending to the HF and then to the cloud. We have two accounts. AccountA has a read-only audit role across all services and read-only access to an S3 bucket containing all logs; the AWS forwarder is an EC2 instance under this account. We created a user in AccountA with cross-account AssumeRole permission that should let it assume a role in AccountB with full read-only access to the S3 bucket, but we get errors. AccountB has a logging-archive role and read-only access to the S3 bucket where logs from all services are written. What is the best way to configure the add-on to pull the logs from this S3 bucket? There are so many input options; we tried S3 Inputs / Access Logs / Generic S3 with the account and role.
Hi, I have put together a correlation search which looks at user account lockouts, and have it send an email (using the GUI Adaptive Response Actions). My issue is that if the correlation detects a single lockout within its search window, it generates the email; however, if multiple results are returned, it fails to send out the emails.

Example correlation search:

| tstats summariesonly=true count earliest(_time) as FirstSeen latest(_time) as LastSeen values(All_Changes.src) as Computer values(All_Changes.Account_Management.dest_nt_domain) as All_Changes.Account_Management.dest_nt_domain from datamodel=Change where All_Changes.result_id=4740 All_Changes.Account_Management.src_nt_domain=mydomain by All_Changes.Account_Management.src_nt_domain All_Changes.user All_Changes.result_id All_Changes.result All_Changes.signature
| eval FirstSeen=strftime(FirstSeen,"%Y-%m-%d %H:%M.%S"), LastSeen=strftime(LastSeen,"%Y-%m-%d %H:%M.%S")
| rename All_Changes.* as *
| rename Account_Management.* as *
| `thales_get_asset(dest_nt_domain)`
| `get_identity4events(user)`
| table FirstSeen LastSeen src_nt_domain dest_nt_domain user_original user_first user_email Computer result_id result signature count Computer_description Computer_ip Computer_lookup_source Computer_owner

With the Email adaptive response configured something like this:

To: $result.user_email$
Subject: $name$ - $result.user_original$
Body: Hi $result.user_first$,
[some explanation stuff here]
Between $result.FirstSeen$ and $result.LastSeen$, your account $result.user_original$ has been locked out $result.count$ times. These account lockouts occurred on the following systems: $result.dest_nt_domain$
[blah blah etc]

Any idea what I am doing wrong here, or any advice on how to progress? Cheers, Sheamus
Hello, I've set up a source for Splunk Cloud using a monitor file input like this:

[monitor://C:\Logs\*.log]
disabled = 0
sourcetype = companyname:appname

Data comes in fine, but the date/time UTC offset in this log is handled incorrectly. What is the correct and best way to resolve this? Redoing the setup is an option, as it will only take a few hours to reload the historical data.
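If the timestamps in the file carry no explicit offset, Splunk falls back to the timezone of the forwarder (or the parsing tier's default). Assuming the events are actually written in UTC, a props.conf stanza on the parsing tier like the following forces that interpretation (the sourcetype name is taken from the post; the TZ value is an assumption about the log's actual timezone, and the change only affects newly indexed data):

```
[companyname:appname]
TZ = UTC
```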
Hi Splunk Community, I am building a Python modular input using the Splunk add-on builder tool and I want to add a custom configuration option 'performance_custom_metrics' which is optional and set ... See more...
Hi Splunk Community, I am building a Python modular input using the Splunk add-on builder tool and I want to add a custom configuration option 'performance_custom_metrics' which is optional and set only in inputs.conf, not via the data input UI configuration defined when building the add-on. Everything seems fine for the most part, but when I try to add a new data input I get the following error about the optional config option: The following required arguments are missing: performance_custom_metrics  Because performance_custom_metrics is not defined in the add-on UI configuration I am not allowed to create inputs via the UI. Only manual creation via inputs.conf works as expected. I've set required_on_create and required_on_edit both to False when defining the argument in inputs.py: scheme.add_argument(smi.Argument( 'performance_custom_metrics', required_on_create=False, required_on_edit=False)) From solnlib/packages/splunklib/modularinput/argument.py #60: :param required_on_edit: ``Boolean``, whether this arg is required when editing an existing modular input of this kind. :param required_on_create: ``Boolean``, whether this arg is required when creating a modular input of this kind. Has anyone any idea what other option I need to set to make performance_custom_metrics an optional input parameter for my add-on?
I have a question. For the following time format, we can use the following timestamp format, for example:

time format: 2020-11-09 11:20:35
timestamp: %Y-%m-%d %H:%M:%S

Here is my doubt: for the following 13-digit epoch time format, which timestamp format can we use?

time format: 1589479343000
timestamp: ?

I am working on the Eventgen app to generate the 13-digit epoch time. Thanks in advance.
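In Splunk's strptime dialect, %s matches epoch seconds and %3N matches three subsecond digits, so a 13-digit epoch (milliseconds) is commonly parsed with TIME_FORMAT = %s%3N. A sketch combining the parsing side and an Eventgen timestamp token (the stanza names and token index are illustrative; if Eventgen's format parser does not accept %3N, fall back to %s and append fixed millisecond digits):

```
# props.conf
[my_sourcetype]
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13

# eventgen.conf
[sample_file.log]
token.0.token = \d{13}
token.0.replacementType = timestamp
token.0.replacement = %s%3N
```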
During an indexer cluster rolling restart we are missing events for a certain index; these events appear to have been lost just as one of the indexers came back up after the bounce. Is that possible?
Hello, I am trying to run a search query via the REST API using JSON. It works if I use the normal form format, but not JSON.

Working command:
curl -u $USER -k https://<URL>/services/search/jobs/export -d search="search index=myIndex sourcetype=activitylog EVENT=START | dedup USER | stats count"

Not working command:
curl -u $USER --header "Content-Type: application/json" --request POST --data '{"search":"search index=myIndex sourcetype=activitylog EVENT=START | dedup USER | stats count"}' https://<URL>/services/search/jobs/export

This reports: <msg type="FATAL">Empty search.</msg>

Any ideas why the JSON format doesn't work? Thanks!
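As far as I know, the search/jobs/export endpoint reads classic form-encoded POST parameters; a raw JSON body is not parsed into a search argument, which is why splunkd answers "Empty search". In curl the fix is to drop the JSON header and use --data-urlencode 'search=search ...'. A hedged Python sketch of building the correctly encoded body (the URL and credentials are placeholders from the post):

```python
from urllib.parse import urlencode

# services/search/jobs/export expects application/x-www-form-urlencoded,
# not application/json, so the body must be key=value pairs.
query = 'search index=myIndex sourcetype=activitylog EVENT=START | dedup USER | stats count'
body = urlencode({"search": query, "output_mode": "json"})

# This body would then be POSTed to https://<URL>/services/search/jobs/export
# with basic auth, e.g. requests.post(url, data=body, auth=(user, pw), verify=False).
print(body)
```

Note that spaces and pipes get percent-encoded by urlencode, which is exactly what the endpoint expects.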
I am getting the following error when trying to configure the Resilient app on Splunk: Error while posting to url=/servicesNS/nobody/SA-Resilient/admin/sa_resilientconfig/config. Please help.
I want to be able to see the host name in search results rather than the IP. In this case, the "host" I am looking for is the name of the firewall, router, or switch sending the log message. The host names have been added to our DNS servers and nslookup returns the correct info. Any ideas on how to do this? Thanks.
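If these devices arrive over a syslog-style network input, the host field is derived from the connection, and inputs.conf can ask Splunk to reverse-resolve it via DNS. A sketch (the port and sourcetype are assumptions):

```
[udp://514]
connection_host = dns
sourcetype = syslog
```

For data that is already indexed, host cannot be changed retroactively; a search-time lookup mapping IP to name is the usual alternative.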
Hi, I'm trying to reproduce the replace example from the Splunk documentation (https://docs.splunk.com/Documentation/SplunkCloud/8.1.2009/SearchReference/TextFunctions) within a dashboard, triggered by a drilldown:

<drilldown> <eval token="p1_ttr_left">replace("1/14/2017", "^(\d{1,2})/(\d{1,2})/", "\2/\1/")</eval> </drilldown>

It doesn't seem to work; nothing happens to the token (I'm writing it to the dashboard output).
Hi everyone, I have an issue when trying to generate a scheduled view and send it via email. Reading similar issues in the community, I set the following parameters in the savedsearches.conf file:

action.email.maxtime = 3600m
dispatch.max_time = 14400

increased the limits in limits.conf:

[pdf]
render_endpoint_timeout = 14400
[scheduler]
scheduled_view_timeout = 240m

and the splunkd timeout in web.conf:

splunkdConnectionTimeout = 14400

In other words, I tried to extend the timeout to 4 hours. Nevertheless, after 1 hour the process tries to finalize the results and send the email, finds no results, and fails. This is the first line of the job inspector (note the timeout is still effectively 3600 seconds):

This search has completed in 3.608,538 seconds, but did not match any events

And this is python.log (anonymized); note that the search starts at 12:00 and sendemail tries to send the mail after 1 hour:

2020-11-09 12:00:48,466 +0100 INFO sendemail:1149 - sendemail pdfgen_available = 1
2020-11-09 12:00:48,466 +0100 INFO sendemail:1294 - sendemail:mail effectiveTime=1604919600
2020-11-09 13:00:48,502 +0100 ERROR __init__:499 - Socket error communicating with splunkd (error=('The read operation timed out',)), path = /services/pdfgen/render
2020-11-09 13:00:48,503 +0100 ERROR sendemail:1156 - An error occurred while generating a PDF: Failed to fetch PDF (SplunkdConnectionException): Splunkd daemon is not responding: ("Error connecting to /services/pdfgen/render: ('The read operation timed out',)",)
2020-11-09 13:00:48,503 +0100 INFO sendemail:1175 - buildAttachments: not attaching csv as there are no results
2020-11-09 13:00:52,118 +0100 INFO sendemail:134 - Sending email. subject="Splunk Dashboard: '<DASHBOARD_NAME>'", results_link="https://<SERVER_NAME>:8000/app/<app_name>/@go?sid=", recipients="[u'user_name@domain_name.it']", server="smtprelay.domain_name.it:25"

Has anybody ever had the same issue? Can anyone help me? Thanks in advance.
Hello, I have data model acceleration configured for All Time. I have data indexed for 16 months, but when I run my query with

summariesonly=true

I get fewer results than when I run with

summariesonly=false

and the latest accelerated data is a year old. Also, I see this error message on the data model page:

Error in data model "events_prod" : JSON file contents not available.

I don't know if that is related or not; I couldn't find any errors in the logs. I've added this stanza to datamodels.conf:

acceleration.allow_skew = 100%

but it didn't help. Any suggestions? Thanks, Sarit
Hi, I have configured the email server settings in Splunk but I am not receiving any emails; with the same email configuration in my Java sample code, I do receive emails. Please help. Regards, Harish
I am rendering a detail table for each row of a table, but the detail table's pagination is not working. On first load of the dashboard the "Next" page option works, but after collapsing and expanding other rows it stops working. Here is the JavaScript I am using:

require([
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/chartview',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc',
    'underscore',
    'splunkjs/mvc/simplexml/ready!'
], function(TableView, ChartView, SearchManager, mvc, _) {
    var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({
        initialize: function(args) {
            // initialize runs once, so we set up a search and a table to be reused.
            this._searchManager = new SearchManager({
                id: 'details-search-manager',
                preview: false
            });
            this._TableView = new TableView({
                managerid: 'details-search-manager'
            });
        },
        canRender: function(rowData) {
            // Since more than one row expansion renderer can be registered,
            // each decides whether it can handle the data. Here we always handle it.
            return true;
        },
        render: function($container, rowData) {
            // rowData contains information about the expanded row:
            // we can see its cells, fields, and values.
            var run_date_cell = _(rowData.cells).find(function(cell) { return cell.field === 'runDate'; });
            var job_stage_cell = _(rowData.cells).find(function(cell) { return cell.field === 'jobStage'; });
            var job_sub_stage_cell = _(rowData.cells).find(function(cell) { return cell.field === 'jobSubStage'; });
            var model_desc_cell = _(rowData.cells).find(function(cell) { return cell.field === 'modelDescription'; });
            var channel_cell = _(rowData.cells).find(function(cell) { return cell.field === 'channel'; });
            var job_status_cell = _(rowData.cells).find(function(cell) { return cell.field === 'jobStatus'; });
            // Update the search with the values of the expanded row.
            this._searchManager.set({
                search: 'index = abc sourcetype = xyz source="/test/test.log" NOT "training stats" AND NOT "testing stats"'
                    + ' | dedup runDate,jobStage,jobSubStage,channel,modelDescription,logMessage'
                    + ' | search runDate="' + run_date_cell.value + '" jobStage="' + job_stage_cell.value
                    + '" modelDescription="' + model_desc_cell.value + '" channel="' + channel_cell.value
                    + '" jobSubStage="' + job_sub_stage_cell.value + '"'
                    + ' | dedup _time,logMessage | sort _time | table _time,logMessage'
            });
            // $container is the jQuery object where we can put our content.
            // Render the detail table and add it to the container.
            $container.append(this._TableView.render().el);
        }
    });
    var tableElement = mvc.Components.getInstance("job_status");
    tableElement.getVisualization(function(tableView) {
        // Add the custom row expansion renderer; the table re-renders automatically.
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
    });
});
Hi, I have an app which collects logs, and I have configured it to send data both to a local Splunk Enterprise instance and to Splunk Cloud. I can see the data in the local Enterprise instance but not in the Cloud instance. I do see the following, which shows a connection has been established:

11-09-2020 10:19:08.710 +0000 INFO Metrics - group=tcpin_connections, ingest_pipe=0, 73.202.128.135:57829:9997, connectionType=cookedSSL, sourcePort=57829, sourceHost=3.202.128.13, sourceIp=3.202.128.13, destPort=9997, kb=0.326171875, _tcp_Bps=10.774122169132621, _tcp_KBps=0.010521603680793575, _tcp_avg_thruput=0.011187444180915055, _tcp_Kprocessed=10.939453125, _tcp_eps=0.03225785080578629, _process_time_ms=0, evt_misc_kBps=0, evt_raw_kBps=0, evt_fields_kBps=0, evt_fn_kBps=0, evt_fv_kBps=0, evt_fn_str_kBps=0, evt_fn_meta_dyn_kBps=0, evt_fn_meta_predef_kBps=0, evt_fn_meta_str_kBps=0, evt_fv_num_kBps=0, evt_fv_str_kBps=0, evt_fv_predef_kBps=0, evt_fv_offlen_kBps=0, evt_fv_fp_kBps=0, build=a6754d8441bf, version=8.0.3, os=Linux, arch=x86_64, hostname=splunkforwarder, guid=8ECC1FB9-30F3-4F5E-AB9A-9668E6BCCDDD, fwdType=full, ssl=true, lastIndexer=107.22.176.26:9997, ack=false

root@splunkforwarder:/home/linux# /opt/splunk/bin/splunk search 'index=_internal source=*metrics.log* destHost | dedup destHost'
Splunk username: splunk
Password:
11-09-2020 10:14:48.227 +0000 INFO StatusMgr - destHost=prd-p-f4rpr.splunkcloud.com, destIp=107.22.176.26, destPort=9997, eventType=connect_fail, publisher=tcpout, sourcePort=8089, statusee=TcpOutputProcessor
11-09-2020 10:02:51.476 +0000 INFO StatusMgr - destHost=inputs.prd-p-f4rpr.splunkcloud.com, destIp=107.22.176.26, destPort=9997, eventType=connect_done, publisher=tcpout, sourcePort=8089, statusee=TcpOutputProcessor
11-09-2020 10:02:21.773 +0000 INFO StatusMgr - destHost=192.168.5.131, destIp=192.168.5.131, destPort=9997, eventType=connect_done, publisher=tcpout, sourcePort=8089, statusee=TcpOutputProcessor
root@splunkforwarder:/home/linux#