I am trying to set up Content-Security-Policy and need a way to collect violation reports. I was hoping to use Splunk HEC for this, but so far nothing is coming in. If I curl the collector URL, I can see the result in Splunk, but nothing arrives from browsers. The sourcetype is set to json-no-timestamp, and I am not sending a response back. I could not find a single example of doing this, so if this approach is not correct, please let me know. Are there any considerations I have not thought of here? I am not restricting access by IP for the external NAT that leads to the HEC for this instance. I do not, however, have a proper SSL certificate. Will browsers like Chrome refuse to POST if the HEC only has a self-signed one?
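One thing worth checking: browsers POST CSP reports with a Content-Type of application/csp-report and a body shaped like {"csp-report": {...}}, and they will not attach the Authorization: Splunk <token> header that HEC expects, so a report-uri pointed straight at HEC has two likely failure points before the certificate even comes into play. (And yes, report requests go through normal TLS validation, so a self-signed certificate will make Chrome drop them silently.) A small relay that accepts the browser's POST and re-wraps it in the HEC event envelope sidesteps both. Below is a rough sketch of just that re-wrapping; the URL, token, and sourcetype are placeholders for your own values:

```python
import json

# Hypothetical HEC endpoint and token -- replace with your own instance's values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def csp_to_hec_event(raw_body, sourcetype="json_no_timestamp"):
    """Wrap a browser CSP violation report in the HEC event envelope."""
    report = json.loads(raw_body)
    # Browsers send {"csp-report": {...}}; keep only the inner report as the event.
    return {
        "event": report.get("csp-report", report),
        "sourcetype": sourcetype,
        "source": "csp-report",
    }

def hec_headers(token):
    """HEC expects its token in an Authorization header, which browsers never send."""
    return {"Authorization": "Splunk " + token, "Content-Type": "application/json"}
```

A few lines of Flask or http.server around these helpers, forwarding with requests.post(HEC_URL, headers=hec_headers(HEC_TOKEN), json=csp_to_hec_event(body)), would complete the relay.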
Hi everyone, I have 2 queries:
1) mysearchquery | table xyz
2) mysearchquery | table abc
These two queries do not have anything in common, and I need to compare xyz and abc in a single query. How can I do that? Thank you in advance.
I am trying to configure the Shodan app for Phantom. In the account settings, the only field to set is the Shodan API key. When I copy and paste the key from my Shodan account, I get this error whenever I press 'test connectivity': "Response is not a valid JSON. Expecting value: line 1 column 1 (char 0). API Key is not valid. No action executions found." I have tried resetting the API key and using the new value, but I get the same error. Is there something I'm missing?
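That "Response is not a valid JSON" message means the body Phantom received could not be parsed at all, which often points at something between Phantom and Shodan (a proxy or TLS-inspecting device returning an HTML error page) rather than at the key itself. One way to separate the two causes is to test the key outside Phantom against Shodan's documented /api-info endpoint and classify the body the same way the app does. A sketch, with the classification strings being my own wording:

```python
import json
from urllib.parse import urlencode

SHODAN_API = "https://api.shodan.io"  # Shodan's documented REST base URL

def api_info_url(key):
    """/api-info returns plan and credit details for a key; handy for testing a key by hand."""
    return SHODAN_API + "/api-info?" + urlencode({"key": key})

def interpret_body(body):
    """Mirror the app's check: classify the response body before trusting the key verdict."""
    try:
        data = json.loads(body)
    except ValueError:
        # An HTML error page from a proxy or TLS-inspection device lands here.
        return "not JSON"
    return "invalid key" if "error" in data else "key OK"
```

Fetching api_info_url(your_key) with curl from the Phantom host and running the body through interpret_body would tell you whether the key or the network path is at fault.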
Hey all, I seem to have run into a snag with a specific feature I am trying to implement. My team wants the input dropdowns on the webpage to have their starting value set to the realname of the account currently logged in. It is important to note we are building an HTML dashboard, and we have recently transitioned from Splunk Enterprise to Cloud. So far I was able to retrieve account information using splunk/mvc/sharedmodels and the following code:

var temp;
temp = SharedModels.get("user").entry;
console.log(temp);

I know how to get the username, realname, email, etc. The issue is that there seems to be no documentation for this library anywhere, though maybe I am overlooking it. Most questions I have found where this was the answer were posted in 2013. Is there a better library I should be using that fits Splunk best practices in 2020?
Hi guys, I have just finished building my Splunk application and would like to upload it to the Splunk store for people to download. What is the process for submitting an app to the Splunk store?
Thank you, Marco
I'm generating a timechart with a 5-period simple moving average. I'm only searching over a week, with the span set to 1 day, which results in only 3 moving-average data points. Ideally, the moving-average points at the start of the week would draw on earlier data (days in the previous week) in addition to the results of my 1-week search. I investigated extending the search over two weeks and then truncating the results, but I do not think this is possible. Is what I'm trying to do at all possible?
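Extending the window and truncating afterwards is in fact a common approach in SPL: search the wider range, compute the moving average (for example with trendline sma5), then drop the warm-up period with something like | where _time >= relative_time(now(), "-7d@d"). The arithmetic being relied on is simply that every kept point then has a full 5-day window behind it; a small Python illustration with made-up daily counts:

```python
from statistics import mean

def sma(values, period=5):
    """Simple moving average; None until a full window of `period` values exists."""
    return [mean(values[i - period + 1 : i + 1]) if i >= period - 1 else None
            for i in range(len(values))]

# Pretend daily event counts for two weeks: search 14 days, keep the last 7.
daily = [10, 12, 11, 13, 15, 14, 16, 18, 17, 19, 20, 18, 21, 22]
smoothed = sma(daily)
last_week = smoothed[-7:]  # every kept point has a full 5-day window behind it
```

Searching one week alone, the first four points of the average would be missing; searching two weeks and keeping the last seven gives a complete week of smoothed values.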
Hi, in "splunk_app_db_connect" I've defined this input configuration:

[ALERT_SNO_MISMATCH]
connection = PDBAPP_SYSTEM_SCAN
description = Search every 5 minutes for "SNO mismatch"
disabled = 0
index = temp
index_time_mode = dbColumn
input_timestamp_column_number = 1
input_timestamp_format = yyyy-MM-dd HH:mm:ss.SSS
interval = */5 * * * *
mode = batch
query = select to_char(ORIGINATING_TIMESTAMP,'YYYY-MM-DD HH24:MI:SS.FF3') AS TIME, MESSAGE_TEXT\
from v$diag_alert_ext\
where component_id like '%rdbms%'\
and message_text like '%SNO mismatch for LOCAL TRAN%'\
and originating_timestamp>sysdate-1/288;
query_timeout = 300
sourcetype = csv
fetch_size = 10000

Running the SQL statement in SQL Explorer, I receive a result set like this:

TIME                     MESSAGE_TEXT
2020-10-21 15:30:08.379  SNO mismatch for LOCAL TRAN 14.4.82391
2020-10-21 15:31:07.907  SNO mismatch for LOCAL TRAN 11.30.78254
2020-10-21 15:31:08.709  SNO mismatch for LOCAL TRAN 27.33.68134
2020-10-21 15:31:42.296  SNO mismatch for LOCAL TRAN 20.28.52198

But every scheduled run of the DB input fails with this error:

2020-10-21 15:30:33.144 +0200 [QuartzScheduler_Worker-21] ERROR org.easybatch.core.job.BatchJob - Unable to process Record: {header=[number=1, source="ALERT_SNO_MISMATCH", creationDate="Wed Oct 21 15:30:33 CEST 2020"], payload=[HikariProxyResultSet@1360101480 wrapping oracle.jdbc.driver.ForwardOnlyResultSet@409d22a]}
2020-10-21 15:30:33.144 +0200 [QuartzScheduler_Worker-21] ERROR org.easybatch.core.job.BatchJob - Unable to process Record: {header=[number=2, source="ALERT_SNO_MISMATCH", creationDate="Wed Oct 21 15:30:33 CEST 2020"], payload=[HikariProxyResultSet@1360101480 wrapping oracle.jdbc.driver.ForwardOnlyResultSet@409d22a]}

The error repeats for each row, and in the end nothing gets stored in the index. Any idea what I'm doing wrong?
Thanks in advance, Aaron
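One frequent cause of per-record "Unable to process Record" failures in batch mode is the timestamp column: with index_time_mode = dbColumn, column 1 must parse with the configured input_timestamp_format on every row, so it is worth sanity-checking that the Oracle to_char mask and the Java format string really line up. This is only a guess at the cause, but a rough Python equivalent of the configured pattern makes it easy to test sample values by hand:

```python
from datetime import datetime

# Rough strptime equivalent of the DB Connect (Java-style) format
# yyyy-MM-dd HH:mm:ss.SSS -- %f accepts the 3-digit fractional second.
PATTERN = "%Y-%m-%d %H:%M:%S.%f"

def parses(value):
    """True if a TIME value from the query matches the configured timestamp format."""
    try:
        datetime.strptime(value, PATTERN)
        return True
    except ValueError:
        return False
```

If any row's TIME value fails this check (for example a NULL, or an Oracle default format sneaking in), the whole record is rejected at indexing time.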
While setting up a search head, the apps for the certificate, web server, and search head base config are getting deleted when I try to start the Splunk service. Also, ./splunk start shows that only the splunk daemon is starting, not the web interface. Any suggestions would be highly appreciated.
Hi, I have a problem: I have 2 timecharts, timechart1 and timechart2. I want zooming in on timechart1 to trigger the same zoom on timechart2, so that both timecharts show data for the same time range. Please make sure you understand my question and don't send an irrelevant solution from a different issue posted on this forum. The "selection" solution does not work, because it zooms only the second chart; in the first chart we only see the selection, and its time range does not change.
Reposting this in the correct area of the forum.
I have made a tech add-on that polls an API. In order to perform requests against the API, an API key is required. I used the following example to get the URL and the API key for the request saved in Splunk: https://github.com/splunk/Developer_Guidance_Setup_View_Example
The setup page works as expected based on that example, and I am able to populate data correctly with it. Afterwards, when I look at the app-specific conf files (i.e. /opt/splunk/etc/apps/TA-eg/local/ta_eg_settings.conf), I see that the api_key is starred out, i.e. encrypted at rest:

[additional_parameters]
api_key = ********
eg_domain = <correct_plaintext_domain>
disabled = 0

The following files are also present in /opt/splunk/etc/apps/TA-eg/local/: app.conf, inputs.conf, passwords.conf
However, if I attempt to use the Python helper function get_global_setting("api_key") as defined in https://docs.splunk.com/Documentation/AddonBuilder/3.0.2/UserGuide/PythonHelperFunctions, it always returns the string "undefined" instead of the correct API key that I added. In TA-eg/bin/TA_eg_rh_settings.py I have also set encrypted = True for the correct field:

fields_additional_parameters = [
    field.RestField(
        'eg_domain',
        required=True,
        encrypted=False,
        default='',
        validator=validator.String(
            min_len=0,
            max_len=8192,
        )
    ),
    field.RestField(
        'api_key',
        required=True,
        encrypted=True,
        default='',
        validator=validator.String(
            min_len=0,
            max_len=8192,
        )
    )
]

Please help me figure out how to get the correct output of this encrypted-at-rest API key using the Splunk helper functions.
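For what it's worth, when a setup field is marked encrypted, the plaintext is moved into passwords.conf and is served back through Splunk's storage/passwords REST endpoint rather than through the settings conf file, so anything that only reads the conf sees the masked value. As a cross-check, you can query /servicesNS/nobody/<app>/storage/passwords?output_mode=json with an authenticated session and pull out clear_password. A sketch of just the response-parsing step; the realm and username under which Add-on Builder files the credential vary, so inspect your passwords.conf to see what to match on:

```python
import json

def find_clear_password(storage_passwords_json, username):
    """Extract the decrypted value for one credential from the JSON returned by
    /servicesNS/nobody/<app>/storage/passwords?output_mode=json."""
    doc = json.loads(storage_passwords_json)
    for entry in doc.get("entry", []):
        content = entry.get("content", {})
        if content.get("username") == username:
            return content.get("clear_password")
    return None
```

The actual GET would go through requests or splunklib with a session key; the parsing above is the part that stays the same.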
I have 2 columns in a table, each of which has 1 multivalue field:

ColumnA  ColumnB
abc      abc
def      ghi
ghi
jkl

I want to create a new column that is the "except" of the two multivalue columns, i.e. per the table above the result should be def, jkl. I know we can use

| eval columnC = mvfilter(!match(column, searchText))

to achieve something similar, but searchText needs to be static and I can't use another field's value. How can I achieve the result I want?
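The operation being asked for is a set difference between two multivalue fields. In Splunk 8.0+ one common pattern is mvmap over ColumnA with an mvfind against ColumnB (since mvfilter can only reference the one field it is filtering); the intended semantics are easy to pin down in a few lines of Python:

```python
def mv_except(column_a, column_b):
    """Values of column_a that do not appear in column_b, keeping column_a's order."""
    exclude = set(column_b)
    return [v for v in column_a if v not in exclude]

result = mv_except(["abc", "def", "ghi", "jkl"], ["abc", "ghi"])  # -> ["def", "jkl"]
```

Whatever SPL construction you end up with should reproduce exactly this behavior on the example table.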
Hi, I have a clustered environment: a search head cluster with 1 forwarder, 3 search heads, and 2 indexers. I have deployed a custom-built app on the forwarder. I would like to schedule saved searches on the search heads. How can I achieve this from the Splunk UI? Or is there another way besides the Splunk UI?
I have made a tech add-on that polls an API. In order to perform requests against the API, an API key is required. I used the following example to get the URL and the API key for the request saved in Splunk: https://github.com/splunk/Developer_Guidance_Setup_View_Example
When I look at the app-specific conf files (i.e. /opt/splunk/etc/apps/TA-eg/local/ta_eg_settings.conf), I see that the api_key is starred out, i.e. encrypted at rest:

[additional_parameters]
api_key = ********
eg_domain = <correct_plaintext_domain>
disabled = 0

However, if I attempt to use the Python helper function get_global_setting("api_key") as defined in https://docs.splunk.com/Documentation/AddonBuilder/3.0.2/UserGuide/PythonHelperFunctions, it always returns the string "undefined" instead of the correct API key that I added.
In TA-eg/bin/TA_eg_rh_settings.py I have also set encrypted = True for the correct field:

fields_additional_parameters = [
    field.RestField(
        'eg_domain',
        required=True,
        encrypted=False,
        default='',
        validator=validator.String(
            min_len=0,
            max_len=8192,
        )
    ),
    field.RestField(
        'api_key',
        required=True,
        encrypted=True,
        default='',
        validator=validator.String(
            min_len=0,
            max_len=8192,
        )
    )
]

Please help me figure out how to get the correct output using the Splunk helper functions.
I have the Splunk Upgrade Readiness App 2.2.1 installed on Splunk Enterprise 7.3.3. I am scanning a single app, and the scan is still not complete after 8 hours. The status shows: "0% complete, Starting a new scan." I also tried triggering the scan via the REST API as described in the docs, but the scan did not complete that way either: https://docs.splunk.com/Documentation/UpgradeReadiness/2.2.0/Use/API
What am I missing in the steps? Any suggestions? Thanks
Hello, I would like to create an alert for the following: someone logs in to the system (event login = successful login), and I want to check whether, within 5 minutes of this event, any user or group was created by a user who is not a member of the admin group.
Or another version: if notification X was triggered, and a notification about a new user or new group (created by a non-admin) was triggered within 1 hour before or 1 hour after notification X's timestamp, then generate an alert.
Could you please provide some tips for this case?
Best regards, Jacob
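In SPL this kind of correlation is typically built with transaction, or with a subsearch/streamstats over a time window, but it helps to nail down the matching rule first. The check is: for each successful login, does a user/group-creation event by a non-admin fall inside the window that starts at the login? A small Python sketch of that rule, with invented event tuples:

```python
from datetime import datetime, timedelta

def correlated_alerts(logins, creations, window_minutes=5):
    """For each successful login, find user/group creations by a non-admin that
    happen within `window_minutes` after it. Events are (time, user, is_admin)."""
    window = timedelta(minutes=window_minutes)
    alerts = []
    for t_login, login_user, _ in logins:
        for t_create, creator, creator_is_admin in creations:
            if not creator_is_admin and t_login <= t_create <= t_login + window:
                alerts.append((login_user, creator, t_create))
    return alerts
```

The "1 hour before and after" variant is the same rule with the window widened to t_login - 1h .. t_login + 1h.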
Hello all, I have grouped some similar business transactions. Now, returning to "All BTs", I don't want to keep the ones I have already included in the group, but after deleting them from the "All BTs" section, the BTs disappeared from the group as well. Is there any way to keep them in the group while deleting them from the All BTs section, to keep that list of BTs minimal?
Another query: when I choose the filter to show groups only, I am not able to see the BTs under the groups; the groups are blank. But when I clear the filter and select the groups, I can find the BTs inside the group. Any idea why this is happening? Thanks in advance.
Hi, I just upgraded our Splunk server to 8.1.0 and after a while realized some of our good old searches used in a couple of dashboards had stopped working. After a lot of digging, the problem seems to be narrowed down to tags not working as they did in 8.0.6. The code below serves as an example:

index="myindex" summaryline
| stats count(eval(STATUS_CODE=10 OR STATUS_CODE=42)) AS "No-go" count(eval(STATUS_CODE=50)) AS OK by tag::CUSTOMER_CODE

The idea is that summaryline is just a keyword to match the appropriate lines of data. The lines populate the fields CUSTOMER_CODE and STATUS_CODE, and tag::CUSTOMER_CODE is defined based on the value of CUSTOMER_CODE under Settings > Tags.
Running the above in Verbose mode works, even though previously I didn't need to include CUSTOMER_CODE in the fields list. Running in Fast mode produces nothing for the same data, but results are shown if statistics are calculated "by CUSTOMER_CODE". For this reason, the dashboard using the above construct stopped producing the data we wanted.
There is an obvious workaround using a lookup, and the example search then looks like this:

index="myindex" summaryline
| lookup CustomerLookup.csv CUSTOMER_CODE OUTPUT CUSTOMER AS Customer
| stats count(eval(STATUS_CODE=10 OR STATUS_CODE=42)) AS "No-go" count(eval(STATUS_CODE=50)) AS OK by Customer

CustomerLookup.csv then holds the columns CUSTOMER_CODE, CUSTOMER, and possibly others, separated by commas. I tested, and I could not output the CUSTOMER column as tag::CUSTOMER, or no results were produced.
Has anyone else seen this kind of behavior, and is there a way to make tags work again like they used to?
We are getting an alert from our Linux team about high swap usage by the splunkd process on the heavy forwarder, which also acts as a syslog server. The following command is reported as the consumer:

splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd

PID   USER    PR  NI  VIRT     RES     SHR    S  %CPU  %MEM  TIME+    COMMAND
5580  splunk  20  0   4966352  878220  15516  S  57.8  5.4   5028:07  splunk

SWAP TOTAL = 4617084928 bytes
SWAP USED = 4617084928 bytes

Please let me know what should be done to fix this issue.
Hello, I'm using DB Connect 3.1.4. Here are the errors we get:

2020-10-21 11:55:00.165 +0100 [QuartzScheduler_Worker-9] ERROR org.easybatch.core.job.BatchJob - Unable to write records
java.io.IOException: HTTP Error 403, HEC response body: {"text":"Invalid token","code":4}, trace: HttpResponseProxy{HTTP/1.1 403 Forbidden [Date: Wed, 21 Oct 2020 10:55:00 GMT, Content-Type: application/json; charset=UTF-8, X-Content-Type-Options: nosniff, Content-Length: 33, Vary: Authorization, Connection: Keep-Alive, X-Frame-Options: SAMEORIGIN, Server: Splunkd] ResponseEntityProxy{[Content-Type: application/json; charset=UTF-8,Content-Length: 33,Chunked: false]}}
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:132)
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:96)
at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36)
at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2020-10-21 11:55:00.165 +0100 [QuartzScheduler_Worker-9] INFO org.easybatch.core.job.BatchJob - Job 'XXXXXXXXXXXX' finished with status: FAILED

Execute SQL works and the connection to the DB is good, but I don't see any new data coming in. I have already checked the timestamp and tried adding the following to db_inputs.conf:

token = [My_Token_Number]

Please assist. Thanks, Hen
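The HEC response body itself names the failure: {"text":"Invalid token","code":4} is HEC's documented "Invalid token" status, which means the token DB Connect is sending does not match any enabled HEC token on the receiving instance (rather than, say, a data or index problem). A small decoder for the common status codes; the table follows Splunk's HEC troubleshooting documentation, but verify it against your version:

```python
import json

# Common HEC status codes (per Splunk's HEC troubleshooting docs; verify for your version).
HEC_CODES = {
    0: "Success",
    1: "Token disabled",
    2: "Token is required",
    3: "Invalid authorization",
    4: "Invalid token",
    5: "No data",
    6: "Invalid data format",
    7: "Incorrect index",
}

def explain_hec_error(body):
    """Translate a HEC response body like {"text":"Invalid token","code":4}."""
    payload = json.loads(body)
    code = payload.get("code")
    return HEC_CODES.get(code, "Unknown code {}: {}".format(code, payload.get("text")))
```

So the next step would be comparing the token configured for DB Connect against the tokens listed under Settings > Data inputs > HTTP Event Collector, and confirming that token (and HEC globally) is enabled.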
As per https://docs.microsoft.com/en-us/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365#how-to-set-up-smtp-auth-client-submission, my email server settings are as follows:

Mail host: smtp.office365.com:587 (I've also tried using port 25 or omitting it entirely)
Email security: Enable TLS
Username: UPN of an Exchange Online-licensed 365 account
Password: Password of the same Exchange Online-licensed 365 account

I created an alert normally and configured it to send an email. However, the log reports two problems (probably related):

1. It's trying to use server="localhost".
2. "SMTP AUTH extension not supported by server."

The advanced settings of the alert show that it is aware of the 365 SMTP server. From https://community.splunk.com/t5/Alerting/How-to-get-Office-365-integrated-as-my-SMTP-server-for-Splunk/m-p/212800, I know it's normal to have to specify the server when using sendmail in a query, and the advanced setting action.email.command uses sendmail but doesn't reference the server, so I appended server=$action.email.mailserver$ and even explicitly server=smtp.office365.com:587, but neither made a difference. Can anyone help?
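As a way to split the problem in half, it can help to confirm outside Splunk that the account can do SMTP AUTH client submission at all: 365 requires STARTTLS on port 587 followed by LOGIN, and "SMTP AUTH extension not supported by server" is exactly what a client sees when it tries to authenticate before upgrading to TLS, or when it is actually talking to localhost instead of Office 365. A minimal smtplib sketch, with placeholder account details:

```python
import smtplib
import ssl
from email.message import EmailMessage

# Placeholder account details -- substitute your Exchange Online-licensed 365 account.
SMTP_HOST, SMTP_PORT = "smtp.office365.com", 587
USERNAME = "alerts@example.com"
PASSWORD = "password-goes-here"

def build_alert_mail(subject, body, to_addr):
    """Minimal message; 365 expects From to match (or be authorized for) the login account."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = USERNAME
    msg["To"] = to_addr
    msg.set_content(body)
    return msg

def send(msg):
    """SMTP AUTH client submission: connect, upgrade with STARTTLS, then authenticate."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls(context=ssl.create_default_context())
        smtp.login(USERNAME, PASSWORD)  # before starttls() this raises the AUTH error
        smtp.send_message(msg)
```

If send(build_alert_mail(...)) succeeds from the Splunk host, the credentials and network path are fine and the remaining problem is Splunk's mail server setting (the server="localhost" in the log suggests the configured mail host is not being picked up).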