All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Can UBA be used to collect performance data from workstations too, e.g. CPU, memory, etc.? If so, can you point me to the relevant documentation?
Hi Team, I believe Tata Communications has launched its services for Microsoft Teams, so direct calling features would be available from Teams. Is there any possibility of integrating this with Splunk, or are there any apps or add-ons to pull the logs and ingest them into Splunk? Since we are planning to opt for this feature in the coming days, we want to have monitoring set up for it as well. Kindly help with my request.
Hi Team, I am trying to integrate Shodan notifications (webhooks) with Splunk. I have configured an HTTP Event Collector in Splunk Enterprise with a unique index name. I am using the query-string authentication mechanism because I want to use the Splunk URI as the webhook URL in the Shodan settings, so Splunk will receive the data:

curl -k "https://127.0.0.1:8088/services/collector/raw?token=5a144245-e893-4c08-8bde-94c36c0376f5" -d "JSON_DATA_HERE" -H "X-Content-Type: CustomjsonPayload"

When I send this curl request to test POSTing JSON data, the data shows up in events (search query "index=hec"). However, reading the Shodan API documents, they mention that every POST request carries unique HTTP headers describing the notification that generated it. I have no control over the Shodan POST request (it is generated by their servers), so I want to capture both the HTTP headers and the POST data in the Splunk events. Link for reference: https://help.shodan.io/developer-fundamentals/monitor-webhooks

Snippet from the Shodan link above:

"Receiving the data: You've got your web service up and running, you've registered and enabled your webhook, and now it's time to actually process the incoming data that Monitor will send. The webhook notification does a POST request to your URL where: the body of the POST request contains a JSON-encoded banner; the header of the POST request contains information about the alert. ... The headers contain metadata about the alert to help you understand which alert was responsible for generating the notification. Specifically, the following headers are available in the POST request:
SHODAN-ALERT-ID: unique ID for the alert
SHODAN-ALERT-NAME: name for the alert
SHODAN-ALERT-TRIGGER: trigger that caused the notification to get sent
SHODAN-SIGNATURE-SHA1: SHA1 signature encoded using your API key to validate the notification's origin"

So I want to capture the HTTP headers in the Splunk events as well. How can I do that? Or, if there is another way of getting data from webhooks, please let me know. Thanks in advance.
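Since the raw HEC endpoint used above only ingests the request body, one common pattern is a small relay in front of HEC that merges the webhook's headers into the event before forwarding it. The sketch below only builds the combined payload (the part that would be POSTed to HEC's `/services/collector/event` endpoint); the function name, index, and sourcetype are illustrative, and only the header names come from the Shodan docs quoted above.

```python
import json

# Sketch: combine a webhook's HTTP headers and JSON body into one HEC event.
# A relay would receive the Shodan POST, call build_hec_event(), and forward
# the result to https://<splunk>:8088/services/collector/event.

SHODAN_HEADERS = (
    "SHODAN-ALERT-ID",
    "SHODAN-ALERT-NAME",
    "SHODAN-ALERT-TRIGGER",
    "SHODAN-SIGNATURE-SHA1",
)

def build_hec_event(headers, body, index="hec", sourcetype="shodan:webhook"):
    """Wrap the notification headers and the JSON banner into a single event."""
    event = {
        "banner": json.loads(body),
        "headers": {k: v for k, v in headers.items() if k.upper() in SHODAN_HEADERS},
    }
    return {"index": index, "sourcetype": sourcetype, "event": event}

# Example payload, using synthetic header/banner values:
payload = build_hec_event(
    {"SHODAN-ALERT-ID": "ABC123", "SHODAN-ALERT-TRIGGER": "new_service"},
    '{"ip_str": "198.51.100.7", "port": 443}',
)
print(json.dumps(payload))
```

With this shape, both the banner fields and the alert headers become searchable under one event, rather than the headers being lost at the raw endpoint.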
We are using the Splunk Add-on for AWS (version 5.0.3) and Splunk version 8.0.8. We would like to leverage the add-on to consume data from a Kinesis stream and internally send the data to a Splunk HEC endpoint. When it sends data to the internal HEC endpoint (port 8088), it throws the error below because of the self-signed certificate being used for Splunk HEC. Does anyone know how to disable SSL certificate validation in the add-on?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktalib/event_writer.py", line 252, in write_events
    data=event, http=self._http)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktalib/rest.py", line 31, in splunkd_request
    data, timeout, retry)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktalib/rest.py", line 62, in urllib3_request
    data, timeout, retry, urllib3_req)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktalib/rest.py", line 97, in do_splunkd_request
    raise e
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktalib/rest.py", line 93, in do_splunkd_request
    data, timeout)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktalib/rest.py", line 57, in urllib3_req
    preload_content=True)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/request.py", line 80, in request
    method, url, fields=fields, headers=headers, **urlopen_kw
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/request.py", line 171, in request_encode_body
    return self.urlopen(method, url, **extra_kw)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/poolmanager.py", line 330, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/connectionpool.py", line 760, in urlopen
    **response_kw
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/util/retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8088): Max retries exceeded with url: /services/collector (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))
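For context on what "disabling SSL certificate validation" means at the Python level: the traceback above is urllib3 rejecting HEC's self-signed certificate during the TLS handshake. The sketch below shows the standard-library equivalent of turning that validation off; it only illustrates the failure mode, and whether the add-on itself exposes such a toggle depends on its version, so this is not presented as the add-on's own configuration.

```python
import ssl

# Sketch: an SSL context that skips the checks which produced
# CERTIFICATE_VERIFY_FAILED above. Order matters: hostname checking must be
# disabled before the verify mode can be set to CERT_NONE.

def insecure_context():
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # do not require the cert to match the host
    ctx.verify_mode = ssl.CERT_NONE  # do not validate the certificate chain
    return ctx

ctx = insecure_context()
```

The safer alternative is to keep validation on and add the self-signed certificate (or its CA) to the trust store the add-on's bundled urllib3 uses, so the chain verifies instead of being ignored.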
I have installed the Akamai SIEM Integration Add-On (4310) on our Splunk Cloud IDM.  When I go to Settings > Data Inputs, I see no input for Akamai to configure.  What am I missing?
Hi, we have set up a distributed Splunk 8.1.3 cluster deployment in AWS, with the monitoring console configured as a separate search head. We often have to patch our regular search heads, and each time new SHs come up they have to be added to the monitoring console node manually. Instead, is there a way for a search peer to run a REST call against the monitoring console node to add itself?
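The monitoring console discovers instances that are configured as distributed search peers, so one approach is for the new search head to register itself as a peer of the MC via REST at boot. The sketch below only builds the request; the endpoint path and parameter names are assumptions based on Splunk's distributed-search REST API and should be verified against your Splunk version, and all hostnames and credentials are placeholders.

```python
from urllib.parse import urlencode

# Sketch: build the POST that would add a search head as a distributed search
# peer of the monitoring console. Sending it (e.g. with curl or urllib) is left
# out so no real endpoint is assumed to be reachable.

def build_add_peer_request(mc_host, peer, username, password):
    url = f"https://{mc_host}:8089/services/search/distributed/peers"
    body = urlencode({
        "name": peer,                # e.g. "new-sh.example.com:8089"
        "remoteUsername": username,  # account on the peer being added
        "remotePassword": password,
    })
    return url, body

url, body = build_add_peer_request(
    "mc.example.com", "new-sh.example.com:8089", "admin", "changeme"
)
```

Note that after adding a peer you would still need to apply the MC's server-role settings, which the monitoring console normally does through its setup page.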
Hi, I'm using the following datamodel search:

| datamodel Test_Ping_Access summariesonly=true search
| search "Ping_Access__TEST.date"=$date$
| stats count(exchangeId)

I've confirmed it's working, but when I use it in a macro I get this error:

Error in 'SearchParser': The datamodel command can only be used as the first command on a search

How can I resolve this?
Brute Force and Spray attacks - use case

1- Multiple accounts with failed logons from the same IP, within 1 minute
2- A single account with failed logons from multiple hosts, within 1 minute
3- One user account failing to log on to 3 destination hosts, within 2 minutes
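Use case 1 above (many distinct accounts failing from one IP inside a 60-second window) can be sketched outside SPL to make the windowing logic concrete. The event shape and thresholds below are illustrative; in Splunk this usually maps to a stats/streamstats search with `dc(user)` bucketed over the window.

```python
from collections import defaultdict

# Sketch: flag source IPs that have failed logons for several distinct
# accounts inside a sliding 60-second window. Events are (epoch, ip, account).

def multi_account_failures(events, window=60, min_accounts=3):
    by_ip = defaultdict(list)
    for ts, ip, account in sorted(events):
        by_ip[ip].append((ts, account))
    hits = set()
    for ip, rows in by_ip.items():
        for ts, _ in rows:
            # distinct accounts seen within `window` seconds of this failure
            accounts = {a for t, a in rows if ts <= t < ts + window}
            if len(accounts) >= min_accounts:
                hits.add(ip)
                break
    return hits

events = [
    (0, "10.0.0.1", "alice"), (10, "10.0.0.1", "bob"),
    (20, "10.0.0.1", "carol"), (300, "10.0.0.2", "dave"),
]
print(multi_account_failures(events))  # {'10.0.0.1'}
```

Use cases 2 and 3 are the same pattern with the grouping key and distinct-count field swapped (account grouping with distinct source hosts or destination hosts).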
Can anyone help me write a Splunk query that shows the duration of an outage? If I search for 5XX or 4XX errors over an hour, the errors may only cover a 5- or 10-minute period within it. For example, I searched for 500 errors from 10pm to 11pm; within that hour, the errors ran from 10:15pm to 10:45pm. I only want the period (10:15pm to 10:45pm), not the logs themselves. How do I write that query?
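The question above reduces to "earliest and latest error timestamps in the search window", which in SPL is typically a `stats min(_time) max(_time)` over the error events. The same computation in plain Python, with synthetic timestamps matching the 10:15pm-10:45pm example:

```python
from datetime import datetime

# Sketch: derive the outage window and its duration from error-event times.

def outage_window(error_times):
    """Return (start, end, duration_minutes) across the error events."""
    start, end = min(error_times), max(error_times)
    return start, end, (end - start).total_seconds() / 60

times = [
    datetime(2021, 6, 1, 22, 15),
    datetime(2021, 6, 1, 22, 30),
    datetime(2021, 6, 1, 22, 45),
]
start, end, minutes = outage_window(times)
print(start.strftime("%H:%M"), end.strftime("%H:%M"), minutes)  # 22:15 22:45 30.0
```

One caveat this shares with the SPL version: min/max gives a single contiguous window, so an hour containing two separate error bursts would need the events split into groups (e.g. by gaps between consecutive errors) first.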
I have a few devices for which I cannot set reverse DNS lookup values, so the IP address shows up within Splunk. I'm forwarding the logs via Splunk Connect for Syslog and am attempting to set these hostnames within the /opt/sc4s/local/context/host.csv file. I found this example within the Git repository:

169.254.0.2,HOST,foo.example

My updated version of this file isn't working as expected. I found Git patches where HOST was updated to SOURCEIP; however, I'm not certain how that change would impact the formatting of the host.csv file. I've tried a few different formats but nothing has worked. Any suggestions would be appreciated.
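As a sanity check on the file itself, the three-column shape quoted above (ip, key, hostname) can be parsed and validated before handing it to sc4s. The sketch below accepts both the HOST key from the example file and the SOURCEIP key mentioned from the later patches; it is only a format check, and sc4s's own parsing rules may differ, so treat the accepted key names as assumptions.

```python
import csv
import io

# Sketch: parse host.csv-style rows (ip,key,hostname) into an ip -> hostname
# map, skipping malformed lines, so typos are caught before deployment.

def load_host_overrides(text):
    overrides = {}
    for row in csv.reader(io.StringIO(text)):
        if len(row) != 3:
            continue  # malformed line: wrong column count
        ip, key, hostname = (f.strip() for f in row)
        if key in ("HOST", "SOURCEIP"):
            overrides[ip] = hostname
    return overrides

sample = "169.254.0.2,HOST,foo.example\n192.0.2.9,SOURCEIP,bar.example\n"
print(load_host_overrides(sample))
```

Stray whitespace, a missing column, or an unexpected key name would all show up here as missing entries in the returned map.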
I am struggling with subsearches and with correlating data into a single output. I need to figure out which users are using external devices. I have two indexes:

AD authentication logs (computer name and user ID)
Logs for device activity (computer name only)

The device activity logs report only the computer names, and I want a single table that lists the computer name and the user names along with additional fields from the activity logs. I have the following search:

eventtype=device_activity_index sourcetype=syslog_device_control ExternalDeviceType=USB* [search index="windows_dc" Source_Workstation!="server-*" | fields Source_Workstation,user]
| table _time, Tenant, EventName, DeviceName, Source_Workstation, user, ExternalDeviceType, ExternalDeviceName, ExternalDeviceVendorID, ExternalDeviceProductID, ExternalDeviceSN, ZoneNames

Each search on its own works fine and returns results. I specified a specific computer name (Source_Workstation for AD, DeviceName for the activity log) in both searches and confirmed that, when run individually, both indexes contain logs for the same system. I have tried | append [search …] as well as | where DeviceName=[search …], and I get 0 results. As I mentioned, I have been struggling to get subsearches to work, and despite reading the Splunk documentation, Googling, and watching YouTube videos, something is just not clicking; I am not sure what. Any help on getting the above search to work would be greatly appreciated.
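The correlation being described is a lookup-style join keyed on computer name. One likely reason the subsearch matches nothing is that it returns the key as Source_Workstation while the outer events carry it as DeviceName, and subsearch results filter by exact field name. The sketch below shows the intended join in plain Python with synthetic data; the field names mirror the question.

```python
# Sketch: build a computer -> user map from AD logon events, then annotate
# each device-activity event with the user for its computer. In SPL the
# analogous step is renaming the subsearch's Source_Workstation to DeviceName
# (or using lookup/join) so the key names line up.

ad_logons = [
    {"Source_Workstation": "WS01", "user": "alice"},
    {"Source_Workstation": "WS02", "user": "bob"},
]
device_activity = [
    {"DeviceName": "WS01", "ExternalDeviceType": "USB Storage"},
    {"DeviceName": "WS03", "ExternalDeviceType": "USB Storage"},
]

user_by_host = {e["Source_Workstation"]: e["user"] for e in ad_logons}
joined = [
    {**e, "user": user_by_host.get(e["DeviceName"])}  # None if no AD match
    for e in device_activity
]
print(joined)
```

Note the left-join behavior: WS03 has device activity but no AD logon, so its user stays empty rather than the row being dropped, which matches wanting a full activity table with users filled in where known.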
I have been trying to create an alert that triggers whenever the process ID of a process on Linux is null. Because no data is being sent, I assume the process is not running; if it has a process ID, it is running. Working with Telegraf:

| mstats latest(_value) AS value WHERE metric_name="procstat.pid" AND index="telegraf" AND process_name="<process_name>" fillnull_value=0 span=5m BY host, process_name
| timechart latest(value) span=5m BY host
| fillnull <hostnames> value=0
| table _time,<hostnames>

Using the zero/null value formatting, I can pinpoint exactly when the processes are down. However, I couldn't find a way to alert when a host's PID value is null (or 0, due to the fillnull command). Thanks!
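Once fillnull has mapped missing PIDs to 0, the alert condition reduces to "any host whose latest sample is 0", which an alert search can express by filtering rows and triggering on result count > 0. The plain-Python equivalent of that final filter, with synthetic data:

```python
# Sketch: after fillnull, a host is considered "down" when its latest
# procstat.pid sample is 0. An alert then fires if any host survives
# this filter (i.e. the result set is non-empty).

latest_pid = {"web01": 2341, "web02": 0, "db01": 871}

down_hosts = sorted(host for host, pid in latest_pid.items() if pid == 0)
print(down_hosts)  # ['web02']

alert_should_fire = len(down_hosts) > 0
```

Keeping the data in one row per host (as mstats produces before the timechart) makes this filter straightforward; the timechart's one-column-per-host layout is what makes the null check awkward.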
Hello Community! I am trying to get the record count per index per month in Splunk. I am using this search with tstats, because there are millions of records per month and, from what I read, this is more efficient than the stats command:

| tstats count WHERE (index=*) BY index _time span=1mon
| timechart span=1mon count

But I don't know why I am receiving this error:

Error in 'TsidxStats': WHERE clause is not an exact query

Can anyone help me figure out what I am doing wrong? Thanks
I have a couple of custom dashboards with drilldown links to other custom dashboards. When the time range is changed on the parent dashboard and a child dashboard is then opened via drilldown, the time range selected on the parent is not passed on to the child dashboard. Is there a way to do that?
Is there a way to configure timeouts for the adrumExtUrl and beaconUrls? If the adrumExtUrl or beaconUrl is unavailable, I don't want it to affect the monitored application.
Is there a way, that anyone is aware of, to timechart off of a fieldsummary? I can break down the fieldsummary by timecharting first, but I just end up with repeated field names with what look like hashes appended to them, which is weird. I am trying to determine all the NULL fields and present them in a timecharted graph by day. Currently, without the timechart portion, this is what I have:

...| fieldsummary
| search values=*Unknown*
| rex field=values "Unknown"\S"count":(?<null_count>\d+)},
| eval percent_null=(null_count/count)*100
| eval Percent1=100-percent_null
| fields field Percent1 null_count
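For reference, the arithmetic the two eval steps perform can be stated in plain Python: fieldsummary reports a total count per field, the rex extracts how many of those values are the "Unknown" (null placeholder) value, and the evals convert that to percent-null and percent-populated.

```python
# Sketch: the percentage math from the search above, with synthetic counts.

def null_percentages(total, null_count):
    """Return (percent_null, percent_populated) for one field."""
    percent_null = null_count / total * 100
    return percent_null, 100 - percent_null

percent_null, percent_populated = null_percentages(total=200, null_count=50)
print(percent_null, percent_populated)  # 25.0 75.0
```

To trend this per day, the counts would need to be computed per day-bucket before the division, which is why running fieldsummary after a timechart produces the odd per-bucket field names.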
I have a use case where there are over 50 lookup files that I need to 'sync' from one app context to another. The idea is to:

1) read the lookup from the context of App1's search bar
2) outputlookup to a lookup file named 'UPDATE_<lookupname>.csv' that resides in App2's context

The idea is to keep the 50+ lookup file names in a lookup named myLookupFiles within App1, then pass each filename as a field into a macro. So the gist would be:

| inputlookup myLookupFiles
| `mySyncMacro(myLookupFileNameField)`

And the macro would then be something like:

join type=left max=0 [| inputlookup $myLookupFileName$]
| fields - myLookupFileNameField
| outputlookup createinapp=true UPDATE_$myLookupFileName$
| search blarg

Which, of course, doesn't work. Thoughts on a way to iterate across all 50+ file names, when they are specified as values within a table, to create the 50+ lookup files named "UPDATE_<lookupname>.csv"?
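Since a macro can't substitute a field value per row like this (macro arguments expand at parse time, not per result), one common workaround is to do the iteration outside SPL: read the names from myLookupFiles and generate one small sync search per lookup, then run each via the REST API or a script. The sketch below only builds the search strings; the lookup names are placeholders, and the generated SPL mirrors the macro from the question.

```python
# Sketch: generate one "| inputlookup X | outputlookup UPDATE_X" search per
# lookup name. In practice the names would come from myLookupFiles (exported
# or fetched via REST) and each search dispatched in App2's context.

LOOKUP_NAMES = ["assets.csv", "identities.csv", "watchlist.csv"]

def sync_search(name):
    return f"| inputlookup {name} | outputlookup createinapp=true UPDATE_{name}"

searches = [sync_search(n) for n in LOOKUP_NAMES]
for s in searches:
    print(s)
```

This sidesteps the parse-time limitation entirely: each generated search is a fixed string by the time Splunk sees it, so no per-row macro expansion is needed.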
Hello, I want to write an app for Splunk ES that can integrate the investigation toolbar at the bottom and manage investigations within the context of a completely different app. Is there a particular app template I should reference in order to do this?
Hi Splunkers, I am trying to get a customized tooltip shown for all the bars in a bar chart when hovering over a specific legend entry of a specific bar. I have checked different answers, but am still finding this one difficult to crack. For reference, please find the image attached.

1) The tooltip should be customized to show only the number; the other information needs to be removed.
2) On hovering over any one of the bars, tooltips with the value count (number) for the other bars should also be displayed on their respective bars, along with the bar being hovered. For example, in the bar chart below, hovering over the leftmost bar should also bring up the tooltips for the other 3 bars to its right, each on its own bar, where each tooltip contains only the passed (legend) value, i.e. the total count of passed.

Please suggest the available options I can try. Thanks in advance for the support.
When an indexer is restarted during a normal rolling restart, does it shut down via the `offline` method, or does it go through a normal shutdown process? The documentation indicates that during a "searchable" rolling restart each peer is actually offlined, not simply shut down. But during a normal rolling restart, how does the indexer shut down, and how does this work from the cluster master's perspective? I've been unable to find REST calls against the restart or offline endpoints during a rolling restart, but maybe the CM isn't actually making those calls (or they aren't being logged for some reason)?