All Topics


We are investigating the cause of bucket damage. In our environment, I suspect the buckets were corrupted when the server running Splunk was restarted. Please advise on the following two points: What are the typical causes of bucket damage? And how can we avoid bucket damage?
Hi all, I am planning to set up Splunk Enterprise on Google Cloud Platform. The logs from GCP need to be copied to a storage bucket for further analysis; is that possible? I can't find any setup/infrastructure/architecture diagrams. Is this possible? Thanks, R
I want the difference between two epoch timestamps. My fields look like: target.received.end.timestamp 1597115254203 and target.received.start.timestamp 1597115254203. So I need the difference between the end and start timestamps. Any help would be appreciated. Thanks
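A minimal sketch of one approach, assuming the values are epoch milliseconds: because the field names contain dots, they must be wrapped in single quotes inside eval.

```spl
| eval duration_ms = 'target.received.end.timestamp' - 'target.received.start.timestamp'
| eval duration_sec = duration_ms / 1000
```

With the sample values above (both identical), the difference would be 0; for real events the result is the elapsed time in milliseconds.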
Hello, I wanted to implement Network diagram Viz for Splunk Cloud. But is it supported by Splunk Cloud ? Thank you
I have an indexer cluster, and one of the indexers went down for some reason. I am unable to start Splunk on that server; it gives me the error below.

[root@ bin]# ./splunk start
splunkd 21888 was not running.
Stopping splunk helpers...
                                                           [  OK  ]
Done.
Stopped helpers.
Removing stale pid file...
Can't unlink pid file "/opt/splunk/var/run/splunk/splunkd.pid": Read-only file system

The Splunk version is 6.4.2. Kindly help me start the Splunk service.
How can I tell which port my syslog data is arriving on?
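One way to sketch this, assuming the syslog data arrives over Splunk's own network inputs (where the source field records the protocol and listening port, e.g. udp:514):

```spl
| tstats count where index=* by source, sourcetype, host
| search source="udp:*" OR source="tcp:*"
```

If a syslog server writes the data to files first, the source field will show the file path instead, and the receiving port would have to be checked in the syslog daemon's configuration.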
Hi, I am having trouble getting the headers of each column in a trellis layout to change. The names change while the search is updating, but revert back to the dst_port values after the search completes. The table columns are renamed; just not the trellis headers.

index=netflow dst_port IN (21,22,23,445)
| timechart span=12h count by dst_port
| rename 21 as FTP, 22 as SSH, 23 as Telnet, 445 as SMB

Any help much appreciated.
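One possible workaround is to rename the values before the timechart, so the split-by field already carries the friendly names that the trellis headers are built from (field names taken from the original search):

```spl
index=netflow dst_port IN (21,22,23,445)
| eval service=case(dst_port==21,"FTP", dst_port==22,"SSH", dst_port==23,"Telnet", dst_port==445,"SMB")
| timechart span=12h count by service
```

This avoids the post-search rename entirely, so there is nothing for the trellis to revert to.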
Hi All, I've hunted around for answers but haven't been able to figure it out. What I am trying to do is stack multiple Single Values vertically and adjust their width. I've tried two methods. The first uses JS like so:

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Grab the DOM for the first dashboard row
    var firstRow = $('.dashboard-row').first();
    // Get the dashboard cells (the parent elements of the actual panels, which define the panel size)
    var panelCells = $(firstRow).children('.dashboard-cell');
    // Adjust the cells' width
    $(panelCells[0]).css('width', '10%');
    $(panelCells[1]).css('width', '20%');
    $(panelCells[2]).css('width', '45%');
    $(panelCells[3]).css('width', '25%');
});

And another using CSS like below:

<style>
    #second_row_stats, #second_row_perc {
        width: 15% !important;
    }
</style>

Either method works if there is only one Single Value in the panel: the Single Value width adjusts to match the panel width. However, if there is more than one Single Value in the same panel, the panel width is adjusted but the Single Values don't expand to fill it. The first row is using the JS, and with single Single Values it adjusts properly. The second row is using the CSS, but with multiple Single Values you can see there are gaps. Switching between JS and CSS makes no difference. I've tried display: grid too, but that didn't help; it just ensured the values were stacked vertically. Any help would be great. Thanks!
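One CSS direction worth trying, on the assumption that the gaps come from the inner Single Value elements keeping their default width while only the outer panel cell is resized (the inner selector here is a guess at Splunk's generated markup and may vary by version):

```css
#second_row_stats .single-value,
#second_row_perc .single-value {
    width: 100% !important;
}
```

The idea is to target the viz elements inside the panel rather than only the panel IDs, so each Single Value stretches to the full width of its resized cell.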
Hello, I'm using Splunk to send notifications when a Cisco VPN account connects. Currently I have to add each account to the rule individually. I want to configure the rule so that when the username contains a specific word, Splunk sends a notification. How can I do that? Currently I'm using:

Device IP Address=x.x.x.x Passed Authentications UserName="firstname1.lastname1.vpn" OR UserName="firstname2.lastname2.vpn"
| stats values(UserName) as user by UserName
| table UserName

The .vpn suffix is common to all these users, so I want to check whether any account ending in .vpn connected; then Splunk should run the rule and send me an email notification.
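A minimal sketch of the wildcard approach, assuming every VPN account name ends in .vpn (the base search terms are kept from the original post):

```spl
Device IP Address=x.x.x.x Passed Authentications UserName="*.vpn"
| stats count by UserName
```

With UserName="*.vpn" there is no need to list accounts individually; any new firstname.lastname.vpn account matches automatically, and the alert can fire whenever the search returns results.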
Hello, I have a lookup table that I exported from another report, with the fields IP_ADDRESS and CountOfUserID. I'm trying to find IP addresses in another index, msad, using primarily the fields ClientIP and UserId, which do not appear in the lookup table. So, if IP_ADDRESS and ClientIP match, throw the data out, and return a list of the leftover IP_ADDRESS values. I'm running into issues where either the search returns the opposite of what I want (IP addresses that appear in both datasets) or nothing at all. Does anyone know how to work the logic on this? I feel like I've tried everything. Thanks,
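One way to sketch the "anti-join": start from the lookup and subtract any IP seen in the msad index via a NOT subsearch. The lookup file name ip_lookup.csv here is a placeholder; substitute your actual lookup name.

```spl
| inputlookup ip_lookup.csv
| search NOT
    [ search index=msad
      | stats count by ClientIP
      | rename ClientIP as IP_ADDRESS
      | fields IP_ADDRESS ]
| table IP_ADDRESS CountOfUserID
```

Renaming ClientIP to IP_ADDRESS inside the subsearch is what makes the generated NOT conditions match against the lookup's field, leaving only the IP_ADDRESS values that never appeared in msad.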
Having trouble finding an answer for this one, but is it possible to change just the cold database location to a NAS for a Windows deployment? The System Requirements page states that we shouldn't use mapped drives: "Do not index data to a mapped network drive on Windows (for example, "Y:\" mapped to an external share). Splunk Enterprise disables any index it encounters with a non-physical drive letter." If that's the case, should the volume stanza in indexes.conf use the UNC path, like the following?

[volume:NAS]
path = \\NAS01\

[main]
homePath = $SPLUNK_DB\defaultdb\db
coldPath = volume:NAS\Database\coldDb

Any help would be much appreciated.
Hi Everyone, This might be straightforward and I'm missing it, but my current query is below and I am not getting the correct results. Any thoughts? The end goal is to get everything with a Status of Done and a blank Resolution.

| eval done_null = if(Status="Done" AND Resoloution!="*",Score,"0")
| stats sum(done_null) as Done_Null by time
| table time, Done_Null
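A hedged correction, assuming "blank" means the Resolution field is null or an empty string. Two issues stand out in the original: the field name is misspelled (Resoloution), and Resolution!="*" does not test for blank; also, keeping the fallback numeric lets sum work cleanly.

```spl
| eval done_null = if(Status="Done" AND (isnull(Resolution) OR Resolution=""), Score, 0)
| stats sum(done_null) as Done_Null by time
| table time, Done_Null
```

Note that != also silently drops events where the field is missing, which is another reason the original returned unexpected results; isnull() handles that case explicitly.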
I installed version 3.0.1 of the Microsoft Azure Add-on for Splunk on one of our heavy forwarders. I was able to configure and get all the inputs working except "Microsoft Azure Active Directory Sign-ins", "Microsoft Azure Active Directory Users", and "Microsoft Azure Active Directory Audit" (I'm trying to avoid using the Event Hub because of the necessary firewall rules, etc.). All of our Splunk servers are Linux and we have version 8.0.5 installed. The error I'm seeing is:

2020-08-10 16:25:11,467 ERROR pid=23994 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_signins.py", line 88, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_MS_AAD_signins.py", line 85, in collect_events
    sign_in_response = azutils.get_items_batch(helper, access_token, url)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/utils.py", line 55, in get_items_batch
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/utils.py", line 49, in get_items_batch
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://graph.microsoft.com/beta/auditLogs/signIns?$orderby=createdDateTime&$filter=createdDateTime+gt+2020-08-09T16:25:10.822417Z+and+createdDateTime+le+2020-08-10T21:18:11.232754Z

I assume this means the necessary permissions are not in place in Azure. Our Azure admin followed the "Setup an Azure AD Application Registration" documentation built into the app (which is really nice, btw). He double-checked, and he did everything in the instructions.
Any ideas on what we might be missing? I looked at the troubleshooting search and didn't come across anything that seemed to spell out what the problem was (beyond what the error above indicates).  
How do I use rex to extract the virus info so that I can display this info in my splunk dashboard?
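Without a sample event it's hard to give an exact rex, so this is purely an illustrative sketch: it assumes the raw events contain text like "Virus: W32/Example.A", and the field name virus_name is made up for the example.

```spl
| rex field=_raw "Virus:\s+(?<virus_name>\S+)"
| stats count by virus_name
```

Adjust the pattern to match the actual log format; the named capture group (?<virus_name>...) is what creates the field you can then table or chart on the dashboard.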
Hi, I have a dashboard. My requirement is that when a user selects the value "splunk" in a multiselect, my panel query should search field=$token_of_multiselect$ OR field="*report*" OR field="*dashboard*". By default my panel query uses field="*splunk*"; I want to add an OR condition so that when the token value "splunk" is selected, it also adds "report" and "dashboard". For example, the component below returns "splunk" and some other process values. My current panel query (which feeds a table) is:

index=abcde sourcetype=efghi (description="*$token_of_multiselect$*" OR description_1="*$token_of_multiselect$*")

So when "splunk" is selected in the component, I want the query to become:

index=abcde sourcetype=efghi (description="*splunk*" OR description_1="*splunk*" OR description="*report*" OR description_1="*dashboard*")

Please write sample code in your answer.
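One sketch using a Simple XML change handler on the input to build a second token that is either the extra OR clause or empty (the token name extra_filter is made up for this example, and the exact-value condition only fires when "splunk" alone is selected):

```xml
<input type="multiselect" token="token_of_multiselect">
  <change>
    <condition value="splunk">
      <set token="extra_filter">OR description="*report*" OR description_1="*dashboard*"</set>
    </condition>
    <condition>
      <set token="extra_filter"> </set>
    </condition>
  </change>
</input>
```

The panel search would then reference both tokens: index=abcde sourcetype=efghi (description="*$token_of_multiselect$*" OR description_1="*$token_of_multiselect$*" $extra_filter$). Multiselect change values are the joined selection, so a match rule more tolerant than an exact value may be needed when multiple items are selected.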
Is 192.168.1.111 the source or destination IP Address?
So I have a CSV file that is generated while a program runs. The first 4 columns always have the same headers: DATE,TIME,LOCATION,EVENT. After that there can be 4-8 more fields with unique headers, depending on the value of the EVENT field. For example, if EVENT = 1, the fields and headers would be DATE,TIME,LOCATION,EVENT,speed,tilt. If EVENT = 2, they would be DATE,TIME,LOCATION,EVENT,altitude,latitude,longitude. I am not sure how to get this data in with the proper field headers. Example:
2020/03/03,13:40:23,Ohio,1,200,30
2020/03/03,13:40:23,Ohio,2,33000,40.4173,82.9071
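Since index-time CSV header extraction assumes one fixed header row, one search-time sketch is to ingest each line raw and map positional fields per event type (field names are taken from the question; adjust the mvindex positions if the layout differs):

```spl
| eval parts=split(_raw, ",")
| eval DATE=mvindex(parts,0), TIME=mvindex(parts,1), LOCATION=mvindex(parts,2), EVENT=mvindex(parts,3)
| eval speed=if(EVENT="1", mvindex(parts,4), null())
| eval tilt=if(EVENT="1", mvindex(parts,5), null())
| eval altitude=if(EVENT="2", mvindex(parts,4), null())
| eval latitude=if(EVENT="2", mvindex(parts,5), null())
| eval longitude=if(EVENT="2", mvindex(parts,6), null())
```

Applied to the two sample lines, the EVENT=1 row gets speed=200 and tilt=30 while the EVENT=2 row gets altitude=33000, latitude=40.4173, longitude=82.9071; this logic could also be packaged as calculated fields in props.conf.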
Hi, I have created a dashboard, say "mydashboard". I have one custom app with Search, Dashboard, and App Health tabs. When I saved mydashboard, it landed under the Dashboard tab. How can I move this dashboard to the App Health tab? I have only Splunk UI access (Splunk developer access only).
I'd like to send audit data through an event hub. However, I want my heavy forwarder to not send all fields to Splunk, as 75% of them will be useless and will take up all my ingestion quota. Is there an easy way to do this? The data coming in is Azure SQL, where I don't believe I can change the data going into the hub.
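A common pattern on a heavy forwarder is an ingest-time SEDCMD in props.conf that strips unwanted keys from _raw before indexing. A hedged sketch, where the sourcetype stanza name and the field name unneeded_field are placeholders, not taken from your actual data:

```
# props.conf on the heavy forwarder
# (replace the stanza with your event hub input's actual sourcetype)
[mscs:azure:eventhub]
SEDCMD-drop_noise = s/"unneeded_field":"[^"]*",?//g
```

One SEDCMD-<name> line per field (or a broader regex) trims the payload before it counts against license; this only works on a heavy forwarder or indexer, since universal forwarders don't parse events.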
I currently have the following SPL query that generates a table, which appears as follows:

Service ID | Resource Name | Transaction Name | Priority | Service Area | Consumer | 2020-08-10 | 2020-08-09 | 2020-08-08
100        | GET           | Transaction A    | 2        | ServiceArea2 | App1     | 25         | 37         | 31
200        | PUT           | Transaction B    | 1        | ServiceArea2 | App3     | 13         | 22         | 14

The date columns represent today's date as well as the previous 6 days of data (I didn't include 8/4-8/7 since they would take up far too much space). The counts in the date columns are the total count per "Consumer" column, which captures the different application_name values. Currently, today's date column only appears if there is at least one entry for the application_name field in the logs. I would like to modify the query logic to force today's date column to appear even when there is a zero count of transactions for the application_name field, so that today's column is generated at 12 AM PST (midnight). At that time there would probably be no transactions, so I would like Splunk to return "NULL" rather than a blank value or "0". Based on the existing logic below, can someone provide guidance on how the query should be updated to force the 'today' column (i.e. 2020-08-10 in the example above) to generate at midnight, rather than at a time dependent on the application_name (Consumer) having at least one transaction? For example, by tomorrow at midnight (2020-08-11), I would expect to see the following, assuming no application transactions occur at midnight:
Service ID | Resource Name | Transaction Name | Priority | Service Area | Consumer | 2020-08-11 | 2020-08-10 | 2020-08-09
100        | GET           | Transaction A    | 2        | ServiceArea2 | App1     | NULL       | 25         | 37
200        | PUT           | Transaction B    | 1        | ServiceArea2 | App3     | NULL       | 13         | 22

Current SPL:

index=new_idx_1 sourcetype=file_level_data
| eval epoch_Timestamp=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%3QZ")-14400
| rename "Transaction Name" as trans_name, "Application Name" as application_name, "File Status" as file_status
| eval service_id=case(Verb="GET" AND trans_name="Transaction A" AND application_name="App1", "ServiceID1", Verb="GET" AND trans_name="Transaction B" AND application_name="App3", "ServiceID2", Verb="PUT" AND trans_name="Transaction B" AND application_name="App3", "ServiceID3", 1=1, "Unqualified")
| where service_id!="Unqualified"
| eval Priority=case(Verb="GET" AND trans_name="Transaction A" AND application_name="App1", "2", Verb="GET" AND trans_name="Transaction B" AND application_name="App3", "2", Verb="PUT" AND trans_name="Transaction B" AND application_name="App3", "1", 1=1, "Unqualified")
| where Priority!="Unqualified"
| eval service_area=case(Verb="GET" AND trans_name="Transaction A" AND application_name="App1", "ServiceArea1", Verb="GET" AND trans_name="Transaction B" AND application_name="App3", "ServiceArea2", Verb="PUT" AND trans_name="Transaction B" AND application_name="App3", "ServiceArea2", 1=1, "Unqualified")
| where service_area!="Unqualified"
| eval date_reference=strftime(epoch_Timestamp, "%Y-%m-%d")
| stats count(eval(file_status)) as count by service_id, Verb, trans_name, Priority, service_area, application_name, date_reference
| eval combined=service_id."@".Verb."@".trans_name."@".Priority."@".service_area."@".application_name."@"
| xyseries combined date_reference count
| rex field=combined "^(?<service_id>[^\@]+)\@(?<Verb>[^\@]+)\@(?<trans_name>[^\@]+)\@(?<Priority>[^\@]+)\@(?<service_area>[^\@]+)\@(?<application_name>[^\@]+)\@$"
| fillnull value="0"
| table service_id, Verb, trans_name, Priority, service_area, application_name, *-*-31, *-*-30, *-*-29, *-*-28, *-*-27, *-*-26, *-*-25, *-*-24, *-*-23, *-*-22, *-*-21, *-*-20, *-*-19, *-*-18, *-*-17, *-*-16, *-*-15, *-*-14, *-*-13, *-*-12, *-*-11, *-*-10, *-*-09, *-*-08, *-*-07, *-*-06, *-*-05, *-*-04, *-*-03, *-*-02, *-*-01
| rename service_id as "Service ID", Verb as "Resource Name", trans_name as "Transaction Name", Priority as "Priority", service_area as "Service Area", application_name as "Consumer"
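One hedged way to guarantee today's column exists even with zero events: append a synthetic zero-count row for today's date before the xyseries, then merge duplicates. A sketch of the pieces to insert between the existing eval combined and xyseries steps (the combined value shown is a placeholder for one known service row; in practice one synthetic row per expected service combination would be appended):

```spl
| append
    [| makeresults
     | eval date_reference=strftime(now(), "%Y-%m-%d"),
            count=0,
            combined="ServiceID1@GET@Transaction A@2@ServiceArea2@App1@"
     | fields combined date_reference count]
| stats max(count) as count by combined, date_reference
| xyseries combined date_reference count
```

The max(count) merge keeps the real count when events exist and falls back to the synthetic 0 when they don't, so the date column always materializes at midnight; converting today's 0 to the literal "NULL" could then be done with a final eval on that column, though note a string value would change the column's type.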