All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I am using the search depicted in the attached photo below to develop a viz in Dashboard Studio separating events by the field "bundleId". It appears to display events in the statistics table the way I want. However, when I save it to a dashboard via Dashboard Studio, I get an "Invalid Date" where I want the break in events (note: this does not happen in the "Classic" version). How can the "Invalid Date" be removed? I already attempted eval _time=" " in the appendpipe with no success. Thank you.
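A minimal sketch of one workaround, assuming the break row is added with an appendpipe over a stats table (index and field names hypothetical): setting _time to null() rather than a space string removes the value entirely, so Dashboard Studio has nothing to render as a date.

index=app_logs
| stats count by _time bundleId
| appendpipe [ stats sum(count) as count | eval bundleId="TOTAL", _time=null() ]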
Hi, I have been looking to see if Splunk has the capability of searching for logins outside of a specified time range on Windows and Linux systems. What I mean by this is that I am looking for logins that only happen before, let's say, 0600 and after 1600. Any information I can get would be much appreciated.
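A sketch of one approach using strftime to pull the hour out of each event, assuming standard authentication events (index names and search terms are hypothetical, adjust to your sources):

(index=wineventlog EventCode=4624) OR (index=linux_secure "Accepted password")
| eval hour=tonumber(strftime(_time, "%H"))
| where hour < 6 OR hour >= 16
| table _time host user

The same filter can also be done with the built-in date_hour field, but strftime(_time, ...) follows the search-time timezone.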
I'm fairly new to Splunk and I am having some trouble grouping things the way I want. I have some data which all have a certain ID and a multitude of other values. I want to be able to group this data if it has the same ID, but only within a maximum time interval of 24 hours. This part I figured out pretty easily; the problem is that I would also like to see the actual duration of the events. For example, say I have 10 or so events that all have the same ID and occur within a 5 minute period; I'd want to group them together. I'd also like to be able to group 10 or so events that have the same ID and occur within a 23 hour period. I've tried using bins, which groups them properly, but then it gives them all the exact same time, so I can't find the exact duration. I've also tried timecharts and transactions with poor results. Does anyone have any ideas?
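One sketch uses transaction, which groups events sharing an ID within a maximum span and produces duration and eventcount fields automatically (index and field names hypothetical):

index=my_index
| transaction ID maxspan=24h
| table ID duration eventcount

transaction gets expensive on large data; a streamstats-based session marker followed by stats range(_time) as duration by ID is a common alternative.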
Hi, I'm trying to come up with a query to generate the count of strings in a JSON field in a log, across all events. For example, say I have a search that displays 100,000 logs, with each log containing some JSON-structured string:

[{"First Name": "Bob", "DOB":"1/1/1900", ..."Vendor":"Walmart"}]

I want to generate a table that lists all the unique Vendor values and the count of each. Something like:

Vendor  | Count
Walmart | 5
Target  | 3
ToysRUs | 100

Is something like this possible?
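This is doable; a sketch assuming the JSON array sits in _raw and Vendor is not already an extracted field (the spath path is hypothetical, adjust to the real structure):

index=my_index
| spath input=_raw path={}.Vendor output=Vendor
| stats count by Vendor
| sort - count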
I'm making a dashboard for a customer that contains vulnerability data, and some of the vulnerability names are really long, causing the text on the y-axis of the bar chart to be very small. Is there any way I can increase this font size so it's easier to read? Not a big deal if it truncates, but I need a larger font.
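Axis font size does not appear to be exposed as a simple visualization option, so one hedged workaround, since truncation is acceptable here, is to shorten the labels in the search itself before charting (index and field names hypothetical):

index=vuln_data
| eval vuln_name=substr(vuln_name, 1, 40)
| stats count by vuln_name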
Good morning, I have a question. I have an nginx proxy and I would like to monitor it with AppDynamics. We have it configured so that the agents connect to the proxy, and the proxy connects to the controller, for client security. How can we integrate the controller? Which agents could handle this?
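For reference, the AppDynamics Java agent can usually be pointed through an HTTP proxy with JVM system properties; a hedged sketch (host, port, and paths are placeholders, and the property names should be verified against your agent version):

java -javaagent:/opt/appdynamics/javaagent.jar \
     -Dappdynamics.http.proxyHost=nginx-proxy.example.com \
     -Dappdynamics.http.proxyPort=8080 \
     -jar myapp.jar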
I mean, I don't even know where to start with this error, lol. Of course you can not import something that does not exist; it's like me saying I can not eat the cake that does not exist on my table. Anyway, how do I go about finding this application? It looks like it gave me a name for it. Does it look defaultish to you all, or is this something we rolled ourselves?

WARN ApplicationManager [0 MainThread] - Cannot import non-existent application: __globals__
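One way to check whether anything on disk actually references that name, assuming a default install path (paths hypothetical):

grep -r "__globals__" /opt/splunk/etc/apps /opt/splunk/etc/system 2>/dev/null
ls /opt/splunk/etc/apps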
Hello, please help me with the below requirement. I need to capture usernames from 90 days worth of data from large datasets which include multiple source types and multiple indexes. The search

index=* sourcetype=* earliest=-90d@d latest=now | eval LOGIN = lower(user) | stats count by LOGIN sourcetype

is taking forever. Is there a better way to capture the 90 days worth of usernames and source types without a timeout? Note: I am able to schedule the search to capture them and append the results. However, I am not sure what time modifiers I should use if I want to capture all of them in a single day, and that should be a continuous process every day.
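A sketch of the daily incremental pattern: run once per day over the previous complete day (earliest=-1d@d latest=@d) and fold the results into a lookup, so the 90-day picture accumulates without one huge search (the lookup name is hypothetical):

index=* earliest=-1d@d latest=@d
| eval LOGIN=lower(user)
| stats count by LOGIN sourcetype
| append [| inputlookup logins_rollup.csv]
| stats sum(count) as count by LOGIN sourcetype
| outputlookup logins_rollup.csv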
I have created a dashboard in Dashboard Studio and have configured a "Link to dashboard" drilldown. It works fine when the token value does not have any whitespace in it but does not work when there are spaces. The URL that is generated from the drilldown has a format of "firstword%2Bsecondword" and when I show the token value in the second dashboard, it is translated to "firstword+secondword".  This causes the search to return 0 results due to the "+" in the value. How do I configure this so that there is a space in the value instead of a "+"? Thanks
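A hedged workaround on the receiving dashboard: decode the plus back to a space inside the search before comparing (token and field names are hypothetical; replace() is a standard eval function):

index=my_index
| eval clean_value=replace("$token_name$", "\+", " ")
| where field_name=clean_value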
Hello experts, I am trying to integrate Salesforce cloud modules into Splunk for security monitoring. Does anyone have any prior experience with this? I want to know whether we can use the Splunk Add-on for Salesforce for this, or whether any customisation is required. Also, what kind of logs would be relevant to collect from the modules, especially Marketing Cloud? Please guide. Thanks
Is it possible to set up a report that includes drilldown events? For example, if my search returns a field with 10 values, can the reporting feature include all 10 events in the CSV file instead of the event statistics?
Hello Splunkers, I have followed this documentation in order to configure Splunk on my UF as a systemd-managed service: https://docs.splunk.com/Documentation/Splunk/9.0.3/Admin/RunSplunkassystemdservice

I also followed the steps to make Splunk run with a non-root user, and I have checked with the following command that this is indeed the case:

ps -aux | grep -i Splunk

However, it seems that Splunk is now able to read any file and folder on the machine, even though no permissions or ACLs were specified for the splunk user I used. This user does not have any sudo rights, so I am wondering what the root cause could be here... If I disable the systemd service and run Splunk (as the non-root user) with:

/opt/splunkforwarder/bin/splunk start

everything works correctly and the protected files/folders are not monitored by Splunk, as expected. I'm out of ideas here! Thanks, GaetanVP
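A quick way to confirm which user systemd actually starts the process as, assuming the unit created by enable boot-start (the unit name may differ on your host):

systemctl cat SplunkForwarder.service | grep -i "^User"
ps -o user= -p $(pgrep -f splunkd | head -1)

If the unit has no User= line, systemd runs splunkd as root regardless of who owns the files, which would explain the protected files being readable.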
I am ingesting data from multiple endpoints. The data is about 30 key/value pairs. I would like to be able to chart just a subset of the keys. At the moment, I have a chart with a drop-down list to select the endpoint I want to display (identified by MAC address). Right now, my search is as follows:

index=index mac_address=$mac_address$ | timechart span=15m values(value) by key

This returns a graph with every single key/value pair on it. I'd like to edit the search to show only specific values. I note I don't have a source/sourcetype specified (I wasn't sure if I needed this). I've also tried to search for specific fields using the avg command, but this returns no values:

index=index mac_address=$mac_address$ | timechart span=15m avg(key_1) as "key_1" avg(key_2) as "key_2"

As always, any help very much appreciated. NM
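The avg(key_1) attempt returns nothing because key_1 is a value of the key field, not a field of its own. A sketch that instead filters to the keys of interest before charting (key names hypothetical):

index=index mac_address=$mac_address$ key IN ("key_1", "key_2")
| timechart span=15m avg(value) by key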
Exception: <class 'PermissionError'>, Value: [Errno 13] Permission denied: '/opt/splunk/etc/system/local/authentication.conf.migratepreview'

Unable to restart Splunk after upgrade.
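This usually points at file ownership left behind by running the upgrade as a different user; a hedged sketch of the usual check and fix, assuming Splunk runs as the splunk user (adjust user, group, and path):

ls -l /opt/splunk/etc/system/local/authentication.conf*
sudo chown -R splunk:splunk /opt/splunk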
Hi Friends, my requirement: I want to trigger a SNOW ticket from a Splunk alert. Before triggering, I want to check whether any open ticket is already available for that host. If an open ticket is already available, the alert shouldn't trigger. If there is no open ticket, then we need to trigger the alert and create the SNOW ticket.

My first query:

index="pg_idx_whse_prod_events" sourcetype IN ("cpu_mpstat") host="adlg*" | streamstats time_window=15m avg(cpu_idle) as Idle count by host | eval Idle = if(count < 30, null(), round(Idle, 2)) | where Idle >= 90 | table host Idle

My second query:

index=pg_idx_whse_snow sourcetype="snow:incident" source="https://pgglobalenterpriseuat.service-now.com/" | rex field=dv_short_description "^[^\-]+\-(?<Host>[^\-]+)" | rex field=dv_short_description "^[^\-]+\:(?<extracted_field>[^\-]+)" | rename Host as host | table host incident_state_name | where incident_state_name!="Closed"

Now I want to validate the first result against the second and display only the hosts that don't have an open ticket. Could you please help me achieve this? Thanks in advance.
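A sketch that drops hosts with open tickets via a subsearch NOT clause, assuming the host extraction in the second query works as shown (indexes and fields taken from the question):

index="pg_idx_whse_prod_events" sourcetype IN ("cpu_mpstat") host="adlg*"
    NOT [ search index=pg_idx_whse_snow sourcetype="snow:incident"
        | rex field=dv_short_description "^[^\-]+\-(?<host>[^\-]+)"
        | where incident_state_name!="Closed"
        | fields host ]
| streamstats time_window=15m avg(cpu_idle) as Idle count by host
| eval Idle=if(count < 30, null(), round(Idle, 2))
| where Idle >= 90
| table host Idle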
Hi, I have recently got a standalone instance of Splunk on AWS and it is not fully configured yet. I am trying to set up my server.conf in $SPLUNK_HOME/etc/system/local but I cannot locate the pass4SymmKey. When I try to find it in the Splunk GUI, I receive the following: Can you please help? Thanks
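For reference, pass4SymmKey is not shown in the GUI; it lives in server.conf, and on a fresh standalone instance you simply set one yourself. A minimal sketch (the value is a placeholder you choose; Splunk encrypts it on restart):

[general]
pass4SymmKey = <your_shared_secret>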
Hi All, thanks for clicking on the question. This search works fine in Linux using grep, but I can't get it to work in Splunk. Please can you help? I have imported a test.csv file that has many lines like the following:

[ERROR] 2023/01/05 16:53:05 [!] Get "https://test.co.uk/sblogin/username": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

I am simply trying to extract the username field after sblogin/ and nothing after the ". This is the query I have tried, which gives the error "Error in 'SearchParser': Mismatched ']'":

source="test.csv" | rex field=raw_line "sblogin/([^"]+)" | eval extracted_string=substr(extracted_string, 9)
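The Mismatched ']' error comes from the unescaped double quote inside the rex string, which ends the string early. A corrected sketch with the quote escaped and a named capture group (raw_line is the field name from the question):

source="test.csv"
| rex field=raw_line "sblogin/(?<username>[^\"]+)"
| table username

Naming the group also removes the need for the substr step, since the capture starts right after sblogin/.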
Hi all, I have two similar words that give the same meaning. How can I standardize them into one value to prevent inconsistencies in the result, but at the same time keep the initial subcontent for both words? Here's the detail: app = AOutlook, Outlook, etc.

index=XXX app=XX... | eval Outlook=mvappend(AOutlook, Outlook) | table app action...

Expected result:

app     | action
Outlook | Not found
Outlook | Completed

The previous query using append doesn't work; any alternative will be appreciated!
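A sketch that normalizes the app field in place, which keeps every other field of each event intact (values taken from the question):

index=XXX app IN ("AOutlook", "Outlook")
| eval app=if(app="AOutlook", "Outlook", app)
| table app action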
Hi, I need to index Windows server logs and blacklist all the previous years' logs.

inputs.conf:

[monitor://E:\application\logs\server*]
disabled=0
sourcetype=_error_text
index=_error_file

Logs in the servers look like below. I referred to the Splunk doc and came up with this stanza, but it says only the last filter will be applied. Does it mean only the 2019 blacklist regex will be applied?

[monitor://E:\application\logs\server*]
disabled=0
sourcetype=_error_text
index=_error_file
blacklist.1=^server-2021-\d{2}-\d{2}
blacklist.2=^server-2020-\d{2}-\d{2}
blacklist.3=^server-2019-\d{2}-\d{2}

Please suggest.
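Monitor inputs honor only a single blacklist attribute, so the usual fix is to fold the years into one regex; a sketch using the stanza from the question (the regex matches against the full file path, so the ^ anchor is dropped):

[monitor://E:\application\logs\server*]
disabled = 0
sourcetype = _error_text
index = _error_file
blacklist = server-(2019|2020|2021)-\d{2}-\d{2}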
Query:

index="web_app" (application= "abc-dxn-message-api" AND tracepoint= "START") (facility="d55075aaedc86d6577676605c0b5f3c0" OR "XYZ") | stats count as Input | append [search (application= "hum-message-api" AND tracepoint= "END") (facility="d55075aaedc86d6577676605c0b5f3c0" OR "XYZ") | stats count as Processed] | append [search (facility="d55075aaedc86d6577676605c0b5f3c0" OR "XYZ") "ERROR" | stats count as Error] | transpose column_name="Bundle"

Current result: 4 columns * 3 rows

Expected result: 2 columns * 3 rows

Bundle    | Count
Input     | x
Error     | x
Processed | x
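One hedged fix: append produces three separate rows, which transpose turns into three value columns; collapsing them into a single row first yields the two-column shape (the rename assumes transpose's default "row 1" column name):

... original three-part search ...
| stats first(Input) as Input first(Processed) as Processed first(Error) as Error
| transpose column_name="Bundle"
| rename "row 1" as Count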