All Topics

Hello guys, how do you handle missing forwarders (deleted VMs, for instance)? Do you go to Forwarder Management and then "Delete record", or do you "rebuild forwarder assets" in the DMC (the latter alone seems sufficient)? Thanks.
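If it helps, here is a rough way to inspect and prune the DMC's forwarder asset table from search. This is only a sketch: `dmc_forwarder_assets` is the lookup name used by recent Monitoring Console versions, and the `hostname` filter is a hypothetical pattern for the deleted VMs.

```spl
| inputlookup dmc_forwarder_assets
| search NOT hostname="deleted-vm-*"
| outputlookup dmc_forwarder_assets
```

Running the DMC's "Rebuild forwarder assets" action achieves much the same result, but only over the lookback window it scans.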
I am trying to change the color of a row in a panel ONLY if it is found in a lookup table. For example, I have a lookup table with websites not allowed in the company, and a panel that shows all websites accessed. I would like the rows to turn red where the website is both in the lookup table AND in the panel. I would still like to see all the results; just the changed color for the ones that are in the lookup table. Is there any way for me to do this?
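One way to approach this is to flag each row in SPL and then color on the flag. A rough sketch, assuming the lookup is called `blocked_sites.csv` with a `website` column, and the panel search produces a `website` field (all names here are placeholders):

```spl
index=proxy sourcetype=web
| stats count by website
| lookup blocked_sites.csv website OUTPUT website AS blocked_match
| eval is_blocked=if(isnotnull(blocked_match), "yes", "no")
| fields - blocked_match
```

Simple XML table formatting colors cells rather than whole rows, so the usual next step is a `<format type="color">` rule on the `is_blocked` column.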
I need to run a check on my indexes to make sure they are healthy. Where and how do I do it? Thank you very much in advance. PS: I do have the Monitoring Console installed.
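Since the Monitoring Console is installed, its Indexing dashboards (Indexes and Volumes, Index Detail) are the first stop. For an ad-hoc check from search, a rough sketch using `dbinspect` (the field names are standard `dbinspect` output):

```spl
| dbinspect index=*
| stats count AS buckets,
        min(startEpoch) AS earliest_event,
        max(endEpoch) AS latest_event
        by index, state
| eval earliest_event=strftime(earliest_event, "%F %T"),
       latest_event=strftime(latest_event, "%F %T")
```

Unusual bucket states or event-time ranges per index are a quick first signal that something needs a closer look.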
What health-check items would you configure for the Enterprise Security app, either for general purposes or for security-monitoring purposes? Thank you for a reply.
Hi - looking for a more efficient way to do this, if anyone has any tips:   index=xyz sourcetype=abc NOT user_email=unauthenticated (user_email=*) | eval day=strftime(_time, "%Y%m%d") | search day=20210723 | ...   Basically, can I filter on _time for a specific day without doing the eval and then the filter? This seems like an inefficient way to query if I could instead say something like dayOf(_time)='20201010'.
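The day filter can be pushed into the time-range selection with the `earliest`/`latest` modifiers, so events outside that day are never retrieved at all (the eval-then-search approach scans the full time range first and discards rows afterwards). A sketch using Splunk's default `%m/%d/%Y:%H:%M:%S` timestamp format for these modifiers:

```spl
index=xyz sourcetype=abc user_email=* NOT user_email=unauthenticated
    earliest="07/23/2021:00:00:00" latest="07/24/2021:00:00:00"
| ...
```

The same effect is available from the time-range picker in the UI; either way the filtering happens at event retrieval, not in a later pipeline stage.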
Encountering a very odd issue: I have a daily summary index with pretty simple key=value pairings for fields, but I can no longer search on the fields specifically. For instance, an event might include a field that reads cluster=cluster1A; if I search for cluster=cluster1A, I get no results, but if I search for just the text cluster1A, I get results. What might I be able to look into here?
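A quick way to narrow this down is to test whether the field is indexed or extracted at search time. The `field::value` syntax matches indexed fields only, so comparing these three searches (index name assumed) shows where the breakage is:

```spl
index=summary cluster=cluster1A
index=summary cluster::cluster1A
index=summary cluster1A | head 5 | table _raw cluster
```

If the second returns results but the first does not, the field is indexed and a search-time extraction or alias may be shadowing it. If only the third returns results and the `cluster` column is empty, the search-time KV extraction is broken - worth checking `KV_MODE`, `fields.conf`, and whether the events' sourcetype has changed.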
I need to set the dashboard so that numbers equal to 0 and above are highlighted in green and all others in red. Splunk automatically marks numbers less than or equal to 0 in red. How do I change this the "other way" around?
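In Simple XML the automatic ranges can be replaced with an expression-based color palette. A sketch, assuming the column is named `count` (swap in the real field name):

```xml
<format type="color" field="count">
  <colorPalette type="expression">if(value >= 0, "#65A637", "#D93F3C")</colorPalette>
</format>
```

`#65A637` and `#D93F3C` are Splunk's stock green and red; any hex colors work in their place.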
Hello all, I am fairly new to using Splunk and would like some help with searching for locked accounts, and with setting up a search that checks for failed passwords on a daily basis. I want to find IDs that keep appearing, day after day, x number of times. If the pattern continues, I may know that a hacker is trying to break into a particular ID using a slow password attack. I have been searching on Event ID 4740 but get no hits, even though I have a user that has been locked out. Why would this be happening?
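A rough sketch for the daily check, assuming the Windows TA puts the Security log in an index called `wineventlog` and extracts a `user` field (both assumptions - adjust to your environment; the threshold is a placeholder):

```spl
index=wineventlog source="WinEventLog:Security" (EventCode=4625 OR EventCode=4740)
| bin _time span=1d
| stats count(eval(EventCode=4625)) AS failed_logons,
        count(eval(EventCode=4740)) AS lockouts
        by _time, user
| where failed_logons > 5
```

On the 4740 question: account-lockout events are written to the Security log of the domain controller that processed the lockout, not the workstation, so it is worth confirming that the DCs are actually forwarding their Security logs.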
Hello guys, do you advise this log format: key=value instead of key="value"? Thanks.
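For what it's worth, quoting matters as soon as values can contain spaces or `=` signs, because Splunk's automatic key-value extraction treats whitespace as the delimiter for unquoted values. An illustrative pair of log lines (hypothetical fields):

```
user=John Smith action=login        unquoted: user extracts as "John", "Smith" is lost
user="John Smith" action="login"    quoted: user extracts as "John Smith"
```

Plain key=value is fine when values are guaranteed to be single tokens; otherwise key="value" is the safer choice.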
Hi! My task is as follows: I want to compare the increase of certain types of errors - the average value of each error type over the last 12 hours against the value for the last hour. If the difference exceeds an acceptable threshold, I would like to add the error type to the result. Right now, my query finds the difference for the total number of errors (last half hour versus last 5 minutes) without taking their type into account. Query: Query to search errorType + count. Please help!
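A sketch of the per-type comparison, assuming an `errorType` field; the index name and the 2x threshold are placeholders:

```spl
index=app_logs earliest=-12h@h latest=now
| eval window=if(_time >= relative_time(now(), "-1h"), "last_hour", "baseline")
| stats count by errorType, window
| xyseries errorType window count
| fillnull value=0 baseline last_hour
| eval baseline_hourly_avg=baseline / 11
| where last_hour > 2 * baseline_hourly_avg
```

The `eval window=...` split labels each event by whether it falls in the most recent hour or the preceding 11 hours, so one search produces both the baseline average and the current count per error type.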
I have scanned two Splunk packages, each less than 2 MB, using the Splunk AppInspect CLI, but when I try to scan another Splunk package that is more than 100 MB, it runs for about an hour and then gives me a message like this: LEVEL="CRITICAL" TIME="**********" NAME="root" FILENAME="main.py" MODULE="main" MESSAGE="An unexpected error occurred during the run-time of Splunk App Inspect". Is there any size limit for the Splunk AppInspect CLI that should be followed?
I have a non-numerical field (text), and I want to create an enum field - that is, a new field with numerical values that match the text values of the original field. Thanks
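A straightforward way is an explicit mapping with `case()`. A sketch with a hypothetical `status` field - substitute your own field name and values:

```spl
... | eval status_code=case(status=="low", 1,
                            status=="medium", 2,
                            status=="high", 3,
                            true(), 0)
```

If the mapping needs to be reused across searches, another option is to keep the text-to-number pairs in a lookup file and apply it with the `lookup` command.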
In earlier versions, we were able to customize or remove the Splunk version information shown on browser tabs by editing the common.js file at $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/build/pages/enterprise/common.js, changing document.title=splunkUtils.sprintf(_(title).t()+" | "+_("Splunk %s %s")… to document.title=splunkUtils.sprintf(_(title).t()+" | "+_("Splunk_Test %s")…. However, in the latest version, 8.2.1, the change does not take effect across all apps; it works only for the launcher app and the search app (note that by default these are the only two apps shipped with a fresh Splunk installation). You may refer to the attached screenshot. Here 'Splunk_Test' is a custom value that I want the browser tabs to display instead of the Splunk version. Is there any way this can be fixed so that the change is reflected in the browser tabs for all applications?
Hello everyone, I am collecting Windows Event Logs and Sysmon logs from my Windows domain on my WEF server. From the WEF, using a UF, I forward everything to my Splunk indexer. My question is: how can I split the data collected by [WinEventLog://ForwardedEvents] into two different indexes (wineventlog, sysmon)? I don't want Sysmon to get into the wineventlog index. Should I use props.conf and transforms.conf modifications to achieve that? If yes, can you please guide me on how this should be formatted? The next step would be to properly configure inputs.conf in Splunk_TA_Windows and TA-microsoft-sysmon so that I don't index unneeded stuff that would cause performance issues. For example, in TA-microsoft-sysmon's inputs.conf I would have to put:

[WinEventLog://ForwardedEvents]
disabled = 0
index = sysmon
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

but this would also index the Windows event logs, which are not needed in this index. Is that correct? Thanks, Chris
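Index-time routing with props.conf/transforms.conf is the usual answer here: keep a single ForwardedEvents input pointed at the wineventlog index, and reroute Sysmon events by matching on their provider name. A sketch for the indexer (or heavy forwarder) - the stanza names are made up, and the source and regex may need adjusting to what your events actually contain (with renderXml=true the source is typically XmlWinEventLog:ForwardedEvents and the provider appears as Microsoft-Windows-Sysmon in the raw XML):

```ini
# props.conf
[source::XmlWinEventLog:ForwardedEvents]
TRANSFORMS-route_sysmon = route_sysmon_to_index

# transforms.conf
[route_sysmon_to_index]
SOURCE_KEY = _raw
REGEX = Microsoft-Windows-Sysmon
DEST_KEY = _MetaData:Index
FORMAT = sysmon
```

With this in place only one input stanza is needed (with index = wineventlog), and Sysmon events are rerouted at index time, so neither dataset is ingested twice.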
Has anyone integrated Splunk with Siemplify? I am planning to do so and need some ideas to start with.
Hi everyone! Maybe someone has faced this problem: I want to build a Layer 2 network topology, and I have enough data for it. I am working with the Network Diagram Viz app, and I have a table of links, something like this:

from | to | local_int | remote_int | linkcolor | type | linktext | value
AIC-switch-2960.aic.kz | SW9300test.aic.kz | Gi0/1 | Gi1/0/23 | green | deployment-server | Gi0/1 to Gi1/0/23 | AIC-switch-2960.aic.kz
SW9300test.aic.kz | AIC-switch-2960.aic.kz | Gi1/0/23 | Gi0/1 | green | deployment-server | Gi1/0/23 to Gi0/1 | SW9300test.aic.kz
SW9300test.aic.kz | SW3850test.aic.kz | Gi1/0/9 | Gi1/0/9 | green | deployment-server | Gi1/0/9 to Gi1/0/9 | SW9300test.aic.kz
SW9300test.aic.kz | SW3850test.aic.kz | Gi1/0/10 | Gi1/0/10 | green | deployment-server | Gi1/0/10 to Gi1/0/10 | SW9300test.aic.kz
SW3850test.aic.kz | SW9300test.aic.kz | Gi1/0/9 | Gi1/0/9 | green | deployment-server | Gi1/0/9 to Gi1/0/9 | SW3850test.aic.kz
SW3850test.aic.kz | SW9300test.aic.kz | Gi1/0/10 | Gi1/0/10 | green | deployment-server | Gi1/0/10 to Gi1/0/10 | SW3850test.aic.kz
AIC-switch-2960.aic.kz | SIP-W60B | Gi0/12 | WAN PORT | green | phone-square | Gi0/12 to WAN PORT | AIC-switch-2960.aic.kz

And, accordingly, this is what the topology shows. I took the information about connected devices from AIC-switch-2960.aic.kz, SW9300test.aic.kz and SW3850test.aic.kz, so each link appears twice, once as seen from each side. I just need to remove the redundant (reverse-duplicate) links from the table. What solution can you advise to delete such entries automatically, or some other way? Thanks!
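One common way to drop the reverse duplicates is to build a direction-independent key from the sorted endpoint pair (plus the sorted interface pair, so parallel links between the same two switches survive), then `dedup` on it. A sketch against the field names above:

```spl
...
| eval link_key=mvjoin(mvsort(mvappend(from, to)), "|")."|".mvjoin(mvsort(mvappend(local_int, remote_int)), "|")
| dedup link_key
| fields - link_key
```

Because `mvsort` orders the pair the same way regardless of direction, a link and its reverse produce the same key, and `dedup` keeps only the first occurrence.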
Hi all, hope you can assist. I am having issues connecting to my SolarWinds app server via the Splunk add-on. I have tested my connection via curl from my heavy forwarder, and I can get it to connect and pull back the query:

curl -k -v -u MyUserName https://mysolarwindsserver.local:17778/Solarwinds/InformationService/V3/JSOn/Query?query=SELECT+IPAddress+FROM+Orion.Nodes

The issue is that if I take the -k out, I get the error "curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above" - see below. Has anyone seen this issue before? I have read over loads via this community and can't seem to locate the fix. Also, when I log in to the HF and look at the Splunk Add-on for SolarWinds under the account tab, it just says "loading".

Enter host password for user 'MyUserName':
* Trying x.x.x.x....
* TCP_NODELAY set
* Connected to mysolarwindsserver.local (x.x.x.x) port 17778 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: self signed certificate
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.

Thanks
I installed and configured the Splunk Add-on for GCP version 3.0.2 successfully to access XML data files stored in a bucket. I use it for two GCP buckets (DEV and PROD). It works well in DEV, with a dedicated bucket holding hundreds of files directly in the root. But it does not work well with the PROD bucket (a larger one with thousands of files in a tree). It seems to be continuously re-reading the same files in the first directory and not indexing them because of an unsupported type. I don't understand why it doesn't scan the entire tree and doesn't throw an error in the process. Why is the message always "Files to be ingested: 978" when there are 1916 files in the first directory, called cdp? I also didn't find a way to filter, for example by specifying a path, so that only that path is analyzed rather than the complete bucket. Does somebody have ideas? Thanks in advance. The following is an extract of the log file splunk_ta_google_cloudplatform_google_cloud_bucket_metadata__1.log:

2021-07-26 10:53:10,700 level=INFO pid=34200 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_data:107 |  | message="-----Data Ingestion begins-----"
2021-07-26 10:53:36,829 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_data:107 |  | message="-----Data Ingestion begins-----"
2021-07-26 10:53:45,848 level=WARNING pid=15708 tid=MainThread logger=googleapiclient.discovery_cache pos=__init__.py:autodetect:44 | file_cache is unavailable when using oauth2client >= 4.0.0
Traceback (most recent call last):
  File "D:\SPLUNK\etc\apps\Splunk_TA_google-cloudplatform\bin\3rdparty\googleapiclient\discovery_cache\__init__.py", line 41, in autodetect
    from . import file_cache
  File "D:\SPLUNK\etc\apps\Splunk_TA_google-cloudplatform\bin\3rdparty\googleapiclient\discovery_cache\file_cache.py", line 41, in <module>
    'file_cache is unavailable when using oauth2client >= 4.0.0')
ImportError: file_cache is unavailable when using oauth2client >= 4.0.0
2021-07-26 10:53:46,118 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:get_metadata:264 |  | message="Successfully obtained bucket metadata for prd-europe-west1-archiving"
2021-07-26 10:53:46,259 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:get_metadata:269 |  | message="Successfully obtained object information present in the bucket prd-europe-west1-archiving."
2021-07-26 10:53:47,107 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:get_list_of_files_to_be_ingested:352 |  | message="Files to be ingested: 978 files"
2021-07-26 10:53:47,224 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_file_content:396 |  | message="Cannot ingest contents of cdp/f006006102/processing/InternalTranscodifications_f006006102_161839.avro, file with this extention is not yet supported in the TA"
2021-07-26 10:53:47,361 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_file_content:396 |  | message="Cannot ingest contents of cdp/f006006102/processing/InternalTranscodifications_f006006102_161916.avro, file with this extention is not yet supported in the TA"
Hello, I have 2 queries related to a chart I am creating to show company email addresses that were part of data breaches, represented as a stacked column chart. The first issue is the conversion of the year and month breakdown using regex:

index=all_breaches company_email=* breach=* | rex field="breach_date" "^(?<year>[^-]+)-(?<month>[^-]+)-(?<day>.+)" | eval month=strftime("month","%b") | chart count by year, month

The issue with the above query is that I want to convert the month to an abbreviated month, e.g. Jan, Feb, rather than 01, 02. This query only shows the first month (Jan) rather than all months (changing the eval month to "mon" results in a null value field). How do I show all months represented in abbreviated form? **See the chart below that uses this query:

index=all_breaches company_email=* breach=* | rex field="breach_date" "^(?<year>[^-]+)-(?<month>[^-]+)-(?<day>.+)" | chart count by year, month

Second issue: how do I edit the above search to show "company_email" as part of the "breach" breakdown that is already split into months? Ideally company_email forms part of the breach column chart per month. Thanks in advance!
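On the first issue: `strftime()` expects an epoch time, not the string "month", which is why the eval misbehaves. One fix is to parse the month number back into a time with `strptime()` first. A sketch using the same fields as the search above:

```spl
index=all_breaches company_email=* breach=*
| rex field=breach_date "^(?<year>[^-]+)-(?<month>[^-]+)-(?<day>.+)"
| eval month=strftime(strptime(month, "%m"), "%b")
| chart count by year, month
```

One caveat: the abbreviated columns will sort alphabetically (Apr, Aug, ...); a common workaround for chronological order is a sortable label such as 01-Jan, via eval month=strftime(strptime(month, "%m"), "%m-%b").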
How do I group widgets/panels in a dashboard with a common border? E.g.:
group 1: panel 1 and panel 2 - combined blue border
group 2: panel 3, panel 4 and panel 5 - combined red border
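Simple XML has no built-in panel grouping, but a common community pattern is to give the rows ids and draw borders from a hidden CSS panel. A sketch - all ids and colors here are hypothetical, and the selectors may need tweaking for your Splunk version:

```xml
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #group1 { border: 2px solid blue; }
        #group2 { border: 2px solid red; }
      </style>
    </html>
  </panel>
</row>
<row id="group1">
  <!-- panel 1, panel 2 -->
</row>
<row id="group2">
  <!-- panel 3, panel 4, panel 5 -->
</row>
```

The `depends` token never gets set, so the CSS row stays hidden while its styles still apply to the rows below.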