All Topics

I scanned two Splunk packages, each under 2 MB, with the Splunk AppInspect CLI without any problem. But when I tried to scan another Splunk package that is over 100 MB, it ran for about an hour and then gave me a message like this: LEVEL="CRITICAL" TIME="**********" NAME="root" FILENAME="main.py" MODULE="main" MESSAGE="An unexpected error occurred during the run-time of Splunk App Inspect". Is there a size limit for the Splunk AppInspect CLI that should be followed?
I have a non-numerical (text) field, and I want to create an enum field, i.e. a new field with numerical values that correspond to the text values of the original field. Thanks
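A minimal sketch of one way to do this in SPL with case(); the field name priority and its values here are hypothetical stand-ins for the actual text field:

```
... | eval priority_enum=case(priority=="low", 1, priority=="medium", 2, priority=="high", 3, true(), 0)
```

If the field has many distinct values, a lookup table mapping text to number is usually cleaner than a long case() expression.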
In earlier versions, we were able to customize or remove the Splunk version information shown on the browser tabs by editing the common.js file at $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/build/pages/enterprise/common.js, changing document.title=splunkUtils.sprintf(_(title).t()+" | "+_("Splunk %s %s")… to document.title=splunkUtils.sprintf(_(title).t()+" | "+_("Splunk_Test %s")… However, in the latest version, 8.2.1, the change does not take effect across all apps; it works only for the launcher app and the search app (note that, by default, these are the only two apps shipped with a fresh Splunk installation). You may refer to the attached screenshot. Here 'Splunk_Test' is a custom value that I want the browser tabs to display instead of the Splunk version. Is there any way this can be fixed so that the change is reflected in the browser tabs for all applications?
Hello everyone, I am collecting Windows Event Logs and Sysmon logs from my Windows domain on my WEF server. From the WEF server, a UF forwards everything to my Splunk indexer. My question is: how can I split the data collected by [WinEventLog://ForwardedEvents] into two different indexes (wineventlog, sysmon)? I don't want Sysmon events to end up in the wineventlog index. Should I modify props.conf and transforms.conf to achieve that? If yes, can you please guide me on how this should be formatted? The next step would be to properly configure inputs.conf in Splunk_TA_Windows and TA-microsoft-sysmon so that I don't index unneeded data that would cause performance issues. For example, in TA-microsoft-sysmon's inputs.conf I would put:

[WinEventLog://ForwardedEvents]
disabled = 0
index = sysmon
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

but this would also send non-Sysmon forwarded events to this index, which are not needed there. Is that correct? Thanks Chris
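Index-time routing with props.conf and transforms.conf on the indexer (or a heavy forwarder) is the usual approach for this. A sketch, assuming renderXml = true so the Sysmon provider name appears in the raw event; the transform name is illustrative, and the props stanza may need to match your actual source or sourcetype (with renderXml it is typically XmlWinEventLog:ForwardedEvents):

```
# props.conf (on the indexer or heavy forwarder)
[source::WinEventLog:ForwardedEvents]
TRANSFORMS-route_sysmon = route_sysmon_to_index

# transforms.conf
[route_sysmon_to_index]
# Events emitted by the Sysmon provider carry its name in the raw XML
REGEX = Microsoft-Windows-Sysmon
DEST_KEY = _MetaData:Index
FORMAT = sysmon
```

Events that do not match the regex keep the index set in inputs.conf (e.g. wineventlog), so only one [WinEventLog://ForwardedEvents] input stanza is needed.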
Has anyone integrated Splunk with Siemplify? I am planning to do so and need some ideas to start with.
Hi everyone! Maybe someone has faced this problem: I want to build a Layer 2 network topology, and I have enough data for it. I am working with the Network Diagram Viz app, and I have a table of links, something like this:

from | to | local_int | remote_int | linkcolor | type | linktext | value
AIC-switch-2960.aic.kz | SW9300test.aic.kz | Gi0/1 | Gi1/0/23 | green | deployment-server | Gi0/1 to Gi1/0/23 | AIC-switch-2960.aic.kz
SW9300test.aic.kz | AIC-switch-2960.aic.kz | Gi1/0/23 | Gi0/1 | green | deployment-server | Gi1/0/23 to Gi0/1 | SW9300test.aic.kz
SW9300test.aic.kz | SW3850test.aic.kz | Gi1/0/9 | Gi1/0/9 | green | deployment-server | Gi1/0/9 to Gi1/0/9 | SW9300test.aic.kz
SW9300test.aic.kz | SW3850test.aic.kz | Gi1/0/10 | Gi1/0/10 | green | deployment-server | Gi1/0/10 to Gi1/0/10 | SW9300test.aic.kz
SW3850test.aic.kz | SW9300test.aic.kz | Gi1/0/9 | Gi1/0/9 | green | deployment-server | Gi1/0/9 to Gi1/0/9 | SW3850test.aic.kz
SW3850test.aic.kz | SW9300test.aic.kz | Gi1/0/10 | Gi1/0/10 | green | deployment-server | Gi1/0/10 to Gi1/0/10 | SW3850test.aic.kz
AIC-switch-2960.aic.kz | SIP-W60B | Gi0/12 | WAN PORT | green | phone-square | Gi0/12 to WAN PORT | AIC-switch-2960.aic.kz

And, accordingly, that is what appears in the topology. I took the information about connected devices from AIC-switch-2960.aic.kz, SW9300test.aic.kz, and SW3850test.aic.kz, so every inter-switch link appears twice, once from each side. I just need to remove these redundant duplicate links from the table. What solution can you advise to delete such entries automatically, or is there some other way? Thanks!
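One sketch in SPL for dropping the reverse duplicates: build an order-independent key from both endpoints (host plus interface), then dedup on it. Field names are taken from the table above; the separator strings are arbitrary:

```
... | eval endpoint_a=from." ".local_int
    | eval endpoint_b=to." ".remote_int
    | eval link_key=mvjoin(mvsort(mvappend(endpoint_a, endpoint_b)), " <-> ")
    | dedup link_key
```

Because mvsort() puts the two endpoints in the same order regardless of which switch reported the link, both directions of a link produce the same link_key and dedup keeps only one row.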
Hi all, hope you can assist. I am having issues connecting to my SolarWinds app server via the Splunk add-on. I have tested the connection with curl from my heavy forwarder, and I can get it to connect and pull back the query:

curl -k -v -u MyUserName https://mysolarwindsserver.local:17778/Solarwinds/InformationService/V3/JSOn/Query?query=SELECT+IPAddress+FROM+Orion.Nodes

The issue is that if I take the -k out, I get this error: "curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above." See below. Has anyone seen this issue before? I have read plenty on this community but can't seem to locate the fix. Also, when I log in to the HF and look at the Splunk Add-on for SolarWinds under the Account tab, it just says "loading".

Enter host password for user 'MyUserName': * Trying x.x.x.x.... * TCP_NODELAY set * Connected to mysolarwindsserver.local (x.x.x.x) port 17778 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (OUT), TLS alert, Server hello (2): * SSL certificate problem: self signed certificate * stopped the pause stream! * Closing connection 0 curl: (60) SSL certificate problem: self signed certificate More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.

Thanks
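The curl: (60) error means the server presents a self-signed certificate that curl cannot verify against its CA bundle. One common approach (a sketch; the host and path are taken from the example above) is to export the server's certificate and pass it to curl explicitly instead of disabling verification with -k:

```
# Export the self-signed certificate presented by the SolarWinds server
openssl s_client -connect mysolarwindsserver.local:17778 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > solarwinds.pem

# Verify against that certificate rather than skipping verification
curl -v --cacert solarwinds.pem -u MyUserName \
  "https://mysolarwindsserver.local:17778/Solarwinds/InformationService/V3/JSOn/Query?query=SELECT+IPAddress+FROM+Orion.Nodes"
```

The add-on performs its own certificate verification separately from curl, so it may still need the certificate added to the trust store it uses (or its SSL verification option adjusted); check the add-on's documentation for where it reads CA certificates from.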
I installed and configured the "Splunk Add-on for GCP" version 3.0.2 successfully to access XML data files stored in a bucket. I use it for two GCP buckets (DEV and PROD). It works well in DEV with a dedicated bucket holding hundreds of files directly in the root, but it does not work well with the PROD bucket (a larger one with thousands of files in a tree). It seems to continuously re-read the same files in the first directory and not index them because of an unsupported type. I don't understand why it doesn't scan the entire tree and doesn't throw an error in the process. Why is the message always "Files to be ingested: 978" when there are 1916 files in the first directory, called cdp? I also didn't find a way to filter, for example by specifying a path, so that it analyzes just that path and not the complete bucket. Does somebody have ideas? Thanks in advance. The following is an extract of the log file splunk_ta_google_cloudplatform_google_cloud_bucket_metadata__1.log:

2021-07-26 10:53:10,700 level=INFO pid=34200 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_data:107 |  | message="-----Data Ingestion begins-----"
2021-07-26 10:53:36,829 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_data:107 |  | message="-----Data Ingestion begins-----"
2021-07-26 10:53:45,848 level=WARNING pid=15708 tid=MainThread logger=googleapiclient.discovery_cache pos=__init__.py:autodetect:44 | file_cache is unavailable when using oauth2client >= 4.0.0
Traceback (most recent call last):
  File "D:\SPLUNK\etc\apps\Splunk_TA_google-cloudplatform\bin\3rdparty\googleapiclient\discovery_cache\__init__.py", line 41, in autodetect
    from . import file_cache
  File "D:\SPLUNK\etc\apps\Splunk_TA_google-cloudplatform\bin\3rdparty\googleapiclient\discovery_cache\file_cache.py", line 41, in <module>
    'file_cache is unavailable when using oauth2client >= 4.0.0')
ImportError: file_cache is unavailable when using oauth2client >= 4.0.0
2021-07-26 10:53:46,118 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:get_metadata:264 |  | message="Successfully obtained bucket metadata for prd-europe-west1-archiving"
2021-07-26 10:53:46,259 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:get_metadata:269 |  | message="Successfully obtained object information present in the bucket prd-europe-west1-archiving."
2021-07-26 10:53:47,107 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:get_list_of_files_to_be_ingested:352 |  | message="Files to be ingested: 978 files"
2021-07-26 10:53:47,224 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_file_content:396 |  | message="Cannot ingest contents of cdp/f006006102/processing/InternalTranscodifications_f006006102_161839.avro, file with this extention is not yet supported in the TA"
2021-07-26 10:53:47,361 level=INFO pid=15708 tid=MainThread logger=splunk_ta_gcp.modinputs.bucket_metadata pos=bucket_metadata.py:ingest_file_content:396 |  | message="Cannot ingest contents of cdp/f006006102/processing/InternalTranscodifications_f006006102_161916.avro, file with this extention is not yet supported in the TA"
Hello, I have two queries related to a chart I am creating, showing company email addresses that are part of data breaches, represented as a stacked column chart. The first issue is the conversion of the year and month breakdown using regex:

index=all_breaches company_email=* breach=* | rex field="breach_date" "^(?<year>[^-]+)-(?<month>[^-]+)-(?<day>.+)" | eval month=strftime("month","%b") | chart count by year, month

The issue with the above query is that I want to convert the month to an abbreviated month, e.g. Jan, Feb, rather than 01, 02. This query only shows the first month (Jan) rather than all months (changing the eval to use "mon" results in a null field). How do I show all months in abbreviated form? **See the chart below that uses this query:

index=all_breaches company_email=* breach=* | rex field="breach_date" "^(?<year>[^-]+)-(?<month>[^-]+)-(?<day>.+)" | chart count by year, month

Second issue: how do I edit the above search to show company_email as part of the breach breakdown that is already split into months? Ideally, company_email forms part of the breach column chart per month. Thanks in advance!
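On the first issue: strftime() expects an epoch time as its first argument, not the literal string "month", which is why only one value comes back. A sketch that round-trips the captured month number through strptime() first (field names taken from the query above):

```
index=all_breaches company_email=* breach=*
| rex field=breach_date "^(?<year>[^-]+)-(?<month>[^-]+)-(?<day>.+)"
| eval month=strftime(strptime(month, "%m"), "%b")
| chart count over year by month
```

For the second issue, one option is to pivot the split instead, e.g. `| chart count over month by company_email`, so that each month's column is stacked by email address.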
How do I group widgets/panels in a dashboard with a common border? E.g. group1: panel1 and panel2 with a combined blue border; group2: panel3, panel4, and panel5 with a combined red border.
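There is no built-in panel-group border in Simple XML, but a common community workaround is a hidden HTML panel that injects CSS keyed on panel ids. A sketch to place inside the dashboard's Simple XML; the panel ids here are hypothetical and must match the ids you assign:

```
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* border per panel; for one border around a whole group,
           target the row's container class in your Splunk version instead */
        #panel1, #panel2 { border: 2px solid blue; }
        #panel3, #panel4, #panel5 { border: 2px solid red; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel id="panel1">...</panel>
  <panel id="panel2">...</panel>
</row>
```

The depends="$alwaysHideCSS$" token is never set, so the CSS row stays hidden while the styles still apply.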
Hello, in the video provided on the Splunk site, there is a portion that shows a 3D scatter plot visualization: https://www.splunk.com/en_us/resources/videos/explore-your-data.html Can you please point me to the app where this can be found, or provide details on how to build it? Thanks
Hello all, I configured custom SSL certificates on the deployment server (for both splunkd and Splunk Web), and deployment clients are connecting to the DS fine with this setup. However, command-line login started failing after this when running any commands on the DS. Can anyone help resolve this? (We have requireClientCert=false.) Thanks in advance, Chetu
Hello, I'm currently exploring Splunk Phantom (Splunk SOAR). When I try to create a new playbook, or copy and save any existing playbook, I get the following error. Please advise. failed to communicate with platform component: phantom_decided Thanks.
In my current setup, I want to forward only internal logs to the indexers in my org, and some non-internal logs to the indexers of an external org. Below is my current outputs.conf; however, it's not working as intended. I am seeing the forwarder attempting to forward non-internal logs to my org's indexers as well.

[tcpout]
defaultGroup = Internal_indexers
# disable default filters
forwardedindex.0.whitelist =
forwardedindex.1.blacklist =
forwardedindex.2.whitelist =
forwardedindex.3.whitelist =
# enable these
forwardedindex.4.whitelist = (_audit|_introspection|_internal|_telemetry)

[tcpout:Internal_indexers]
server = index01:9997

[tcpout:OrgA_indexer]
server = y.y.y.y:9997

Update: below is the inputs.conf for a non-internal log:

[monitor://some_source.log]
index = abc
sourcetype = syslog
_TCP_ROUTING = OrgA_indexer
In my current setup, I am routing some data (only non-internal indexes) from our current environment to two different indexers outside of my org, and I don't have access to them. Is there a way to figure out which stream of data is going to which indexer?
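The forwarder's own metrics.log records per-destination output throughput, which you can query without access to the remote indexers. A sketch (the group name tcpout_connections is what I recall on recent versions; verify against your forwarder's metrics.log):

```
index=_internal source=*metrics.log* group=tcpout_connections
| stats sum(kb) AS total_kb BY host, name
```

Here name identifies the output group and destination (e.g. OrgA_indexer:y.y.y.y:9997), so the breakdown shows how much each forwarder sent to each destination. On the forwarder itself, `splunk list forward-server` also shows which destinations are currently active.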
Hello, I need to frame the search query for <drilldown_search> of the following type: "drilldown_search": "| from datamodel:\"Authentication\".\"Authentication\" | search src=$src|s$" Currently my results have a value for src; how do I escape this '|s' in the query string? Thanks, Mahalaxmi
Hello friends, suppose I install Microsoft Sysmon on a Windows server. I then install the Universal Forwarder on that server with the default settings. A deployment server is in the mix too, if that matters. My question is this: will the Universal Forwarder know to pick up the Sysmon events using all default settings? Is that defined on the deployment server?
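A UF with default settings does not collect the Sysmon event channel by itself; an input has to be enabled, typically by deploying the Splunk Add-on for Sysmon (or a small app with an inputs.conf) from the deployment server. A sketch of the stanza involved (the target index name is an assumption):

```
# inputs.conf, deployed to the UF via the deployment server
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = sysmon
```

Microsoft-Windows-Sysmon/Operational is the channel Sysmon writes to; without a stanza like this, the UF's default Windows inputs will not read it.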
Hi, I have a field value 2021-07-26T00:30:51.411 UTC which I got from | eval strftime(_time,"%Y-%m-%dT%H:%M:%S.%Q %Z"). How can I convert this field to the Brisbane timezone (+10 hours)?
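One sketch: rather than reparsing the formatted string, go back to _time and shift the epoch value before formatting. Brisbane (AEST) is UTC+10 year-round with no daylight saving, so a fixed offset is safe in this case; the output field name is arbitrary:

```
... | eval brisbane_time=strftime(_time + 10*3600, "%Y-%m-%dT%H:%M:%S.%Q")." +10:00"
```

Note this assumes _time here represents UTC, as in the example above; if the search head's timezone already applies, the offset would need adjusting.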
Hello. I have an inputlookup CSV file with a single column named "Domain" that holds a list of domain names. I would like to loop through all those domain names and check whether there are any events (from multiple indexes, without having to work out which Splunk field maps to "Domain") that include any of the domain names from my inputlookup CSV. How would I build this search?
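A common pattern for this (a sketch; the lookup filename and index names are assumptions) is a subsearch that renames the column to the special field `query`, which makes the subsearch expand into raw search terms rather than field=value pairs, so no field mapping is needed:

```
index=index_a OR index=index_b
    [| inputlookup domains.csv
     | rename Domain AS query
     | fields query ]
```

The subsearch effectively becomes (domain1 OR domain2 OR ...), matching any event whose raw text contains one of the domains. Be aware of subsearch result limits if the lookup is very large.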
Hello, here is the whole context and question: https://community.splunk.com/t5/Splunk-Search/Aggregate-query-help/m-p/560663/highlight/true#M159340 As a next step from that search query, I would like to showcase the result on a dashboard where, when a particular attribute is selected from a drop-down, it shows the counts of Total and RecordOutRange on the y-axis, in 15-minute spans on the x-axis. Thanks.
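A sketch of the panel search driven by a drop-down token; the token name attribute_tok, the index, and the RecordOutRange condition are placeholders standing in for the fields from the linked thread:

```
index=your_index attribute="$attribute_tok$"
| timechart span=15m count AS Total, count(eval(status=="RecordOutRange")) AS RecordOutRange
```

In the dashboard, a drop-down input with token="attribute_tok" feeds the search, and timechart's span=15m produces the 15-minute x-axis buckets.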