All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How do I perform a Splunk search for local accounts in the OpenStack tenant (and audit) logs? Thanks
Using Splunk Enterprise 8.2.5 and trying to match a string of repeating characters in my events. For example, the log file I'm ingesting contains:

INFO - Service Started
DEBUG - Service suspended

I was testing this as follows, but the field mylevel is not extracted:

| makeresults | eval msg="info" | rex field=msg "(?<mylevel>\w{4-5})" | table mylevel

This works, though:

| makeresults | eval msg="info" | rex field=msg "(?<mylevel>(\w{4})|(\w{5}))" | table mylevel

What is incorrect/wrong with my usage of \w{4-5}?
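For what it's worth, PCRE ranged quantifiers use a comma rather than a hyphen, so the pattern intended here is most likely \w{4,5} ({4-5} is not a valid quantifier, so the regex never matches and the field is not extracted). A minimal sketch of that variant:

| makeresults
| eval msg="info"
| rex field=msg "(?<mylevel>\w{4,5})"
| table mylevel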
Hi All,

I have a couple of questions regarding embedded reports. I'm looking to use them to provide an iframe to teams that want to include the service status of IT systems in their pages (e.g. websites, Service Management tools, digital signage), so I'm looking to have one report as it will cover all the requirements. I'm having two challenges though:

1. We are on Splunk Cloud and the 20-row table limit is a pain, as we have more than 20 IT services. Does anyone know if this can be increased?
2. When you disable embedding and then re-enable it after making a change, the URL is different. Does anyone know if you can stop this, or map it to a friendly URL? If I'm going to provide it to multiple teams, it will be a pain to give them a new URL whenever we have to make a change.

Cheers in advance,
Andy
Hello, I've seen in the documentation that default MetricSets have a standard set of metrics, and that these include `workflows` metrics, for example those shown in the documentation linked above. I've searched metrics in our new Splunk Observability, and I don't see any workflows metrics. Is this normal? Is there anything I need to enable? I'm using the OpenTelemetry Jenkins plugin, and other metrics are being received, but I don't see any workflows metrics, even though other docs I've seen that use the same plugin seem to utilise these workflows metrics.
Greetings! We are trying to integrate Splunk Cloud with Flexera SaaS Manager. We checked directly in Flexera and there isn't a direct integration; is there a way/process that we can follow to do the integration? Thanks in advance!
Hello Splunkers,

I am trying to find the uptime of hosts by calculating the difference between the latest event for a host and the last time it booted. The following event indicates that the host has booted:

2023-02-24T08:58:38.796336-08:00 hostabc kernel: [ 0.000000] Linux version 5.15.0-58-generic (buildd@lcy02-amd64-101) (gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 (Ubuntu 5.15.0-58.64-generic 5.15.74)

The following event is the latest event from that host:

2023-02-24T14:04:51.115717-08:00 hostabc sssd_nss[248054]: Starting up

Firstly, I want to get the difference between 2023-02-24T14:04:51.115717-08:00 and 2023-02-24T08:58:38.796336-08:00. Secondly, if the difference is greater than 60 minutes, I want to create a new field called status and set it to "down".

Thanks in advance.
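A minimal sketch of one way to compute that difference per host, assuming the boot events can be identified by the string "Linux version" in the raw event; the index name is a placeholder:

index=os host=hostabc
| stats latest(_time) as last_event latest(eval(if(match(_raw, "Linux version"), _time, null()))) as last_boot by host
| eval diff_minutes=round((last_event - last_boot) / 60, 1)
| eval status=if(diff_minutes > 60, "down", "up")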
Does anyone know of a way that I can check whether a system is reporting into my log server?
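A minimal sketch of one common way to check this, using the metadata command to see when each host last sent data; the host and index names are placeholders:

| metadata type=hosts index=*
| search host="myhost"
| eval lastSeen=strftime(lastTime, "%F %T")
| table host lastSeen totalCount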
Hi Community,

When the panels are loaded in dashboards, I see this error shown as an exclamation mark in the panel. I initially thought the error was due to permissions and granted access to all the folders, but the issue still exists. I'm not sure what the issue is; could someone give me some insight into how to fix it?

Regards, Pravin
Hi Community,

I have upgraded the Splunk cluster to version 9.0.2 and noticed high CPU usage on the cluster search head. Memory usage also climbs to almost 90%. The same setup works fine on version 8.1.0.

Just checking if someone has noticed similar issues when migrating to a 9.x version. Was there a version that didn't have these issues? What did you do to fix them?

Regards, Pravin
When testing the JDK8+ agent installer using the latest version of Docker Desktop locally, Docker flags up some vulnerabilities within the installer packages. I have included screenshots of those identified for the last two releases. Are there plans to remove these from upcoming releases, please? Latest version: December 2022.
I am trying to pare down the list of ciphers we are using. When I remove AES256-GCM-SHA384, I begin to get the errors below on our Search Head Cluster.

02-24-2023 16:17:35.187 +0000 WARN SSLCommon [121742 TcpOutEloop] - Received fatal SSL3 alert. ssl_state='SSLv2/v3 read server hello A', alert_description='handshake failure'.
02-24-2023 16:17:35.187 +0000 ERROR TcpOutputFd [121742 TcpOutEloop] - Connection to host=SH_IP_REMOVED:8999 failed. sock_error = 0. SSL Error = error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

In server.conf, web.conf, inputs.conf and outputs.conf I have the ciphers below. Once I remove AES256-GCM-SHA384, the errors begin.

cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:AES256-GCM-SHA384
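For reference, a hedged server.conf sketch of a configuration that keeps only the ECDHE suites; the ecdhCurves and sslVersions lines are assumptions about why the ECDHE ciphers might not be negotiated on the replication port once the non-ECDHE fallback is removed, and would need to be validated in your environment:

[sslConfig]
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384
ecdhCurves = prime256v1, secp384r1, secp521r1
sslVersions = tls1.2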
When one configures the indexer cluster for SmartStore, does each indexer get its own S3 bucket?  Or is there just one very large S3 bucket and all indexers write into the same S3 bucket (separated by indexer GUID or something like that)?
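For context, a hedged indexes.conf sketch of the usual SmartStore layout: every peer in the cluster points at the same remote volume and path (the bucket name, path and endpoint below are placeholders), and the uploaded buckets are keyed by index name and bucket/GUID information rather than each indexer getting its own S3 bucket:

[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[default]
remotePath = volume:remote_store/$_index_name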
Using ingestion actions, one can write a copy of events to an S3 bucket prior to indexing.  Can one search these S3 buckets with Splunk even though they were not ingested (it'd be slow, but could be useful for historical searches)?
I need to migrate my current ES installation from a VM to a physical host, due to performance issues on the virtual instance. Because of internal policies, I cannot simply clone the system via rsync, as the new physical box must have a new name to indicate it isn't a VM.

I tried copying the /opt/splunk/etc/system subdirectory of the new server to a backup location, then using rsync to replicate the /opt/splunk/etc subdirectory structure from the functional VM to the new server. I then copied the backup of system back into place, except for server.conf, where I merged the two files together.

Tons of errors. Tons of missing data in the ES dashboards. What am I missing? Thanks in advance for any suggestions.
index=mail
| lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
| where isnotnull(domain_match2)
| stats values(recipient) as recipient values(subject) as subject earliest(_time) AS "Earliest" latest(_time) AS "Latest" by RecipientDomain sender
| where mvcount(recipient)=1
| eval subject_count=mvcount(subject)
| sort - subject_count
| convert ctime("Latest")
| convert ctime("Earliest")

I would like to include in the results whether there are any attachments in the email, and show the attachment name and the size of the attachment in MB/GB. Is this possible?

Adding on, I also have a list of suspicious keywords in a lookup (created with the Lookup Editor) called suspicoussubject_keywords. Can you include in the query a lookup for these keywords in the subject and then display the results?
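A hedged sketch of one way to restrict the search to subjects containing any of the suspicious keywords, assuming the lookup has a single column named keyword (the column name is an assumption); the subsearch turns the keyword list into wildcard subject filters before the rest of the pipeline runs:

index=mail
    [ | inputlookup suspicoussubject_keywords
      | eval subject="*" . keyword . "*"
      | fields subject ]
| table _time sender RecipientDomain subject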
Hi,

When I inherited this deployment, there were a lot of skipped searches. The 3-node SHC was under-resourced, but with some cron skewing, tuning the limits, reducing zombie scheduled searches, and optimizing some searches, I reduced a lot. However, some intensive apps were still causing skipped searches, so we added a 4th node to the SHC, and it was running smoothly without a skipped search.

Recently, I started seeing a persistent skipped-search warning. Nothing new was added (scheduled searches) and resource usage looked good, but I kept seeing "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". I could see the jobs that were skipped, but I am not finding a way to see which jobs piled up during a time interval and caused the skipped search and the warning. I did notice some of the skipped searches were throwing warnings and errors. I am wondering if that caused a hanging job, so it added to the count and created a skipping loop.

If anyone has a way to see the scheduled searches that accumulate and cause this error and skipping, please advise. Thank you!
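A hedged sketch of where one could start looking, based on the scheduler log in _internal (field names such as status, reason and savedsearch_name are assumed to be present as the scheduler log normally provides them). First, which searches are being skipped and why:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name reason
| sort - count

Then, for a specific search, how its runs stack up over time; a long run of "continued" results or overlapping runs around the skips would point at a hung or piled-up job:

index=_internal sourcetype=scheduler savedsearch_name="<name of the skipped search>"
| timechart span=5m count by status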
I have the below code:

<input type="checkbox" token="clear_button" searchWhenChanged="true">
  <label></label>
  <change>
    <unset token="form.clear_button"></unset>
    <unset token="form.Name_Token"></unset>
    <unset token="form.SO_Token"></unset>
    <unset token="form.L2SO_Token"></unset>
    <unset token="form.LOB_Token"></unset>
    <unset token="form.Func_Token"></unset>
  </change>
  <delimiter> </delimiter>
  <choice value="clear">Clear</choice>
</input>

When hovering over the word "Clear" to the right of the checkbox, it is clickable. I just want the checkbox itself to be clickable, not the text and the whitespace to the right of it. Is this possible? Thanks
Hi all, how can I extract the following message into new fields (BGP connection fields)?

BGP_CONNECT_FAILED: bgp_connect_start: connect 2403:df40:0:16::3 (Internal AS 14630) (instance master): No route to host
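A minimal rex sketch, assuming all events follow the sample above; the field names (neighbor_ip, peer_type, peer_as, bgp_instance, failure_reason) are placeholders you can rename:

... | rex "BGP_CONNECT_FAILED: bgp_connect_start: connect (?<neighbor_ip>\S+) \((?<peer_type>Internal|External) AS (?<peer_as>\d+)\) \(instance (?<bgp_instance>\S+)\): (?<failure_reason>.+)"
    | table neighbor_ip peer_type peer_as bgp_instance failure_reason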
Hi. We have a use case where we would like to maintain a single informational field on some of our entities. The field name is Application Service, and it is fairly static, but not quite, so it might change over time. This field can logically only have one value, and the problem with the scheduled import is that if the value changes, there will be two values for this field name. So we are trying to create a Python script that will maintain this value, so that if it changes there will still only be one value; the old one is overwritten.

The problem is that when we try to change this value and post it to Splunk, we get a 200 return code, but the value has not changed.

This is an example of the code: (Please don't kill me, this is just a test, and I'm not really a Python developer.)

import requests

# base_url, token and get_splunk_rest() are defined elsewhere in the script.

def update_splunk_rest(key, jsonDict):
    # POST the full entity back with is_partial_data=0, since we are replacing the entire record.
    url = base_url + "/servicesNS/nobody/SA-ITOA/itoa_interface/entity/" + key + "?is_partial_data=0"
    authHeader = {'Authorization': token}
    r = requests.post(url, headers=authHeader, json=jsonDict, verify=False)
    print(r.text)
    return r

# Fetch the entity and locate the 'Application Service' informational field.
splk_entity = get_splunk_rest("2b4566fb-367e-44ec-b068-d6541a2024e6")
print(splk_entity.status_code)
entity = splk_entity.json()
print(entity)

title = entity['title']
info = entity['informational']
print(info)
keys = info['fields']
values = info['values']
print(keys)
print(values)

# Find the position of 'Application Service' in the fields list and overwrite its value.
i = 0
for field in keys:
    if field == 'Application Service':
        break
    i = i + 1
values[i] = "Dette er Las test"

print("=============================================================")
print(entity)

# payload = {"_key": "821bd2f7-83d6-47a9-a753-60c04523d57e", "title": title,
#            "informational": {"fields": keys, "values": values}}
# print(payload)

response = update_splunk_rest("2b4566fb-367e-44ec-b068-d6541a2024e6", entity)
print(response.status_code)

The entity is changed just before the post (update_splunk_rest), which does a POST with is_partial_data=0, as we are replacing the entire record from ITSI.

Has anyone else had this problem and found a solution?

Kind regards,
Las
I have two searches in different indexes, index itsi_grp* and index B, and these two searches have common field values in the hostname and problem-description fields. In index itsi_grp* the fields are named Hostname, Problemdesc and name; using the first two fields I have to get an "ID" field from index B.

For example:

Index itsi_grp*
Hostname   Problemdesc   name
AAA        CPU issue     abc

Index B
Host   Problem   ID
AAA    CPU       555

My result should look like below:

Host   Problem   ID    incindetnumber   name
AAA    CPU       555   *******          abc

I am using the join command to get the results, but it is not returning any values, even though the data is available in both indexes. I used the query below:

index=itsi_grp* | search name="abc" | rename Hostname as Host, Problemdesc as Problem | join Hostname Problem [search index=B sourcetype=abc] | table Host, Problem, incindetnumber, ID, name

For fetching the incident number, I will be using a lookup. The only problem is that while using the join command I am not getting the results; it returns 0 statistics. Could you please help me check whether I am using the join command properly?
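A hedged sketch of one variant: since rename runs first, the join fields need to be the renamed names (Host, Problem), and join only matches when the values are identical, so "CPU issue" in itsi_grp* versus "CPU" in index B would also have to be normalised before the join (not shown here). The index and sourcetype names are taken from the question:

index=itsi_grp* name="abc"
| rename Hostname as Host, Problemdesc as Problem
| join type=left Host Problem
    [ search index=B sourcetype=abc
      | fields Host Problem ID ]
| table Host Problem ID name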